https://en.wikipedia.org/wiki/Octodon
Octodon
Octodon is a genus of octodontid rodents native to South America, in particular the Chilean Andes. The best-known member is the common degu, O. degus, which is kept as a pet in various countries. Two of the species of degus are nocturnal. Classification This genus was first described in 1832 by the British zoologist Edward Turner Bennett. Taxonomy The genus name Octodon comes from the Latin octo, eight, in reference to their teeth: the molars and premolars are shaped like a figure 8. The full list of species is: O. bridgesii, Bridges's degu, found in central Chile O. degus, the common degu or degu, found in central Chile O. lunatus, the moon-toothed degu, found in central Chile O. pacificus, the Pacific degu or Mocha Island degu, found exclusively on Mocha Island, Chile O. ricardojeda, Ricardo Ojeda's degu, found in western Argentina and Chile Distribution In the wild, all species of degus live in the Andes, mainly in the mountains of Chile, to which most of them are endemic, aside from a few populations of O. ricardojeda in the neighboring province of Neuquén, Argentina. They are found between 0 and 1,800 m in altitude. Description They are medium-sized octodontids. Their total body length varies between 200 and 390 mm, with a tail that measures between 81 and 170 mm and represents 70 to 80% of the head + body length. The coat color is grayish, or dull with orange highlights, and turns creamy yellow on the belly. The tail is carried slightly curved, is the same color as the body, and ends with a tuft of black hairs, the extent of black depending on the species. The ears are quite large and protrude widely from the head, except in O. pacificus. The hind limbs are well suited to jumping, with pads on the soles of the paws that prevent slipping. The forelegs have four clawed fingers, along with a poorly developed fifth finger bearing a nail. The glans of the penis is characterized by a variable number of spines, five or more on each side. 
Lifestyle Depending on the species, degus are nocturnal or diurnal. They are primarily folivorous herbivores, and their diet varies according to annual vegetation cycles. The consumption of their own droppings (coprophagy) is practiced during times of scarcity. This provides them with a nutritional supplement thanks to the microbial fermentation that takes place in the cecum, which optimizes the digestion of fibrous foods. These social animals dig a burrow consisting of a series of tunnels, where they live in small to large groups composed of both males and females. Conservation status O. bridgesii (assessed as conspecific with O. ricardojeda) is considered vulnerable by the IUCN Red List, while O. pacificus is considered critically endangered. The other two species are considered of least concern.
Biology and health sciences
Rodents
Animals
https://en.wikipedia.org/wiki/Neutrino%20oscillation
Neutrino oscillation
Neutrino oscillation is a quantum mechanical phenomenon in which a neutrino created with a specific lepton family number ("lepton flavor": electron, muon, or tau) can later be measured to have a different lepton family number. The probability of measuring a particular flavor varies periodically among the three known states as the neutrino propagates through space. First predicted by Bruno Pontecorvo in 1957, neutrino oscillation has since been observed by a multitude of experiments in several different contexts. Most notably, the existence of neutrino oscillation resolved the long-standing solar neutrino problem. Neutrino oscillation is of great theoretical and experimental interest, as the precise properties of the process can shed light on several properties of the neutrino. In particular, it implies that the neutrino has a non-zero mass, which requires a modification to the Standard Model of particle physics. The experimental discovery of neutrino oscillation, and thus neutrino mass, by the Super-Kamiokande Observatory and the Sudbury Neutrino Observatory was recognized with the 2015 Nobel Prize in Physics. Observations A great deal of evidence for neutrino oscillation has been collected from many sources, over a wide range of neutrino energies and with many different detector technologies. The 2015 Nobel Prize in Physics was shared by Takaaki Kajita and Arthur B. McDonald for their early pioneering observations of these oscillations. Neutrino oscillation is a function of the ratio L/E, where L is the distance traveled and E is the neutrino's energy (details in the theory section below). All available neutrino sources produce a range of energies, and oscillation is measured at a fixed distance for neutrinos of varying energy. The limiting factor in measurements is the accuracy with which the energy of each observed neutrino can be measured. 
Because current detectors have energy uncertainties of a few percent, it is satisfactory to know the distance to within 1%. Solar neutrino oscillation The first experiment that detected the effects of neutrino oscillation was Ray Davis's Homestake experiment in the late 1960s, in which he observed a deficit in the flux of solar neutrinos with respect to the prediction of the Standard Solar Model, using a chlorine-based detector. This gave rise to the solar neutrino problem. Many subsequent radiochemical and water Cherenkov detectors confirmed the deficit, but neutrino oscillation was not conclusively identified as the source of the deficit until the Sudbury Neutrino Observatory provided clear evidence of neutrino flavor change in 2001. Solar neutrinos have energies below 20 MeV. At energies above 5 MeV, solar neutrino oscillation actually takes place in the Sun through a resonance known as the MSW effect, a different process from the vacuum oscillation described later in this article. Atmospheric neutrino oscillation Following theories proposed in the 1970s suggesting unification of the electromagnetic, weak, and strong forces, a few experiments on proton decay followed in the 1980s. Large detectors such as IMB, MACRO, and Kamiokande II observed a deficit in the ratio of the flux of muon to electron flavor atmospheric neutrinos (see muon decay). The Super-Kamiokande experiment provided a very precise measurement of neutrino oscillation in an energy range of hundreds of MeV to a few TeV, and with a baseline of the diameter of the Earth; the first experimental evidence for atmospheric neutrino oscillations was announced in 1998. Reactor neutrino oscillation Many experiments have searched for oscillation of electron anti-neutrinos produced in nuclear reactors. No oscillations were found until a detector was installed at a distance of 1–2 km. Such oscillations give the value of the mixing parameter θ13. 
Neutrinos produced in nuclear reactors have energies similar to solar neutrinos, of around a few MeV. The baselines of these experiments have ranged from tens of meters to over 100 km (the longest baselines probing the parameter Δm²21). Mikaelyan and Sinev proposed to use two identical detectors to cancel systematic uncertainties in reactor experiments to measure the parameter θ13. In December 2011, the Double Chooz experiment found evidence that θ13 is non-zero; then, in 2012, the Daya Bay experiment measured a non-zero value of θ13 at high significance. These results have since been confirmed by RENO. Beam neutrino oscillation Neutrino beams produced at a particle accelerator offer the greatest control over the neutrinos being studied. Many experiments have taken place that study the same oscillations as in atmospheric neutrino oscillation, using neutrinos with a few GeV of energy and several-hundred-km baselines. The MINOS, K2K, and Super-K experiments have all independently observed muon neutrino disappearance over such long baselines. Data from the LSND experiment appear to be in conflict with the oscillation parameters measured in other experiments. Results from MiniBooNE appeared in spring 2007 and contradicted the results from LSND, although they could support the existence of a fourth neutrino type, the sterile neutrino. In 2010, INFN and CERN announced the observation of a tau lepton in a muon neutrino beam in the OPERA detector located at Gran Sasso, 730 km away from the source near Geneva. T2K, using a neutrino beam directed through 295 km of earth and the Super-Kamiokande detector, measured a non-zero value of the parameter θ13 in a neutrino beam. NOνA, using the same beam as MINOS with a baseline of 810 km, is sensitive to the same parameter. Theory Neutrino oscillation arises from mixing between the flavor and mass eigenstates of neutrinos. That is, the three neutrino states that interact with the charged leptons in weak interactions are each a different superposition of the three (propagating) neutrino states of definite mass. 
Neutrinos are emitted and absorbed in weak processes in flavor eigenstates but travel as mass eigenstates. As a neutrino superposition propagates through space, the quantum mechanical phases of the three neutrino mass states advance at slightly different rates, due to the slight differences in their respective masses. This results in a changing superposition mixture of mass eigenstates as the neutrino travels; but a different mixture of mass eigenstates corresponds to a different mixture of flavor states. For example, a neutrino born as an electron neutrino will be some mixture of electron, mu, and tau neutrino after traveling some distance. Since the quantum mechanical phase advances in a periodic fashion, after some distance the state will nearly return to the original mixture, and the neutrino will again be mostly electron neutrino. The electron flavor content of the neutrino will then continue to oscillate – as long as the quantum mechanical state maintains coherence. Since the mass differences between neutrino mass states are small in comparison with the long coherence lengths for neutrino oscillations, this microscopic quantum effect becomes observable over macroscopic distances. In contrast, due to their larger masses, the charged leptons (electrons, muons, and tau leptons) have never been observed to oscillate. In nuclear beta decay, muon decay, pion decay, and kaon decay, when a neutrino and a charged lepton are emitted, the charged lepton is emitted in an incoherent mass eigenstate because of its large mass. Weak-force couplings compel the simultaneously emitted neutrino to be in a "charged-lepton-centric" superposition of neutrino mass eigenstates, which is an eigenstate for a "flavor" that is fixed by the charged lepton's mass eigenstate, and not in one of the neutrino's own mass eigenstates. Because the neutrino is in a coherent superposition that is not a mass eigenstate, the mixture that makes up that superposition oscillates significantly as it travels. 
No analogous mechanism exists in the Standard Model that would make charged leptons detectably oscillate. In the four decays mentioned above, where the charged lepton is emitted in a unique mass eigenstate, the charged lepton will not oscillate, as single mass eigenstates propagate without oscillation. The case of (real) W boson decay is more complicated: W boson decay is sufficiently energetic to generate a charged lepton that is not in a mass eigenstate; however, the charged lepton would lose coherence, if it had any, over interatomic distances (0.1 nm) and would thus quickly cease any meaningful oscillation. More importantly, no mechanism in the Standard Model is capable of pinning down a charged lepton into a coherent state that is not a mass eigenstate, in the first place; instead, while the charged lepton from the W boson decay is not initially in a mass eigenstate, neither is it in any "neutrino-centric" eigenstate, nor in any other coherent state. It cannot meaningfully be said that such a featureless charged lepton oscillates or that it does not oscillate, as any "oscillation" transformation would just leave it the same generic state that it was before the oscillation. Therefore, detection of a charged lepton oscillation from W boson decay is infeasible on multiple levels. Pontecorvo–Maki–Nakagawa–Sakata matrix The idea of neutrino oscillation was first put forward in 1957 by Bruno Pontecorvo, who proposed that neutrino–antineutrino transitions may occur in analogy with neutral kaon mixing. Although such matter–antimatter oscillation had not been observed, this idea formed the conceptual foundation for the quantitative theory of neutrino flavor oscillation, which was first developed by Maki, Nakagawa, and Sakata in 1962 and further elaborated by Pontecorvo in 1967. 
One year later the solar neutrino deficit was first observed, and that was followed by the famous article by Gribov and Pontecorvo published in 1969 titled "Neutrino astronomy and lepton charge". The concept of neutrino mixing is a natural outcome of gauge theories with massive neutrinos, and its structure can be characterized in general. In its simplest form it is expressed as a unitary transformation relating the flavor and mass eigenbasis, and can be written as

|\nu_\alpha\rangle = \sum_i U_{\alpha i}^{*} \, |\nu_i\rangle, \qquad |\nu_i\rangle = \sum_\alpha U_{\alpha i} \, |\nu_\alpha\rangle,

where |\nu_\alpha\rangle is a neutrino with definite flavor \alpha = e (electron), \mu (muon) or \tau (tauon), |\nu_i\rangle is a neutrino with definite mass m_i (i = 1, 2, 3), and the superscript asterisk (*) represents a complex conjugate; for antineutrinos, the complex conjugate should be removed from the first equation and inserted into the second. The symbol U represents the Pontecorvo–Maki–Nakagawa–Sakata matrix (also called the PMNS matrix, lepton mixing matrix, or sometimes simply the MNS matrix). It is the analogue of the CKM matrix describing the analogous mixing of quarks. If this matrix were the identity matrix, then the flavor eigenstates would be the same as the mass eigenstates. However, experiment shows that it is not. When the standard three-neutrino theory is considered, the matrix is 3×3. If only two neutrinos are considered, a 2×2 matrix is used. If one or more sterile neutrinos are added (see later), it is 4×4 or larger. In the 3×3 form, it is given by

U = \begin{pmatrix} c_{12} c_{13} & s_{12} c_{13} & s_{13} e^{-i\delta} \\ -s_{12} c_{23} - c_{12} s_{23} s_{13} e^{i\delta} & c_{12} c_{23} - s_{12} s_{23} s_{13} e^{i\delta} & s_{23} c_{13} \\ s_{12} s_{23} - c_{12} c_{23} s_{13} e^{i\delta} & -c_{12} s_{23} - s_{12} c_{23} s_{13} e^{i\delta} & c_{23} c_{13} \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & e^{i\alpha_{21}/2} & 0 \\ 0 & 0 & e^{i\alpha_{31}/2} \end{pmatrix},

where c_{ij} = \cos\theta_{ij} and s_{ij} = \sin\theta_{ij}. The phase factors \alpha_{21} and \alpha_{31} are physically meaningful only if neutrinos are Majorana particles—i.e. if the neutrino is identical to its antineutrino (whether or not they are is unknown)—and do not enter into oscillation phenomena regardless. If neutrinoless double beta decay occurs, these factors influence its rate. The phase factor \delta is non-zero only if neutrino oscillation violates CP symmetry; this has not yet been observed experimentally. If experiment shows this 3×3 matrix to be not unitary, a sterile neutrino or some other new physics is required. 
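As a concrete illustration, the standard-parameterization PMNS matrix can be built and its unitarity checked numerically. This is a sketch only: the angles passed in are placeholder values in radians, not measured ones, and the Majorana phases are omitted since they do not enter oscillation probabilities.

```python
import numpy as np

def pmns(theta12, theta13, theta23, delta):
    """Standard-parameterization PMNS matrix (Majorana phases omitted,
    since they do not enter oscillation probabilities)."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    eid = np.exp(1j * delta)  # Dirac CP phase factor e^{i delta}
    return np.array([
        [c12 * c13,                         s12 * c13,                         s13 * np.conj(eid)],
        [-s12 * c23 - c12 * s23 * s13 * eid, c12 * c23 - s12 * s23 * s13 * eid, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * eid, -c12 * s23 - s12 * c23 * s13 * eid, c23 * c13],
    ])

# Unitarity check with placeholder angles:
U = pmns(0.59, 0.15, 0.84, 0.0)
assert np.allclose(U @ U.conj().T, np.eye(3))
```

If all angles are zero, U reduces to the identity, in which case flavor and mass eigenstates coincide and no oscillation occurs.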
Propagation and interference Since the |\nu_i\rangle are mass eigenstates, their propagation can be described by plane wave solutions of the form

|\nu_i(t)\rangle = e^{-i\left(E_i t - \vec{p}_i \cdot \vec{x}\right)} |\nu_i(0)\rangle,

where quantities are expressed in natural units, E_i is the energy of the mass eigenstate i, t is the time from the start of the propagation, \vec{p}_i is the three-dimensional momentum, and \vec{x} is the current position of the particle relative to its starting position. In the ultrarelativistic limit, we can approximate the energy as

E_i = \sqrt{p_i^2 + m_i^2} \simeq E + \frac{m_i^2}{2E},

where E is the energy of the wavepacket (particle) to be detected. This limit applies to all practical (currently observed) neutrinos, since their masses are less than 1 eV and their energies are at least 1 MeV, so the Lorentz factor \gamma is greater than 10^6 in all cases. Using also t \approx L, where L is the distance traveled, and dropping the overall phase factors, the wavefunction becomes

|\nu_i(L)\rangle = e^{-i m_i^2 L / 2E} |\nu_i(0)\rangle.

Eigenstates with different masses propagate with different frequencies. The heavier ones oscillate faster compared to the lighter ones. Since the mass eigenstates are combinations of flavor eigenstates, this difference in frequencies causes interference between the corresponding flavor components of each mass eigenstate. Constructive interference causes it to be possible to observe a neutrino created with a given flavor to change its flavor during its propagation. The probability that a neutrino originally of flavor \alpha will later be observed as having flavor \beta is

P_{\alpha\to\beta} = \left| \langle \nu_\beta | \nu_\alpha(L) \rangle \right|^2 = \left| \sum_i U_{\alpha i}^{*} U_{\beta i} \, e^{-i m_i^2 L / 2E} \right|^2.

This is more conveniently written as

P_{\alpha\to\beta} = \delta_{\alpha\beta} - 4 \sum_{i>j} \operatorname{Re}\left( U_{\alpha i}^{*} U_{\beta i} U_{\alpha j} U_{\beta j}^{*} \right) \sin^2 \frac{\Delta m_{ij}^2 L}{4E} + 2 \sum_{i>j} \operatorname{Im}\left( U_{\alpha i}^{*} U_{\beta i} U_{\alpha j} U_{\beta j}^{*} \right) \sin \frac{\Delta m_{ij}^2 L}{2E},

where \Delta m_{ij}^2 = m_i^2 - m_j^2. The phase that is responsible for oscillation is often written as (with c and \hbar restored)

\frac{\Delta m^2 \, c^3 \, L}{4 \hbar E} = 1.27 \, \frac{\Delta m^2}{\mathrm{eV}^2} \, \frac{L/\mathrm{km}}{E/\mathrm{GeV}},

where the factor 1.27 is unitless. In this form, it is convenient to plug in the oscillation parameters since: The mass differences, \Delta m^2, are known to be on the order of 10^{-4} eV² = (10^{-2} eV)². Oscillation distances, L, in modern experiments are on the order of kilometers. Neutrino energies, E, in modern experiments are typically on the order of MeV or GeV. If there is no CP violation (\delta is zero), then the second sum is zero. 
Otherwise, the CP asymmetry can be given as

A_{\alpha\beta}^{CP} = P(\nu_\alpha \to \nu_\beta) - P(\bar{\nu}_\alpha \to \bar{\nu}_\beta).

In terms of the Jarlskog invariant J = \operatorname{Im}\left( U_{\alpha i}^{*} U_{\beta i} U_{\alpha j} U_{\beta j}^{*} \right), the CP asymmetry is proportional to J. Two-neutrino case The above formula is correct for any number of neutrino generations. Writing it explicitly in terms of mixing angles is extremely cumbersome if more than two neutrinos participate in mixing. Fortunately, there are several meaningful cases in which only two neutrinos participate significantly. In this case, it is sufficient to consider the mixing matrix

U = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.

Then the probability of a neutrino changing its flavor is

P_{\alpha\to\beta} = \sin^2(2\theta) \, \sin^2 \frac{\Delta m^2 L}{4E}.

Or, using SI units and the convention introduced above,

P_{\alpha\to\beta} = \sin^2(2\theta) \, \sin^2\!\left( 1.27 \, \frac{\Delta m^2}{\mathrm{eV}^2} \, \frac{L/\mathrm{km}}{E/\mathrm{GeV}} \right).

This formula is often appropriate for discussing the transition \nu_\mu \to \nu_\tau in atmospheric mixing, since the electron neutrino plays almost no role in this case. It is also appropriate for the solar case of \nu_e \to \nu_x, where \nu_x is a mix (superposition) of \nu_\mu and \nu_\tau. These approximations are possible because the mixing angle \theta_{13} is very small and because two of the mass states are very close in mass compared to the third. Classical analogue of neutrino oscillation The basic physics behind neutrino oscillation can be found in any system of coupled harmonic oscillators. A simple example is a system of two pendulums connected by a weak spring (a spring with a small spring constant). The first pendulum is set in motion by the experimenter while the second begins at rest. Over time, the second pendulum begins to swing under the influence of the spring, while the first pendulum's amplitude decreases as it loses energy to the second. Eventually all of the system's energy is transferred to the second pendulum and the first is at rest. The process then reverses. The energy oscillates between the two pendulums repeatedly until it is lost to friction. The behavior of this system can be understood by looking at its normal modes of oscillation. 
If the two pendulums are identical then one normal mode consists of both pendulums swinging in the same direction with a constant distance between them, while the other consists of the pendulums swinging in opposite (mirror image) directions. These normal modes have (slightly) different frequencies because the second involves the (weak) spring while the first does not. The initial state of the two-pendulum system is a combination of both normal modes. Over time, these normal modes drift out of phase, and this is seen as a transfer of motion from the first pendulum to the second. The description of the system in terms of the two pendulums is analogous to the flavor basis of neutrinos. These are the parameters that are most easily produced and detected (in the case of neutrinos, by weak interactions involving the W boson). The description in terms of normal modes is analogous to the mass basis of neutrinos. These modes do not interact with each other when the system is free of outside influence. When the pendulums are not identical the analysis is slightly more complicated. In the small-angle approximation, the potential energy of a single pendulum system is \tfrac{1}{2} \tfrac{mg}{L} x^2, where g is the standard gravity, L is the length of the pendulum, m is the mass of the pendulum, and x is the horizontal displacement of the pendulum. As an isolated system the pendulum is a harmonic oscillator with a frequency of \sqrt{g/L}. The potential energy of a spring is \tfrac{1}{2} k x^2, where k is the spring constant and x is the displacement. With a mass attached it oscillates with a period of 2\pi\sqrt{m/k}. With two pendulums (labeled a and b) of equal mass but possibly unequal lengths and connected by a spring, the total potential energy is

V = \frac{1}{2} \frac{mg}{L_a} x_a^2 + \frac{1}{2} \frac{mg}{L_b} x_b^2 + \frac{1}{2} k \left( x_b - x_a \right)^2.

This is a quadratic form in x_a and x_b, which can also be written as a matrix product:

V = \frac{1}{2} \begin{pmatrix} x_a & x_b \end{pmatrix} \begin{pmatrix} \frac{mg}{L_a} + k & -k \\ -k & \frac{mg}{L_b} + k \end{pmatrix} \begin{pmatrix} x_a \\ x_b \end{pmatrix}.

The 2×2 matrix is real symmetric and so (by the spectral theorem) it is orthogonally diagonalizable. That is, there is an angle θ such that if we define

\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x_a \\ x_b \end{pmatrix},

then

V = \frac{1}{2} \left( \lambda_1 x_1^2 + \lambda_2 x_2^2 \right),

where λ1 and λ2 are the eigenvalues of the matrix. 
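The diagonalization above can be checked numerically. This is a toy sketch with made-up values for m, g, k, and the pendulum lengths; for identical pendulums the symmetric mode leaves the spring unstretched and the mixing angle comes out at 45°.

```python
import numpy as np

# Toy parameters (arbitrary): equal masses, identical lengths, weak spring.
m, g, k = 1.0, 9.81, 0.1
La = Lb = 1.0

# Stiffness matrix from V = 1/2 x^T K x for the coupled-pendulum potential.
K = np.array([
    [m * g / La + k, -k],
    [-k, m * g / Lb + k],
])

lam, vecs = np.linalg.eigh(K)   # eigenvalues (ascending) and orthonormal mode vectors
freqs = np.sqrt(lam / m)        # normal-mode angular frequencies
theta = np.degrees(np.arctan2(abs(vecs[1, 0]), abs(vecs[0, 0])))  # mixing angle

# Symmetric mode (spring unstretched): lambda_1 = m g / L.
assert np.isclose(lam[0], m * g / La)
# Antisymmetric mode (spring stretched twice): lambda_2 = m g / L + 2 k.
assert np.isclose(lam[1], m * g / Lb + 2 * k)
```

The slight frequency split between the two modes, caused entirely by the weak spring, is the classical counterpart of the small mass-squared differences that drive neutrino oscillation.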
The variables x_1 and x_2 describe normal modes which oscillate with frequencies of \sqrt{\lambda_1/m} and \sqrt{\lambda_2/m}. When the two pendulums are identical (L_a = L_b), θ is 45°. The angle θ is analogous to the Cabibbo angle (though that angle applies to quarks rather than neutrinos). When the number of oscillators (particles) is increased to three, the orthogonal matrix can no longer be described by a single angle; instead, three are required (Euler angles). Furthermore, in the quantum case, the matrices may be complex. This requires the introduction of complex phases in addition to the rotation angles, which are associated with CP violation. Theory, graphically Two neutrino probabilities in vacuum In the approximation where only two neutrinos participate in the oscillation, the probability of oscillation follows a simple pattern: The blue curve shows the probability of the original neutrino retaining its identity. The red curve shows the probability of conversion to the other neutrino. The maximum probability of conversion is equal to sin²(2θ). The frequency of the oscillation is controlled by Δm². Three neutrino probabilities If three neutrinos are considered, the probability for each neutrino to appear is somewhat complex. The graphs below show the probabilities for each flavor, with the plots in the left column showing a long range to display the slow "solar" oscillation, and the plots in the right column zoomed in, to display the fast "atmospheric" oscillation. The parameters used to create these graphs (see below) are consistent with current measurements, but since some parameters are still quite uncertain, some aspects of these plots are only qualitatively correct. The illustrations were created using the following parameter values: sin²(2θ13) = 0.10 (determines the size of the small wiggles) 
sin²(2θ23) = 0.97, sin²(2θ12) = 0.861, δ = 0 (if the actual value of this phase is large, the probabilities will be somewhat distorted, and will be different for neutrinos and antineutrinos), and a normal mass hierarchy: m1 ≤ m2 ≤ m3, with Δm²21 ≪ Δm²32 ≈ Δm²31. Observed values of oscillation parameters The mixing angle θ13 is known from the PDG combination of Daya Bay, RENO, and Double Chooz results. The angle θ12 corresponds to θsol (solar), obtained from KamLAND, solar, reactor and accelerator data. The atmospheric mixing angle θ23 is constrained at the 90% confidence level; the phase δ and the sign of Δm²32 (normal versus inverted mass hierarchy) are currently unknown. Solar neutrino experiments combined with KamLAND have measured the so-called solar parameters Δm²sol and sin²θsol. Atmospheric neutrino experiments such as Super-Kamiokande, together with the K2K and MINOS long-baseline accelerator neutrino experiments, have determined the so-called atmospheric parameters Δm²atm and sin²θatm. The last mixing angle, θ13, has been measured by the experiments Daya Bay, Double Chooz and RENO. For atmospheric neutrinos the relevant difference of masses is of order Δm² ~ 10⁻³ eV² and the typical energies are of order 1 GeV; for these values the oscillations become visible for neutrinos traveling several hundred kilometres, which would be those neutrinos that reach the detector traveling through the earth, from below the horizon. The mixing parameter θ13 is measured using electron anti-neutrinos from nuclear reactors. The rate of anti-neutrino interactions is measured in detectors sited near the reactors to determine the flux prior to any significant oscillations, and then it is measured in far detectors (placed kilometres from the reactors). The oscillation is observed as an apparent disappearance of electron anti-neutrinos in the far detectors (i.e. the interaction rate at the far site is lower than predicted from the observed rate at the near site). From atmospheric and solar neutrino oscillation experiments, it is known that two mixing angles of the MNS matrix are large and the third is smaller. 
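The two-flavor formula and the 1.27 phase factor quoted earlier can be put together in a short numeric sketch. The parameter values below are round illustrative numbers in the atmospheric range, not fitted results.

```python
import math

def survival_probability(sin2_2theta, dm2_ev2, L_km, E_gev):
    """Two-flavor survival probability
    P = 1 - sin^2(2 theta) * sin^2(1.27 * dm^2 [eV^2] * L [km] / E [GeV])."""
    phase = 1.27 * dm2_ev2 * L_km / E_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# At L = 0 no oscillation has developed yet:
assert survival_probability(0.97, 2.4e-3, 0.0, 1.0) == 1.0

# At the first oscillation maximum (phase = pi/2) the survival
# probability dips to 1 - sin^2(2 theta):
L_max = (math.pi / 2) / (1.27 * 2.4e-3)  # roughly 500 km for E = 1 GeV
assert abs(survival_probability(0.97, 2.4e-3, L_max, 1.0) - 0.03) < 1e-9
```

The several-hundred-kilometre position of the first maximum for GeV neutrinos matches the atmospheric baselines discussed above.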
This pattern of two large angles and one small one is in sharp contrast to the CKM matrix, in which all three angles are small and hierarchically decreasing. The CP-violating phase of the MNS matrix is constrained as of April 2020 to lie somewhere between −2 and −178 degrees, from the T2K experiment. If the neutrino proves to be of Majorana type (making the neutrino its own antiparticle), it is then possible that the MNS matrix has more than one phase. Since experiments observing neutrino oscillation measure squared mass differences and not absolute masses, one might claim that the lightest neutrino mass is exactly zero, without contradicting observations. This is however regarded as unlikely by theorists. Origins of neutrino mass The question of how neutrino masses arise has not been answered conclusively. In the Standard Model of particle physics, fermions only have intrinsic mass because of interactions with the Higgs field (see Higgs boson). These interactions require both left- and right-handed versions of the fermion (see chirality). However, only left-handed neutrinos have been observed so far. Neutrinos may have another source of mass through the Majorana mass term. This type of mass applies only to electrically neutral particles, since otherwise it would allow particles to turn into anti-particles and violate conservation of electric charge. The smallest modification to the Standard Model, which only has left-handed neutrinos, is to allow these left-handed neutrinos to have Majorana masses. The problem with this is that the neutrino masses are surprisingly small compared to those of the other known particles (at least 600,000 times smaller than the mass of an electron), which, while it does not invalidate the theory, is widely regarded as unsatisfactory, as this construction offers no insight into the origin of the neutrino mass scale. 
The next simplest addition would be to add into the Standard Model right-handed neutrinos that interact with the left-handed neutrinos and the Higgs field in an analogous way to the rest of the fermions. These new neutrinos would interact with the other fermions solely in this way and hence would not be directly observable, so are not phenomenologically excluded. The problem of the disparity of the mass scales remains. Seesaw mechanism The most popular conjectured solution currently is the seesaw mechanism, where right-handed neutrinos with very large Majorana masses are added. If the right-handed neutrinos are very heavy, they induce a very small mass for the left-handed neutrinos, which is proportional to the reciprocal of the heavy mass. If it is assumed that the neutrinos interact with the Higgs field with approximately the same strengths as the charged fermions do, the heavy mass should be close to the GUT scale. Because the Standard Model has only one fundamental mass scale, all particle masses must arise in relation to this scale. There are other varieties of seesaw and there is currently great interest in the so-called low-scale seesaw schemes, such as the inverse seesaw mechanism. The addition of right-handed neutrinos has the effect of adding new mass scales, unrelated to the mass scale of the Standard Model, hence the observation of heavy right-handed neutrinos would reveal physics beyond the Standard Model. Right-handed neutrinos would help to explain the origin of matter through a mechanism known as leptogenesis. Other sources There are alternative ways to modify the standard model that are similar to the addition of heavy right-handed neutrinos (e.g., the addition of new scalars or fermions in triplet states) and other modifications that are less similar (e.g., neutrino masses from loop effects and/or from suppressed couplings). 
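The inverse proportionality can be made explicit in the simplest one-generation (type I) case; the symbols m_D (Dirac mass) and M_R (heavy Majorana mass) below are the standard schematic notation, not quantities taken from the text:

```latex
M_\nu = \begin{pmatrix} 0 & m_D \\ m_D & M_R \end{pmatrix},
\qquad
m_{\text{light}} \simeq \frac{m_D^2}{M_R}, \qquad
m_{\text{heavy}} \simeq M_R \qquad (M_R \gg m_D).
```

With m_D near the electroweak scale (~100 GeV) and M_R near the GUT scale (~10^15 GeV), the light eigenvalue comes out around 10^-2 eV, the right order of magnitude for the observed mass splittings.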
One example of the last type of model is provided by certain versions of supersymmetric extensions of the standard model of fundamental interactions in which R-parity is not a symmetry. There, the exchange of supersymmetric particles such as squarks and sleptons can break lepton number and lead to neutrino masses. These interactions are normally excluded from theories because they belong to a class of interactions that lead to unacceptably rapid proton decay if all are included. These models have little predictive power and are not able to provide a cold dark matter candidate. Oscillations in the early universe During the early universe, when particle concentrations and temperatures were high, neutrino oscillations could have behaved differently. Depending on neutrino mixing-angle parameters and masses, a broad spectrum of behavior may arise, including vacuum-like neutrino oscillations, smooth evolution, or self-maintained coherence. The physics of this system is non-trivial and involves neutrino oscillations in a dense neutrino gas.
Physical sciences
Particle physics: General
Physics
https://en.wikipedia.org/wiki/Masai%20giraffe
Masai giraffe
The Masai giraffe (Giraffa tippelskirchi), also spelled Maasai giraffe, and sometimes called the Kilimanjaro giraffe, is a species or subspecies of giraffe. It is native to East Africa. The Masai giraffe can be found in central and southern Kenya and in Tanzania. It has distinctive jagged, irregular leaf-like blotches that extend from the hooves to its head. The Masai giraffe is currently the national animal of Tanzania. Taxonomy The IUCN currently recognizes only one species of giraffe with nine subspecies. The Masai giraffe was described and given the binomial name Giraffa tippelskirchi by German zoologist Paul Matschie in 1898, but current taxonomy refers to the Masai giraffe as Giraffa camelopardalis tippelskirchi. The Masai giraffe was named in honor of Herr von Tippelskirch, who was a member of a German scientific expedition in German East Africa to what is now northern Tanzania in 1896. Tippelskirch brought back the skin of a female Masai giraffe from near Lake Eyasi, which was later identified as Giraffa tippelskirchi. Alternative taxonomic hypotheses have proposed that the Masai giraffe may be its own species. Description The Masai giraffe is distinguished by jagged and irregular spots on its body. Its geographic range includes various parts of eastern Africa. It is the largest-bodied giraffe species, making it the tallest land animal on Earth. Bulls are generally larger and heavier than cows, weighing close to 1,300 kilograms (2,900 pounds) and growing up to 5.5 meters (18 feet) in height. In the wild, individuals can live to be around 30 years of age, and in most cases they live longer in captivity. The Masai giraffe's most famous feature, its neck, contains seven vertebrae and makes up roughly one third of its body height. Its long and muscular tongue, which can be up to 50 centimeters (20 inches) in length, is prehensile and allows it to grab leaves from tall trees that are inaccessible to other animals. 
The tongue's darker pigment is believed to function as a natural sunscreen and prevent sunburn. On top of the head are two bony structures called ossicones, which are covered by thick skin and have dark hair on the tips. These can be used during fights to club an opponent. Bulls usually have an extra ossicone present between the eyes. When galloping, the Masai giraffe has been recorded reaching speeds of almost 64 kilometers per hour (40 miles per hour). Conservation Masai giraffes are considered endangered by the IUCN, and the Masai giraffe population declined 52% in recent decades due to poaching and habitat loss. The population amounts to 32,550 in the wild. Demographic studies of wild giraffes living inside and outside protected areas suggest low adult survival outside protected areas due to poaching and low calf survival inside protected areas due to predation; these are the primary influences on population growth rates. Survival of giraffe calves is influenced by the season of birth and the seasonal local presence or absence of long-distance migratory herds of wildebeest and zebra. Metapopulation analysis indicated that protected areas were important for keeping giraffes in the larger landscape. In situ conservation of Masai giraffes is being done by several government agencies, including the Kenya Wildlife Service, Tanzania National Parks, and the Zambia Wildlife Authority, and by non-governmental organizations including the PAMS Foundation and the Wild Nature Institute. Community-based wildlife conservation areas have also been shown to be effective at protecting giraffes. Over 100 Masai giraffes live under human care in AZA-accredited zoos in the United States. At several zoos, Masai giraffe cows have become pregnant and successfully given birth. Masai giraffes can suffer from giraffe skin disease, a disorder of unknown etiology that causes lesions on the forelimbs. This disorder is being further investigated to better understand mortality in this species.
Biology and health sciences
Giraffidae
Animals
8485448
https://en.wikipedia.org/wiki/Integrated%20modular%20avionics
Integrated modular avionics
Integrated modular avionics (IMA) are real-time computer network airborne systems. The network consists of a number of computing modules capable of supporting numerous applications of differing criticality levels. In contrast to traditional federated architectures, the IMA concept proposes an integrated architecture with application software portable across an assembly of common hardware modules. An IMA architecture imposes multiple requirements on the underlying operating system. History It is believed that the IMA concept originated with the avionics design of fourth-generation jet fighters. It has been in use in fighters such as the F-22, the F-35, and the Dassault Rafale since the early 1990s. Standardization efforts were ongoing at this time (see ASAAC or STANAG 4626), but no final documents were issued then. Architecture IMA modularity simplifies the development process of avionics software: As the structure of the module network is unified, a common API must be used to access the hardware and network resources, which simplifies hardware and software integration. The IMA concept also allows application developers to focus on the application layer, reducing the risk of faults in the lower-level software layers. As modules often share an extensive part of their hardware and lower-level software architecture, maintenance of the modules is easier than with earlier bespoke architectures. Applications can be reconfigured on spare modules if the primary module that supports them is detected faulty during operations, increasing the overall availability of the avionics functions. Communication between the modules can use an internal high-speed computer bus, or can share an external network, such as ARINC 429 or ARINC 664 (part 7).
However, much complexity is added to such systems, which therefore require novel design and verification approaches, since applications with different criticality levels share hardware and software resources such as CPU and network schedules, memory, and inputs and outputs. Partitioning is generally used to segregate applications of mixed criticality and thus ease the verification process. ARINC 650 and ARINC 651 provide general-purpose hardware and software standards used in an IMA architecture. In addition, parts of the API involved in an IMA network have been standardized, such as ARINC 653, which covers the software partitioning constraints imposed on the underlying real-time operating system (RTOS) and the associated API. Certification considerations RTCA DO-178C and RTCA DO-254 form the basis for flight certification today, while DO-297 gives specific guidance for integrated modular avionics. ARINC 653 contributes by providing a framework that enables each software building block (called a partition) of the overall integrated modular avionics to be tested, validated, and qualified independently (up to a certain measure) by its supplier. The FAA CAST-32A position paper provides information (not official guidance) for certification of multicore systems, but does not specifically address IMA with multicore.
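The time-partitioning idea behind ARINC 653 — each partition receives a fixed, pre-planned CPU window inside a repeating major frame, so applications of different criticality cannot starve one another — can be illustrated with a small sketch. This is not the ARINC 653 API; the names (Partition, MajorFrame) and the partition set are hypothetical, chosen only to show the scheduling principle.

```python
# Illustrative sketch of ARINC 653-style time partitioning.
# Partition and MajorFrame are hypothetical names, not the ARINC 653 API.

from dataclasses import dataclass

@dataclass
class Partition:
    name: str        # e.g. a high-criticality flight-control application
    window_ms: int   # fixed CPU time slice inside each major frame

class MajorFrame:
    """A repeating schedule in which each partition owns a fixed window,
    so a fault in one application cannot consume another's CPU time."""
    def __init__(self, partitions):
        self.partitions = partitions
        self.length_ms = sum(p.window_ms for p in partitions)

    def window_at(self, t_ms):
        """Return the partition that owns the CPU at time t_ms."""
        t = t_ms % self.length_ms
        for p in self.partitions:
            if t < p.window_ms:
                return p
            t -= p.window_ms

frame = MajorFrame([Partition("flight-control", 20),
                    Partition("navigation", 15),
                    Partition("cabin-services", 15)])
print(frame.length_ms)            # 50
print(frame.window_at(25).name)   # navigation
```

Because the schedule is fixed offline, each supplier can verify its partition against its guaranteed window in isolation, which is what enables the independent qualification described above.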
A research paper by VanderLeest and Matthews addresses implementation of IMA principles for multicore. Examples of IMA architecture Examples of aircraft avionics that use an IMA architecture: Airbus A220: Rockwell Collins Pro Line Fusion Airbus A350 Airbus A380 Airbus A400M ATR 42 ATR 72 BAE Hawk (Hawk 128 AJT) Boeing 777: includes AIMS avionics from Honeywell Aerospace Boeing 777X: will include the Common Core System from GE Aviation Boeing 787: the GE Aviation Systems (formerly Smiths Aerospace) IMA architecture is called the Common Core System Bombardier Global 5000 / 6000: Rockwell Collins Pro Line Fusion COMAC C919 Dassault Falcon 900, Falcon 2000, and Falcon 7X: Honeywell's IMA architecture is called MAU (Modular Avionics Units), and the overall platform is called EASy F-22 Raptor Gulfstream G280: Rockwell Collins Pro Line Fusion Gulfstream G400, G500, G600, G700, G800: Data Concentration Network (DCN) Rafale: the Thales IMA architecture is called MDPU (Modular Data Processing Unit) Sukhoi Superjet 100
Technology
Aircraft components
null
4967925
https://en.wikipedia.org/wiki/Mugwort
Mugwort
Mugwort is a common name for several species of aromatic flowering plants in the genus Artemisia. In Europe, mugwort most often refers to the species Artemisia vulgaris, or common mugwort. In East Asia the species Artemisia argyi is often called "Chinese mugwort" in the context of traditional Chinese medicine, or Ngai Chou in Cantonese; in Mandarin, distinct names are used for the whole plant and for the leaf, the latter being used specifically in the practice of moxibustion. Artemisia princeps is a mugwort known in Korea as ssuk and in Japan as yomogi. While other species are sometimes referred to by more specific common names, they may be called simply "mugwort" in many contexts. Etymology The Anglo-Saxon Nine Herbs Charm mentions mucgwyrt. A folk etymology, based on coincidental sounds, derives mugwort from the word "mug"; more certainly, the plant has been used in flavoring drinks at least since the early Iron Age. Other sources say mugwort is derived from an Old Norse word meaning "marsh" and German wort (originally meaning "root"), which refers to its use since ancient times to repel insects, especially moths. The Old English word for mugwort is mucgwyrt, where mucg- could be a variation of the Old English word for "midge": mycg. Wort comes from the Old English wyrt (root/herb/plant), which is related to the Old High German wurz (root) and the Old Norse urt (plant). Species Species in the genus Artemisia called mugwort include: Artemisia absinthium L. — wormwood, traditionally used in the production of absinthe Artemisia argyi H.Lév. & Vaniot — Chinese mugwort, used in traditional Chinese medicine Artemisia douglasiana Besser ex Besser — Douglas mugwort or California mugwort, native to western North America Artemisia glacialis L. — alpine mugwort Artemisia granatensis Boiss. — Sierra Nevadan chamomile, endemic to the Sierra Nevada mountain range in the Iberian Peninsula Artemisia indica Willd. — Oriental mugwort Artemisia japonica Thunb. — Japanese mugwort Artemisia lactiflora Wall. ex DC. — white mugwort Artemisia norvegica Fr.
— Norwegian mugwort Artemisia princeps Pamp. — Korean mugwort (ssuk), Japanese mugwort (yomogi), used as a culinary herb and in traditional Chinese medicine Artemisia stelleriana Bess. — hoary mugwort Artemisia verlotiorum Lamotte — Chinese mugwort Artemisia vulgaris L. — common mugwort, used as a culinary herb and medicinally throughout the world Uses Mugwort has seen continuous use in many cultures throughout the world as a medicinal, spiritual, and culinary ingredient since at least the Iron Age. In contemporary culture mugwort is commonly found in foods and drinks, and it remains a common ingredient in Chinese, Japanese, and Korean traditional medicine, where the leaves are used directly as a food, made into oil extracts or tinctures, or burned in what is called moxibustion. The mugwort plant has been used as an anthelminthic, so it is sometimes confused with wormwood (Artemisia absinthium). The downy hairs on the underside of the leaves can be scraped off and used as effective tinder. Mugwort has also been used therapeutically to relieve sleeplessness. Food Aromatic and slightly bitter leaves, as well as young spring shoots, can be eaten raw or cooked. The leaves and buds, best picked shortly before mugwort flowers in July to September, can be used as a bitter flavoring agent to season fat, meat and fish. Mugwort was used to flavor beer before the introduction of hops. Essential oil The composition of mugwort essential oil can vary depending on the species of plant selected, its habitat, the part of the plant extracted, and the season of its harvest. Its main components can include camphor, cineole, α- and β-thujone, artemisia ketone (CAS: 546-49-6), borneol and bornyl acetate, as well as a wide variety of other phenols, terpenes, and aliphatic compounds. The presence and concentration of thujone vary largely by species, as well as by the climatic and soil conditions where the plant is grown.
Insecticide All parts of the plant contain essential oils with all-purpose insecticidal properties (especially in the killing of insect larvae). These are best used in a weak infusion, but use on garden plants is not recommended, as the infusion also reduces plant growth. Cultural Medieval Europe In the European Middle Ages, mugwort was used as a magical protective herb. Mugwort was used to repel insects – especially moths – from gardens. Mugwort has also been used from ancient times as a remedy against fatigue and to protect travelers against evil spirits and wild animals. Roman soldiers put mugwort in their sandals to protect their feet against fatigue and cramps. Mugwort is one of the nine herbs invoked in the pagan Anglo-Saxon Nine Herbs Charm, recorded in the 10th century in the Lacnunga. Grieve's Modern Herbal (1931) states that "in the Middle Ages, the plant was known as Cingulum Sancti Johannis, it being believed that John the Baptist wore a girdle of it in the wilderness...a crown made from its sprays was worn on St. John's Eve to gain security from evil possession, and in Holland and Germany one of its names is 'St. John's plant', because of the belief that – if gathered on St. John's Eve – it gave protection against diseases and misfortunes." In the Isle of Man, mugwort is known as bollan bane, and is still worn on the lapel at the Tynwald Day celebrations, which also have strong associations with St. John. China There are several references to the Chinese using mugwort in cuisine. The famous Chinese poet Su Shi mentioned it in one of his poems in the 11th century, and there are even older poems and songs that can be traced back to 3 BC. It went by several names in Mandarin. Mugwort can be prepared as a cold dish or can be stir-fried with fresh or smoked meat. The Hakka Taiwanese also use it to make chhú-khak-ké, doughy sweet dumplings. Mugwort is also used as a flavoring and colorant for a seasonal rice dish.
In traditional Chinese medicine, mugwort is used in a pulverized and aged form – called moxa in English (from the Japanese mogusa) – to perform moxibustion, that is, to burn on specific acupuncture points on the patient's body to achieve therapeutic effects. There is a belief that moxibustion of mugwort is effective at increasing the cephalic positioning of fetuses that were in a breech position before the intervention. A Cochrane review in 2012 found that moxibustion may be beneficial in reducing the need for external cephalic version (ECV), but stressed a need for well-designed randomised controlled trials to evaluate this usage. Germany In Germany, where it is known as Beifuß, it is mainly used to season goose, especially the roast goose traditionally eaten for Christmas. India The plant, called nāgadamanī in Sanskrit, is used in Ayurveda for cardiac complaints as well as feelings of unease, unwellness, and general malaise. Japan Mugwort – yomogi – is used in a number of Japanese dishes, including yōkan, a dessert, and kusa mochi, also known as yomogi mochi. Mugwort rice cakes, or kusa mochi, are used for Japanese sweets called daifuku (literally 'great luck'). To make these, a small amount of mochi is stuffed with, or wrapped around, a filling of fruit or sweetened adzuki (red bean) paste. Traditional daifuku can be pale green, white or pale pink and are covered in a fine layer of potato starch to prevent sticking. Mugwort is a vital ingredient of kusa mochi (rice cake with mugwort) and hishi mochi (lozenge rice cake), which is served at the Doll Festival in March. In addition, the fuzz on the underside of the mugwort leaves is gathered and used in moxibustion. In some regions of Japan, there is an ancient custom of hanging yomogi and iris leaves together outside homes in order to keep evil spirits away; it is said that evil spirits dislike their smell. The juice is said to be effective at stopping bleeding, lowering fevers and purging the stomach of impurities. It can also be boiled and taken to relieve colds and coughs.
The famous Japanese poet Matsuo Bashō rubbed moxa into his knees to strengthen them before embarking on his Oku no Hosomichi ("Journey to the North"). Korea In both North and South Korea, mugwort – ssuk – is used in soups and salads. A traditional soup containing mugwort and clams is made in spring from the young plants just before they bloom. In another dish, mugwort is mixed with rice flour, sugar, salt and water and then steamed. It is a common ingredient in rice cakes, teas, soups, and pancakes. North America Indigenous peoples of North America used mugwort for a number of medicinal purposes. Strong, bitter-tasting pasture sagewort tea was taken to treat colds and fevers. Mugwort was used in washes and salves to treat bruises, itching, sores, poison ivy, eczema, and underarm or foot odour. The leaves were dried, crushed, and used as a snuff to relieve congestion, nosebleeds, and headaches. Frequently, to improve taste and absorption, mugwort tea is made by crushing the leaves and steeping them with other ingredients. Tarragon plants were boiled to make washes and poultices for treating swollen feet and legs and snow blindness. Some tribes called western mugwort 'women's sage' because the leaf tea was taken to correct menstrual irregularity. It was taken to relieve indigestion, coughs, and chest infections. Western mugwort smoke was used to disinfect contaminated areas and revive patients from comas. Northern wormwood tea was taken to relieve difficulties with urination or bowel movements, to ease delivery of babies, and to cause abortions. Side effects Allergies Mugwort pollen is one of the main sources of hay fever and allergic asthma in northern Europe, North America, and parts of Asia. Mugwort pollen generally travels less than 2,000 meters. The highest concentration of mugwort pollen is generally found between 9 and 11 am. The Finnish allergy association recommends tearing as a method of eradicating mugwort.
Tearing mugwort is known to lessen the effect of the allergy, since the pollen flies only a short distance. Cutting the flowers before they bloom prevents the release of allergens and the reproduction of the plant. Toxicity Mugwort often contains the neurotoxic compound thujone, though this varies greatly by species and the environmental conditions where the plant is grown. Toxicity to humans is believed to be weak, though some studies have linked high concentrations of thujone to seizures and an abortifacient effect. The Botanical Safety Handbook suggests that mugwort not be used during pregnancy unless under the supervision of a medical expert. In rare cases, minor allergic skin reactions have been recorded in relation to moxibustion, the burning of dried mugwort.
Biology and health sciences
Herbs and spices
Plants
4968799
https://en.wikipedia.org/wiki/Sky%20brightness
Sky brightness
Sky brightness refers to the visual perception of the sky and how it scatters and diffuses light. The fact that the sky is not completely dark at night is easily visible: if light sources (e.g. the Moon and light pollution) were removed from the night sky, only direct starlight would be visible. The sky's brightness varies greatly over the day, and the primary cause differs as well. During daytime, when the Sun is above the horizon, the direct scattering of sunlight is the overwhelmingly dominant source of light. During twilight (the period between sunset and the full darkness of night, or between full darkness and sunrise), the situation is more complicated, and a further differentiation is required. Twilight (both dusk and dawn) is divided into three 6° segments that mark the Sun's position below the horizon. At civil twilight, the center of the Sun's disk appears to be between 1/4° and 6° below the horizon. At nautical twilight, the Sun's altitude is between –6° and –12°. At astronomical twilight, the Sun is between –12° and –18°. When the Sun drops more than 18° below the horizon, the sky generally attains its maximum darkness. Sources of the night sky's intrinsic brightness include airglow, indirect scattering of sunlight, scattering of starlight, and light pollution. Airglow When the physicist Anders Ångström examined the spectrum of the aurora borealis, he discovered that even on nights when the aurora was absent, its characteristic green line was still present. It was not until the 1920s that scientists began to identify and understand the emission lines in aurorae and in the sky itself, and what was causing them. The green line Ångström observed is in fact an emission line with a wavelength of 557.7 nm, caused by the recombination of oxygen in the upper atmosphere. Airglow is the collective name for the various processes in the upper atmosphere that result in the emission of photons, with the driving force being primarily UV radiation from the Sun.
Several emission lines are dominant: a green line from oxygen at 557.7 nm, a yellow doublet from sodium at 589.0 and 589.6 nm, and red lines from oxygen at 630.0 and 636.4 nm. The sodium emissions come from a thin sodium layer approximately 10 km thick at an altitude of 90–100 km, above the mesopause and in the D-layer of the ionosphere. The red oxygen lines originate at altitudes of about 300 km, in the F-layer. The green oxygen emissions are more spatially distributed. How sodium gets to mesospheric heights is not yet well understood, but it is believed to be a combination of upward transport of sea salt and meteoritic dust. In daytime, sodium and red oxygen emissions are dominant and roughly 1,000 times as bright as nighttime emissions, because in daytime the upper atmosphere is fully exposed to solar UV radiation. The effect, however, is not noticeable to the human eye, since the glare of directly scattered sunlight outshines and obscures it. Indirect scattering of sunlight Indirectly scattered sunlight comes from two directions: from the atmosphere itself, and from outer space. In the first case, the Sun has just set but still illuminates the upper atmosphere directly. Because the amount of scattered sunlight is proportional to the number of scatterers (i.e. air molecules) in the line of sight, the intensity of this light decreases rapidly as the Sun drops further below the horizon and illuminates less of the atmosphere. When the Sun's altitude is below –6°, 99% of the atmosphere at the zenith is in the Earth's shadow and second-order scattering takes over. At the horizon, however, 35% of the atmosphere along the line of sight is still directly illuminated, and continues to be until the Sun reaches –12°. From –12° to –18° only the uppermost parts of the atmosphere along the horizon, directly above the spot where the Sun is, are still illuminated. After that, all direct illumination ceases and astronomical darkness sets in.
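The altitude thresholds above (0°, −6°, −12°, −18°) map directly onto the named twilight phases, so classification is a simple chain of comparisons. A minimal sketch (the function name is illustrative, not from any astronomy library):

```python
def twilight_phase(sun_altitude_deg):
    """Classify the sky state from the Sun's altitude in degrees
    (positive above the horizon, negative below)."""
    if sun_altitude_deg >= 0:
        return "day"
    if sun_altitude_deg >= -6:
        return "civil twilight"
    if sun_altitude_deg >= -12:
        return "nautical twilight"
    if sun_altitude_deg >= -18:
        return "astronomical twilight"
    return "night"          # maximum darkness below -18 degrees

print(twilight_phase(-4))   # civil twilight
print(twilight_phase(-20))  # night
```

Note that real almanac definitions use the center of the Sun's disk and correct for refraction; this sketch ignores those refinements.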
A second source of scattered sunlight is the zodiacal light, which is caused by reflection and scattering of sunlight on interplanetary dust. Zodiacal light varies quite a lot in intensity depending on the position of the Earth, the location of the observer, the time of year, and the composition and distribution of the reflecting dust. Scattered light from extraterrestrial sources Sunlight is not the only light scattered by the molecules in the air. Starlight and the diffuse light of the Milky Way are also scattered, and it is found that stars down to V magnitude 16 contribute to the diffuse scattered starlight. Other sources such as galaxies and nebulae do not contribute significantly. The total brightness of all the stars was first measured by Burns in 1899, with the calculated result that the total brightness reaching Earth was equivalent to that of 2,000 first-magnitude stars; subsequent measurements were made by others. Light pollution Light pollution is an ever-increasing source of sky brightness in urbanized areas. In densely populated areas that do not have stringent light pollution control, the entire night sky is regularly 5 to 50 times brighter than it would be if all lights were switched off, and very often the influence of light pollution is far greater than that of natural sources (including moonlight). With urbanization and light pollution, one third of humanity, and the majority of those in developed countries, cannot see the Milky Way. Twilight When the Sun has just set, the brightness of the sky decreases rapidly, thereby enabling the viewing of the airglow, which originates from such high altitudes that they remain fully sunlit until the Sun drops more than about 12° below the horizon. During this time, yellow emissions from the sodium layer and red emissions from the 630 nm oxygen lines are dominant, and contribute to the purplish color sometimes seen during civil and nautical twilight.
After the Sun has also set for these altitudes at the end of nautical twilight, the intensity of light emanating from the lines mentioned earlier decreases, until the green oxygen line remains as the dominant source. When astronomical darkness has set in, the green 557.7 nm oxygen line is dominant, and atmospheric scattering of starlight occurs. Differential refraction causes different parts of the spectrum to dominate, producing a golden hour and a blue hour. Relative contributions The following table gives the relative and absolute contributions to night sky brightness at the zenith on a perfectly dark night at middle latitudes, without moonlight and in the absence of any light pollution. (The S10 unit is defined as the surface brightness of a star whose V-magnitude is 10 and whose light is smeared over one square degree, or 27.78 mag arcsec−2.) The total sky brightness at the zenith is therefore ~220 S10 or 21.9 mag/arcsec² in the V-band. Note that the contributions from airglow and zodiacal light vary with the time of year, the solar cycle, and the observer's latitude, roughly as follows: where S is the solar 10.7 cm flux in MJy, which varies sinusoidally between 0.8 and 2.0 with the 11-year solar cycle, yielding an upper contribution of ~270 S10 at solar maximum. The intensity of zodiacal light depends on the ecliptic latitude and longitude of the point in the sky being observed relative to that of the Sun. At ecliptic longitudes differing from the Sun's by more than 90 degrees, the relation holds for ecliptic latitudes β smaller than 60°; when β is larger than 60 degrees the contribution is that given in the table. Along the ecliptic plane there are enhancements in the zodiacal light: it is much brighter near the Sun, with a secondary maximum opposite the Sun at 180 degrees longitude (the gegenschein). In extreme cases natural zenith sky brightness can be as high as ~21.0 mag/arcsec², roughly twice as bright as nominal conditions.
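The S10 figures above convert to surface brightness in mag/arcsec² by simple magnitude arithmetic: one S10 equals 27.78 mag/arcsec² (a V = 10 star smeared over one square degree, i.e. 3600² arcsec²), and each factor of 10 in flux corresponds to 2.5 magnitudes. A short check, reproducing the ~220 S10 ≈ 21.9 mag/arcsec² total quoted in the text (the function name is illustrative):

```python
import math

def s10_to_mag_per_arcsec2(s10):
    """Convert a sky brightness in S10 units to V-band mag/arcsec^2.
    1 S10 = 27.78 mag/arcsec^2; brighter sky (larger S10) means a
    numerically smaller magnitude, hence the minus sign."""
    return 27.78 - 2.5 * math.log10(s10)

print(round(s10_to_mag_per_arcsec2(1), 2))    # 27.78
print(round(s10_to_mag_per_arcsec2(220), 1))  # 21.9
```

The same conversion applied to the ~270 S10 airglow contribution at solar maximum gives about 21.7 mag/arcsec² from airglow alone.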
Physical sciences
Basics
Astronomy
4971262
https://en.wikipedia.org/wiki/Computer%20port%20%28hardware%29
Computer port (hardware)
A computer port is a hardware interface on a computer into which an electrical connector can be plugged to link the device to external devices, such as another computer, a peripheral device or network equipment. Electronically, the several conductors where the port and cable contacts connect provide a method to transfer data signals between devices. Bent pins are easier to replace on a cable than on a connector attached to a computer, so it was common to use female connectors for the fixed side of an interface. Computer ports in common use cover a wide variety of shapes such as round (PS/2, etc.), rectangular (FireWire, etc.), square (telephone plug), trapezoidal (D-sub — the old printer port was a DB-25), etc. There is some standardization of physical properties and function. For instance, most computers have a keyboard port (currently a Universal Serial Bus (USB) outlet), into which the keyboard is connected. Physically identical connectors may be used for widely different standards, especially on older personal computer systems, or systems not generally designed according to the current Microsoft Windows compatibility guides. For example, a 9-pin D-subminiature connector on the original IBM PC could have been used for monochrome video, color analog video (in two incompatible standards), a joystick interface, or a MIDI musical instrument digital control interface. The original IBM PC also had two identical 5-pin DIN connectors, one used for the keyboard, the second for a cassette recorder interface; the two were not interchangeable. The smaller mini-DIN connector has been variously used for the keyboard and two different kinds of mouse; older Macintosh family computers used the mini-DIN for a serial port or for a keyboard connector with different standards than the IBM-descended systems.
Electrical signal transfer Electronically, hardware ports can almost always be divided into two groups based on the signal transfer: Analog ports Digital ports: Parallel ports send multiple bits at the same time over several sets of wires. Serial ports send and receive one bit at a time via a single wire pair (Ground and +/-). After ports are connected, they typically require handshaking, where transfer type, transfer rate, and other necessary information is shared before data is sent. Hot-swappable ports can be connected while equipment is running. Almost all ports on personal computers are hot-swappable. Plug-and-play ports are designed so that the connected devices automatically start handshaking as soon as the hot-swapping is done. USB ports and FireWire ports are plug-and-play. Auto-detect or auto-detection ports are usually plug-and-play, but they offer another type of convenience. An auto-detect port may automatically determine what kind of device has been attached, but it also determines what purpose the port itself should have. For example, some sound cards allow plugging in several different types of audio speakers; then a dialogue box pops up on the computer screen asking whether the speaker is left, right, front, or rear for surround sound installations. The user's response determines the purpose of the port, which is physically a 1/8" tip-ring-sleeve mini jack. Some auto-detect ports can even switch between input and output based on context. As of 2006, manufacturers have nearly standardized colors associated with ports on personal computers, although there are no guarantees. The following is a short list: Orange, purple, or grey: Keyboard PS/2 Green: Mouse PS/2 Blue or magenta: Parallel printer DB-25 Amber: Serial DB-25 or DB-9 Pastel pink: Microphone 1/8" stereo (TRS) minijack Pastel green: Speaker 1/8" stereo (TRS) minijack Additionally, USB ports are color-coded according to the specification and data transfer speed, e.g. 
USB 1.x and 2.x ports are usually white or black, and USB 3.0 ones are blue. SuperSpeed+ connectors are teal in color. FireWire ports used with video equipment (among other devices) can be either 4-pin or 6-pin. The two extra conductors in the 6-pin connection carry electrical power. This is why a self-powered device such as a camcorder often connects with a cable that has 4 pins on the camera side and 6 pins on the computer side, the two power conductors simply being ignored. This is also why laptop computers usually have only 4-pin FireWire ports, as they cannot provide enough power to meet the requirements of devices needing the power provided by 6-pin connections. Optical (light) fiber, microwave, and other technologies (i.e., quantum) have different kinds of connections, as metal wires are not effective for signal transfers with these technologies. Optical connections are usually a polished glass or plastic interface, possibly with an oil that lessens refraction between the two interface surfaces. Microwaves are conducted through a pipe, which can be seen on a large scale by examining microwave towers with "funnels" on them leading to pipes. Hardware port trunking (HPT) is a technology that allows multiple hardware ports to be combined into a single group, effectively creating a single connection with a higher bandwidth, sometimes referred to as a double-barrel approach. This technology also provides a higher degree of fault tolerance, because a failure on one port may mean just a slow-down rather than a dropout. By contrast, in software port trunking (SPT), two agents (websites, channels, etc.) are bonded into one with the same effectiveness; i.e., ISDN B1 (64K) plus B2 (64K) equals a data throughput of 128K. The USB-C standard, published in 2014, supersedes previous connectors and is reversible (although not electrically), meaning it can be plugged in both ways. Reversible plugs have a symmetric pinout. Other reversible connectors include Apple's Lightning.
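The trunking arithmetic above — aggregate capacity is the sum of the member ports, and a single failure degrades rather than drops the link — can be sketched in a few lines. This is an illustration of the principle only; the function name and channel table are hypothetical, with the ISDN B1/B2 figures taken from the text.

```python
def trunk_bandwidth(port_speeds_kbps, failed=()):
    """Aggregate bandwidth of a port trunk: the group behaves as one
    link whose capacity is the sum of the surviving member ports, so
    one port failing means a slow-down rather than a dropout."""
    return sum(speed for port, speed in port_speeds_kbps.items()
               if port not in failed)

# ISDN-style bonding from the text: B1 (64K) plus B2 (64K) = 128K.
channels = {"B1": 64, "B2": 64}
print(trunk_bandwidth(channels))                 # 128
print(trunk_bandwidth(channels, failed={"B1"}))  # 64
```

The same sum applies whether the bonding is done in hardware (HPT) or in software (SPT); the difference lies in where the aggregation is implemented, not in the resulting capacity.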
Types of ports Digital Visual Interface DisplayPort eSATA PS/2 Serial SCSI USB
Technology
User interface
null
22591478
https://en.wikipedia.org/wiki/Ring%20spinning
Ring spinning
Ring spinning is a spindle-based method of spinning fibres, such as cotton, flax or wool, to make a yarn. The ring frame developed from the throstle frame, which in its turn was a descendant of Arkwright's water frame. Ring spinning is a continuous process, unlike mule spinning, which uses an intermittent action. In ring spinning, the roving is first attenuated by drawing rollers, then spun and wound around a rotating spindle which in its turn is contained within an independently rotating ring flyer. Traditionally ring frames could only be used for the coarser counts, but they could be attended by semi-skilled labour. History Early machines The Saxony wheel was a double-band treadle spinning wheel. The spindle rotated faster than the traveller in a ratio of 8:6, and drawing was done by the spinner's fingers. The water frame was developed and patented by Arkwright in the 1770s. The roving was attenuated (stretched) by draughting rollers and twisted by winding it onto a spindle. It was a heavy, large-scale machine that needed to be driven by power, which in the late 18th century meant by a water wheel. Cotton mills were designed for the purpose by Arkwright, Jedediah Strutt and others along the River Derwent in Derbyshire. Water frames could only spin weft. The throstle frame was a descendant of the water frame; it used the same principles, was better engineered and was driven by steam. In 1828 the Danforth throstle frame was invented in the United States. Its heavy flyer caused the spindle to vibrate, the yarn snarled every time the frame was stopped, and it was not a success. The ring frame is credited to John Thorp in Rhode Island in 1828/9 and was developed by Mr. Jencks of Pawtucket, Rhode Island, whom Richard Marsden names as the inventor. Developments in the United States Machine shops experimented with ring frames and components in the 1830s.
The success of the ring frame, however, was dependent on the market it served, and it was not until industry leaders like the Whitin Machine Works in the 1840s and the Lowell Machine Shop in the 1850s began to manufacture ring frames that the technology started to take hold. At the time of the American Civil War, the American industry boasted 1,091 mills with 5,200,000 spindles processing 800,000 bales of cotton. The largest mill, the Naumkeag Steam Cotton Co. in Salem, Massachusetts, had 65,584 spindles. The average mill housed only 5,000 to 12,000 spindles, with mule spindles outnumbering ring spindles two to one. After the war, mill building started in the south, where it was seen as a way of providing employment. Almost exclusively these mills used ring technology to produce coarse counts, and the New England mills moved into fine counts. Jacob Sawyer vastly improved the spindle for the ring frame in 1871, taking the speed from 5,000 rpm to 7,500 rpm and reducing the power needed: formerly 100 spindles would need 1 hp, but now 125 could be driven. This also led to the production of fine yarns. During the next ten years, the Draper Corporation protected its patent through the courts. One infringer was Jenks, who was marketing a spindle known after its designer, Rabbeth. When they lost the case, Messrs. Fales and Jenks revealed a new, patent-free spindle also designed by Rabbeth, and also named the Rabbeth spindle. The Rabbeth spindle was self-lubricating and capable of running without vibration at over 7,500 rpm. The Draper Co. bought the patent and expanded the Sawyer Spindle Co. to manufacture it. They licensed it to the Fales & Jenks Machine Co., the Hopedale Machine Co., and later, other machine builders. From 1883 to 1890 this was the standard spindle, and William Draper spent much of his time in court defending this patent.
Adoption in Europe

The new method was compared with the self-acting spinning mule, which had been developed by Richard Roberts using the more advanced engineering techniques available in Manchester. The ring frame was reliable for coarser counts, while Lancashire spun fine counts as well. The ring frame was heavier, requiring structural alteration to the mills, and needed more power. These were not problems in the antebellum cotton industry of New England. It also solved New England's difficulty in finding skilled spinners; in Lancashire, skilled spinners were plentiful. In the main, the requirements on the two continents were different, and the ring frame was not the method of choice for Europe at that moment. Mr Samuel Brooks of Brooks & Doxey, Manchester, was convinced of the viability of the method. After a fact-finding tour of the States by his agent Blakey, he started to work on improving the frame. It was still too primitive to compete with the highly developed mule frames, let alone supersede them. He first started on improving the doubling frame, constructing the tooling needed to improve the precision of manufacture. This was profitable, and machines offering 180,000 spindles were purchased by a sewing thread manufacturer. Brooks and other manufacturers now worked on improving the spinning frame. The principal cause for concern was the design of the Booth-Sawyer spindle: the bobbin did not fit tightly on the spindle and vibrated wildly at higher speeds. Howard & Bullough of Accrington used the Rabbeth spindle, which solved these problems. Another problem was ballooning, where the thread built up in an uneven manner. This was addressed by Furniss and Young of Mellor Bottom Mill, Mellor, by attaching an open ring to the traverse or ring rail. This device controlled the thread, and consequently a lighter traveller could be made which could operate at higher speeds.
Another problem was the accumulation of fluff on the traveller, which broke the thread; this was eliminated by a device called a traveller cleaner. A major time constraint was doffing, or changing the spindles. Three hundred or more spindles had to be removed and replaced, and the machine had to be stopped while the doffers, who were often very young boys, did this task; the ring frame was idle until it was completed. Harold Partington (1906–1994) of Chadderton, England, patented a 'Means for Doffing Ring Frames' in September 1953 (US Patent 2,653,440). The machine removed full bobbins from the ring frame spindles and placed empty bobbins onto the spindles in their place, eight spindles at a time. It was traversable along the front of the ring frame, step by step through successive operations, and thereby reduced the period of stoppage of the ring frame as well as the labour required for removing all the filled bobbins on a frame and replacing them with empty ones. The Partington autodoffer was developed with assistance from Platt Brothers (Oldham) and worked perfectly in ideal conditions: a flat horizontal floor and a ring frame parallel to the floor and standing vertically. Sadly, these conditions were unobtainable in most Lancashire cotton mills at the time, and so the autodoffer did not go into production. The Partington autodoffer was unique, and the only one to work properly as an add-on to a ring frame. A more modern mechanical doffer system, fitted as an integral part of the ring frame, reduced the doffing time to 30–35 seconds.

Rings and mules

The ring frame was extensively used in the United States, where coarser counts were manufactured. Many of the frame manufacturers were US affiliates of the Lancashire firms, such as Howard & Bullough and Tweedales and Smalley. They were constantly trying to improve the speed and quality of their product.
The US market was relatively small; the total number of spindles in the entire United States was barely more than the number of spindles in one Lancashire town, Oldham. When production in Lancashire peaked in 1926, Oldham had 17.669 million spindles and the UK had 58.206 million. Technologically, mules were more versatile: they were more easily changed to spin the larger variety of qualities of cotton then found in Lancashire. While Lancashire concentrated on "Fines" for export, it also spun a wider range, including the very coarse wastes. The existence of the Liverpool cotton exchange meant that mill owners had access to a wider selection of staples. The wage cost per spindle is higher for ring spinning. In the States, where cotton staple was cheap, the additional labour costs of running mules could be absorbed, but Lancashire had to pay shipment costs. The critical factor was the availability of labour: when skilled labour was scarce, the ring became advantageous. This had always been so in New England, and when it became so in Lancashire, ring frames started to be adopted. The first known mill in Lancashire dedicated to ring spinning was built in Milnrow for the New Ladyhouse Cotton Spinning Company (registered 26 April 1877). A cluster of smaller mills developed which, between 1884 and 1914, outperformed the ring mills of Oldham. After 1926, the Lancashire industry went into sharp decline: the Indian export market was lost and Japan had become self-sufficient. Textile firms united to reduce capacity rather than to add to it. It was not until the late 1940s that some replacement spindles started to be ordered and ring frames became dominant. Debate still continues in academic papers on whether the Lancashire entrepreneurs made the right purchasing decisions in the 1890s. The engine house and steam engine of the Ellenroad Ring Mill are preserved.

New technologies

The search for faster and more reliable ring spinning techniques continues.
In 2005, a PhD thesis was written at Auburn University, Alabama, on using magnetic levitation to reduce friction, a technique known as magnetic ring spinning. Open-end spinning was developed in Czechoslovakia in the years preceding 1967. It was far faster than ring spinning and did away with many preparatory processes. Put simply, the thread was ejected spinning from a nozzle, and on exiting hooked onto other loose fibres in the chamber behind. It was first introduced into the United Kingdom at the Maple Mill, Oldham.

How it works

A ring frame was constructed from cast iron, and later pressed steel. On each side of the frame are the spindles; above them are the draughting (drafting) rollers, and on top is a creel loaded with bobbins of roving. The roving (unspun thread) passes downwards from the bobbins to the draughting rollers. Here the back roller steadies the incoming thread, while the front rollers rotate faster, pulling the roving out and making the fibres more parallel. The rollers are individually adjustable, originally by means of levers and weights. The attenuated roving now passes through a thread guide that is adjusted to be centred above the spindle. Thread guides are on a thread rail, which allows them to be hinged out of the way for doffing or for piecing a broken thread. The attenuated roving passes down to the spindle assembly, where it is threaded through a small D-shaped ring called the traveller. The traveller moves along the ring; it is this that gives the ring frame its name. From here the thread is attached to the existing thread on the spindle. The traveller and the spindle share the same axis but rotate at different speeds. The spindle is driven and the traveller drags behind, thus distributing the rotation between winding up on the spindle and twist in the yarn. The bobbin is fixed on the spindle.
In a ring frame, the speed difference was achieved by drag caused by air resistance and friction (lubrication of the contact surface between the traveller and the ring was a necessity). Spindles could rotate at speeds up to 25,000 rpm; this rotation spins the yarn. The up-and-down motion of the ring rail guides the thread onto the bobbin in the shape required, i.e. a cop. The lifting must be adjusted for different yarn counts. Doffing is a separate process. An attendant (or a robot in an automated system) winds the ring rails down to the bottom and the machine stops. The thread guides are hinged up, and the completed bobbins (yarn packages) are removed from the spindles. A new bobbin tube is placed on each spindle, trapping the thread between it and the cup in the wharve of the spindle; the thread guides are lowered and the machine restarted. Nowadays all these processes are done automatically. The yarn is taken to a cone winder. Currently, machines are manufactured by Rieter (Switzerland), Toyota (Japan), Zinser and Suessen (Germany), Marzoli (Italy) and LMW (India). LMW, Rieter and Toyota have machines with up to 1,824 spindles. All require controlled atmospheric conditions.
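The drafting and twisting described above reduce to two simple ratios: the draft is the surface-speed ratio of the front to the back rollers, and the twist per unit length is the spindle speed divided by the delivery speed. The sketch below is illustrative only; the roller speeds, spindle speed and function names are assumptions, not figures from the article.

```python
# Illustrative ring-frame arithmetic (assumed values, not from the article).

def draft_ratio(front_speed: float, back_speed: float) -> float:
    """Attenuation of the roving: surface speed of the front drafting
    rollers divided by that of the back rollers."""
    return front_speed / back_speed

def twist_per_metre(spindle_rpm: float, delivery_m_per_min: float) -> float:
    """Turns of twist inserted per metre of delivered yarn, assuming the
    traveller lags the spindle only negligibly."""
    return spindle_rpm / delivery_m_per_min

# Example: front rollers delivering 18 m/min, back rollers feeding 0.6 m/min,
# spindle running at 9,000 rpm.
print(draft_ratio(18.0, 0.6))       # ≈ 30x attenuation
print(twist_per_metre(9000, 18))    # 500 turns per metre
```

A finer yarn (higher draft) or a stronger yarn (more twist) is obtained simply by changing these gear ratios, which is why the rollers and lifting motion are adjustable for different counts.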
Technology
Spinning
null
1951922
https://en.wikipedia.org/wiki/Ichthyosaurus
Ichthyosaurus
Ichthyosaurus (derived from Greek ἰχθύς (ichthys) meaning 'fish' and σαῦρος (sauros) meaning 'lizard') is a genus of ichthyosaurs from the Early Jurassic (Hettangian–Pliensbachian) of Europe (Belgium, England, Germany and Portugal). Some specimens of the ichthyosaurid Protoichthyosaurus from England and Switzerland have been erroneously referred to this genus in the past. It is among the best known ichthyosaur genera, as it is the type genus of the order Ichthyosauria.

History of discovery

Ichthyosaurus was the first complete fossil to be discovered in the early 19th century by Mary Anning in England; the holotype of I. communis, no coll. number given, was a fairly complete specimen discovered by Mary and Joseph Anning around 1814 in Lyme Regis, but was reported as lost by McGowan (1974) in his review of the latipinnate ichthyosaurs of England. The name Ichthyosaurus was first used by Charles König in 1818, but it was not used in a formal scientific description, the earliest described ichthyosaur being Proteosaurus, named by Everard Home in 1819 for a skeleton which is now attributed to Temnodontosaurus platyodon. Henry De la Beche and William Conybeare in 1821 considered Ichthyosaurus to have taxonomic priority over Proteosaurus and named the species I. communis based on BMNH 2149 (now NHMUK PV R1158), a partially lost specimen, now assigned to Temnodontosaurus, that was discovered and collected between 1811 and 1812. One specimen that Home had assigned to Proteosaurus was the first complete ichthyosaur skeleton known, but it was destroyed in World War II. Two casts were rediscovered in 2022, showing that the specimen belonged to Ichthyosaurus, but of uncertain species. During the 19th century, almost all fossil ichthyosaurs were attributed to Ichthyosaurus, resulting in the genus having over 50 species by 1900. These species were subsequently moved to separate genera or synonymised with other species. I.
anningae, described in 2015 from a fossil found in the early 1980s in Dorset, England, was named after Anning. The fossil was acquired by Doncaster Museum and Art Gallery, where it was misidentified as a plaster cast. In 2008, Dean Lomax, from the University of Manchester, recognised it as genuine and worked with Judy Massare, of the State University of New York, to establish it as a new species.

Description

Ichthyosaurus was smaller than most of its relatives, with the largest specimen of I. somersetensis measuring up to in length. In comparison, other species were much smaller, with I. communis reaching up to in length, I. larkini probably up to , I. anningae up to , I. breviceps up to , and I. conybeari up to . Many Ichthyosaurus fossils are well preserved and fully articulated. Some fossils still had baby specimens inside them, indicating that Ichthyosaurus was viviparous; similar finds in the related Stenopterygius also show this. Jurassic ichthyosaurs had a fleshy dorsal fin on their back as well as a large caudal fin. Ichthyosaurus is distinguished from other ichthyosaurs by having a wide forefin with 5 or more digits with an anterior digital bifurcation, but the morphology of the humerus and coracoids is also distinct from that of other Lower Jurassic ichthyosaurs, as is the arrangement of the dermal bones, though the suture lines used to diagnose these are not always visible.

Classification

The cladogram below follows the topology from a 2010 analysis by Patrick S. Druckenmiller and Erin E. Maxwell.

Palaeobiology

Ichthyosaurus is suggested to have been a ram feeder, with the morphology of its hyobranchial apparatus suggesting that it was incapable of suction feeding, using the jaws and teeth alone to capture prey. Ichthyosaurus is thought to have been a pursuit predator that was capable of sustained swift swimming via thunniform locomotion. Stomach contents of Ichthyosaurus anningae indicate that it fed on cephalopods (likely belemnites) and fish.
Like other ichthyosaurs, it likely relied on its sense of sight, possibly in combination with olfaction. It was initially believed that Ichthyosaurus laid eggs on land, but fossil evidence shows that in fact the females gave birth to live young. As such, they were well adapted to life as fully pelagic organisms (i.e. they never came onto land). Three pregnant females are known, all of the species I. somersetensis. Although none of the fetuses show a clear birth orientation, it is likely they exited tail-first, a common feature in highly aquatic vertebrates.

Cultural significance

Joseph Victor von Scheffel's poem Der Ichthyosaurus describes its extinction in humorous verses. A monument on the Hohentwiel cites it as well. The poem has been translated, among others, by Charles Godfrey Leland. Some of the stanzas:
Biology and health sciences
Prehistoric marine reptiles
Animals
1952367
https://en.wikipedia.org/wiki/Wedge
Wedge
A wedge is a triangular tool, a portable inclined plane, and one of the six simple machines. It can be used to separate two objects or portions of an object, lift up an object, or hold an object in place. It functions by converting a force applied to its blunt end into forces perpendicular (normal) to its inclined surfaces. The mechanical advantage of a wedge is given by the ratio of the length of its slope to its width. Although a short wedge with a wide angle may do a job faster, it requires more force than a long wedge with a narrow angle. Force is applied to the flat, broad end of the wedge and redirected to its narrow, sharp end, where it is concentrated over a much smaller area; the resulting pressure splits or breaks the material.

History

Wedges have existed for thousands of years. They were first made of simple stone. Perhaps the first example of a wedge is the hand axe (see also Olorgesailie), which is made by chipping stone, generally flint, to form a bifacial edge, or wedge. A wedge is a simple machine that transforms lateral force and movement of the tool into a transverse splitting force and movement of the workpiece. The available power is limited by the effort of the person using the tool, but because power is the product of force and movement, the wedge amplifies the force by reducing the movement. This amplification, or mechanical advantage, is the ratio of the input speed to the output speed. For a wedge, this is given by 1/tan α, where α is the tip angle. The faces of a wedge are modeled as straight lines to form a sliding or prismatic joint. The origin of the wedge is unknown. In ancient Egyptian quarries, bronze wedges were used to break away blocks of stone used in construction. Wooden wedges that swelled after being saturated with water were also used.
Some indigenous peoples of the Americas used antler wedges for splitting and working wood to make canoes, dwellings and other objects.

Uses of a wedge

Wedges are used to lift heavy objects, separating them from the surface upon which they rest. Consider a block that is to be lifted by a wedge. As the wedge slides under the block, the block slides up the sloped side of the wedge. This lifts the weight FB of the block. The horizontal force FA needed to lift the block is obtained by considering the velocity of the wedge vA and the velocity of the block vB. If we assume the wedge does not dissipate or store energy, then the power into the wedge equals the power out, or FA · vA = FB · vB. The velocity of the block is related to the velocity of the wedge by the slope of the side of the wedge: if the angle of the wedge is α, then vB = vA tan α, which means that the mechanical advantage is MA = FB / FA = vA / vB = 1/tan α. Thus, the smaller the angle α, the greater the ratio of the lifting force to the applied force on the wedge. This is the mechanical advantage of the wedge. This formula for mechanical advantage applies to cutting edges and splitting operations, as well as to lifting. Wedges can also be used to separate objects, such as blocks of cut stone. Splitting mauls and splitting wedges are used to split wood along the grain. A narrow wedge with a relatively long taper, used to finely adjust the distance between objects, is called a gib, and is commonly used in machine tool adjustment. The tips of forks and nails are also wedges, as they split and separate the material into which they are pushed or driven; the shafts may then hold fast due to friction.

Blades and wedges

The blade is a compound inclined plane, consisting of two inclined planes placed so that the planes meet at one edge. When the edge where the two planes meet is pushed into a solid or fluid substance, it overcomes the resistance of materials to separation by transferring the force exerted against the material into two opposing forces normal to the faces of the blade.
The blade's first known use by humans was the sharp edge of a flint stone that was used to cleave or split animal tissue, e.g. cutting meat. The use of iron and other metals led to the development of knives for those kinds of tasks. The blade of the knife allowed humans to cut meat, fibers, and other plant and animal materials with much less force than it would take to tear them apart by simply pulling with their hands. Other examples are plows, which separate soil particles; scissors, which separate fabric; axes, which separate wood fibers; and chisels and planes, which separate wood. Wedges, saws and chisels can separate thick and hard materials, such as wood, solid stone and hard metals, and they do so with much less force, less waste of material, and more precision than crushing, which is the application of the same force over a wider area of the material to be separated. Other examples of wedges are found in drill bits, which produce circular holes in solids. The two edges of a drill bit are sharpened, at opposing angles, into a point, and that edge is wound around the shaft of the drill bit. When the drill bit spins on its axis of rotation, the wedges are forced into the material to be separated. The resulting cut in the material is in the direction of rotation of the drill bit, while the helical shape of the bit allows the removal of the cut material.

Holding objects in place

Wedges can also be used to hold objects in place, such as engine parts (poppet valves), bicycle parts (stems and eccentric bottom brackets), and doors. A wedge-type door stop (door wedge) functions largely because of the friction generated between the bottom of the door and the wedge, and between the wedge and the floor (or other surface).
Mechanical advantage

The mechanical advantage or MA of a wedge can be calculated by dividing the length of its slope S by its width W: MA = S / W. The more acute, or narrow, the angle of a wedge, the greater the ratio of the length of its slope to its width, and thus the more mechanical advantage it will yield. A wedge will bind when the wedge's included angle is less than the arctangent of the coefficient of friction between the wedge and the material. Therefore, in an elastic material such as wood, friction may bind a narrow wedge more easily than a wide one. This is why the head of a splitting maul has a much wider angle than that of an axe.
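The two rules above, the ideal mechanical advantage MA = 1/tan α and the binding condition (included angle less than arctan of the friction coefficient), can be checked numerically. This is a sketch under the article's idealised model; the angles and the friction value are assumptions chosen for illustration, not figures from the article.

```python
import math

def wedge_mechanical_advantage(tip_angle_deg: float) -> float:
    """Ideal mechanical advantage of a wedge: MA = 1 / tan(alpha)."""
    return 1.0 / math.tan(math.radians(tip_angle_deg))

def wedge_binds(included_angle_deg: float, friction_coeff: float) -> bool:
    """A wedge self-locks when its included angle is smaller than the
    friction angle arctan(mu)."""
    return math.radians(included_angle_deg) < math.atan(friction_coeff)

# A narrow 10-degree wedge versus a wide 45-degree wedge, with an assumed
# coefficient of friction mu = 0.3:
print(round(wedge_mechanical_advantage(10), 2))  # 5.67
print(round(wedge_mechanical_advantage(45), 2))  # 1.0
print(wedge_binds(10, 0.3))   # True: the narrow wedge stays put
print(wedge_binds(45, 0.3))   # False: the wide wedge can back out
```

The numbers match the text: the narrow wedge multiplies the applied force several times over and also self-locks, which is why splitting wedges stay in the cut, while the wide-angled splitting maul does not bind.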
Technology
Basics_8
null
1952495
https://en.wikipedia.org/wiki/Deinocheirus
Deinocheirus
Deinocheirus is a genus of large ornithomimosaur that lived during the Late Cretaceous around 70 million years ago. In 1965, a pair of large arms, shoulder girdles, and a few other bones of a new dinosaur were first discovered in the Nemegt Formation of Mongolia. In 1970, this specimen became the holotype of the only species within the genus, Deinocheirus mirificus; the genus name is Greek for "horrible hand". No further remains were discovered for almost fifty years, and its nature remained a mystery. Two more complete specimens were described in 2014, which shed light on many aspects of the animal. Parts of these new specimens had been looted from Mongolia some years before, but were repatriated in 2014. Deinocheirus was an unusual ornithomimosaur, the largest of the clade at long, and weighing . Though it was a bulky animal, it had many hollow bones which saved weight. The arms were among the largest of any bipedal dinosaur at long, with large, blunt claws on its three-fingered hands. The legs were relatively short, and bore blunt claws. Its vertebrae had tall neural spines that formed a "sail" along its back. Most of the vertebrae and some other bones were highly pneumatised by invading air sacs. The tail ended in pygostyle-like vertebrae, which indicate the presence of a fan of feathers. The skull was long, with a wide bill and a deep lower jaw, similar to those of hadrosaurs. The classification of Deinocheirus was long uncertain, and it was initially placed in the theropod group Carnosauria, but similarities with ornithomimosaurians were soon noted. After more complete remains were found, Deinocheirus was shown to be a primitive ornithomimosaurian, most closely related to the smaller genera Garudimimus and Beishanlong, together forming the family Deinocheiridae. Members of this group were not adapted for speed, unlike other ornithomimosaurs.
Deinocheirus is thought to have been omnivorous; its skull shape indicates a diet of plants, fish scales were found in association with one specimen and gastroliths were also present in the stomach region of the specimen. The large claws may have been used for digging and gathering plants. Bite marks on Deinocheirus bones have been attributed to the tyrannosaurid Tarbosaurus. Discovery The first known fossil remains of Deinocheirus were discovered by Polish palaeontologist Zofia Kielan-Jaworowska on July 9, 1965, at the Altan Ula III site (coordinates: ) in the Nemegt Basin of the Gobi Desert. She was part of a Polish group accompanied by Mongolian palaeontologist Rinchen Barsbold during the 1963–1965 Polish-Mongolian palaeontological expeditions, which were organised by the Polish Academy of Sciences and the Mongolian Academy of Sciences. The crew spent July 9–11 excavating the specimen and loading it onto a vehicle. A 1968 report by Kielan-Jaworowska and Naydin Dovchin, which summarised the accomplishments of the expeditions, announced that the remains represented a new family of theropod dinosaur. The specimen was discovered on a small hill in sandstone, and consists of a partial, disarticulated skeleton, most parts of which had probably eroded away at the time of discovery. The specimen consisted of both forelimbs, excluding the claws of the right hand, the complete shoulder girdle, centra of three dorsal vertebrae, five ribs, gastralia (belly ribs), and two ceratobranchialia. The specimen was made the holotype of Deinocheirus mirificus, named by Halszka Osmólska and Ewa Roniewicz in 1970. The generic name is derived from Greek deinos (δεινός), meaning "horrible", and cheir (χείρ), meaning "hand", due to the size and strong claws of the forelimbs. The specific name comes from Latin and means "unusual" or "peculiar", chosen for the unusual structure of the forelimbs. 
The Polish-Mongolian expeditions were notable for being led by women, who were among the first to name new dinosaurs. The original specimen number of the holotype was ZPal MgD-I/6, but it has since been re-catalogued as MPC-D 100/18. The paucity of known Deinocheirus remains inhibited a thorough understanding of the animal for almost half a century, and the scientific literature often described it as among the most "enigmatic", "mysterious", and "bizarre" of dinosaurs. The holotype arms became part of a traveling exhibit of Mongolian dinosaur fossils, touring various countries. In 2012, Phil R. Bell, Philip J. Currie, and Yuong-Nam Lee announced the discovery of additional elements of the holotype specimen, including fragments of gastralia, found by a Korean-Mongolian team which re-located the original quarry in 2008. Bite marks on two gastralia were identified as belonging to Tarbosaurus, and it was proposed that this accounted for the scattered, disassociated state of the holotype specimen.

Additional specimens

In 2013, the discovery of two new Deinocheirus specimens was announced before the annual Society of Vertebrate Paleontology (SVP) conference by Lee, Barsbold, Currie, and colleagues. Housed at the Mongolian Academy of Sciences, these two headless individuals were given the specimen numbers MPC-D 100/127 and MPC-D 100/128. MPC-D 100/128, a subadult specimen, was found by scientists in the Altan Ula IV locality (coordinates: ) of the Nemegt Formation during the Korea-Mongolia International Dinosaur Expedition in 2006, but had already been damaged by fossil poachers. The second specimen, MPC-D 100/127, was found by scientists in the Bugiin Tsav locality (coordinates: ) in 2009. It is slightly larger than the holotype, and it could be clearly identified as Deinocheirus by its left forelimb, and therefore helped identify the earlier collected specimen as Deinocheirus.
The specimen had also been excavated by poachers, who had removed the skull, hands and feet, but left behind a single toe bone. It had probably been looted after 2002, based on money left in the quarry. Skulls, claw bones and teeth are often selectively targeted by poachers at the expense of the rest of the skeletons (which are often vandalized), due to their saleability. Currie stated in an interview that it was a policy of their team to investigate quarries after they had been looted and recover anything of significance, and that finding any new Deinocheirus fossils was cause for celebration, even without the poached parts. A virtual model of Deinocheirus revealed at the SVP presentation brought applause from the crowd of attending palaeontologists, and the American palaeontologist Stephen L. Brusatte stated he had never been as surprised by a SVP talk, though new fossils are routinely presented at the conference. After the new specimens were announced, it was rumoured that a looted skull had found its way to a European museum through the black market. The poached elements were spotted in a private European collection by the French fossil trader François Escuillié, who notified Belgian palaeontologist Pascal Godefroit about them in 2011. They suspected the remains belonged to Deinocheirus, and contacted the Korean-Mongolian team. Escuillié subsequently acquired the fossils and donated them to the Royal Belgian Institute of Natural Sciences. The recovered material consisted of a skull, a left hand, and feet, which had been collected in Mongolia, sold to a Japanese buyer, and resold to a German party (the fossils also passed through China and France). The team concluded that these elements belonged to specimen MPC-D 100/127, as the single leftover toe bone fit perfectly into the unprepared matrix of a poached foot, the bone and matrix matched in colour, and because the elements belonged to an individual of the same size, with no overlap in skeletal elements.
On May 1, 2014, the fossils were repatriated to Mongolia by a delegation from the Belgian Museum, during a ceremony held at the Mongolian Academy of Sciences. The reunited skeleton was deposited at the Central Museum of Mongolian Dinosaurs in Ulaanbaatar, along with a Tarbosaurus skeleton which had also been brought back after being stolen. American palaeontologist Thomas R. Holtz stated in an interview that the new Deinocheirus remains looked like the "product of a secret love affair between a hadrosaur and Gallimimus". Combined with the poached elements, both new specimens represent almost the entire skeleton of Deinocheirus, as MPC-D 100/127 includes all material apart from the middle dorsal vertebrae, most caudal vertebrae, and the right forelimb; MPC-D 100/128 fills in most gaps of the other skeleton, with nearly all dorsal and caudal vertebrae, the ilium, a partial ischium, and most of the left hindlimb. In 2014, the specimens were described by Lee, Barsbold, Currie, Yoshitsugu Kobayashi, Hang-Jae Lee, Godefroit, Escuillié, and Tsogtbaatar Chinzorig. A similar series of events was reported earlier in 2014 with Spinosaurus, another sail-backed theropod which had only been known from a few remains since 1912. Poached remains were reunited with specimens obtained by scientists, and Spinosaurus was shown to have been quite different from other spinosaurids. The two cases showed that the lifestyle and appearance of incompletely known extinct animals cannot always be safely inferred from close relatives. By 2017, the Mongolian government had increased its effort to seize poached fossils from collectors and repatriate them, but proving their provenance had become a scientific and political concern. Therefore, a study tested the possibility of identifying poached fossils by geochemical methods, using Deinocheirus and other Nemegt dinosaurs as examples.
In 2018, numerous large, tridactyl (three-toed) tracks were reported from the Nemegt locality (discovered in 2007 alongside sauropod tracks). Though the tracks were similar to those of hadrosaurs, no tracks of hadrosaur hands were identified, and since the feet of Deinocheirus are now known to have been similar to those of hadrosaurs, it cannot be ruled out that the tracks were made by this genus. Description Deinocheirus is the largest ornithomimosaurian (ostrich dinosaur) discovered; according to the 2014 description, the largest known specimen measured about long, with an estimated weight of . The two other known specimens are smaller, the holotype being 94% as big while the smallest, a subadult, is only 74% as big. In 2016, Gregory S. Paul presented a higher length estimate of but a lower mass estimate of . Also in 2016, Asier Larramendi and Molina-Pérez presented a higher length estimate of and a higher mass estimate of , and an estimated hip height of . In 2020, Campione and Evans gave a body mass estimate of approximately . When only the incomplete holotype arms were known, various sizes were extrapolated from them by different methods. A 2010 study estimated the hip height of Deinocheirus to be . The weight had previously been estimated between and . Enormous sizes were also suggested by comparing the arms with those of tyrannosaurs, even though members of that group did not have large arms in proportion to their body size. The only known skull, belonging to the largest specimen, measures from the premaxilla at the front to the back of the occipital condyle. The widest part of the skull behind the eyes is only wide in comparison. The skull was similar to those of other ornithomimosaurs in being low and narrow, but differed in that the snout was more elongated. The skull bone walls were rather thin, about . It had a rounded, flattened beak, which would have been covered by keratin in life. 
The nostrils were turned upwards, and the nasal bone was a narrow strap that extended up above the eye sockets. The outer diameter of the sclerotic rings in the eyes was small, , compared to the size of the skull. The lower temporal fenestrae, openings behind the eyes, were partially closed off by the jugal bones, similar to Gallimimus. The jaws were toothless and down-turned, and the lower jaw was very massive and deep compared to the slender and low upper jaw. The relative size of the lower jaw was closer to that of tyrannosaurids than to other ornithomimosaurs. The snout was spatulate (flared outwards to the sides) and wide, which is wider than the skull roof. This shape was similar to the snout of duck-billed hadrosaurids. Postcranial skeleton Deinocheirus and Therizinosaurus possessed the longest forelimbs known for any bipedal dinosaurs. The holotype forelimbs measure long—the humerus (upper arm bone) is , the ulna , and the hand is —including the recurved claws. Each scapulocoracoid of the shoulder girdle has a length of . Each half of the paired ceratobranchialia measure . The shoulder-blade was long and narrow, and the deltopectoralis crest was pronounced and triangular. The upper arm (humerus) was relatively slender, and only slightly longer than the hand. The ulna and radius (lower arm bones) were elongate and not firmly connected to each other in a syndesmosis. The metacarpus was long compared to the fingers. The three fingers were about equal in length, the first being the stoutest and the second the longest. Various rough areas and impressions on the forelimbs indicate the presence of powerful muscles. Most articular surfaces of the arm bones were deeply furrowed, indicating that the animal had thick pads of cartilage between the joints. Though the arms of Deinocheirus were large, the ratio between them and the shoulder girdle was less than that of the smaller ornithomimosaur Ornithomimus. 
The arm bones of Deinocheirus were similar in proportions to those of the small theropod Compsognathus. Though Deinocheirus was a bulky animal, its dorsal ribs were tall and relatively straight, indicating that the body was narrow. The ten neck vertebrae were low and long, and progressively shorter backwards from the skull. This resulted in a more S-curved neck than seen in other ornithomimosaurs, due to the larger skull. The neural spines of the twelve back vertebrae became increasingly longer from front to back, the last one being 8.5 times the height of the centrum part. This is almost the same as the highest ratio in the neural spines of the theropod Spinosaurus. The neural spines had a system of interconnecting ligaments, which stiffened the vertebral column allowing it to support the abdomen while transmitting the stress to the hips and hindlimbs. Together, the neural spines formed a tall "sail" along the lower back, hips, and base of the tail, somewhat similar to that of Spinosaurus. All the vertebrae were highly pneumatised by invading air sacs, except for the atlas bone and the hindmost tail vertebrae, and were thereby connected to the respiratory system. The back vertebrae were as pneumatised as those of sauropod dinosaurs, and had an extensive system of depressions. These adaptations may be correlated with gigantism, as they reduce weight. The six vertebrae of the sacrum were also tall and pneumatised, and all but the first one were fused together at the top, their neural spines forming a neural plate. The ilium, the top hip bone, was also partially pneumatised close to the sacral vertebrae. Part of the pelvis was hypertrophied (enlarged) compared to other ornithomimosaurs, to support the weight of the animal with strong muscle attachments. The front hip bones tilted upwards in life. The tail of Deinocheirus ended in at least two fused vertebrae, which were described as similar to the pygostyle of oviraptorosaurian and therizinosauroid theropods. 
Ornithomimosaurs are known to have had pennaceous feathers, so this feature suggests that they might have had a fan of feathers at the tail end. The wishbone (furcula), an element not known from any other ornithomimosaurs, was U-shaped. The hindlimbs were relatively short, and the thigh bone (femur) was longer than the shin bone (tibia), as is common for large animals. The metatarsus was short and not arctometatarsalian, as in most other theropods. The claw bones of the feet were blunt and broad-tipped instead of tapered, unlike those of other theropods, but resembled the unguals of large ornithischian dinosaurs. The proportions of the toe bones resembled those of tyrannosaurs, due to the large weight they had to bear.

Classification

When Deinocheirus was only known from the original forelimbs, its taxonomic relationship was difficult to determine, and several hypotheses were proposed. Osmólska and Roniewicz initially concluded that Deinocheirus did not belong in any already named theropod family, so they created a new, monotypic family, Deinocheiridae, placed in the infraorder Carnosauria. This was due to the large size and thick-walled limb bones, but they also found some similarities with Ornithomimus and, to a lesser extent, Allosaurus. In 1971, John Ostrom first proposed that Deinocheirus belonged with the Ornithomimosauria, while noting that it showed both ornithomimosaurian and non-ornithomimosaurian characters. In 1976, Rinchen Barsbold named the order Deinocheirosauria, which was to include the supposedly related genera Deinocheirus and Therizinosaurus. A relationship between Deinocheirus and the long-armed therizinosaurs was supported by some later writers, but they are not considered to be closely related today. In 2004, Peter Makovicky, Kobayashi and Currie pointed out that Deinocheirus was likely a primitive ornithomimosaurian, since it lacked some of the features typical of the family Ornithomimidae.
Primitive traits include its recurved claws, the low humerus-to-scapula ratio, and the lack of a syndesmosis. A 2006 study by Kobayashi and Barsbold found Deinocheirus to be possibly the most primitive ornithomimosaur, but was unable to further resolve its affinities, due to the lack of skull and hindlimb elements. A cladistic analysis accompanying the 2014 description of the two much more complete specimens found that Deinocheirus formed a clade with Garudimimus and Beishanlong, which were therefore included in the Deinocheiridae. The resulting cladogram follows below: The 2014 study defined Deinocheiridae as a clade including all taxa with a more recent common ancestor with Deinocheirus mirificus than with Ornithomimus velox. The three members share various anatomical features in the limbs. The 2014 cladogram suggested that ornithomimosaurians diverged into two major lineages in the Early Cretaceous: Deinocheiridae and Ornithomimidae. Unlike other ornithomimosaurians, deinocheirids were not built for running. The anatomical peculiarities of Deinocheirus when compared to other, much smaller ornithomimosaurs, can largely be explained by its much larger size and weight. Deinocheirids and the smaller ornithomimids did not have teeth, unlike more primitive ornithomimosaurs. In 2020, the deinocheirid Paraxenisaurus from Mexico was named, making it the first member of the group known from North America. Its describers suggested deinocheirids originated in Laurasia (the northern supercontinent of the time) or that they dispersed across polar regions in the Northern Hemisphere, and a similar interchange is also known to have occurred in other dinosaur groups with Asian affinities during the Campanian–Maastrichtian ages. This study also found Harpymimus to be a basal deinocheirid, while placing Beishanlong just outside the group, as a basal ornithomimosaur. 
Palaeobiology

The blunt and short hand-claws of Deinocheirus were similar to those of the therizinosaur Alxasaurus, which indicates the long arms and claws were used for digging and gathering plants. The blunt claws of the feet could have kept the animal from sinking into the substrate when wading. The robust hind limbs and hip region indicate the animal moved slowly. The large size of the animal may have protected it against predators such as Tarbosaurus, but in turn it lost the running ability of other ornithomimosaurs. The long neural spines and possible tail fan may have been used for display behaviour. Deinocheirus was likely diurnal (active during the day), since the sclerotic rings of the eyes were relatively small in comparison with its skull length. The hand had good mobility relative to the lower arm, but was capable of only a limited flexing motion, unable to close in grasping. The brain of Deinocheirus was reconstructed through CT scans and presented at the 2014 Society of Vertebrate Paleontology conference. The brain was globular and similar in shape to that of birds and troodontid theropods, the cerebrum was expanded in a way similar to most theropods, and the olfactory tracts were relatively large. The brain was proportionally small and compact, and its reptile encephalization quotient (brain-body ratio) was estimated at 0.69, which is low for theropods and similar to sauropods. Other ornithomimosaurs have proportionally large brains, and the small brain of Deinocheirus may reflect its social behaviour or diet. Its coordination and balance would not have been as important as for carnivorous theropods. In 2015, Akinobu Watanabe and colleagues found that together with Archaeornithomimus and Gallimimus, Deinocheirus had the most pneumatised skeleton among ornithomimosaurs. Pneumatisation is thought to be advantageous for flight in modern birds, but its function in non-avian dinosaurs is not known with certainty.
It has been proposed that pneumatisation was used to reduce the mass of large bones (associated with gigantic size in the case of Deinocheirus), that it was related to a high metabolism or to balance during locomotion, or that it was used for thermoregulation. A bone microstructure study presented at the European Association of Vertebrate Palaeontologists meeting in 2015 showed that Deinocheirus probably had a high metabolic rate, and grew rapidly before reaching sexual maturity. A histological study of a gastralia fragment from the holotype presented at a 2018 conference showed that its internal structure was similar to that of ossified tendons of other theropods. The osteons contained possible canaliculi, which would be the first-known occurrence of such structures in a basal ornithomimosaur. The structure of the periosteum and the lack of growth arrest lines suggest that the holotype was a fully grown adult.

Diet

The distinct shape of the skull shows that Deinocheirus had a more specialised diet than other ornithomimosaurs. The beak was similar to that of ducks, which indicates it may have likewise foraged in water, or browsed near the ground like some sauropods and hadrosaurs. The attachment sites for the muscles that open and close the jaws were very small in comparison to the size of the skull, which indicates Deinocheirus had a weak bite force. The skull was likely adapted for cropping soft understorey or water vegetation. The depth of the lower jaw indicates the presence of a large tongue, which could have assisted the animal in sucking in food material obtained with the broad beak when foraging on the bottom of freshwater bodies. More than 1,400 gastroliths (stomach stones, 8 to 87 mm in size) were found among the ribs and gastralia of specimen MPC-D100/127. The ratio of gastrolith mass to total weight, 0.0022, supports the theory that these gastroliths helped the toothless animals grind their food.
Features such as the presence of a beak and a U-shaped, downturned jaw are indicators of facultative (optional) herbivory among coelurosaurian theropods. In spite of these features, fish vertebrae and scales were also found among the gastroliths, which suggests that it was an omnivore. Ornithomimosaurs in general are thought to have fed on both plants and small animals. David J. Button and Zanno found in 2019 that herbivorous dinosaurs mainly followed two distinct modes of feeding: either processing food in the gut, characterized by gracile skulls and low bite forces, or in the mouth, characterized by features associated with extensive processing. Deinocheirus, along with ornithomimid ornithomimosaurs, diplodocoid and titanosaur sauropods, Segnosaurus, and caenagnathids, was found to be in the former category. These researchers suggested that deinocheirids and ornithomimid ornithomimosaurians such as Gallimimus had invaded these niches separately, convergently achieving relatively large sizes. Advantages of large body mass in herbivores include an increased intake rate of food and fasting resistance, and these trends may therefore indicate that deinocheirids and ornithomimids were more herbivorous than other ornithomimosaurians. They cautioned that the correlations between diet and body mass were not simple, and that there was no directional trend towards increased mass seen in the clade. Furthermore, the diet of most ornithomimosaurians is poorly known, and Deinocheirus appears to have been at least opportunistically omnivorous. A 2022 article by Waisum Ma and colleagues examined how feeding mechanics varied between different non-bird coelurosaurian groups through finite element analysis, revealing that they all underwent reduction of feeding-related stress in their jaws. They found that Deinocheirus showed different patterns of stress and strain distribution than other ornithomimosaurs, indicating it was a specialized feeder.
They suspected Deinocheirus may have reverted to omnivory or carnivory. Various feeding behaviours were proposed before more complete remains of Deinocheirus were known, and it was envisioned early on as a predatory, allosaur-like animal with giant arms. In their original description, Osmólska and Roniewicz found that the hands of Deinocheirus were unsuited for grasping, but could instead have been used to tear prey apart. In 1970, the Russian paleontologist Anatoly Konstantinovich Rozhdestvensky compared the forelimbs of Deinocheirus to sloths, leading him to hypothesise that Deinocheirus was a specialised climbing dinosaur that fed on plants and animals found in trees. In 1988, Paul instead suggested that the claws were too blunt for predatory purposes, but would have been good defensive weapons. While attempting to determine the ecological niches of Deinocheirus and Therizinosaurus in 2010, Phil Senter and James H. Robins suggested that Deinocheirus had the largest vertical feeding range due to its hip height, and specialised in eating high foliage. In 2017, it was suggested that the claws of Deinocheirus were adapted for pulling large quantities of herbaceous plants out of water, and for decreasing the resistance of the water.

Palaeopathology

Osmólska and Roniewicz reported palaeopathologies in the holotype specimen, such as abnormal pits, grooves and tubercles on the first and second phalanx of the left second finger, that may have been the result of injuries to the joint between the two bones. The damage may have caused changes to the arrangement of the ligaments and muscles. The two coracoids are also differently developed. A rib of specimen MPC-D 100/127 shows a healed trauma which has remodelled the bone. In 2012, bite marks on two gastralia of the holotype specimen were reported. The size and shape of the bite marks match the teeth of Tarbosaurus, the largest known predator from the Nemegt Formation.
Various types of feeding traces were identified: punctures, gouges, striae, fragmentary teeth, and combinations of these marks. The bite marks probably represent feeding behaviour rather than aggression between the species, and the fact that bite marks were not found elsewhere on the body indicates the predator focused on the internal organs. Tarbosaurus bite marks have also been identified on hadrosaur and sauropod fossils, but theropod bite marks on bones of other theropods are very rare in the fossil record.

Palaeoenvironment

The three known Deinocheirus specimens were recovered from the Nemegt Formation in the Gobi Desert of southern Mongolia. This geologic formation has never been dated radiometrically, but the fauna present in the fossil record indicate it was probably deposited during the early Maastrichtian age, at the end of the Late Cretaceous, about 70 million years ago. The rock facies of the Nemegt Formation suggest the presence of stream and river channels, mudflats, and shallow lakes. Such large river channels and soil deposits are evidence of a far more humid climate than that found in the older Barun Goyot and Djadochta formations. However, caliche deposits indicate at least periodic droughts occurred. Sediment was deposited in the channels and floodplains of large rivers. Deinocheirus is thought to have been widely distributed within the Nemegt Formation, as the three known specimens were found far apart. The river systems of the Nemegt Formation provided a suitable niche for Deinocheirus with its omnivorous habits. The environment was similar to the Okavango Delta of present-day Botswana. Within this ecosystem, Deinocheirus would have eaten plants and small animals, including fish. It may have competed for trees with other large herbivorous dinosaurs such as the long-necked theropod Therizinosaurus, various titanosaurian sauropods, and the smaller hadrosaurid Saurolophus.
Deinocheirus may have competed with those herbivores for higher foliage such as trees, but was also able to feed on material that they could not. Along with Deinocheirus, the discoveries of Therizinosaurus and Gigantoraptor show that three groups of herbivorous theropods (ornithomimosaurs, therizinosaurs and oviraptorosaurs) independently reached their maximum sizes in the Late Cretaceous of Asia. The habitats in and around the Nemegt rivers where Deinocheirus lived provided a home for a wide array of organisms. Occasional mollusc fossils are found, as well as a variety of other aquatic animals such as fish and turtles, and the crocodylomorph Paralligator. Mammal fossils are rare in the Nemegt Formation, but many birds have been found, including the enantiornithine Gurilynia, the hesperornithiform Judinornis, and Teviornis, a possible anseriform. Herbivorous dinosaurs of the Nemegt Formation include ankylosaurids such as Tarchia, the pachycephalosaurian Prenocephale, large hadrosaurids such as Saurolophus and Barsboldia, and sauropods such as Nemegtosaurus and Opisthocoelicaudia. Predatory theropods that lived alongside Deinocheirus include tyrannosauroids such as Tarbosaurus, Alioramus, and Bagaraatan, and troodontids such as Borogovia, Tochisaurus, and Zanabazar. Theropod groups with both omnivorous and herbivorous members include therizinosaurs, such as Therizinosaurus; oviraptorosaurians, such as Elmisaurus, Nemegtomaia, and Rinchenia; and other ornithomimosaurians, such as Anserimimus and Gallimimus.
https://en.wikipedia.org/wiki/Capstan%20%28nautical%29
Capstan (nautical)
A capstan is a vertical-axled rotating machine developed for use on sailing ships to multiply the pulling force of sailors when hauling ropes, cables, and hawsers. The principle is similar to that of the windlass, which has a horizontal axle.

History

The word, connected with the Old French capestan or cabestan(t), from Old Provençal cabestan, from capestre "pulley cord", from Latin capistrum, a halter, from capere, to take hold of, seems to have come into English (14th century) from Portuguese or Spanish shipmen at the time of the Crusades. Both device and word are considered Spanish inventions.

Early form

In its earliest form, the capstan consisted of a timber mounted vertically through a vessel's structure which was free to rotate. Levers, known as bars, were inserted through holes at the top of the timber and used to turn the capstan. A rope wrapped several turns around the drum was thus hauled upon. A rudimentary ratchet was provided to hold the tension. The ropes were always wound in a clockwise direction (seen from above).

Later form

Capstans evolved to consist of a wooden drum or barrel mounted on an iron axle. Two barrels on a common axle were used frequently to allow men on two decks to apply force to the bars. Later capstans were made entirely of iron, with gearing in the head providing a mechanical advantage when the bars were pushed counterclockwise. One form of capstan was connected by a shaft and gears to an anchor windlass on the deck below. On riverine vessels, the capstan was sometimes cranked by steam power. Capstan winches were also important on sailing trawlers (e.g. Brixham trawlers) as a means for fetching in the nets after the trawl. When they became available, steam-powered capstan winches offered a great saving in effort. These used a compact combined steam engine and boiler below decks that drove the winch from below via a shaft.
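The grip of a rope wound several turns around the drum, described above, is governed by the classical capstan (Euler–Eytelwein) equation, which the article does not state but which is standard mechanics. The sketch below uses illustrative values (a 1,000 N load, friction coefficient 0.3, three turns) that are assumptions, not figures from the article.

```python
import math

def holding_force(load_n: float, mu: float, turns: float) -> float:
    """Capstan (Euler-Eytelwein) equation: the hold-side tension needed to
    restrain a load-side tension `load_n`, for a rope wrapped `turns` full
    turns around a drum with coefficient of friction `mu`.
    T_hold = T_load * exp(-mu * theta), where theta is the wrap angle in radians."""
    theta = 2 * math.pi * turns  # total wrap angle
    return load_n * math.exp(-mu * theta)

# Illustrative values (not from the article): three turns, mu = 0.3.
# A 1,000 N pull on the load side is held by only a few newtons of hand tension.
print(f"{holding_force(1000.0, 0.3, 3.0):.2f} N")
```

This exponential amplification is why a few turns around the drum let a single sailor hold a load that many sailors could not hold on a straight rope.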
Ruston, Proctor and Company at the UK 1883 Fisheries Exhibition marketed an engine, boiler, shafts and capstan designed specifically for this task.

Messenger

As ships and their anchors grew in size, the anchor cable or chain would be too big to go around the capstan. Also, a wet cable or chain would be difficult to manage. A messenger would then be used as an intermediate device. This was a continuous loop of cable or chain which would go around the capstan. The main anchor cable or chain would then be attached to the messenger for hauling, using some temporary connection such as ropes called nippers. These would be attached and detached as the anchor was pulled up onto the ship (weighed), thus allowing a continuous hoist of the anchor, without any need for stopping or surging.

Modern form

Modern capstans are powered electrically, hydraulically, pneumatically, or via an internal combustion engine. Typically, a gearbox is used which trades reduced speed, relative to the prime mover, for increased torque.

Similar machines

In yachting terminology, a winch functions on the same principle as a capstan. However, in industrial applications, the term "winch" generally implies a machine which stores the rope on a drum. Most cassette players utilize a device called a capstan to draw the magnetic tape from the cassette across the tape head. It functions similarly to, and was likely named for, the nautical device.

Use on land

Hydraulically powered capstans were sometimes used in railway goods yards for shunting, or shifting railcars short distances. One example was Broad Street goods station in London. The yard was on a deck above some warehouses, and the deck was not strong enough to carry a locomotive, so ropes and capstans were used instead.
https://en.wikipedia.org/wiki/Sampling%20error
Sampling error
In statistics, sampling errors are incurred when the statistical characteristics of a population are estimated from a subset, or sample, of that population. Since the sample does not include all members of the population, statistics of the sample (often known as estimators), such as means and quartiles, generally differ from the statistics of the entire population (known as parameters). The difference between the sample statistic and population parameter is considered the sampling error. For example, if one measures the height of a thousand individuals from a population of one million, the average height of the thousand is typically not the same as the average height of all one million people. Since sampling is almost always done to estimate population parameters that are unknown, by definition exact measurement of the sampling errors will not be possible; however, they can often be estimated, either by general methods such as bootstrapping, or by specific methods incorporating some assumptions (or guesses) regarding the true population distribution and parameters thereof.

Description

Sampling error

The sampling error is the error caused by observing a sample instead of the whole population. The sampling error is the difference between a sample statistic used to estimate a population parameter and the actual but unknown value of the parameter.

Effective sampling

In statistics, a truly random sample means selecting individuals from a population with an equivalent probability; in other words, picking individuals from a group without bias. Failing to do this correctly will result in a sampling bias, which can dramatically increase the sample error in a systematic way. For example, attempting to measure the average height of the entire human population of the Earth, but measuring a sample only from one country, could result in a large over- or under-estimation.
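The height example above can be simulated directly. The sketch below uses made-up population parameters (mean 170 cm, standard deviation 10 cm, which are assumptions for illustration only): it builds a synthetic population of one million, draws a sample of one thousand, and reports the resulting sampling error.

```python
import random
import statistics

random.seed(42)

# Synthetic population of one million heights (cm); parameters are illustrative.
population = [random.gauss(170, 10) for _ in range(1_000_000)]
population_mean = statistics.fmean(population)  # the (normally unknown) parameter

# Estimate the parameter from a random sample of one thousand individuals.
sample = random.sample(population, 1000)
sample_mean = statistics.fmean(sample)  # the estimator

# The sampling error is the difference between estimator and parameter.
print(f"parameter = {population_mean:.2f} cm, "
      f"estimate = {sample_mean:.2f} cm, "
      f"sampling error = {sample_mean - population_mean:+.3f} cm")
```

Re-running with a different seed gives a different sampling error each time; only its typical magnitude (here roughly a few tenths of a centimetre) is predictable.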
In reality, obtaining an unbiased sample can be difficult as many parameters (in this example, country, age, gender, and so on) may strongly bias the estimator, and it must be ensured that none of these factors play a part in the selection process. Even in a perfectly unbiased sample, the sample error will still exist due to the remaining statistical component; consider that measuring only two or three individuals and taking the average would produce a wildly varying result each time. The likely size of the sampling error can generally be reduced by taking a larger sample.

Sample size determination

The cost of increasing a sample size may be prohibitive in reality. Since the sample error can often be estimated beforehand as a function of the sample size, various methods of sample size determination are used to weigh the predicted accuracy of an estimator against the predicted cost of taking a larger sample.

Bootstrapping and standard error

As discussed, a sample statistic, such as an average or percentage, will generally be subject to sample-to-sample variation. By comparing many samples, or splitting a larger sample up into smaller ones (potentially with overlap), the spread of the resulting sample statistics can be used to estimate the standard error on the sample.

In genetics

The term "sampling error" has also been used in a related but fundamentally different sense in the field of genetics; for example in the bottleneck effect or founder effect, when natural disasters or migrations dramatically reduce the size of a population, resulting in a smaller population that may or may not fairly represent the original one. This is a source of genetic drift, as certain alleles become more or less common, and has been referred to as "sampling error", despite not being an "error" in the statistical sense.
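The bootstrap idea described above can be sketched in a few lines. The sample data and resample count below are illustrative assumptions; for the mean, the bootstrap result can be checked against the textbook formula s/√n.

```python
import random
import statistics

random.seed(0)

# A single observed sample (synthetic data, for illustration).
sample = [random.gauss(170, 10) for _ in range(200)]

def bootstrap_se(data, statistic=statistics.fmean, n_resamples=2000):
    """Estimate the standard error of `statistic` by resampling `data`
    with replacement and measuring the spread of the resampled statistics."""
    resampled_stats = [
        statistic(random.choices(data, k=len(data)))  # one bootstrap resample
        for _ in range(n_resamples)
    ]
    return statistics.stdev(resampled_stats)

se_bootstrap = bootstrap_se(sample)
se_formula = statistics.stdev(sample) / len(sample) ** 0.5  # s / sqrt(n)
print(f"bootstrap SE = {se_bootstrap:.3f}, formula SE = {se_formula:.3f}")
```

The two estimates agree closely for the mean; the advantage of the bootstrap is that it works identically for statistics, such as quartiles, that have no simple standard-error formula.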
https://en.wikipedia.org/wiki/Luminous%20blue%20variable
Luminous blue variable
Luminous blue variables (LBVs) are rare, massive and evolved stars that show unpredictable and sometimes dramatic variations in their spectra and brightness. They are also known as S Doradus variables after S Doradus, one of the brightest stars of the Large Magellanic Cloud.

Discovery and history

The LBV stars P Cygni and η Carinae have been known as unusual variables since the 17th century, but their true nature was not fully understood until late in the 20th century. In 1922 John Charles Duncan published the first three variable stars ever detected in an external galaxy, variables 1, 2, and 3, in the Triangulum Galaxy (M33). These were followed up by Edwin Hubble with three more in 1926: A, B, and C in M33. Then in 1929 Hubble added a list of variables detected in M31. Of these, Var A, Var B, Var C, and Var 2 in M33 and Var 19 in M31 were followed up with a detailed study by Hubble and Allan Sandage in 1953. Var 1 in M33 was excluded as being too faint and Var 3 had already been classified as a Cepheid variable. At the time they were simply described as irregular variables, although remarkable for being the brightest stars in those galaxies. The original Hubble–Sandage paper contains a footnote suggesting that S Doradus might be the same type of star, but its authors expressed strong reservations, so the link would have to wait several decades to be confirmed. Later papers referred to these five stars as Hubble–Sandage variables. In the 1970s, Var 83 in M33 and AE Andromedae, AF Andromedae (=Var 19), Var 15, and Var A-1 in M31 were added to the list and described by several authors as "luminous blue variables", although it was not considered a formal name at the time. The spectra were found to contain lines with P Cygni profiles and were compared to η Carinae. In 1978, Roberta M. Humphreys published a study of eight variables in M31 and M33 (excluding Var A) and referred to them as luminous blue variables, as well as making the link to the S Doradus class of variable stars.
In 1984, in a presentation at the IAU symposium, Peter Conti formally grouped the S Doradus variables, Hubble–Sandage variables, η Carinae, P Cygni, and other similar stars together under the term "luminous blue variables" and shortened it to LBV. He also clearly separated them from those other luminous blue stars, the Wolf–Rayet stars. Variable star types are usually named after the first member discovered to be variable, for example δ Sct variables named after the star δ Sct. The first luminous blue variable to be identified as a variable star was P Cygni, and these stars have been referred to as P Cygni type variables. The General Catalogue of Variable Stars decided there was a possibility of confusion with P Cygni profiles, which also occur in other types of stars, and chose the acronym SDOR for "variables of the S Doradus type". The term "S Doradus variable" was used to describe P Cygni, S Doradus, η Carinae, and the Hubble–Sandage variables as a group in 1974.

Physical properties

LBVs are massive, unstable supergiant (or hypergiant) stars that show a variety of spectroscopic and photometric variation, most obviously periodic outbursts and occasional much larger eruptions. In their "quiescent" state they are typically B-type stars, occasionally slightly hotter, with unusual emission lines. They are found in a region of the Hertzsprung–Russell diagram known as the S Doradus instability strip, where the least luminous have a temperature around 10,000 K and a luminosity about 250,000 times that of the Sun, whereas the most luminous have a temperature around 25,000 K and a luminosity over a million times that of the Sun, making them some of the most luminous of all stars. During a normal outburst the temperature decreases to around 8,500 K for all stars, slightly hotter than the yellow hypergiants. The bolometric luminosity usually remains constant, which means that the visual brightness increases by a magnitude or two.
A few examples have been found where luminosity appears to change during an outburst, but the properties of these unusual stars are difficult to determine accurately. For example, AG Carinae may decrease in luminosity by around 30% during outbursts, and AFGL 2298 has been observed to dramatically increase its luminosity during an outburst, although it is not clear whether that should be classified as a modest giant eruption. S Doradus typifies this behaviour, which has been referred to as a strong-active cycle, and it is regarded as a key criterion for identifying luminous blue variables. Two distinct periodicities are seen: variations taking either longer than 20 years or less than 10 years. In some cases, the variations are much smaller, less than half a magnitude, with only small temperature reductions. These are referred to as weak-active cycles and always occur on timescales of less than 10 years. Some LBVs have been observed to undergo giant eruptions with dramatically increased mass loss and luminosity, so violent that several were initially catalogued as supernovae. The outbursts mean there are usually nebulae around such stars; η Carinae is the best-studied and most luminous known example, but may not be typical. It is generally assumed that all luminous blue variables undergo one or more of these large eruptions, but they have only been observed in two or three well-studied stars and a handful of supernova impostors (such as SN 2009ip, which later evolved into a true supernova). The two clear examples in the Milky Way galaxy, P Cygni and η Carinae, and the possible example in the Small Magellanic Cloud, HD 5980A, have not shown strong-cycle variations. It is still possible that the two types of variability occur in different groups of stars. 3-D simulations have shown that these outbursts may be caused by variations in helium opacity.
Many luminous blue variables also show small-amplitude variability with periods of less than a year, which appears typical of Alpha Cygni variables, and stochastic (i.e. totally random) variations. Luminous blue variables are by definition more luminous than most stars and also more massive, but within a very wide range. The most luminous are more than (Eta Carinae reaches 4.6 million) and have masses approaching, possibly exceeding, . The least luminous have luminosities around and masses as low as , although they would have been considerably more massive as main-sequence stars, due to their rapid mass loss. Their high mass loss rates could be due to outbursts and very high luminosity, and they show some enhancement of helium and nitrogen.

Evolution

Because of these stars' large mass and high luminosity, their lifetime is very short—only a few million years in total and much less than a million years in the LBV phase. They are rapidly evolving on observable timescales; examples have been detected where stars with Wolf–Rayet spectra (WNL/Ofpe) have developed to show LBV outbursts, and a handful of supernovae have been traced to likely LBV progenitors. Some models suggest the latter scenario, where luminous blue variable stars are the final evolutionary stage of some massive stars before they explode as supernovae, at least for stars with initial masses between 20 and 25 solar masses. For more-massive stars, computer simulations of their evolution suggest the luminous blue variable phase takes place during the latest phases of core hydrogen burning (LBV with high surface temperature), the hydrogen shell burning phase (LBV with lower surface temperature), and the earliest part of the core helium burning phase (LBV with high surface temperature again) before transitioning to the Wolf–Rayet phase, thus being analogous to the red giant and red supergiant phases of less massive stars.
There appear to be two groups of LBVs, one with luminosities above 630,000 times the Sun and the other with luminosities below 400,000 times the Sun, although this is disputed in more recent research. Models have been constructed showing that the lower-luminosity group are post-red-supergiants with initial masses of 30–60 times the Sun, whereas the higher-luminosity group are population-II stars with initial masses 60–90 times the Sun that never develop to red supergiants, although they may become yellow hypergiants. Some models suggest that LBVs are a stage in the evolution of very massive stars required for them to shed excess mass, whereas others require that most of the mass is lost at an earlier cool-supergiant stage. Normal outbursts and the stellar winds in the quiescent state are not sufficient for the required mass loss, but LBVs occasionally produce abnormally large outbursts that can be mistaken for a faint supernova and these may shed the necessary mass. Recent models all agree that the LBV stage occurs after the main-sequence stage and before the hydrogen-depleted Wolf–Rayet stage, and that essentially all LBV stars will eventually explode as supernovae. LBVs apparently can explode directly as a supernova, but probably only a small fraction do. If the star does not lose enough mass before the end of the LBV stage, it may undergo a particularly powerful supernova created by pair-instability. The latest models of stellar evolution suggest that some single stars with initial masses around 20 times that of the Sun will explode as LBVs as type II-P, type IIb, or type Ib supernovae, whereas binary stars undergo much-more-complex evolution through envelope stripping leading to less predictable outcomes. Supernova-like outbursts Luminous blue variable stars can undergo "giant outbursts" with dramatically increased mass loss and luminosity. 
η Carinae is the prototypical example, with P Cygni showing one or more similar outbursts 300–400 years ago, but dozens have now been catalogued in external galaxies. Many of these were initially classified as supernovae but were re-examined because of unusual features. The nature of the outbursts and of the progenitor stars seems to be highly variable, with the outbursts most likely having several different causes. The historical η Carinae and P Cygni outbursts, and several seen more recently in external galaxies, have lasted years or decades, whereas some of the supernova imposter events have declined to normal brightness within months. Well-studied examples are SN 1954J, SN 1961V and SN 1997bs. Early models of stellar evolution had predicted that although the high-mass stars that produce LBVs would often or always end their lives as supernovae, the supernova explosion would not occur at the LBV stage. Prompted by the progenitor of SN 1987A being a blue supergiant, and most likely an LBV, several subsequent supernovae have been associated with LBV progenitors. The progenitor of SN 2005gl has been shown to be an LBV apparently in outburst only a few years earlier. Progenitors of several other type IIn supernovae have been detected and were likely to have been LBVs, including SN 2009ip and SN 2010jl. Modelling suggests that at near-solar metallicity, stars with an initial mass around 20 times that of the Sun will explode as a supernova while in the LBV stage of their lives. They will be post-red-supergiants with luminosities a few hundred thousand times that of the Sun. The supernova is expected to be of type II, most likely type IIb, although possibly type IIn due to episodes of enhanced mass loss that occur as an LBV and in the yellow-hypergiant stage. List of LBVs The identification of LBVs requires confirmation of the characteristic spectral and photometric variations, but these stars can be "quiescent" for decades or centuries, at which time they are indistinguishable from many other hot luminous stars. 
A candidate luminous blue variable (cLBV) can be identified relatively quickly on the basis of its spectrum or luminosity, and dozens have been catalogued in the Milky Way during recent surveys. Recent studies of dense clusters and spectrographic analysis of luminous stars have identified dozens of probable LBVs in the Milky Way out of a likely total population of just a few hundred, although few have been observed in enough detail to confirm the characteristic types of variability. In addition, the majority of the LBVs in the Magellanic Clouds have been identified, along with several dozen in M31 and M33, plus a handful in other Local Group galaxies. Milky Way: η Carinae, P Cygni, AG Carinae, HR Carinae, V432 Carinae (Wray 15-751), V4029 Sagittarii (HD 168607), V905 Scorpii (HD 160529), V1672 Aquilae (AFGL 2298), W1-243 (in Westerlund 1), V481 Scuti (LBV G24.73+0.69), GCIRS 34W, MWC 930 (= V446 Scuti), Wray 16-137, WS1 (discovered as WISE Shell 1), MN44 and MN48; candidates include G79.29+0.46, Wray 17-96, HD 316285, MN 112 and GAL 026.47+00.02. Several more LBVs have been found near or in the Galactic Center: V4650 Sagittarii (FMM 362 or qF362, in the Quintuplet cluster), V4998 Sagittarii (LBV3, G0.120-0.048, very close to the Quintuplet cluster), and the Pistol star, Peony star and LBV 1806-20 (candidate LBVs, see below). Large Magellanic Cloud: S Doradus, HD 269858 (= R127), HD 269006 (= R71), HD 269929 (= R143), HD 269662 (= R110), HD 269700 (= R116), HD 269582 (= MWC 112), HD 269216 and HD 37836 (candidate). Small Magellanic Cloud: HD 5980 (= R14) and HD 6884 (= R40). Andromeda Galaxy: AF Andromedae, AE Andromedae, Var 15, Var A-1, J004526.62+415006.3, J004051.59+403303.0 and LAMOST J0037+4016. Triangulum Galaxy: Var 2 (an extremely hot star showing no variability since 1935 and hardly studied), Var 83, Var B, Var C and GR 290 (Romano's star, an unusually hot LBV). NGC 2403: V12, V37 and V38. NGC 1156: J025941.21+251412.2 and J025941.54+251421.8. NGC 2366 (NGC 2363): NGC 2363-V1. NGC 4449: J122809.72+440514.8 and J122817.83+440630.8. NGC 4559: AT 2016blu, which has had multiple outbursts since its discovery in 2012. NGC 4736 (Messier 94): NGC 4736_1. PHL 293B: an unnamed star that underwent an outburst from 1998 to 2008 in an unusual supernova-like event and has now disappeared. Sunburst galaxy: Godzilla. Other A number of cLBVs in the Milky Way (and, in the case of Sanduleak -69° 202, the LMC) are well known because of their extreme luminosity or unusual characteristics, including: GCIRS 16SW (S97, a candidate LBV orbiting the black hole at the center of the galaxy), Wray 17-96 (unusual hypergiant in the gap between the two semi-stable LBV regions), the Pistol Star (once thought to be the most luminous star in the galaxy), LBV 1806-20 (one of the most luminous stars known), Sanduleak -69° 202 (the star that exploded as SN 1987A), Cygnus OB2-12 (blue hypergiant and one of the most luminous stars known), HD 80077 (blue hypergiant), V1429 Aquilae (with a supergiant companion, very similar to a less luminous η Car), V4030 Sagittarii (hypergiant surrounded by a nebula identical to the one around Sanduleak -69° 202), WR 102ka (the Peony star, one of the most luminous stars known and potentially one of the hottest LBVs), Sher 25 (blue supergiant in NGC 3603 with a bipolar outflow and surrounded by a circumstellar ring) and BD+40°4210 (blue supergiant in the stellar association Cygnus OB2). Further well-known stars have been LBVs relatively recently, are LBVs in a stable phase, or are not currently classified as LBVs but may be transitioning into LBVs: Zeta-1 Scorpii (naked-eye hypergiant), IRC+10420 (yellow hypergiant that has increased its temperature into the LBV range), V509 Cassiopeiae (= HR 8752, an unusual yellow hypergiant evolving bluewards) and Rho Cassiopeiae (unstable yellow hypergiant suffering periodic outbursts).
Physical sciences
Stellar astronomy
Astronomy
6531493
https://en.wikipedia.org/wiki/Protein%20%28nutrient%29
Protein (nutrient)
Proteins are essential nutrients for the human body. They are one of the building blocks of body tissue and can also serve as a fuel source. As a fuel, proteins provide as much energy density as carbohydrates: 17 kJ (4 kcal) per gram; in contrast, lipids provide 37 kJ (9 kcal) per gram. The most important aspect and defining characteristic of protein from a nutritional standpoint is its amino acid composition. Proteins are polymer chains made of amino acids linked together by peptide bonds. During human digestion, proteins are broken down in the stomach to smaller polypeptide chains via hydrochloric acid and protease actions. This is crucial for the absorption of the essential amino acids that cannot be biosynthesized by the body. There are nine essential amino acids which humans must obtain from their diet in order to prevent protein-energy malnutrition and resulting death. They are phenylalanine, valine, threonine, tryptophan, methionine, leucine, isoleucine, lysine, and histidine. There has been debate as to whether there are 8 or 9 essential amino acids. The consensus seems to lean towards 9 since histidine is not synthesized in adults. There are five amino acids which humans are able to synthesize in the body. These five are alanine, aspartic acid, asparagine, glutamic acid and serine. There are six conditionally essential amino acids whose synthesis can be limited under special pathophysiological conditions, such as prematurity in the infant or individuals in severe catabolic distress. These six are arginine, cysteine, glycine, glutamine, proline and tyrosine. Dietary sources of protein include grains, legumes, nuts, seeds, meats, dairy products, fish, eggs, edible insects, and seaweeds. Protein functions in human body Protein is a nutrient needed by the human body for growth and maintenance. Aside from water, proteins are the most abundant kind of molecules in the body. 
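The energy densities quoted above lend themselves to a simple meal-energy calculation. This is a minimal sketch using the cited figures (protein and carbohydrate at 4 kcal/g, lipids at 9 kcal/g); the example meal is made up for illustration.

```python
# Energy densities cited above: protein and carbohydrate 4 kcal/g, fat 9 kcal/g.
KCAL_PER_GRAM = {"protein": 4, "carbohydrate": 4, "fat": 9}

def meal_energy_kcal(grams_by_macro):
    """Total energy (kcal) of a meal given grams of each macronutrient."""
    return sum(KCAL_PER_GRAM[macro] * grams for macro, grams in grams_by_macro.items())

# A hypothetical meal: 30 g protein, 40 g carbohydrate, 10 g fat.
print(meal_energy_kcal({"protein": 30, "carbohydrate": 40, "fat": 10}))  # 370
```

Multiply by 4.184 to convert kcal to kJ if the joule figures are preferred.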
Protein can be found in all cells of the body and is the major structural component of all cells in the body, especially muscle. This also includes body organs, hair and skin. Proteins are also used in membranes, such as glycoproteins. When broken down into amino acids, they are used as precursors to nucleic acids, co-enzymes and hormones, and support the immune response, cellular repair, and the formation of other molecules essential for life. Additionally, protein is needed to form blood cells. Sources Protein occurs in a wide range of foods. On a worldwide basis, plant protein foods contribute over 60% of the per capita supply of protein. In North America, animal-derived foods contribute about 70% of protein sources. Insects are a source of protein in many parts of the world. In parts of Africa, up to 50% of dietary protein derives from insects. It is estimated that more than 2 billion people eat insects daily. Meat, dairy, eggs, soybeans, fish, whole grains, and cereals are sources of protein. Examples of food staples and cereal sources of protein, each with a concentration greater than 7%, are (in no particular order) buckwheat, oats, rye, millet, maize (corn), rice, wheat, sorghum, amaranth, and quinoa. Game meat is an affordable protein source in some countries. Plant sources of proteins include legumes, nuts, seeds, grains, and some vegetables and fruits. Plant foods with protein concentrations greater than 7% include (but are not limited to) soybeans, lentils, kidney beans, white beans, mung beans, chickpeas, cowpeas, lima beans, pigeon peas, lupines, wing beans, almonds, Brazil nuts, cashews, pecans, walnuts, cotton seeds, pumpkin seeds, hemp seeds, sesame seeds, and sunflower seeds. Photovoltaic-driven microbial protein production uses electricity from solar panels and carbon dioxide from the air to create fuel for microbes, which are grown in bioreactor vats and then processed into dry protein powders. The process makes highly efficient use of land, water and fertiliser. 
People eating a balanced diet do not need protein supplements. The table below presents food groups as protein sources. Protein powders – such as casein, whey, egg, rice, soy and cricket flour – are processed and manufactured sources of protein. Testing in foods The classic assays for protein concentration in food are the Kjeldahl method and the Dumas method. These tests determine the total nitrogen in a sample. The only major component of most food which contains nitrogen is protein (fat, carbohydrate and dietary fiber do not contain nitrogen). If the amount of nitrogen is multiplied by a factor depending on the kinds of protein expected in the food, the total protein can be determined. This value is known as the "crude protein" content. The use of correct conversion factors is heavily debated, specifically with the introduction of more plant-derived protein products. However, on food labels the protein is calculated as the nitrogen multiplied by 6.25, because the average nitrogen content of proteins is about 16%. The Kjeldahl test is typically used because it is the method the AOAC International has adopted and is therefore used by many food standards agencies around the world, though the Dumas method is also approved by some standards organizations. Accidental contamination and intentional adulteration of protein meals with non-protein nitrogen sources that inflate crude protein content measurements have been known to occur in the food industry for decades. To ensure food quality, purchasers of protein meals routinely conduct quality control tests designed to detect the most common non-protein nitrogen contaminants, such as urea and ammonium nitrate. 
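The nitrogen-to-protein arithmetic described above is simple enough to sketch. The standard label factor of 6.25 is just the reciprocal of the ~16% average nitrogen content; the input value here is illustrative, not a real assay result.

```python
# "Crude protein" from total nitrogen, as measured by Kjeldahl or Dumas assays.
# Proteins average about 16% nitrogen, so the label factor is 1 / 0.16 = 6.25.
NITROGEN_TO_PROTEIN = 6.25

def crude_protein_g(total_nitrogen_g, factor=NITROGEN_TO_PROTEIN):
    """Crude protein (g) inferred from measured total nitrogen (g)."""
    return total_nitrogen_g * factor

print(crude_protein_g(2.0))  # 12.5 — 2.0 g of nitrogen reads as 12.5 g "protein"
```

The calculation also shows the adulteration weakness discussed above: any nitrogen source, protein or not, inflates the result by the same factor.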
In at least one segment of the food industry, the dairy industry, some countries (at least the U.S., Australia, France and Hungary) have adopted "true protein" measurement, as opposed to crude protein measurement, as the standard for payment and testing: "True protein is a measure of only the proteins in milk, whereas crude protein is a measure of all sources of nitrogen and includes nonprotein nitrogen, such as urea, which has no food value to humans. ... Current milk-testing equipment measures peptide bonds, a direct measure of true protein." Measuring peptide bonds in grains has also been put into practice in several countries including Canada, the UK, Australia, Russia and Argentina where near-infrared reflectance (NIR) technology, a type of infrared spectroscopy is used. The Food and Agriculture Organization of the United Nations (FAO) recommends that only amino acid analysis be used to determine protein in, inter alia, foods used as the sole source of nourishment, such as infant formula, but also provides: "When data on amino acids analyses are not available, determination of protein based on total N content by Kjeldahl (AOAC, 2000) or similar method ... is considered acceptable." The testing method for protein in beef cattle feed has grown into a science over the post-war years. The standard text in the United States, Nutrient Requirements of Beef Cattle, has been through eight editions over at least seventy years. The 1996 sixth edition substituted for the fifth edition's crude protein the concept of "metabolizeable protein", which was defined around the year 2000 as "the true protein absorbed by the intestine, supplied by microbial protein and undegraded intake protein". The limitations of the Kjeldahl method were at the heart of the Chinese protein export contamination in 2007 and the 2008 China milk scandal in which the industrial chemical melamine was added to the milk or glutens to increase the measured "protein". 
Protein quality The most important aspect and defining characteristic of protein from a nutritional standpoint is its amino acid composition. There are multiple systems which rate proteins by their usefulness to an organism based on their relative percentage of amino acids and, in some systems, the digestibility of the protein source. They include biological value, net protein utilization, and PDCAAS (Protein Digestibility Corrected Amino Acid Score), which was developed by the FDA as a modification of the protein efficiency ratio (PER) method. The PDCAAS rating was adopted by the US Food and Drug Administration (FDA) and the Food and Agriculture Organization of the United Nations/World Health Organization (FAO/WHO) in 1993 as "the preferred 'best'" method to determine protein quality. These organizations have suggested that other methods for evaluating the quality of protein are inferior. In 2013 FAO proposed changing to the Digestible Indispensable Amino Acid Score. Digestion Most proteins are decomposed to single amino acids by digestion in the gastro-intestinal tract. Digestion typically begins in the stomach when pepsinogen is converted to pepsin by the action of hydrochloric acid, and is continued by trypsin and chymotrypsin in the small intestine. Before absorption in the small intestine, most proteins are already reduced to single amino acids or peptides of several amino acids. Most peptides longer than four amino acids are not absorbed. Absorption into the intestinal absorptive cells is not the end: there, most of the peptides are broken into single amino acids. Absorption of the amino acids and their derivatives into which dietary protein is degraded is done by the gastrointestinal tract. The absorption rates of individual amino acids are highly dependent on the protein source; for example, the digestibilities of many amino acids differ between soy and milk proteins, and between the individual milk proteins beta-lactoglobulin and casein. 
For milk proteins, about 50% of the ingested protein is absorbed between the stomach and the jejunum and 90% is absorbed by the time the digested food reaches the ileum. Biological value (BV) is a measure of the proportion of absorbed protein from a food which becomes incorporated into the proteins of the organism's body. Newborn Newborns of mammals are exceptional in protein digestion and assimilation in that they can absorb intact proteins at the small intestine. This enables passive immunity, i.e., transfer of immunoglobulins from the mother to the newborn, via milk. Dietary requirements Considerable debate has taken place regarding issues surrounding protein intake requirements. The amount of protein required in a person's diet is determined in large part by overall energy intake, the body's need for nitrogen and essential amino acids, body weight and composition, rate of growth in the individual, physical activity level, the individual's energy and carbohydrate intake, and the presence of illness or injury. Physical activity and exertion as well as enhanced muscular mass increase the need for protein. Requirements are also greater during childhood for growth and development, during pregnancy, or when breastfeeding in order to nourish a baby or when the body needs to recover from malnutrition or trauma or after an operation. Dietary recommendations According to US & Canadian Dietary Reference Intake guidelines, women ages 19–70 need to consume 46 grams of protein per day while men ages 19–70 need to consume 56 grams of protein per day to minimize risk of deficiencies. These Recommended Dietary Allowances (RDAs) were calculated based on 0.8 grams protein per kilogram body weight and average body weights of 57 kg (126 pounds) and 70 kg (154 pounds), respectively. However, this recommendation is based on structural requirements but disregards use of protein for energy metabolism. This requirement is for a normal sedentary person. 
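The RDA figures above follow directly from the 0.8 g/kg rule. A minimal sketch reproducing the cited reference values:

```python
# US/Canadian DRI: Recommended Dietary Allowance of 0.8 g protein per kg body weight.
def protein_rda_g(body_weight_kg, g_per_kg=0.8):
    """Daily protein RDA (g) for a sedentary adult of the given body weight."""
    return body_weight_kg * g_per_kg

# Reproduces the reference values cited above for the DRI average body weights:
print(round(protein_rda_g(57)))  # 46 g/day (average woman, 57 kg / 126 lb)
print(round(protein_rda_g(70)))  # 56 g/day (average man, 70 kg / 154 lb)
```

As the text notes, this is a floor for sedentary adults based on structural requirements, not an allowance for activity or energy metabolism.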
In the United States, average protein consumption is higher than the RDA. According to results of the National Health and Nutrition Examination Survey (NHANES 2013–2014), average protein consumption for women ages 20 and older was 69.8 grams and for men 98.3 grams/day. According to research from Harvard University, the National Academy of Medicine suggests that adults should consume at least 0.8 grams of protein per kilogram of body weight daily, which is roughly equivalent to a little more than 7 grams for every 20 pounds of body weight. This recommendation is widely accepted by health professionals as a guideline for maintaining muscle mass, supporting metabolic functions, and promoting overall health. Active people Several studies have concluded that active people and athletes may require elevated protein intake (compared to 0.8 g/kg) due to increase in muscle mass and sweat losses, as well as need for body repair and energy source. Suggested amounts vary from 1.2 to 1.4 g/kg for those doing endurance exercise to as much as 1.6-1.8 g/kg for strength exercise and up to 2.0 g/kg/day for older people, while a proposed maximum daily protein intake would be approximately 25% of energy requirements i.e. approximately 2 to 2.5 g/kg. However, many questions still remain to be resolved. In addition, some have suggested that athletes using restricted-calorie diets for weight loss should further increase their protein consumption, possibly to 1.8–2.0 g/kg, in order to avoid loss of lean muscle mass. Aerobic exercise protein needs Endurance athletes differ from strength-building athletes in that endurance athletes do not build as much muscle mass from training as strength-building athletes do. Research suggests that individuals performing endurance activity require more protein intake than sedentary individuals so that muscles broken down during endurance workouts can be repaired. 
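The per-kilogram figures above can be combined into a small lookup by activity level. The category names are this sketch's own labels, and the numbers are the ranges cited in the text, not a clinical recommendation.

```python
# Suggested daily protein intakes (g per kg body weight) from the figures cited
# above: 0.8 for sedentary adults, 1.2-1.4 for endurance exercise, 1.6-1.8 for
# strength exercise, and up to 2.0 for older people. Labels are illustrative.
INTAKE_G_PER_KG = {
    "sedentary": (0.8, 0.8),
    "endurance": (1.2, 1.4),
    "strength": (1.6, 1.8),
    "older_adult": (2.0, 2.0),
}

def daily_protein_range_g(body_weight_kg, category):
    """Suggested (low, high) daily protein intake in grams for a body weight."""
    low, high = INTAKE_G_PER_KG[category]
    return body_weight_kg * low, body_weight_kg * high

low, high = daily_protein_range_g(70, "strength")
print(f"{low:.0f}-{high:.0f} g/day")  # 112-126 g/day for a 70 kg strength athlete
```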
Although the protein requirement for athletes still remains controversial (for instance see Lamont, Nutrition Research Reviews, pages 142–149, 2012), research does show that endurance athletes can benefit from increasing protein intake because the type of exercise endurance athletes participate in still alters the protein metabolism pathway. The overall protein requirement increases because of amino acid oxidation in endurance-trained athletes. Endurance athletes who exercise over a long period (2–5 hours per training session) use protein as a source of 5–10% of their total energy expended. Therefore, a slight increase in protein intake may be beneficial to endurance athletes by replacing the protein lost in energy expenditure and protein lost in repairing muscles. One review concluded that endurance athletes may increase daily protein intake to a maximum of 1.2–1.4 g per kg body weight. Anaerobic exercise protein needs Research also indicates that individuals performing strength training activity require more protein than sedentary individuals. Strength-training athletes may increase their daily protein intake to a maximum of 1.4–1.8 g per kg body weight to enhance muscle protein synthesis, or to make up for the loss of amino acid oxidation during exercise. Many athletes maintain a high-protein diet as part of their training. In fact, some athletes who specialize in anaerobic sports (e.g., weightlifting) believe a very high level of protein intake is necessary, and so consume high-protein meals and also protein supplements. Special populations Protein allergies A food allergy is an abnormal immune response to proteins in food. The signs and symptoms may range from mild to severe. They may include itchiness, swelling of the tongue, vomiting, diarrhea, hives, trouble breathing, or low blood pressure. These symptoms typically occur within minutes to one hour after exposure. When the symptoms are severe, it is known as anaphylaxis. 
The following eight foods are responsible for about 90% of allergic reactions: cow's milk, eggs, wheat, shellfish, fish, peanuts, tree nuts and soy. Chronic kidney disease While there is no conclusive evidence that a high-protein diet can cause chronic kidney disease, there is a consensus that people with this disease should decrease consumption of protein. According to one 2009 review updated in 2018, people with chronic kidney disease who reduce protein consumption have less likelihood of progressing to end-stage kidney disease. Moreover, people with this disease who use a low-protein diet (0.6–0.8 g/kg/d) may develop metabolic compensations that preserve kidney function, although in some people malnutrition may occur. Phenylketonuria Individuals with phenylketonuria (PKU) must keep their intake of phenylalanine, an essential amino acid, extremely low to prevent a mental disability and other metabolic complications. Phenylalanine is a component of the artificial sweetener aspartame, so people with PKU need to avoid low-calorie beverages and foods with this ingredient. Excess consumption The U.S. and Canadian Dietary Reference Intake review for protein concluded that there was not sufficient evidence to establish a Tolerable Upper Intake Level, i.e., an upper limit for how much protein can be safely consumed. When amino acids are in excess of needs, the liver takes up the amino acids and deaminates them, a process converting the nitrogen from the amino acids into ammonia, which is further processed in the liver into urea via the urea cycle. Excretion of urea occurs via the kidneys. Other parts of the amino acid molecules can be converted into glucose and used for fuel. When food protein intake is periodically high or low, the body tries to keep protein levels at an equilibrium by using the "labile protein reserve" to compensate for daily variations in protein intake. 
However, unlike body fat as a reserve for future caloric needs, there is no protein storage for future needs. Excessive protein intake may increase calcium excretion in urine, occurring to compensate for the pH imbalance from oxidation of sulfur amino acids. This may lead to a higher risk of kidney stone formation from calcium in the renal circulatory system. One meta-analysis reported no adverse effects of higher protein intakes on bone density. Another meta-analysis reported a small decrease in systolic and diastolic blood pressure with diets higher in protein, with no differences between animal and plant protein. High-protein diets have been shown to lead to an additional 1.21 kg of weight loss over a period of 3 months versus a baseline protein diet in a meta-analysis. Benefits of decreased body mass index as well as HDL cholesterol were more strongly observed in studies with only a slight increase in protein intake rather than in those where high protein intake was classified as 45% of total energy intake. Detrimental effects to cardiovascular activity were not observed in short-term diets of 6 months or less. There is little consensus on the potentially detrimental effects to healthy individuals of a long-term high-protein diet, leading to caution advisories about using high protein intake as a form of weight loss. The 2015–2020 Dietary Guidelines for Americans (DGA) recommends that men and teenage boys increase their consumption of fruits, vegetables and other under-consumed foods, and that a means of accomplishing this would be to reduce overall intake of protein foods. The 2015–2020 DGA report does not set a recommended limit for the intake of red and processed meat. While the report acknowledges research showing that lower intake of red and processed meat is correlated with reduced risk of cardiovascular diseases in adults, it also notes the value of nutrients provided from these meats. 
The recommendation is not to limit intake of meats or protein, but rather to monitor and keep within daily limits the sodium (< 2300 mg), saturated fats (less than 10% of total calories per day), and added sugars (less than 10% of total calories per day) that may be increased as a result of consumption of certain meats and proteins. While the 2015 DGA report does advise a reduced level of consumption of red and processed meats, the 2015–2020 DGA key recommendations recommend that a variety of protein foods be consumed, including both vegetarian and non-vegetarian sources of protein. Protein deficiency Protein deficiency and malnutrition (PEM) can lead to a variety of ailments, including intellectual disability and kwashiorkor. Symptoms of kwashiorkor include apathy, diarrhea, inactivity, failure to grow, flaky skin, fatty liver, and edema of the belly and legs. This edema is explained by the action of lipoxygenase on arachidonic acid to form leukotrienes and the normal functioning of proteins in fluid balance and lipoprotein transport. PEM is fairly common worldwide in both children and adults and accounts for 6 million deaths annually. In the industrialized world, PEM is predominantly seen in hospitals, is associated with disease, or is often found in the elderly.
Biology and health sciences
Biochemistry and molecular biology
null
512443
https://en.wikipedia.org/wiki/Moorhen
Moorhen
Moorhens—sometimes called marsh hens—are medium-sized water birds that are members of the rail family (Rallidae). Most species are placed in the genus Gallinula, Latin for "little hen." They are close relatives of coots. They are often referred to as (black) gallinules. Recently, one of the species of Gallinula was found to have enough differences to form a new genus Paragallinula, with the only species being the lesser moorhen (Paragallinula angulata). Two species from the Australian region, sometimes separated in Tribonyx, are called "native hens" (also native-hen or nativehen). The native hens differ visually by shorter, thicker and stubbier toes and bills, and longer tails that lack the white signal pattern of typical moorhens. Description These rails are mostly brown and black with some white markings in plumage. Unlike many of the rails, they are usually easy to see because they feed in open water margins rather than hidden in reedbeds. They have short rounded wings and are weak fliers, although usually capable of covering long distances. The common moorhen in particular migrates considerable distances from some of its breeding areas in the colder parts of Siberia. Those that migrate do so at night. The Gough moorhen, on the other hand, is considered almost flightless; it can only flutter a few metres. As is common in rails, there has been a marked tendency to evolve flightlessness in island populations. Moorhens can walk very well on their strong legs, and have long toes that are well adapted to soft uneven surfaces. These birds are omnivorous, consuming plant material, small rodents, amphibians and eggs. They are aggressively territorial during the breeding season, but are otherwise often found in sizeable flocks on the shallow vegetated lakes they prefer. Systematics and evolution The genus Gallinula was introduced by the French zoologist Mathurin Jacques Brisson in 1760 with the common moorhen (Gallinula chloropus) as the type species. 
The genus Gallinula contains five extant, one recently extinct, and one possibly extinct species: Samoan moorhen, Gallinula pacifica – sometimes placed in Pareudiastes, possibly extinct (1907?) Makira moorhen, Gallinula silvestris – sometimes placed in Pareudiastes or Edithornis, extremely rare with no direct observations in recent decades, but still considered likely extant due to reports of the species persisting in very small numbers. †Tristan moorhen, Gallinula nesiotis – formerly sometimes placed in Porphyriornis; extinct (late 19th century) Gough moorhen, Gallinula comeri – formerly sometimes placed in Porphyriornis Common moorhen, Gallinula chloropus Common gallinule, Gallinula (chloropus) galeata, recently split by the AOU, other committees still evaluating Dusky moorhen, Gallinula tenebrosa Former members of the genus: Lesser moorhen, Paragallinula angulata Spot-flanked gallinule, Porphyriops melanops Black-tailed native hen, Tribonyx ventralis Tasmanian native hen, Tribonyx mortierii Other moorhens have been described from older remains. Apart from the 1–3 extinctions in more recent times, another 1–4 species have gone extinct as a consequence of early human settlement: Hodgen's waterhen (Gallinula hodgenorum) of New Zealand—which belongs in subgenus Tribonyx—and a species close to the Samoan moorhen from Buka, Solomon Islands, which is almost certainly distinct from the Makira moorhen, as the latter cannot fly. The undescribed Viti Levu gallinule of Fiji would either be separated in Pareudiastes if that genus is considered valid, or may be a completely new genus. Similarly, the undescribed "swamphen" of Mangaia, currently tentatively assigned to Porphyrio, may belong to Gallinula/Pareudiastes. Evolution Still older fossils document the genus since the Late Oligocene onwards. The genus seems to have originated in the Southern Hemisphere, in the general region of Australia. By the Pliocene, it was probably distributed worldwide: Gallinula sp. 
(Early Pliocene of Hungary and Germany) Gallinula kansarum (Late Pliocene of Kansas, USA) Gallinula balcanica (Late Pliocene of Varshets, Bulgaria). Gallinula gigantea (Early Pleistocene of Czech Republic and Israel) The ancient "Gallinula" disneyi (Late Oligocene—Early Miocene of Riversleigh, Australia) has been separated as genus Australlus. Even among non-Passeriformes, this genus has a long documented existence. Consequently, some unassigned fragmentary rail fossils might also be from moorhens or native hens. For example, specimen QM F30696, a left distal tibiotarsus piece from the Oligo-Miocene boundary at Riversleigh, is similar to but differs in details from "G." disneyi. It cannot be said if this bird—if a distinct species—was flightless. From size alone, it might have been an ancestor of G. mortierii (see also below). In addition to paleosubspecies of Gallinula chloropus, the doubtfully distinct Late Pliocene to Pleistocene Gallinula mortierii reperta was described, referring to the population of the Tasmanian native hen that once inhabited mainland Australia and became extinct at the end of the last ice age. It may be that apart from climate change it was driven to extinction by the introduction of the dingo, which as opposed to the marsupial predators hunted during the day, but this would require a survival of mainland Gallinula mortierii to as late as about 1500 BC. "G." disneyi was yet another flightless native hen, indicative of that group's rather basal position among moorhens. Its time and place of occurrence suggest it as an ancestor of G. mortierii (reperta), from which it differed mostly in its much smaller size. However, some limb bone proportions are also strikingly different, and in any case such a scenario would require a flightless bird to change but little during some 20 million years in an environment rich in predators. As the fossils of G. disneyi as well as the rich recent and subfossil material of G. 
mortierii shows no evidence of such a change at all, "G." disneyi more probably represents a case of parallel evolution at an earlier date, as signified by its placement in Australlus.
Rubber band
A rubber band (also known as an elastic, gum band or lacky band) is a loop of rubber, usually ring- or oval-shaped, commonly used to hold multiple objects together. The rubber band was patented in England on March 17, 1845, by Stephen Perry. Most rubber bands are manufactured from natural rubber or, in the case of latex-free bands and especially at larger sizes, from a synthetic elastomer, and are sold in a variety of sizes. Notable developments in the evolution of rubber bands began in 1923, when William H. Spencer obtained a few Goodyear inner tubes and cut the bands by hand in his basement, where he founded Alliance Rubber Company. Spencer persuaded the Akron Beacon Journal and the Tulsa World to try wrapping their newspapers with one of his rubber bands to prevent them from blowing across lawns. He went on to pioneer other new markets for rubber bands, such as agricultural and industrial applications, among a myriad of other uses. Spencer obtained a patent on February 19, 1957, for a new "Method for Making Elastic Bands" which produced rubber bands in an open-ring design.

Manufacturing

Most rubber, whether natural or synthetic, normally arrives at the manufacturing facility in large bales. Rubber bands are made by extruding the rubber into a long tube to give it its general shape. A number of different methods can be applied at this point in the process. Originally, and in some instances still today, the rubber tubes are placed on mandrels, the rubber is cured with heat, and the tubes are then sliced across their width, splitting each tube into multiple little bands. This is most commonly known as an "off-line" rubber extrusion process. While other rubber products may use synthetic rubber, most rubber bands are primarily manufactured from natural rubber because of its superior elasticity.
Natural rubber originates from the latex of the rubber tree, which is acquired by tapping into the bark layers of the tree. Rubber trees belong to the spurge family (Euphorbiaceae) and only survive in hot, humid tropical climates near the equator, so the majority of latex is produced in the Southeast Asian countries of Malaysia, Thailand, and Indonesia. Once the latex has been tapped and is exposed to the air, it begins to harden and become elastic, or "rubbery".

Rubber band sizes

Measuring

A rubber band is usually measured in three basic dimensions: the flat length, cut width, and wall thickness. The flat length is the total unstretched length; perpendicular to the flat length is the cut width. Wall thickness, which determines the band's strength and durability, is generally measured using tools such as calipers or pin gauges. If one imagines a rubber band during manufacture (a long tube of rubber on a mandrel, before it is sliced into rubber bands), the band's width is decided by how far apart the slices are cut, and its length by the circumference of the tube.

Size numbers

A rubber band is assigned an industry-standard number based on its dimensions. The first use of rubber band size numbers can be traced back to the early 20th century. While it is difficult to pinpoint an exact date, the practice of categorizing rubber bands by size allowed users to easily select the appropriate rubber band for their specific needs. Standards for the rubber band were established in the United States in 1925 by the Department of Commerce, Bureau of Standards; this is the first known publication to reference rubber band sizes. Generally, rubber bands are numbered from smallest to largest, width first. Thus, rubber bands numbered 8–19 are all  inch wide, with lengths going from  inch to  inches. Rubber band numbers 30–35 are for a width of  inch, going again from shorter to longer.
For even longer bands, the numbering starts over for numbers above 100, again starting at a width of  inch.

{| class="wikitable"
|+ Rubber band sizes
|-
! Size !! Length (in) !! Width (in) !! Thickness (in)
|-
| 10 || || ||
|-
| 12 || || ||
|-
| 14 || 2 || ||
|-
| 31 || || ||
|-
| 32 || 3 || ||
|-
| 33 || || ||
|-
| 61 || 2 || ||
|-
| 62 || || ||
|-
| 63 || 3 || ||
|-
| 64 || || ||
|-
| 117 || 7 || ||
|}

Thermodynamics

Temperature affects the elasticity of a rubber band in an unusual way: heating causes a stretched rubber band to contract, and cooling causes it to expand. Stretching a rubber band will cause it to release heat, while releasing it after it has been stretched will make it absorb heat, causing its surroundings to become slightly cooler. This effect is due to the higher entropy of the unstressed state, which is more entangled and therefore has more states available. In other words, the ability to convert thermal energy into work while the rubber relaxes is allowed by the higher entropy of the relaxed state. The result is that a rubber band behaves somewhat like an ideal monatomic gas, inasmuch as (to a good approximation) elastic polymers do not store potential energy in stretched chemical bonds. No elastic work is done to "stretch" molecules when work is done upon these bulk polymers. Instead, all work done on the rubber is "released" (not stored) and appears immediately in the polymer as thermal energy. Conversely, when the polymer does work on its surroundings (such as contracting to lift an object), it converts thermal energy into work and cools, just as an ideal gas cools when it expands while doing work.
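The thermodynamic behaviour described above can be illustrated with a toy model. For an ideal (Gaussian) freely-jointed polymer chain, statistical mechanics gives a purely entropic retractive force f = 3·k_B·T·x/(N·b²), which is proportional to absolute temperature, so a stretched band pulls harder (and so contracts) when heated. The sketch below is illustrative only; the chain parameters (number of links, link length, extension) are arbitrary assumed values, not properties of any real rubber band.

```python
# Entropic (Gaussian-chain) model of rubber elasticity:
# retractive force f = 3 * k_B * T * x / (N * b**2) is proportional to T,
# so heating a stretched band increases its tension -> it contracts.

K_B = 1.380649e-23  # Boltzmann constant, J/K


def entropic_force(temp_k, extension_m, n_links=1000, link_len_m=5e-10):
    """Retractive force (newtons) of an ideal freely-jointed chain."""
    return 3 * K_B * temp_k * extension_m / (n_links * link_len_m ** 2)


x = 1e-7  # 100 nm end-to-end extension (illustrative)
f_cold = entropic_force(300.0, x)
f_hot = entropic_force(350.0, x)

print(f_hot > f_cold)  # heating raises the tension
print(f_hot / f_cold)  # force scales linearly with T: ratio is 350/300
```

Because the force here comes from entropy rather than from stretched bonds, any work done on the chain shows up as heat, which is exactly the gas-like behaviour the paragraph describes.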
Red rubber bands

In the UK during 2004, following complaints from the public about postal carriers creating litter by discarding the rubber bands they used to keep mail together, the Royal Mail introduced red bands for its workers to use. It was hoped that, as the bands were easier to spot than the traditional brown ones, and since only the Royal Mail used them, employees would see (and feel compelled to pick up) any red bands they had inadvertently dropped. As of 2006, some 342 million red bands were being used per year. The Royal Mail no longer uses red rubber bands as of about 2010; the exact date of their withdrawal is uncertain, presumably because different areas used up old stock at varying rates.

Rubber bands in orthodontics

Special rubber bands of medical-grade latex can be worn for orthodontic correction of teeth position, together with metal braces or clear aligners, to apply additional pressure on the teeth being straightened. These rubber bands are manufactured in different sizes for use in the varying steps of orthodontic treatment. They are often termed orthodontic elastics.

Ranger bands

This type of rubber band was popularized by use in the military. Ranger bands are essentially sections of tire inner tubing cut into various sizes. They have the advantage of being versatile, durable, and resistant to weather and abrasion. They are commonly used for lashings, and can also be used for makeshift handle grips, providing a strong high-friction surface with excellent shock absorption. Identical loops of inner tube are used by cavers and cave divers, and in that context are called snoopy loops by the British caving and cave diving community. When lost, they are recognizable as a common form of litter. Snoopy loops are easily cut from discarded car and motorcycle inner tubes using a pair of scissors; a knife cut may leave a notched edge, which can lead to tearing. Varying sizes of inner tube are used for different tasks.
Uses in caving include sealing the cuffs of oversuits and the collars of boots against the ingress of water, holding kneepads and elbow pads in place, and securing dive lines to small rocks; they have also been used in first aid to strap injured joints tightly in place. Technical divers use small snoopy loops made from bicycle inner tubes to prevent backup lights clipped to a dive harness from dangling, and larger loops cut from car tubes are used to stow hoses against sling or sidemount cylinders. The exact origin of snoopy loops is unknown and has been subject to much speculation. The practice has been claimed to have originated in Greece, spotted there by Cave Diving Group members in the late 1970s and then propagated in the Yorkshire Dales. Another claim is that snoopy loops were named by Dave Morris, a Cave Diving Group caver who noticed how they 'snooped' around boulders; it was considered a ridiculous name at the time. None of these claims is particularly plausible, as the use is obvious and is likely to have arisen independently in several places at earlier dates.

Elastration

In animal husbandry, rubber bands are used for docking and castration of livestock. The procedure involves banding the body part with a tight latex (rubber) band to restrict blood flow. As the blood flow diminishes, the cells within the gonads die and dehydrate, and the part eventually drops off.

Model use

Rubber bands have long been one of the methods of powering small free-flight model aircraft, the rubber band being anchored at the rear of the fuselage and connected to the propeller at the front. To 'wind up' the 'engine', the propeller is repeatedly turned, twisting the rubber band. When the band has had enough turns, the propeller is released and the model launched, the rubber band then turning the propeller rapidly until it has unwound.
One of the first to use this method was pioneer aerodynamicist George Cayley, who used rubber band-driven motors to power his small experimental models. These 'rubber motors' have also been used for powering small model boats.

Balls

A rubber band ball is a sphere of rubber bands made by using a knotted single band as a starting point and then wrapping rubber bands around the center until the desired size is achieved. The ball is usually made from 100% rubber bands, but some instructions call for using a marble, a crumpled piece of paper, or a ping-pong ball as a starting point.

Notable rubber band balls

The world's largest rubber band ball as of November 19, 2008, was created by Joel Waul of Lauderhill, Florida, who is the current world record holder according to Guinness World Records. The ball, which previously sat under a tarp in Waul's driveway, weighs 9,032 pounds (4,097 kg), is more than  tall (which implies about a  circumference), and consists of more than 700,000 rubber bands. It set the world record on November 13, 2008, in Lauderhill, Florida. The ball is now owned by Ripley's Believe It or Not!. Steve Milton of Eugene, Oregon, previously held the record for the biggest rubber band ball, beginning in 2006. During the construction of his ball, he was sponsored by OfficeMax, which sent him rubber bands to use. His ball comprised approximately 175,000 rubber bands, was  tall (circumference: ), and weighed . He began building the ball, with help from his family, in November 2005, and stored it in their garage. Before Steve Milton, the record was held by John Bain of Wilmington, Delaware, beginning in 1998. In 2003, his ball weighed around , consisted of over 850,000 rubber bands, and was  tall (circumference: ). He put the ball up for auction in 2005; he and his ball later participated in Guinness World Records Day 2006. The bands were donated by two companies: Alliance Rubber and Textrip Ltd./Stretchwell Inc.
The former world record was set in 1978.
Cardiovascular disease
Cardiovascular disease (CVD) is any disease involving the heart or blood vessels. CVDs constitute a class of diseases that includes coronary artery diseases (e.g. angina, heart attack), heart failure, hypertensive heart disease, rheumatic heart disease, cardiomyopathy, arrhythmia, congenital heart disease, valvular heart disease, carditis, aortic aneurysms, peripheral artery disease, thromboembolic disease, and venous thrombosis. The underlying mechanisms vary depending on the disease. It is estimated that dietary risk factors are associated with 53% of CVD deaths. Coronary artery disease, stroke, and peripheral artery disease involve atherosclerosis. This may be caused by high blood pressure, smoking, diabetes mellitus, lack of exercise, obesity, high blood cholesterol, poor diet, excessive alcohol consumption, and poor sleep, among other things. High blood pressure is estimated to account for approximately 13% of CVD deaths, while tobacco accounts for 9%, diabetes 6%, lack of exercise 6%, and obesity 5%. Rheumatic heart disease may follow untreated strep throat. It is estimated that up to 90% of CVD may be preventable. Prevention of CVD involves improving risk factors through healthy eating, exercise, avoidance of tobacco smoke and limiting alcohol intake. Treating risk factors such as high blood pressure, blood lipids and diabetes is also beneficial. Treating people who have strep throat with antibiotics can decrease the risk of rheumatic heart disease. The use of aspirin in people who are otherwise healthy is of unclear benefit. Cardiovascular diseases are the leading cause of death worldwide, in every region except Africa. Together, CVDs resulted in 17.9 million deaths (32.1% of all deaths) in 2015, up from 12.3 million (25.8%) in 1990. Deaths at a given age from CVD are more common and have been increasing in much of the developing world, while rates have declined in most of the developed world since the 1970s.
Coronary artery disease and stroke account for 80% of CVD deaths in males and 75% of CVD deaths in females. Most cardiovascular disease affects older adults. In the United States, 11% of people between 20 and 40 have CVD, while 37% of those between 40 and 60, 71% of those between 60 and 80, and 85% of those over 80 have CVD. The average age of death from coronary artery disease in the developed world is around 80, while it is around 68 in the developing world. CVD is typically diagnosed seven to ten years earlier in men than in women.

Types

There are many cardiovascular diseases involving the blood vessels, known as vascular diseases:

Coronary artery disease (coronary heart disease or ischemic heart disease)
Peripheral arterial disease – disease of the blood vessels that supply blood to the arms and legs
Cerebrovascular disease – disease of the blood vessels that supply blood to the brain (includes stroke)
Renal artery stenosis
Aortic aneurysm

There are also many cardiovascular diseases that involve the heart:

Cardiomyopathy – diseases of cardiac muscle
Hypertensive heart disease – diseases of the heart secondary to high blood pressure or hypertension
Heart failure – a clinical syndrome caused by the inability of the heart to supply sufficient blood to the tissues to meet their metabolic requirements
Pulmonary heart disease – failure of the right side of the heart with respiratory system involvement
Cardiac dysrhythmias – abnormalities of heart rhythm
Inflammatory heart diseases:
Endocarditis – inflammation of the inner layer of the heart, the endocardium; the structures most commonly involved are the heart valves
Inflammatory cardiomegaly
Myocarditis – inflammation of the myocardium, the muscular part of the heart, caused most often by viral infection and less often by bacterial infections, certain medications, toxins, and autoimmune disorders; it is characterized in part by infiltration of the heart by lymphocyte and monocyte types of white blood cells
Eosinophilic myocarditis – inflammation of the myocardium caused by pathologically activated eosinophilic white blood cells; this disorder differs from myocarditis in its causes and treatments
Valvular heart disease
Congenital heart disease – heart structure malformations existing at birth
Rheumatic heart disease – damage to the heart muscles and valves due to rheumatic fever, caused by Streptococcus pyogenes, a group A streptococcal infection

Risk factors

There are many risk factors for heart disease: age, sex, tobacco use, physical inactivity, non-alcoholic fatty liver disease, excessive alcohol consumption, unhealthy diet, obesity, genetic predisposition and family history of cardiovascular disease, raised blood pressure (hypertension), raised blood sugar (diabetes mellitus), raised blood cholesterol (hyperlipidemia), undiagnosed celiac disease, psychosocial factors, poverty and low educational status, air pollution, and poor sleep. While the individual contribution of each risk factor varies between different communities or ethnic groups, the overall contribution of these risk factors is very consistent. Some of these risk factors, such as age, sex or family history/genetic predisposition, are immutable; however, many important cardiovascular risk factors are modifiable by lifestyle change, social change, or drug treatment (for example, prevention of hypertension, hyperlipidemia, and diabetes). People with obesity are at increased risk of atherosclerosis of the coronary arteries.

Genetics

Cardiovascular disease in a person's parents increases their risk by about 3-fold, and genetics is an important risk factor for cardiovascular diseases. Genetic cardiovascular disease can occur either as a consequence of a single variant (Mendelian) or of polygenic influences. There are more than 40 inherited cardiovascular diseases that can be traced to a single disease-causing DNA variant, although these conditions are rare.
Most common cardiovascular diseases are non-Mendelian and are thought to be due to hundreds or thousands of genetic variants (known as single nucleotide polymorphisms), each associated with a small effect.

Age

Age is the most important risk factor in developing cardiovascular or heart diseases, with approximately a tripling of risk with each decade of life. Coronary fatty streaks can begin to form in adolescence. It is estimated that 82 percent of people who die of coronary heart disease are 65 and older. Simultaneously, the risk of stroke doubles every decade after age 55. Multiple explanations have been proposed for why age increases the risk of cardiovascular/heart diseases. One of them relates to serum cholesterol level: in most populations, the serum total cholesterol level increases as age increases. In men, this increase levels off around age 45 to 50 years; in women, the increase continues sharply until age 60 to 65 years. Aging is also associated with changes in the mechanical and structural properties of the vascular wall, which lead to the loss of arterial elasticity and reduced arterial compliance and may subsequently lead to coronary artery disease.

Sex

Men are at greater risk of heart disease than pre-menopausal women. Once past menopause, it has been argued that a woman's risk is similar to a man's, although more recent data from the WHO and UN dispute this. If a female has diabetes, she is more likely to develop heart disease than a male with diabetes. Women who have high blood pressure and had complications in their pregnancy have three times the risk of developing cardiovascular disease compared to women with normal blood pressure who had no complications in pregnancy. Coronary heart diseases are 2 to 5 times more common among middle-aged men than women. In a study done by the World Health Organization, sex contributed to approximately 40% of the variation in sex ratios of coronary heart disease mortality.
Another study reports similar results, finding that sex differences explain nearly half the risk associated with cardiovascular diseases. One of the proposed explanations for sex differences in cardiovascular diseases is hormonal difference. Among women, estrogen is the predominant sex hormone. Estrogen may have protective effects on glucose metabolism and the hemostatic system, and may have a direct effect in improving endothelial cell function. The production of estrogen decreases after menopause, and this may change the female lipid metabolism toward a more atherogenic form by decreasing the HDL cholesterol level while increasing LDL and total cholesterol levels. Between men and women, there are differences in body weight, height, body fat distribution, heart rate, stroke volume, and arterial compliance. In the very elderly, age-related large artery pulsatility and stiffness are more pronounced in women than in men. This may be caused by women's smaller body size and arterial dimensions, which are independent of menopause.

Tobacco

Cigarettes are the major form of smoked tobacco. Risks to health from tobacco use result not only from direct consumption of tobacco, but also from exposure to second-hand smoke. Approximately 10% of cardiovascular disease is attributed to smoking; however, people who quit smoking by age 30 have almost as low a risk of death as never-smokers.

Physical inactivity

Insufficient physical activity (defined as less than 5 × 30 minutes of moderate activity per week, or less than 3 × 20 minutes of vigorous activity per week) is currently the fourth leading risk factor for mortality worldwide. In 2008, 31.3% of adults aged 15 or older (28.2% of men and 34.4% of women) were insufficiently physically active. The risk of ischemic heart disease and diabetes mellitus is reduced by almost a third in adults who participate in 150 minutes of moderate physical activity each week (or the equivalent).
In addition, physical activity assists weight loss and improves blood glucose control, blood pressure, lipid profile and insulin sensitivity. These effects may, at least in part, explain its cardiovascular benefits.

Diet

High dietary intakes of saturated fat, trans-fats and salt, and low intake of fruits, vegetables and fish are linked to cardiovascular risk, although whether all these associations are causal is disputed. The World Health Organization attributes approximately 1.7 million deaths worldwide to low fruit and vegetable consumption. Frequent consumption of high-energy foods, such as processed foods that are high in fats and sugars, promotes obesity and may increase cardiovascular risk. The amount of dietary salt consumed may also be an important determinant of blood pressure levels and overall cardiovascular risk. There is moderate-quality evidence that reducing saturated fat intake for at least two years reduces the risk of cardiovascular disease. High trans-fat intake has adverse effects on blood lipids and circulating inflammatory markers, and elimination of trans-fats from diets has been widely advocated. In 2018, the World Health Organization estimated that trans fats were the cause of more than half a million deaths per year. There is evidence that higher consumption of sugar is associated with higher blood pressure and unfavorable blood lipids, and sugar intake also increases the risk of diabetes mellitus. High consumption of processed meats is associated with an increased risk of cardiovascular disease, possibly in part due to increased dietary salt intake.

Alcohol

The relationship between alcohol consumption and cardiovascular disease is complex, and may depend on the amount of alcohol consumed. There is a direct relationship between high levels of drinking alcohol and cardiovascular disease.
Drinking at low levels without episodes of heavy drinking may be associated with a reduced risk of cardiovascular disease, but there is evidence that associations between moderate alcohol consumption and protection from stroke are non-causal. At the population level, the health risks of drinking alcohol exceed any potential benefits.

Celiac disease

Untreated celiac disease can cause the development of many types of cardiovascular disease, most of which improve or resolve with a gluten-free diet and intestinal healing. However, delays in recognition and diagnosis of celiac disease can cause irreversible heart damage.

Sleep

A lack of good sleep, in amount or quality, is documented as increasing cardiovascular risk in both adults and teens. Recommendations suggest that infants typically need 12 or more hours of sleep per day, adolescents at least eight or nine hours, and adults seven or eight. About one-third of adult Americans get less than the recommended seven hours of sleep per night, and in a study of teenagers, just 2.2 percent got enough sleep; many of those studied also did not get good-quality sleep. Studies have shown that short sleepers, getting less than seven hours of sleep per night, have a 10 to 30 percent higher risk of cardiovascular disease. Sleep disorders, such as sleep-disordered breathing and insomnia, are also associated with a higher cardiometabolic risk. An estimated 50 to 70 million Americans have insomnia, sleep apnea or other chronic sleep disorders. In addition, sleep research displays differences by race and class. Short sleep and poor sleep tend to be more frequently reported in ethnic minorities than in whites. African-Americans report experiencing short durations of sleep five times more often than whites, possibly as a result of social and environmental factors. Black children and children living in disadvantaged neighborhoods have much higher rates of sleep apnea.
Socioeconomic disadvantage

Cardiovascular disease has a greater impact on low- and middle-income countries than on those with higher income. Although data on the social patterns of cardiovascular disease in low- and middle-income countries are limited, reports from high-income countries consistently demonstrate that low educational status or income is associated with a greater risk of cardiovascular disease. Policies that have resulted in increased socio-economic inequalities have been associated with greater subsequent socio-economic differences in cardiovascular disease, implying a cause-and-effect relationship. Psychosocial factors, environmental exposures, health behaviours, and health-care access and quality contribute to socio-economic differentials in cardiovascular disease. The Commission on Social Determinants of Health recommended that more equal distributions of power, wealth, education, housing, environmental factors, nutrition, and health care were needed to address inequalities in cardiovascular disease and non-communicable diseases.

Air pollution

Particulate matter has been studied for its short- and long-term exposure effects on cardiovascular disease. Currently, airborne particles under 2.5 micrometers in diameter (PM2.5) are the major focus, and concentration gradients are used to determine CVD risk. Overall, long-term PM exposure increases rates of atherosclerosis and inflammation. With regard to short-term exposure (2 hours), every 25 μg/m3 of PM2.5 resulted in a 48% increase in CVD mortality risk. In addition, after only 5 days of exposure, a rise in systolic (2.8 mmHg) and diastolic (2.7 mmHg) blood pressure occurred for every 10.5 μg/m3 of PM2.5. Other research has implicated PM2.5 in irregular heart rhythm, reduced heart rate variability (decreased vagal tone), and most notably heart failure. PM2.5 is also linked to carotid artery thickening and an increased risk of acute myocardial infarction.
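The dose-response figures quoted above can be combined into a small illustrative calculator. It assumes, purely for illustration, that the reported 48% mortality-risk increase per 25 μg/m³ extrapolates log-linearly and that the blood-pressure rise scales linearly; the studies report only the single increments, so these scalings are assumptions, not findings.

```python
# Illustrative scaling of the quoted short-term PM2.5 effects.
# The functional forms (log-linear RR, linear BP rise) are assumptions.

def cvd_mortality_rr(delta_pm25):
    """Relative risk of short-term CVD mortality for a PM2.5 increase in
    ug/m^3, extrapolated log-linearly from 48% per 25 ug/m^3 (assumed)."""
    return 1.48 ** (delta_pm25 / 25.0)


def bp_rise_mmhg(delta_pm25):
    """(systolic, diastolic) blood-pressure rise after ~5 days of exposure,
    scaled linearly from 2.8/2.7 mmHg per 10.5 ug/m^3 (assumed)."""
    scale = delta_pm25 / 10.5
    return 2.8 * scale, 2.7 * scale


print(round(cvd_mortality_rr(25.0), 2))  # 1.48 at the reference increment
print(bp_rise_mmhg(10.5))                # (2.8, 2.7) at the reference increment
```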
Cardiovascular risk assessment

Existing cardiovascular disease or a previous cardiovascular event, such as a heart attack or stroke, is the strongest predictor of a future cardiovascular event. Age, sex, smoking, blood pressure, blood lipids and diabetes are important predictors of future cardiovascular disease in people who are not known to have cardiovascular disease. These measures, and sometimes others, may be combined into composite risk scores to estimate an individual's future risk of cardiovascular disease. Numerous risk scores exist, although their respective merits are debated. Other diagnostic tests and biomarkers remain under evaluation, but currently these lack clear-cut evidence to support their routine use. They include family history, coronary artery calcification score, high-sensitivity C-reactive protein (hs-CRP), ankle–brachial pressure index, lipoprotein subclasses and particle concentration, lipoprotein(a), apolipoproteins A-I and B, fibrinogen, white blood cell count, homocysteine, N-terminal pro B-type natriuretic peptide (NT-proBNP), and markers of kidney function. High blood phosphorus is also linked to an increased risk.

Depression and traumatic stress

There is evidence that mental health problems, in particular depression and traumatic stress, are linked to cardiovascular diseases. Whereas mental health problems are known to be associated with risk factors for cardiovascular diseases such as smoking, poor diet, and a sedentary lifestyle, these factors alone do not explain the increased risk of cardiovascular diseases seen in depression, stress, and anxiety. Moreover, posttraumatic stress disorder is independently associated with increased risk for incident coronary heart disease, even after adjusting for depression and other covariates.
Occupational exposure

Little is known about the relationship between work and cardiovascular disease, but links have been established between certain toxins, extreme heat and cold, exposure to tobacco smoke, and mental health concerns such as stress and depression.

Non-chemical risk factors

A 2015 SBU report looking at non-chemical factors found an association for those:

with mentally stressful work, with a lack of control over their working situation, or with an effort–reward imbalance
who experience low social support at work, injustice, insufficient opportunities for personal development, or job insecurity
who work night schedules or have long working weeks
who are exposed to noise

Specifically, the risk of stroke was also increased by exposure to ionizing radiation. Hypertension develops more often in those who experience job strain and who do shift work. Differences in risk between women and men are small; however, men risk having and dying of heart attacks or stroke twice as often as women during working life.

Chemical risk factors

A 2017 SBU report found evidence that workplace exposure to silica dust, engine exhaust or welding fumes is associated with heart disease. Associations also exist for exposure to arsenic, benzopyrenes, lead, dynamite, carbon disulphide, carbon monoxide, metalworking fluids and occupational exposure to tobacco smoke. Working with the electrolytic production of aluminium, or the production of paper when the sulphate pulping process is used, is associated with heart disease. An association was also found between heart disease and exposure to compounds which are no longer permitted in certain work environments, such as phenoxy acids containing TCDD (dioxin), or asbestos. Workplace exposure to silica dust or asbestos is also associated with pulmonary heart disease.
There is evidence that workplace exposure to lead, carbon disulphide, phenoxy acids containing TCDD, as well as working in an environment where aluminium is being electrolytically produced, is associated with stroke. Somatic mutations As of 2017, evidence suggests that certain leukemia-associated mutations in blood cells may also lead to increased risk of cardiovascular disease. Several large-scale research projects looking at human genetic data have found a robust link between the presence of these mutations, a condition known as clonal hematopoiesis, and cardiovascular disease-related incidents and mortality. Radiation therapy Radiation treatments (RT) for cancer can increase the risk of heart disease and death, as observed in breast cancer therapy. Therapeutic radiation increases the risk of a subsequent heart attack or stroke by 1.5 to 4 times; the increase depends on the dose strength, volume, and location. Use of concomitant chemotherapy, e.g. anthracyclines, is an aggravating risk factor. The occurrence rate of RT-induced cardiovascular disease is estimated at between 10% and 30%. Cardiovascular side-effects of radiation therapy have been termed radiation-induced heart disease or radiation-induced cardiovascular disease. Symptoms are dose-dependent and include cardiomyopathy, myocardial fibrosis, valvular heart disease, coronary artery disease, heart arrhythmia and peripheral artery disease. Radiation-induced fibrosis, vascular cell damage and oxidative stress can lead to these and other late side-effect symptoms. Pathophysiology Population-based studies show that atherosclerosis, the major precursor of cardiovascular disease, begins in childhood. The Pathobiological Determinants of Atherosclerosis in Youth (PDAY) study demonstrated that intimal lesions appear in all the aortas and more than half of the right coronary arteries of youths aged 7–9 years.
Obesity and diabetes mellitus are linked to cardiovascular disease, as are a history of chronic kidney disease and hypercholesterolaemia. In fact, cardiovascular disease is the most life-threatening of the diabetic complications, and diabetics are two- to four-fold more likely to die of cardiovascular-related causes than nondiabetics. Screening Screening ECGs (either at rest or with exercise) are not recommended in those without symptoms who are at low risk. This includes those who are young without risk factors. In those at higher risk the evidence for screening with ECGs is inconclusive. Additionally, echocardiography, myocardial perfusion imaging, and cardiac stress testing are not recommended in those at low risk who do not have symptoms. Some biomarkers may add to conventional cardiovascular risk factors in predicting the risk of future cardiovascular disease; however, the value of some biomarkers is questionable. The ankle–brachial index (ABI), high-sensitivity C-reactive protein (hsCRP), and coronary artery calcium are also of unclear benefit in those without symptoms as of 2018. The NIH recommends lipid testing in children beginning at the age of 2 if there is a family history of heart disease or lipid problems. It is hoped that early testing will improve lifestyle factors in those at risk, such as diet and exercise. Screening and selection for primary prevention interventions has traditionally been done through absolute risk using a variety of scores (e.g., the Framingham or Reynolds risk scores). This stratification has separated people who receive lifestyle interventions (generally lower and intermediate risk) from those who receive medication (higher risk). The number and variety of risk scores available for use has multiplied, but their efficacy according to a 2016 review was unclear due to lack of external validation and impact analysis.
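The stratification described above, lifestyle interventions for lower- and intermediate-risk people and medication for those at higher risk, can be sketched as a simple threshold rule. The cut-offs used here (10% and 20% ten-year risk) are illustrative assumptions only; actual guideline thresholds vary by country and era.

```python
def prevention_strategy(ten_year_risk, low_cutoff=0.10, high_cutoff=0.20):
    """Map an absolute 10-year CVD risk (a probability, 0-1) to a prevention arm.

    The cut-off values are illustrative, not taken from any specific guideline.
    """
    if ten_year_risk < low_cutoff:
        return "lifestyle advice"
    elif ten_year_risk < high_cutoff:
        return "lifestyle intervention"
    else:
        return "lifestyle intervention + medication"
```

A rule like this makes the review's criticism concrete: everyone below the high cut-off is handled the same way, even though many events occur in those intermediate and low groups.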
Risk stratification models often lack sensitivity for population groups and do not account for the large number of negative events among the intermediate- and low-risk groups. As a result, future preventative screening appears to be shifting toward applying prevention according to randomized trial results of each intervention rather than large-scale risk assessment. Prevention Up to 90% of cardiovascular disease may be preventable if established risk factors are avoided. Currently practised measures to prevent cardiovascular disease include: Maintaining a healthy diet, such as the Mediterranean diet, or a vegetarian, vegan or other plant-based diet. Replacing saturated fat with healthier choices: clinical trials show that replacing saturated fat with polyunsaturated vegetable oil reduced CVD by 30%. Prospective observational studies show that in many populations lower intake of saturated fat coupled with higher intake of polyunsaturated and monounsaturated fat is associated with lower rates of CVD. Decreasing body fat if overweight or obese. The effect of weight loss is often difficult to distinguish from that of dietary change, and evidence on weight-reducing diets is limited. In observational studies of people with severe obesity, weight loss following bariatric surgery is associated with a 46% reduction in cardiovascular risk. Limiting alcohol consumption to the recommended daily limits. People who moderately consume alcoholic drinks have a 25–30% lower risk of cardiovascular disease. However, people who are genetically predisposed to consume less alcohol have lower rates of cardiovascular disease, suggesting that alcohol itself may not be protective. Excessive alcohol intake increases the risk of cardiovascular disease, and consumption of alcohol is associated with increased risk of a cardiovascular event in the day following consumption. Decreasing non-HDL cholesterol. Statin treatment reduces cardiovascular mortality by about 31%. Stopping smoking and avoiding second-hand smoke.
Stopping smoking reduces risk by about 35%. Getting at least 150 minutes (2 hours and 30 minutes) of moderate exercise per week. Lowering blood pressure, if elevated. A 10 mmHg reduction in blood pressure reduces risk by about 20%. Lowering blood pressure appears to be effective even at normal blood pressure ranges. Decreasing psychosocial stress. This measure may be complicated by imprecise definitions of what constitutes a psychosocial intervention. Mental stress–induced myocardial ischemia is associated with an increased risk of heart problems in those with previous heart disease. Severe emotional and physical stress leads to a form of heart dysfunction known as Takotsubo syndrome in some people. Stress, however, plays a relatively minor role in hypertension. Specific relaxation therapies are of unclear benefit. Insufficient sleep also raises the risk of high blood pressure. Adults need about 7–9 hours of sleep. Sleep apnea is also a major risk, as it causes breathing to stop briefly, which can put stress on the body and raise the risk of heart disease. Most guidelines recommend combining preventive strategies. There is some evidence that interventions aiming to reduce more than one cardiovascular risk factor may have beneficial effects on blood pressure, body mass index and waist circumference; however, evidence was limited and the authors were unable to draw firm conclusions on the effects on cardiovascular events and mortality. There is additional evidence to suggest that providing people with a cardiovascular disease risk score may reduce risk factors by a small amount compared to usual care. However, there was some uncertainty as to whether providing these scores had any effect on cardiovascular disease events. It is unclear whether or not dental care in those with periodontitis affects their risk of cardiovascular disease.
According to a 2021 WHO study, working 55 or more hours a week raises the risk of stroke by 35% and the risk of dying from heart conditions by 17%, compared to a 35–40-hour week. Diet A diet high in fruits and vegetables decreases the risk of cardiovascular disease and death. A 2021 review found that plant-based diets can reduce the risk of CVD, provided that the diet is composed of healthy plant foods; unhealthy plant-based diets do not provide benefits over diets including meat. A similar meta-analysis and systematic review also looked into dietary patterns and found "that diets lower in animal foods and unhealthy plant foods, and higher in healthy plant foods are beneficial for CVD prevention". A 2018 meta-analysis of observational studies concluded that "In most countries, a vegan diet is associated with a more favourable cardio-metabolic profile compared to an omnivorous diet." Evidence suggests that the Mediterranean diet may improve cardiovascular outcomes. There is also evidence that a Mediterranean diet may be more effective than a low-fat diet in bringing about long-term changes to cardiovascular risk factors (e.g., lower cholesterol level and blood pressure). The DASH diet (high in nuts, fish, fruits and vegetables, and low in sweets, red meat and fat) has been shown to reduce blood pressure, lower total and low-density lipoprotein cholesterol and improve metabolic syndrome, but its long-term benefits have been questioned. A high-fiber diet is associated with lower risks of cardiovascular disease. Worldwide, dietary guidelines recommend a reduction in saturated fat, and although the role of dietary fat in cardiovascular disease is complex and controversial, there is a long-standing consensus that replacing saturated fat with unsaturated fat in the diet is sound medical advice. Total fat intake has not been found to be associated with cardiovascular risk.
A 2020 systematic review found moderate-quality evidence that reducing saturated fat intake for at least 2 years caused a reduction in cardiovascular events. A 2015 meta-analysis of observational studies, however, did not find a convincing association between saturated fat intake and cardiovascular disease. Variation in what is used as a substitute for saturated fat may explain some differences in findings: the benefit from replacement with polyunsaturated fats appears greatest, while replacement of saturated fats with carbohydrates does not appear to have a beneficial effect. A diet high in trans fatty acids is associated with higher rates of cardiovascular disease, and in 2015 the Food and Drug Administration (FDA) determined that there was 'no longer a consensus among qualified experts that partially hydrogenated oils (PHOs), which are the primary dietary source of industrially produced trans fatty acids (IP-TFA), are generally recognized as safe (GRAS) for any use in human food'. There is conflicting evidence concerning whether dietary supplements of omega-3 fatty acids (a type of polyunsaturated essential fatty acid) added to the diet improve cardiovascular risk. The benefits of recommending a low-salt diet in people with high or normal blood pressure are not clear. In those with heart failure, after one study was excluded, the remaining trials show a trend toward benefit. Another review of dietary salt concluded that there is strong evidence that high dietary salt intake increases blood pressure and worsens hypertension, and that it increases the number of cardiovascular disease events, both as a result of the increased blood pressure and probably through other mechanisms. Moderate evidence was found that high salt intake increases cardiovascular mortality, and some evidence was found for an increase in overall mortality, strokes, and left ventricular hypertrophy.
Intermittent fasting Overall, the current body of scientific evidence is uncertain on whether intermittent fasting could prevent cardiovascular disease. Intermittent fasting may help people lose more weight than regular eating patterns, but is no different from energy-restriction diets. Medication Blood pressure medication reduces cardiovascular disease in people at risk, irrespective of age, the baseline level of cardiovascular risk, or baseline blood pressure. The commonly used drug regimens have similar efficacy in reducing the risk of all major cardiovascular events, although there may be differences between drugs in their ability to prevent specific outcomes. Larger reductions in blood pressure produce larger reductions in risk, and most people with high blood pressure require more than one drug to achieve an adequate reduction in blood pressure. Adherence to medications is often poor, and while mobile phone text messaging has been tried to improve adherence, there is insufficient evidence that it alters secondary prevention of cardiovascular disease. Statins are effective in preventing further cardiovascular disease in people with a history of cardiovascular disease. As the event rate is higher in men than in women, the decrease in events is more easily seen in men than women. In those at risk, but without a history of cardiovascular disease (primary prevention), statins decrease the risk of death and combined fatal and non-fatal cardiovascular disease. The benefit, however, is small. A United States guideline recommends statins in those who have a 12% or greater risk of cardiovascular disease over the next ten years. Niacin, fibrates and CETP inhibitors, while they may increase HDL cholesterol, do not affect the risk of cardiovascular disease in those who are already on statins. Fibrates lower the risk of cardiovascular and coronary events, but there is no evidence to suggest that they reduce all-cause mortality.
Anti-diabetic medication may reduce cardiovascular risk in people with type 2 diabetes, although evidence is not conclusive. A meta-analysis in 2009 including 27,049 participants and 2,370 major vascular events showed a 15% relative risk reduction in cardiovascular disease with more-intensive glucose lowering over an average follow-up period of 4.4 years, but an increased risk of major hypoglycemia. Aspirin has been found to be of only modest benefit in those at low risk of heart disease, as the risk of serious bleeding is almost equal to the protection against cardiovascular problems. In those at very low risk, including those over the age of 70, it is not recommended. The United States Preventive Services Task Force recommends against use of aspirin for prevention in women younger than 55 and men younger than 45 years old; however, it is recommended for some older people. The use of vasoactive agents for people with pulmonary hypertension with left heart disease or hypoxemic lung diseases may cause harm and unnecessary expense. Antibiotics for secondary prevention of coronary heart disease It was once thought that antibiotics might help patients with coronary disease to reduce the risk of heart attacks and strokes. However, evidence from 2021 suggests that antibiotics for secondary prevention of coronary heart disease are harmful, with increased mortality and occurrence of stroke, so their use for this purpose is not supported. Physical activity Exercise-based cardiac rehabilitation following a heart attack reduces the risk of death from cardiovascular disease and leads to fewer hospitalizations. There have been few high-quality studies of the benefits of exercise training in people with increased cardiovascular risk but no history of cardiovascular disease. A systematic review estimated that inactivity is responsible for 6% of the burden of disease from coronary heart disease worldwide.
The authors estimated that 121,000 deaths from coronary heart disease could have been averted in Europe in 2008 if people had not been physically inactive. Low-quality evidence from a limited number of studies suggests that yoga has beneficial effects on blood pressure and cholesterol. Tentative evidence suggests that home-based exercise programs may be more efficient at improving exercise adherence. Dietary supplements While a healthy diet is beneficial, antioxidant supplementation (vitamin E, vitamin C, etc.) or vitamin supplementation has not been shown to protect against cardiovascular disease and in some cases may possibly result in harm. Mineral supplements have also not been found to be useful. Niacin, a type of vitamin B3, may be an exception, with a modest decrease in the risk of cardiovascular events in those at high risk. Magnesium supplementation lowers high blood pressure in a dose-dependent manner. Magnesium therapy is recommended for people with ventricular arrhythmia associated with torsades de pointes who present with long QT syndrome, and for the treatment of people with digoxin intoxication-induced arrhythmias. There is no evidence that omega-3 fatty acid supplementation is beneficial. A 2022 review found that some dietary supplements, including micronutrients, may reduce risk factors for cardiovascular disease. Management Cardiovascular disease is treatable, with initial treatment primarily focused on diet and lifestyle interventions. Influenza may make heart attacks and strokes more likely, and therefore influenza vaccination may decrease the chance of cardiovascular events and death in people with heart disease. Proper CVD management necessitates a focus on MI and stroke cases due to their combined high mortality rate, keeping in mind the cost-effectiveness of any intervention, especially in low- and middle-income countries.
Regarding MI, strategies using aspirin, atenolol, streptokinase or tissue plasminogen activator (t-PA) have been compared for cost per quality-adjusted life-year (QALY) in regions of low and middle income. The cost for a single QALY was less than US$25 for aspirin and atenolol, about $680 for streptokinase, and $16,000 for t-PA. Aspirin, ACE inhibitors, beta-blockers, and statins used together for secondary CVD prevention in the same regions showed a cost of $350 per QALY. There are also surgical or procedural interventions that can save someone's life or prolong it. For heart valve problems, a person could have surgery to replace the valve. For arrhythmias, a pacemaker can be put in place to help reduce abnormal heart rhythms, and for a heart attack there are multiple options, two of which are coronary angioplasty and coronary artery bypass surgery. There is probably no additional benefit in terms of mortality and serious adverse events when blood pressure targets are lowered to ≤ 135/85 mmHg from ≤ 140–160/90–100 mmHg. Epidemiology Cardiovascular diseases are the leading cause of death worldwide and in all regions except Africa. In 2008, 30% of all global deaths were attributed to cardiovascular diseases. Mortality from cardiovascular disease is also higher in low- and middle-income countries, as over 80% of all global deaths caused by cardiovascular diseases occurred in those countries. It is also estimated that by 2030, over 23 million people will die from cardiovascular diseases each year. It is estimated that 60% of the world's cardiovascular disease burden will occur in the South Asian subcontinent, despite it accounting for only 20% of the world's population. This may be secondary to a combination of genetic predisposition and environmental factors. Organizations such as the Indian Heart Association are working with the World Heart Federation to raise awareness about this issue.
Research There is evidence that cardiovascular disease existed in pre-history, and research into cardiovascular disease dates from at least the 18th century. The causes, prevention, and treatment of all forms of cardiovascular disease remain active fields of biomedical research, with hundreds of scientific studies published on a weekly basis. Recent areas of research include the link between inflammation and atherosclerosis, the potential for novel therapeutic interventions, and the genetics of coronary heart disease.
Hypercholesterolemia
Hypercholesterolemia, also called high cholesterol, is the presence of high levels of cholesterol in the blood. It is a form of hyperlipidemia (high levels of lipids in the blood), hyperlipoproteinemia (high levels of lipoproteins in the blood), and dyslipidemia (any abnormalities of lipid and lipoprotein levels in the blood). Elevated levels of non-HDL cholesterol and LDL in the blood may be a consequence of diet, obesity, inherited (genetic) diseases (such as LDL receptor mutations in familial hypercholesterolemia), or the presence of other diseases such as type 2 diabetes and an underactive thyroid. Cholesterol is one of three major classes of lipids produced and used by all animal cells to form membranes. Plant cells manufacture phytosterols (similar to cholesterol), but in rather small quantities. Cholesterol is the precursor of the steroid hormones and bile acids. Since cholesterol is insoluble in water, it is transported in the blood plasma within protein particles (lipoproteins). Lipoproteins are classified by their density: very low density lipoprotein (VLDL), intermediate density lipoprotein (IDL), low density lipoprotein (LDL) and high density lipoprotein (HDL). All the lipoproteins carry cholesterol, but elevated levels of the lipoproteins other than HDL (termed non-HDL cholesterol), particularly LDL-cholesterol, are associated with an increased risk of atherosclerosis and coronary heart disease. In contrast, higher levels of HDL cholesterol are protective. Avoiding trans fats and replacing saturated fats in adult diets with polyunsaturated fats are recommended dietary measures to reduce total blood cholesterol and LDL in adults. In people with very high cholesterol (e.g., familial hypercholesterolemia), diet is often not sufficient to achieve the desired lowering of LDL, and lipid-lowering medications are usually required. 
If necessary, other treatments such as LDL apheresis or even surgery (for particularly severe subtypes of familial hypercholesterolemia) are performed. About 34 million adults in the United States have high blood cholesterol. Signs and symptoms Although hypercholesterolemia itself is asymptomatic, longstanding elevation of serum cholesterol can lead to atherosclerosis (build-up of fatty plaques in the arteries, so-called 'hardening of the arteries'). Over a period of decades, elevated serum cholesterol contributes to formation of atheromatous plaques in the arteries. This can lead to progressive narrowing of the involved arteries. Alternatively smaller plaques may rupture and cause a clot to form and obstruct blood flow. A sudden blockage of a coronary artery may result in a heart attack. A blockage of an artery supplying the brain can cause a stroke. If the development of the stenosis or occlusion is gradual, blood supply to the tissues and organs slowly diminishes until organ function becomes impaired. At this point tissue ischemia (restriction in blood supply) may manifest as specific symptoms. For example, temporary ischemia of the brain (commonly referred to as a transient ischemic attack) may manifest as temporary loss of vision, dizziness and impairment of balance, difficulty speaking, weakness or numbness or tingling, usually on one side of the body. Insufficient blood supply to the heart may cause chest pain, and ischemia of the eye may manifest as transient visual loss in one eye. Insufficient blood supply to the legs may manifest as calf pain when walking, while in the intestines it may present as abdominal pain after eating a meal. Some types of hypercholesterolemia lead to specific physical findings. 
For example, familial hypercholesterolemia (Type IIa hyperlipoproteinemia) may be associated with xanthelasma palpebrarum (yellowish patches underneath the skin around the eyelids), arcus senilis (white or gray discoloration of the peripheral cornea), and xanthomata (deposition of yellowish cholesterol-rich material) of the tendons, especially of the fingers. Type III hyperlipidemia may be associated with xanthomata of the palms, knees and elbows. Causes Hypercholesterolemia is typically due to a combination of environmental and genetic factors. Environmental factors include weight, diet, and stress. Loneliness is also a risk factor. Diet Diet has an effect on blood cholesterol, but the size of this effect varies between individuals. A diet high in sugar or saturated fats increases total cholesterol and LDL. Trans fats have been shown to reduce levels of high-density lipoprotein while increasing levels of LDL. A 2016 review found tentative evidence that dietary cholesterol is associated with higher blood cholesterol. As of 2018 there appears to be a modest positive, dose-related relationship between cholesterol intake and LDL cholesterol. Medical conditions and treatments A number of other conditions can also increase cholesterol levels including diabetes mellitus type 2, obesity, alcohol use, monoclonal gammopathy, dialysis therapy, nephrotic syndrome, hypothyroidism, Cushing's syndrome and anorexia nervosa. Several medications and classes of medications may interfere with lipid metabolism: thiazide diuretics, ciclosporin, glucocorticoids, beta blockers, retinoic acid, antipsychotics, certain anticonvulsants and medications for HIV as well as interferons. Genetics Genetic contributions typically arise from the combined effects of multiple genes, known as "polygenic," although in certain cases, they may stem from a single gene defect, as seen in familial hypercholesterolemia. 
In familial hypercholesterolemia, mutations may be present in the APOB gene (autosomal dominant), the autosomal recessive LDLRAP1 gene, the autosomal dominant familial hypercholesterolemia (HCHOLA3) variant of the PCSK9 gene, or the LDL receptor gene. Familial hypercholesterolemia affects about one in 250 individuals. The Lithuanian Jewish population may exhibit a genetic founder effect: one variation, G197del LDLR, which is implicated in familial hypercholesterolemia, has been dated to the 14th century. These variations have been the subject of debate. Diagnosis Cholesterol is measured in milligrams per deciliter (mg/dL) of blood in the United States and some other countries. In the United Kingdom, most European countries and Canada, millimoles per liter of blood (mmol/L) is the measure. For healthy adults, the UK National Health Service recommends upper limits of total cholesterol of 5 mmol/L, and low-density lipoprotein cholesterol (LDL) of 3 mmol/L. For people at high risk of cardiovascular disease, the recommended limit for total cholesterol is 4 mmol/L, and 2 mmol/L for LDL. In the United States, the National Heart, Lung, and Blood Institute within the National Institutes of Health classifies total cholesterol of less than 200 mg/dL as "desirable", 200 to 239 mg/dL as "borderline high", and 240 mg/dL or more as "high". There is no absolute cutoff between normal and abnormal cholesterol levels, and values must be considered in relation to other health conditions and risk factors. Higher levels of total cholesterol increase the risk of cardiovascular disease, particularly coronary heart disease. Levels of LDL or non-HDL cholesterol both predict future coronary heart disease; which is the better predictor is disputed. High levels of small dense LDL may be particularly adverse, although measurement of small dense LDL is not advocated for risk prediction. In the past, LDL and VLDL levels were rarely measured directly due to cost.
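The mg/dL and mmol/L figures above are related by cholesterol's molar mass (about 386.7 g/mol), giving a conversion factor of roughly 38.67 mg/dL per mmol/L. A minimal sketch:

```python
# Cholesterol's molar mass is ~386.65 g/mol, so 1 mmol/L corresponds to
# ~38.67 mg/dL (386.65 mg/mmol divided by 10 dL/L).
MG_PER_DL_PER_MMOL_PER_L = 38.67

def mmol_to_mgdl(mmol_per_l):
    """Convert a cholesterol concentration from mmol/L to mg/dL."""
    return mmol_per_l * MG_PER_DL_PER_MMOL_PER_L

def mgdl_to_mmol(mg_per_dl):
    """Convert a cholesterol concentration from mg/dL to mmol/L."""
    return mg_per_dl / MG_PER_DL_PER_MMOL_PER_L

# The UK upper limit of 5 mmol/L works out to roughly 193 mg/dL,
# close to the US "desirable" threshold of 200 mg/dL.
uk_limit_mgdl = mmol_to_mgdl(5.0)
```

Note that this factor is specific to cholesterol; triglycerides have a different molar mass and hence a different conversion factor.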
Levels of fasting triglycerides were taken as an indicator of VLDL levels (generally about 45% of fasting triglycerides is composed of VLDL), while LDL was usually estimated by the Friedewald formula: LDL ≈ total cholesterol − HDL − (0.2 × fasting triglycerides), with all concentrations in mg/dL (in mmol/L the triglyceride factor is 0.45). However, this equation is not valid for nonfasting blood samples or if fasting triglycerides are elevated (>4.5 mmol/L or >~400 mg/dL). Recent guidelines have, therefore, advocated the use of direct methods for measurement of LDL wherever possible. It may be useful to measure all lipoprotein subfractions (VLDL, IDL, LDL, and HDL) when assessing hypercholesterolemia, and measurement of apolipoproteins and lipoprotein(a) can also be of value. Genetic screening is now advised if a form of familial hypercholesterolemia is suspected. Classification Classically, hypercholesterolemia was categorized by lipoprotein electrophoresis and the Fredrickson classification. Newer methods, such as "lipoprotein subclass analysis", have offered significant improvements in understanding the connection with atherosclerosis progression and clinical consequences. If the hypercholesterolemia is hereditary (familial hypercholesterolemia), a family history of premature, earlier-onset atherosclerosis is more often found. Screening method The U.S. Preventive Services Task Force in 2008 strongly recommends routine screening for men 35 years and older and women 45 years and older for lipid disorders, and the treatment of abnormal lipids in people who are at increased risk of coronary heart disease. They also recommend routinely screening men aged 20 to 35 years and women aged 20 to 45 years if they have other risk factors for coronary heart disease. In 2016 they concluded that testing the general population under the age of 40 without symptoms is of unclear benefit. In Canada, screening is recommended for men 40 and older and women 50 and older. In those with normal cholesterol levels, screening is recommended once every five years.
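The Friedewald estimate above translates directly into code. This sketch works in mg/dL (hence the 0.2 factor) and refuses to produce an estimate when triglycerides exceed ~400 mg/dL, where the formula is not valid:

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Estimate LDL cholesterol (mg/dL) from a fasting lipid panel.

    The Friedewald formula is invalid for nonfasting samples or when
    triglycerides exceed ~400 mg/dL (~4.5 mmol/L).
    """
    if triglycerides > 400:
        raise ValueError("Friedewald formula invalid: triglycerides > ~400 mg/dL; "
                         "measure LDL directly instead.")
    # 0.2 * triglycerides approximates the VLDL cholesterol contribution.
    return total_chol - hdl - 0.2 * triglycerides

ldl = friedewald_ldl(total_chol=210, hdl=50, triglycerides=150)  # ~130 mg/dL
```

Building the validity check into the function mirrors the guideline advice: outside the formula's range, a direct LDL measurement should be used instead.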
Once people are on a statin, further testing provides little benefit except possibly to determine compliance with treatment. In the UK, after someone is diagnosed with familial hypercholesterolemia, clinicians, family members, or both contact first- and second-degree relatives to come forward for testing and treatment. Research suggests that clinician-only contact results in more people coming forward for testing. Treatment Treatment recommendations have been based on four risk levels for heart disease. For each risk level, LDL cholesterol levels representing goals and thresholds for treatment and other action are made. The higher the risk category, the lower the cholesterol thresholds. For those at high risk, a combination of lifestyle modification and statins has been shown to decrease mortality. Lifestyle Lifestyle changes recommended for those with high cholesterol include: smoking cessation, limiting alcohol consumption, increasing physical activity, and maintaining a healthy weight. Overweight or obese individuals can lower blood cholesterol by losing weight – on average, a kilogram of weight loss can reduce LDL cholesterol by 0.8 mg/dL. Diet Eating a diet with a high proportion of vegetables, fruit and dietary fibre, and low in fats, results in a modest decrease in total cholesterol. Eating dietary cholesterol causes a small rise in serum cholesterol, the magnitude of which can be predicted using the Keys and Hegsted equations. Dietary limits for cholesterol were proposed in the United States, but not in Canada, the United Kingdom, or Australia. However, in 2015 the Dietary Guidelines Advisory Committee in the United States removed its recommendation of limiting cholesterol intake. A 2020 Cochrane review found that replacing saturated fat with polyunsaturated fat resulted in a small decrease in cardiovascular disease by decreasing blood cholesterol. Other reviews have not found an effect from saturated fats on cardiovascular disease.
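The Keys equation mentioned above is commonly quoted as ΔChol = 1.35(2ΔS − ΔP) + 1.5ΔZ, where ΔS and ΔP are the changes in percent of energy from saturated and polyunsaturated fat and Z is the square root of dietary cholesterol per 1000 kcal. The sketch below implements that commonly cited form; treat the coefficients as the values usually quoted in the literature rather than a definitive statement, and note that the Hegsted equation uses different coefficients.

```python
import math

def keys_delta_tc(d_sat, d_poly, chol_after_mg_per_1000kcal, chol_before_mg_per_1000kcal):
    """Predicted change in serum total cholesterol (mg/dL) via the Keys equation,
    in its commonly cited form: 1.35*(2*dS - dP) + 1.5*(Z2 - Z1).

    d_sat, d_poly: change in % of energy from saturated / polyunsaturated fat.
    Z terms are square roots of dietary cholesterol in mg per 1000 kcal.
    """
    dz = (math.sqrt(chol_after_mg_per_1000kcal)
          - math.sqrt(chol_before_mg_per_1000kcal))
    return 1.35 * (2 * d_sat - d_poly) + 1.5 * dz

# Example: swap 5% of energy from saturated to polyunsaturated fat,
# dietary cholesterol unchanged -- the model predicts a fall in serum cholesterol.
predicted_change = keys_delta_tc(d_sat=-5, d_poly=5,
                                 chol_after_mg_per_1000kcal=200,
                                 chol_before_mg_per_1000kcal=200)
```

The square-root term captures the diminishing marginal effect of dietary cholesterol that the text alludes to: each additional milligram raises serum cholesterol less than the last.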
Trans fats are recognized as a potential risk factor for cholesterol-related cardiovascular disease, and avoiding them in an adult diet is recommended. The National Lipid Association recommends that people with familial hypercholesterolemia restrict intakes of total fat to 25–35% of energy intake, saturated fat to less than 7% of energy intake, and cholesterol to less than 200 mg per day. Changes in total fat intake in low calorie diets do not appear to affect blood cholesterol. Increasing soluble fiber consumption has been shown to reduce levels of LDL cholesterol, with each additional gram of soluble fiber reducing LDL by an average of 2.2 mg/dL (0.057 mmol/L). Increasing consumption of whole grains also reduces LDL cholesterol, with whole grain oats being particularly effective. Inclusion of 2 g per day of phytosterols and phytostanols and 10 to 20 g per day of soluble fiber decreases dietary cholesterol absorption. A diet high in fructose can raise LDL cholesterol levels in the blood. Medication Statins are the typically used medications, in addition to healthy lifestyle interventions. Statins can reduce total cholesterol by about 50% in the majority of people, and are effective in reducing the risk of cardiovascular disease in both people with and without pre-existing cardiovascular disease. In people without cardiovascular disease, statins have been shown to reduce all-cause mortality, fatal and non-fatal coronary heart disease, and strokes. Greater benefit is observed with the use of high-intensity statin therapy. Statins may improve quality of life when used in people without existing cardiovascular disease (i.e. for primary prevention). Statins decrease cholesterol in children with hypercholesterolemia, but no studies as of 2010 show improved outcomes and diet is the mainstay of therapy in childhood. Other agents that may be used include fibrates, nicotinic acid, and cholestyramine. 
These, however, are recommended only if statins are not tolerated or in pregnant women. Injectable antibodies against the protein PCSK9 (evolocumab, bococizumab, alirocumab) can reduce LDL cholesterol and have been shown to reduce mortality. Guidelines In the US, guidelines exist from the National Cholesterol Education Program (2004) and a joint body of professional societies led by the American Heart Association. In the UK, the National Institute for Health and Clinical Excellence has made recommendations for the treatment of elevated cholesterol levels, published in 2008, and a new guideline appeared in 2014 that covers the prevention of cardiovascular disease in general. The Task Force for the management of dyslipidaemias of the European Society of Cardiology and the European Atherosclerosis Society published guidelines for the management of dyslipidaemias in 2011. Specific populations Among people whose life expectancy is relatively short, hypercholesterolemia is not a risk factor for death from any cause, including coronary heart disease. Among people older than 70, hypercholesterolemia is not a risk factor for being hospitalized with myocardial infarction or angina. Statin drugs also carry increased risks in people older than 85. Because of this, medications which lower lipid levels should not be routinely used among people with limited life expectancy. The American College of Physicians recommends for hypercholesterolemia in people with diabetes: Lipid-lowering therapy should be used for secondary prevention of cardiovascular mortality and morbidity for all adults with known coronary artery disease and type 2 diabetes. Statins should be used for primary prevention against macrovascular (coronary artery disease, cerebrovascular disease, or peripheral vascular disease) complications in adults with type 2 diabetes and other cardiovascular risk factors. 
Once lipid-lowering therapy is initiated, people with type 2 diabetes mellitus should be taking at least moderate doses of a statin. For those people with type 2 diabetes who are taking statins, routine monitoring of liver function tests or muscle enzymes is not recommended except in specific circumstances. Alternative medicine A 2002 survey found that 1.1% of U.S. adults who used alternative medicine did so to treat high cholesterol. Consistent with previous surveys, this one found that the majority of individuals (55%) used it in conjunction with conventional medicine. A systematic review of the effectiveness of herbal medicines used in traditional Chinese medicine had inconclusive results due to the poor methodological quality of the included studies. A review of trials of phytosterols and/or phytostanols, at an average dose of 2.15 g/day, reported an average 9% lowering of LDL cholesterol. In 2000, the Food and Drug Administration approved the labeling of foods containing specified amounts of phytosterol esters or phytostanol esters as cholesterol-lowering; in 2003, an FDA Interim Health Claim Rule extended that label claim to foods or dietary supplements delivering more than 0.8 g/day of phytosterols or phytostanols. Some researchers, however, are concerned about diet supplementation with plant sterol esters and draw attention to the lack of long-term safety data. Epidemiology Rates of high total cholesterol in the United States in 2010 were just over 13%, down from 17% in 2000. Average total cholesterol in the United Kingdom is 5.9 mmol/L, while in rural China and Japan, average total cholesterol is 4 mmol/L. Rates of coronary artery disease are high in Great Britain, but low in rural China and Japan. Research directions Gene therapy is being studied as a potential treatment.
Biology and health sciences
Cardiovascular disease
Health
513093
https://en.wikipedia.org/wiki/Monoclinic%20crystal%20system
Monoclinic crystal system
In crystallography, the monoclinic crystal system is one of the seven crystal systems. A crystal system is described by three vectors. In the monoclinic system, the crystal is described by vectors of unequal lengths, as in the orthorhombic system. They form a parallelogram prism. Hence two pairs of vectors are perpendicular (meet at right angles), while the third pair makes an angle other than 90°. Bravais lattices Two monoclinic Bravais lattices exist: the primitive monoclinic and the base-centered monoclinic. For the base-centered monoclinic lattice, the primitive cell has the shape of an oblique rhombic prism; it can be constructed because the two-dimensional centered rectangular base layer can also be described with primitive rhombic axes. The base edge of this primitive cell equals half the diagonal of the conventional rectangular base, ½√(a² + b²). Crystal classes The table below organizes the space groups of the monoclinic crystal system by crystal class. It lists the International Tables for Crystallography space group numbers, followed by the crystal class name, its point group in Schoenflies notation, Hermann–Mauguin (international) notation, orbifold notation, and Coxeter notation, type descriptors, mineral examples, and the notation for the space groups. Sphenoidal is also called monoclinic hemimorphic, domatic is also called monoclinic hemihedral, and prismatic is also called monoclinic normal. The three monoclinic hemimorphic space groups are as follows:
a prism with a wallpaper group p2 cross-section
ditto with screw axes instead of rotation axes
ditto with screw axes as well as rotation axes, parallel, in between; in this case an additional translation vector is one half of a translation vector in the base plane plus one half of a perpendicular vector between the base planes. 
The four monoclinic hemihedral space groups include:
those with pure reflection at the base of the prism and halfway between the base planes
those with glide planes instead of pure reflection planes; the glide is one half of a translation vector in the base plane
those with both, in between each other; in this case an additional translation vector is this glide plus one half of a perpendicular vector between the base planes. In two dimensions The only monoclinic Bravais lattice in two dimensions is the oblique lattice.
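Because only the β angle of a monoclinic cell departs from 90°, its volume follows directly from the cell parameters as V = abc·sin β. A short sketch (the numeric parameters below are illustrative, not taken from any particular mineral):

```python
import math

def monoclinic_volume(a, b, c, beta_deg):
    """Unit-cell volume for the monoclinic system: alpha = gamma = 90°,
    so the general triclinic formula reduces to V = a*b*c*sin(beta)."""
    return a * b * c * math.sin(math.radians(beta_deg))

# A beta of exactly 90° would reduce the cell to orthorhombic, where V = a*b*c.
print(monoclinic_volume(5.0, 10.0, 6.0, 114.0))  # less than the 300 of a 90° cell
```

The sin β factor is the only correction relative to the orthorhombic case, which is why β is the angle conventionally reported for monoclinic minerals.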
Physical sciences
Crystallography
Physics
513257
https://en.wikipedia.org/wiki/Tape%20measure
Tape measure
A tape measure or measuring tape is a long, flexible ruler used to measure length or distance. It usually consists of a ribbon of cloth, plastic, fibreglass, or metal strip with linear measurement markings. Types Tape measures are often designed for specific uses or trades. Tapes may have different scales, be made of different materials, and be of different lengths depending on the intended use. Tape measures used in tailoring are called "sewing tape". Originally made from flexible cloth or plastic, sewing tapes are now usually made of fiberglass, which resists stretching and tearing. Sewing tape is mainly used to measure the subject's waistline. Measuring tapes designed for carpentry or construction often use a curved metallic ribbon that can remain stiff and straight when extended, but can also retract into a coil for convenient storage. This type of tape measure will have a hook on the end to aid measuring. The hook is connected to the tape with loose rivets through oval holes, and can move a distance equal to its thickness, to provide both inside and outside measurements that are accurate. Self-marking tape measures have a graphite tip, allowing the user to make accurate markings. Surveying requires measuring large distances with increased accuracy. Because of this, measuring tapes used for surveying may be made of invar, owing to its low rate of thermal expansion. Cased measuring tape There are two basic types of cased measuring tapes: spring return tape measures and manual return tape measures. While spring return tape measures are compact and self-retracting, manual return measures are designed for longer distances and often require manual winding, often via hand crank. History Prior to the advent of standardized measuring tapes, tailors employed cloth tapes without any markings. 
These tapes were manually inscribed with notches to denote specific measurements, enabling tailors to record the proportions of their clients. James Chesterman, a British metalworker, is credited with the invention of the first retractable tape measure in 1821. His design consisted of a spring-loaded cloth strip with marked measurements, housed within a compact case. Building upon his prior design, Chesterman would patent the first steel tape measure. By capitalizing on the declining popularity of crinoline dresses, Chesterman repurposed the surplus flat wire used in the dresses to create the flexible measuring tape. On 6 December 1864, William H. Bangs received a patent for the first design of a spring return tape measure. Bangs's design would later be improved upon by Alvin J. Fellows on 14 July 1868. Fellows' design differed from Bangs's by allowing the tape to be held in place via a spring-click mechanism. The first patented long tape measure in the United States was granted on 10 July 1860 to William H. Paine, and produced by George M. Eddy and Company. This design lacked any measurement points on it. Instead, it functioned as a singular unit of measurement, with the entire length of the tape representing a fixed distance. A brass piece, attached at the end of the tape, served as a reference marker. The length corresponding to the tape's full extension was then indicated on the case or crank mechanism. In 1871, Justus Roe introduced a cost-cutting technique to the tape measure. Employing rivets to attach small brass washers to the tape, he could mark inches and feet. To further enhance readability, small brass tags were affixed at five-foot intervals, each bearing a number indicating the total number of feet to that point. While this technique was not patented, Justus Roe and Sons popularized this design in their "Roe Electric Reel Tape Measures" during the late 19th and early 20th centuries. 
To compete with other products, they transitioned to etching or stamping increments and numbers directly onto the tape, eliminating the need for rivets and washers. It is important to note that the "electric" moniker was merely a marketing term and did not signify any electrical functionality. On 3 January 1922, Hiram A. Farrand patented his concave-convex tape. The concave nature of his design allowed the tape to stay rigid, even when extended. His product was later sold to Stanley Black & Decker. In 1947, the Swedish engineer Ture Anders Ljungberg began developing an improved version, and in 1954 the TALmeter was introduced. It features edges at both the end of the tape and the mouth to cut marks, so measures (including arcs) can be transferred without reading the scale, as well as a fold-out metal tongue at the rear, also with an edge, to be used when taking internal measures. The tape has three scales: the normal metric scale, the internal scale, and a diameter scale used, for instance, to measure sheet metal to be rolled into a cylinder of a certain diameter. It was produced by his own company, T A Ljungberg AB, until 2005, when it was bought by Hultafors, who retained the name "Talmeter" for the product they now refer to as a märkmeter (marker-meter). In March 1963, Stanley Tools introduced the PowerLock tape measure series. It was the first to use a molded ABS case, thumb-actuated tape lock, and riveted end hook. By 1989, Stanley was producing more than 200,000 tape measures every day. The first commercialized digital tape measure was released by Starrett in 1995 under the DigiTape brand. Design The basic design on which all modern spring tape measures are built can trace its origins back to an 1864 patent by a Meriden, Connecticut resident named William H. Bangs Jr. According to the text of his patent, Bangs's tape measure was an improvement on other versions previously designed. The spring tape measure has existed in the U.S. 
since Bangs's patent in 1864, but its usage did not become very popular due to the difficulty in communication from one town to another and the expense of the tape measure. In the late 1920s, carpenters began slowly adopting H. A. Farrand's design as the one more commonly used. Farrand's new design was a concave/convex tape made of metal which would stand straight out a distance of four to six feet. This design is the basis for most modern pocket tape measures used today. With the mass production of the integrated circuit (IC), the tape measure has also entered into the digital age with the digital tape measure. Some incorporate a digital screen to give measurement readouts in multiple formats. An early patent for this type of measure was published in 1977. There are also other styles of tape measures that have incorporated lasers and ultrasonic technology to measure the distance of an object with fairly reliable accuracy. Tape measures often have black and red measurements on a yellow background as this is the optimal color combination for readability. United States Most tapes sold in the United States are inches- and feet-based. Some tapes have additional marks in the shape of small black diamonds, appearing every 19.2 inches, used to mark out equal spacing for joists (five joists or trusses per US standard length of building material). Many US tapes also have special markings every 16 inches, which is a US standard interval for studs in construction: three spaces of 16 inches make exactly 48 inches, which is the US commercial width of a sheet of plywood, gyproc or particle board. The sale of dual Metric/US Customary scale measuring tapes is slowly becoming common in the United States. For example, in some Walmarts there are Hyper Tough brand tapes available in both US customary units and Metric units. Unlike US rulers, of which an overwhelming majority contain both centimeter and inch scales, tape measures are longer and thus traditionally have had scales in both inches and feet & inches. 
So, the inclusion of a metric scale requires the measuring device either to contain three scales of measurement or to eliminate one of the US Customary scales. The use of millimeter-only tape measures for housing construction is a part of the US metric building code. This code does not permit the use of centimeters. Millimeters produce whole (integer) numbers and reduce arithmetic errors, thus decreasing wastage due to such errors. The US-made measuring tape shown on the right is interesting in that it is a "Reverse Measuring Tape", whose measurements can be read from right to left just as well as from left to right. As a curious fact, in 1956, Justus Roe, a surveyor and tape-maker by trade, made a gold-plated tape measure and, as a publicity gimmick, presented it to the American professional baseball player Mickey Mantle. Australia The building industry was the first major industry grouping in Australia to complete its change to metric, finishing by January 1976. In this, the industry was grateful to the SAA (now Standards Australia) for the early production of the Standard AS 1155-1974 "Metric Units for Use in the Construction Industry", which specified the use of millimetres as the small unit for the metrication upgrade. In the adoption of the millimetre as the "small" unit of length for metrication (instead of the centimetre), the Metric Conversion Board leaned heavily on experience in the UK and within the ISO, where this decision had already been taken. This was formally stated as follows: "The metric units for linear measurement in building and construction will be the metre (m) and the millimetre (mm), with the kilometre (km) being used where required. This will apply to all sectors of the industry, and the centimetre (cm) shall not be used. … the centimetre should not be used in any calculation and it should never be written down". 
The logic of using the millimetre in this context was that the metric system had been so designed that there would exist a multiple or submultiple for every use. Decimal fractions would not have to be used. Since the tolerances on building components and building practice would rarely be less than one millimetre, the millimetre became the sub-unit most appropriate to this industry. Because of this, those in the building/construction industry mainly use millimetre-only tapes. While dual scale tapes showing both inches and centimetres are sold, these are mainly imported low-cost items, since it would be a restraint of trade to not allow their importation. United Kingdom Tape measures sold in the UK often have dual scales for metric and imperial units. Like the American tape measures described above, they also have markings every 16 inches and every 19.2 inches. Canada Tape measures sold in Canada often have dual scales for metric and imperial units. All tapes in imperial units have markings every 16 inches, but not at every 19.2 inches. Home construction in Canada is largely, if not entirely, in imperial measure. Accuracy and standardisation The accuracy of a tape measure is dependent on the ends of the tape and the markings printed onto the tape. The accuracy for the end of a retractable tape measure is dependent on the hook's sliding mechanism and thickness. The European Commission (EC) has standardised a non-compulsory classification system for certifying tape measure accuracy, with certified tapes falling into one of three classes of accuracy: Classes I, II, and III. 
For example, under specific conditions the tolerances for 10 m long tapes are:
Class I: accurate to ±1.1 mm over 10 m length
Class II: accurate to ±2.3 mm over 10 m length
Class III: accurate to ±4.6 mm over 10 m length
If a tape measure has been certified, then the class rating is printed onto the tape alongside other symbols, including the nominal length of the tape, the year of manufacture, the country of manufacture, and the name of the manufacturer. For retractable tapes, Class I tapes are the most accurate and tend to be the most expensive, while Class II tapes are the most common class available.
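The class tolerances quoted above for 10 m tapes follow a linear, length-dependent formula of the form tol = a + b·L. The coefficient pairs below are chosen to reproduce those figures and should be checked against the directive itself before being relied on:

```python
# Tolerance in millimetres for a certified tape of nominal length L metres.
# Coefficient pairs (a, b) reproduce the 10 m figures quoted above;
# treat them as illustrative rather than authoritative.
CLASS_COEFFS = {"I": (0.1, 0.1), "II": (0.3, 0.2), "III": (0.6, 0.4)}

def tolerance_mm(accuracy_class, length_m):
    a, b = CLASS_COEFFS[accuracy_class]
    return a + b * length_m

for cls in ("I", "II", "III"):
    print(cls, tolerance_mm(cls, 10))  # 1.1, 2.3 and 4.6 mm respectively
```

The linear form means the class gap widens with tape length: a 30 m Class III tape is allowed several times the error of a Class I tape of the same length.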
Technology
Measuring instruments
null
513591
https://en.wikipedia.org/wiki/Viticulture
Viticulture
Viticulture ("vine-growing"), viniculture ("wine-growing"), or winegrowing is the cultivation and harvesting of grapes. It is a branch of the science of horticulture. While the native territory of Vitis vinifera, the common grape vine, ranges from Western Europe to the Persian shores of the Caspian Sea, the vine has demonstrated high levels of adaptability to new environments, hence viticulture can be found on every continent except Antarctica. The duties of a viticulturist include monitoring and controlling pests and diseases, fertilizing, irrigation, canopy management, monitoring fruit development and characteristics, deciding when to harvest, and vine pruning during the winter months. Viticulturists are often intimately involved with winemakers, because vineyard management and the resulting grape characteristics provide the basis from which winemaking can begin. A great number of varieties are now approved in the European Union as true grapes for winegrowing and viticulture. History The earliest evidence of grape vine cultivation and winemaking dates back 8,000 years. The history of viticulture is closely related to the history of wine, with evidence that humans cultivated wild grapes to make wine as far back as the Neolithic period. Evidence suggests that some of the earliest domestication of Vitis vinifera occurred in the area of the modern countries Georgia and Armenia. The oldest-known winery was discovered in the "Areni-1" cave in Vayots Dzor, Armenia. Dated to c. 4100 BC, the site contained a wine press, fermentation vats, jars, and cups. Archaeologists also found V. vinifera seeds and vines. Commenting on the importance of the find, the archaeologist Patrick McGovern said, "The fact that winemaking was already so well developed in 4000 BC suggests that the technology probably goes back much earlier." There is also evidence of grape domestication in the Near East in the early Bronze Age, around 3200 BC. 
Evidence of ancient viticulture is provided by cuneiform sources (ancient writing on clay tablets), plant remains, historical geography, and archaeological excavations. The remnants of ancient wine jars have been used to determine the culture of wine consumption and cultivated grape species. In addition to winemaking, grapes have been grown for the production of raisins. The earliest act of cultivation appears to have been the favoring of hermaphroditic members of the Vitis vinifera species over the barren male vines and the female vines, which were dependent on a nearby male for pollination. With the ability to pollinate itself, over time the hermaphroditic vines were able to sire offspring that were consistently hermaphroditic. At the end of the 5th century BC, the Greek historian Thucydides wrote that the peoples of the Mediterranean began to emerge from barbarism when they learned to cultivate the olive and the vine. Thucydides was most likely referencing the time between 3000 BC and 2000 BC, when viticulture emerged in force in Asia Minor, Greece, and the Cyclades Islands of the Aegean Sea. During this period, grape cultivation developed from an aspect of local consumption to an important component of international economies and trade. Roman From 1200 BC to 900 BC, the Phoenicians developed viticulture practices that were later used in Carthage. Around 500 BC, the Carthaginian writer Mago recorded such practices in a two-volume work that was one of the few artifacts to survive the Roman destruction of Carthage during the Third Punic War. The Roman statesman Cato the Elder was influenced by these texts, and around 160 BC he wrote De Agricultura, which expounded on Roman viticulture and agriculture. Around 65 AD, the Roman writer Columella produced the most detailed work on Roman viticulture in his twelve-volume text De Re Rustica. Columella's work is one of the earliest to detail trellis systems for raising vines off the ground. Columella advocated the use of stakes versus the previously accepted practice of training vines to grow up along tree trunks. 
The benefits of using stakes over trees were largely to minimize the dangers associated with climbing trees, which was necessary to prune the dense foliage in order to give the vines sunlight, and later to harvest them. Roman expansion across Western Europe brought Roman viticulture to the areas that would become some of the world's best-known winegrowing regions: the Spanish Rioja, the German Mosel, and the French Bordeaux, Burgundy and Rhône. Roman viticulturists were among the first to identify steep hillsides as one of the better locations to plant vines, because cool air runs downhill and gathers at the bottom of valleys. While some cool air is beneficial, too much can rob the vine of the heat it needs for photosynthesis, and in winter it increases the risk of frost. Medieval Catholic monks (particularly the Cistercians) were the most prominent viticulturists of the Middle Ages. Around this time, an early system of Metayage emerged in France with laborers (Prendeur) working the vineyards under contractual agreements with the landowners (Bailleur). In most cases, the prendeurs were given flexibility in selecting their crop and developing their own vineyard practice. In northern Europe, the weather and climate posed difficulties for grape cultivation, so certain species were selected that better suited the environment. Most vineyards grew white varieties of grape, which are more resistant to the damp and cold climates. A few species of red grape, such as the Pinot Noir, were also introduced. The illuminated manuscript Les Très Riches Heures du duc de Berry dates back to 1416 and depicts horticulture and viticulture in France. The images illustrate peasants bending down to prune grapes from vines behind castle walls. Additional illustrations depict grape vines being harvested, with each vine being cut to three spurs around knee height. Many of the viticultural practices developed in this time period would become staples of European viticulture until the 18th century. 
Varietals were studied more intently to see which vines were the most suitable for a particular area. Around this time, an early concept of terroir emerged as wines from particular places began to develop a reputation for uniqueness. The concept of pruning for quality over quantity emerged, mainly through Cistercian labors, though it would create conflict between the rich landowners who wanted higher quality wines and the peasant laborers whose livelihood depended on the quantity of wine they could sell. Riesling is a famous example of this pursuit of higher-quality wine: in 1435, Count John IV of Katzenelnbogen started this successful tradition. In Burgundy, the Cistercian monks developed the concept of cru vineyards as homogeneous pieces of land that consistently produce wines each vintage that are similar. In areas like the Côte-d'Or, the monks divided the land into separate vineyards, many of which still exist today, like Montrachet and La Romanée. In mythology and religion In Greek mythology, the demigod Dionysus (Bacchus in Roman mythology), son of Zeus, invented the grapevine and the winepress. When his closest satyr friend died while trying to bring him a vine that Dionysus deemed important, Dionysus forced the vine to bear fruit. His fame spread, and he finally became a god. The Bible makes numerous references to wine and grapevines, both symbolically and literally. Grapes are first mentioned when Noah grows them on his farm (Genesis 9:20–21).
Technology
Agriculture_2
null
513668
https://en.wikipedia.org/wiki/Salviniales
Salviniales
The order Salviniales (formerly known as the Hydropteridales and including the former Marsileales) is an order of ferns in the class Polypodiopsida. Description Salviniales are all aquatic and differ from all other ferns in being heterosporous, meaning that they produce two different types of spore (megaspores and microspores) that develop into two different types of gametophyte (female and male gametophytes, respectively), and in that their gametophytes are endosporic, meaning that they never grow outside the spore wall and cannot become larger than the spores that produced them. The megasporangia each produce a single megaspore. In being heterosporous with endosporic gametophytes, they are more similar to seed plants than to other ferns. The fertile and sterile leaves are dimorphic, taking on a different shape, and the leaves bear anastomosing veins. Aerenchyma is frequently present in roots, shoots, and petioles (leaf stalks). The ferns of this order vary radically in form and do not look particularly fern-like. Species of the family Salviniaceae are natant (floating), while those of the family Marsileaceae are rooted. However, the natant species may temporarily grow on wet mud during times of low water, and the Marsileaceae may grow as emergent species, depending on species and location. The group also has the smallest known genomes of all ferns. One genus, Azolla, is amongst the fastest-growing plants on Earth and caused a cooling of the climate in the Azolla event about 50 million years ago. There is a well-known fossil member of the Marsileales, Hydropteris (incertae sedis). Classification In the molecular phylogenetic classification of Smith et al. (2006), the Salviniales were placed in the leptosporangiate ferns, class Polypodiopsida. Two families, Marsileaceae and Salviniaceae, were recognized. The linear sequence of Christenhusz et al. 
(2011), intended for compatibility with the classification of Chase and Reveal (2009), which placed all land plants in Equisetopsida, reclassified Smith's Polypodiopsida as subclass Polypodiidae and placed the Salviniales there. The circumscription of the order and its families was not changed, and that circumscription and placement in Polypodiidae has subsequently been followed in the classifications of Christenhusz and Chase (2014) and PPG I (2016). The likely phylogenetic relationships between the two families and five genera of the Salviniales are shown in the following diagram.
Biology and health sciences
Ferns
Plants
513821
https://en.wikipedia.org/wiki/5-cell
5-cell
In geometry, the 5-cell is the convex 4-polytope with Schläfli symbol {3,3,3}. It is a 5-vertex four-dimensional object bounded by five tetrahedral cells. It is also known as a C5, hypertetrahedron, pentachoron, pentatope, pentahedroid, tetrahedral pyramid, or 4-simplex (Coxeter's α4 polytope), the simplest possible convex 4-polytope, and is analogous to the tetrahedron in three dimensions and the triangle in two dimensions. The 5-cell is a 4-dimensional pyramid with a tetrahedral base and four tetrahedral sides. The regular 5-cell is bounded by five regular tetrahedra, and is one of the six regular convex 4-polytopes (the four-dimensional analogues of the Platonic solids). A regular 5-cell can be constructed from a regular tetrahedron by adding a fifth vertex one edge length distant from all the vertices of the tetrahedron. This cannot be done in 3-dimensional space. The regular 5-cell is a solution to the problem: Make 10 equilateral triangles, all of the same size, using 10 matchsticks, where each side of every triangle is exactly one matchstick, and none of the triangles and matchsticks intersect one another. No solution exists in three dimensions. Properties The 5-cell is the 4-dimensional simplex, the simplest possible 4-polytope. In other words, the 5-cell is the four-dimensional analogue of the tetrahedron. It is formed by any five points which are not all in the same hyperplane (as a tetrahedron is formed by any four points which are not all in the same plane, and a triangle is formed by any three points which are not all in the same line). Any such five points constitute a 5-cell, though not usually a regular 5-cell. The regular 5-cell is not found within any of the other regular convex 4-polytopes except one: the 600-vertex 120-cell is a compound of 120 regular 5-cells. The 5-cell is self-dual, meaning its dual polytope is the 5-cell itself. Its maximal intersection with 3-dimensional space is the triangular prism. Its dichoral angle is cos⁻¹(1/4), approximately 75.52°. 
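A convenient way to see that five mutually equidistant points exist in four dimensions (but not in three) is to take the five standard basis vectors of R⁵, which all lie in a 4-dimensional hyperplane. This sketch checks that all 10 vertex pairs are the same distance apart:

```python
import itertools
import math

# Vertices of a regular 5-cell embedded in R^5: the five standard basis
# vectors, which all lie in the hyperplane x1 + ... + x5 = 1.
verts = [[1.0 if i == j else 0.0 for j in range(5)] for i in range(5)]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

edges = [dist(p, q) for p, q in itertools.combinations(verts, 2)]
print(len(edges))                                          # 10 edges
print(all(abs(d - math.sqrt(2)) < 1e-12 for d in edges))   # True: all equal
```

This is the same trick used for the equilateral triangle in R³ and the regular tetrahedron in R⁴: the regular n-simplex is easiest to describe one dimension up.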
It is the first in the sequence of 6 convex regular 4-polytopes, in order of volume at a given radius or number of vertexes. The convex hull of two 5-cells in dual configuration is the disphenoidal 30-cell, dual of the bitruncated 5-cell. As a configuration This configuration matrix represents the 5-cell. The rows and columns correspond to vertices, edges, faces, and cells. The diagonal numbers say how many of each element occur in the whole 5-cell. The nondiagonal numbers say how many of the column's element occur in or at the row's element. This self-dual polytope's matrix is identical to its 180 degree rotation. The k-faces can be read as rows left of the diagonal, while the k-figures are read as rows after the diagonal. All these elements of the 5-cell are enumerated in Branko Grünbaum's Venn diagram of 5 points, which is literally an illustration of the regular 5-cell in projection to the plane. Geodesics and rotations The 5-cell has only digon central planes through vertices. It has 10 digon central planes, where each vertex pair is an edge, not an axis, of the 5-cell. Each digon plane is orthogonal to 3 others, but completely orthogonal to none of them. The characteristic isoclinic rotation of the 5-cell has, as pairs of invariant planes, those 10 digon planes and their completely orthogonal central planes, which are 0-gon planes which intersect no 5-cell vertices. There are only two ways to make a circuit of the 5-cell through all 5 vertices along 5 edges, so there are two discrete Hopf fibrations of the great digons of the 5-cell. Each of the two fibrations corresponds to a left-right pair of isoclinic rotations which each rotate all 5 vertices in a circuit of period 5. The 5-cell has only two distinct period 5 isoclines (those circles through all 5 vertices), each of which acts as the single isocline of a right rotation and the single isocline of a left rotation in two different fibrations. 
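The element counts in the configuration matrix described above are binomial coefficients: a k-face of the 4-simplex is any choice of k+1 of its 5 vertices. A quick check:

```python
from math import comb

# Diagonal of the 5-cell's configuration matrix: number of k-faces,
# i.e. choose k+1 of the 5 vertices.
counts = [comb(5, k + 1) for k in range(4)]
print(counts)  # [5, 10, 10, 5] -> vertices, edges, faces, cells

# An off-diagonal example: each vertex lies on comb(4, 1) = 4 edges,
# and each edge contains comb(2, 1) = 2 vertices.
print(comb(4, 1), comb(2, 1))  # 4 2
```

The palindromic count [5, 10, 10, 5] reflects the self-duality noted earlier: the matrix reads the same after a 180-degree rotation.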
Below, a spinning 5-cell is visualized with the fourth dimension squashed and displayed as colour. The Clifford torus is depicted in its rectangular (wrapping) form. Projections The A4 Coxeter plane projects the 5-cell into a regular pentagon and pentagram. The A3 Coxeter plane projection of the 5-cell is that of a square pyramid. The A2 Coxeter plane projection of the regular 5-cell is that of a triangular bipyramid (two tetrahedra joined face-to-face) with the two opposite vertices centered. Irregular 5-cells In the case of simplexes such as the 5-cell, certain irregular forms are in some sense more fundamental than the regular form. Although regular 5-cells cannot fill 4-space or the regular 4-polytopes, there are irregular 5-cells which do. These characteristic 5-cells are the fundamental domains of the different symmetry groups which give rise to the various 4-polytopes. Orthoschemes A 4-orthoscheme is a 5-cell where all 10 faces are right triangles. (The 5 vertices form 5 tetrahedral cells face-bonded to each other, with a total of 10 edges and 10 triangular faces.) An orthoscheme is an irregular simplex that is the convex hull of a tree in which all edges are mutually perpendicular. In a 4-dimensional orthoscheme, the tree consists of four perpendicular edges connecting all five vertices in a linear path that makes three right-angled turns. The elements of an orthoscheme are also orthoschemes (just as the elements of a regular simplex are also regular simplexes). Each tetrahedral cell of a 4-orthoscheme is a 3-orthoscheme, and each triangular face is a 2-orthoscheme (a right triangle). Orthoschemes are the characteristic simplexes of the regular polytopes, because each regular polytope is generated by reflections in the bounding facets of its particular characteristic orthoscheme. 
For example, the special case of the 4-orthoscheme with equal-length perpendicular edges is the characteristic orthoscheme of the 4-cube (also called the tesseract or 8-cell), the 4-dimensional analogue of the 3-dimensional cube. If the four perpendicular edges of the 4-orthoscheme are of unit length, then all its edges are of length 1, √2, √3, or 2, precisely the chord lengths of the unit 4-cube (the lengths of the 4-cube's edges and its various diagonals). Therefore this 4-orthoscheme fits within the 4-cube, and the 4-cube (like every regular convex polytope) can be dissected into instances of its characteristic orthoscheme. A 3-orthoscheme is easily illustrated, but a 4-orthoscheme is more difficult to visualize. A 4-orthoscheme is a tetrahedral pyramid with a 3-orthoscheme as its base. It has four more edges than the 3-orthoscheme, joining the four vertices of the base to its apex (the fifth vertex of the 5-cell). Pick out any one of the 3-orthoschemes of the six shown in the 3-cube illustration. Notice that it touches four of the cube's eight vertices, and those four vertices are linked by a 3-edge path that makes two right-angled turns. Imagine that this 3-orthoscheme is the base of a 4-orthoscheme, so that from each of those four vertices, an unseen 4-orthoscheme edge connects to a fifth apex vertex (which is outside the 3-cube and does not appear in the illustration at all). Although the four additional edges all reach the same apex vertex, they will all be of different lengths. The first of them, at one end of the 3-edge orthogonal path, extends that path with a fourth orthogonal edge by making a third 90 degree turn and reaching perpendicularly into the fourth dimension to the apex. The second of the four additional edges is a diagonal of a cube face (not of the illustrated 3-cube, but of another of the tesseract's eight 3-cubes). The third additional edge is a diagonal of a 3-cube (again, not the original illustrated 3-cube). 
The fourth additional edge (at the other end of the orthogonal path) is a long diameter of the tesseract itself, of length 2. It reaches through the exact center of the tesseract to the antipodal vertex (a vertex of the opposing 3-cube), which is the apex. Thus the characteristic 5-cell of the 4-cube has four edges of length 1, three edges of length √2, two edges of length √3, and one edge of length 2. The 4-cube can be dissected into 24 such 4-orthoschemes eight different ways, with six 4-orthoschemes surrounding each of four orthogonal tesseract long diameters. The 4-cube can also be dissected into 384 smaller instances of this same characteristic 4-orthoscheme, just one way, by all of its symmetry hyperplanes at once, which divide it into 384 4-orthoschemes that all meet at the center of the 4-cube. More generally, any regular polytope can be dissected into g instances of its characteristic orthoscheme that all meet at the regular polytope's center. The number g is the order of the polytope, the number of reflected instances of its characteristic orthoscheme that comprise the polytope when a single mirror-surfaced orthoscheme instance is reflected in its own facets. More generally still, characteristic simplexes are able to fill uniform polytopes because they possess all the requisite elements of the polytope. They also possess all the requisite angles between elements (from 90 degrees on down). The characteristic simplexes are the genetic codes of polytopes: like a Swiss Army knife, they contain one of everything needed to construct the polytope by replication. Every regular polytope, including the regular 5-cell, has its characteristic orthoscheme. There is a 4-orthoscheme which is the characteristic 5-cell of the regular 5-cell. It is a tetrahedral pyramid based on the characteristic tetrahedron of the regular tetrahedron. 
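The edge-length census of the 4-cube's characteristic orthoscheme can be verified directly: take the five vertices along a path of four mutually perpendicular unit edges of the unit tesseract and tally all ten pairwise distances. A short sketch:

```python
import math
from itertools import combinations
from collections import Counter

# Characteristic 4-orthoscheme of the unit 4-cube: five vertices joined by
# a path of four mutually perpendicular unit edges (three right-angled turns),
# running from one tesseract vertex to the antipodal vertex.
verts = [
    (0, 0, 0, 0), (1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1),
]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

lengths = Counter(round(dist(p, q), 12) for p, q in combinations(verts, 2))

# Four edges of length 1, three of sqrt(2), two of sqrt(3), one of 2 --
# exactly the chord lengths of the unit 4-cube.
assert lengths[round(1.0, 12)] == 4
assert lengths[round(math.sqrt(2), 12)] == 3
assert lengths[round(math.sqrt(3), 12)] == 2
assert lengths[round(2.0, 12)] == 1
print(sorted(lengths.elements()))
```

All ten faces of this 5-cell are right triangles, as required of an orthoscheme, since every chord here is the hypotenuse or leg of a right triangle built from perpendicular cube edges.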
The regular 5-cell can be dissected into 120 instances of this characteristic 4-orthoscheme just one way, by all of its symmetry hyperplanes at once, which divide it into 120 4-orthoschemes that all meet at the center of the regular 5-cell. The characteristic 5-cell (4-orthoscheme) of the regular 5-cell has four more edges than its base characteristic tetrahedron (3-orthoscheme), which join the four vertices of the base to its apex (the fifth vertex of the 4-orthoscheme, at the center of the regular 5-cell). The four edges of each 4-orthoscheme which meet at the center of a regular 4-polytope are of unequal length, because they are the four characteristic radii of the regular 4-polytope: a vertex radius, an edge center radius, a face center radius, and a cell center radius. If the regular 5-cell has unit radius and edge length , its characteristic 5-cell's ten edges have lengths , , around its exterior right-triangle face (the edges opposite the characteristic angles 𝟀, 𝝉, 𝟁), plus , , (the other three edges of the exterior 3-orthoscheme facet, the characteristic tetrahedron, which are the characteristic radii of the regular tetrahedron), plus , , , (edges which are the characteristic radii of the regular 5-cell). The 4-edge path along orthogonal edges of the orthoscheme is , , , , first from a regular 5-cell vertex to a regular 5-cell edge center, then turning 90° to a regular 5-cell face center, then turning 90° to a regular 5-cell tetrahedral cell center, then turning 90° to the regular 5-cell center. Isometries There are many lower symmetry forms of the 5-cell, including these found as uniform polytope vertex figures: The tetrahedral pyramid is a special case of a 5-cell, a polyhedral pyramid, constructed as a regular tetrahedron base in a 3-space hyperplane, and an apex point above the hyperplane. The four sides of the pyramid are made of triangular pyramid cells. Many uniform 5-polytopes have tetrahedral pyramid vertex figures with Schläfli symbols ( )∨{3,3}. 
Other uniform 5-polytopes have irregular 5-cell vertex figures. The symmetry of a vertex figure of a uniform polytope is represented by removing the ringed nodes of the Coxeter diagram. Construction As a Boerdijk–Coxeter helix A 5-cell can be constructed as a Boerdijk–Coxeter helix of five chained tetrahedra, folded into a 4-dimensional ring. The 10 triangle faces can be seen in a 2D net within a triangular tiling, with 6 triangles around every vertex, although folding into 4-dimensions causes edges to coincide. The purple edges form a regular pentagon which is the Petrie polygon of the 5-cell. The blue edges connect every second vertex, forming a pentagram which is the Clifford polygon of the 5-cell. The pentagram's blue edges are the chords of the 5-cell's isocline, the circular rotational path its vertices take during an isoclinic rotation, also known as a Clifford displacement. Net When a net of five tetrahedra is folded up in 4-dimensional space such that each tetrahedron is face bonded to the other four, the resulting 5-cell has a total of 5 vertices, 10 edges, and 10 faces. Four edges meet at each vertex, and three tetrahedral cells meet at each edge. Coordinates The simplest set of Cartesian coordinates is: (2,0,0,0), (0,2,0,0), (0,0,2,0), (0,0,0,2), (𝜙,𝜙,𝜙,𝜙), with edge length 2√2, where 𝜙 is the golden ratio. While these coordinates are not origin-centered, subtracting from each translates the 4-polytope's circumcenter to the origin with radius , with the following coordinates: The following set of origin-centered coordinates with the same radius and edge length as above can be seen as a hyperpyramid with a regular tetrahedral base in 3-space: Scaling these or the previous set of coordinates by gives unit-radius origin-centered regular 5-cells with edge lengths . 
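The golden-ratio coordinate set above is easy to sanity-check: all ten pairwise distances come out to 2√2, and subtracting the centroid (each coordinate (2 + 𝜙)/5, a value computed here rather than quoted from the article) puts every vertex at the same distance from the origin. A sketch:

```python
import math
from itertools import combinations

phi = (1 + math.sqrt(5)) / 2  # golden ratio

# The coordinate set given above: four permutations of (2,0,0,0) plus
# (phi, phi, phi, phi).
verts = [
    (2, 0, 0, 0), (0, 2, 0, 0), (0, 0, 2, 0), (0, 0, 0, 2),
    (phi, phi, phi, phi),
]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# All 10 edges have the same length, 2*sqrt(2), so the 5-cell is regular.
for p, q in combinations(verts, 2):
    assert abs(dist(p, q) - 2 * math.sqrt(2)) < 1e-12

# Centering: each coordinate of the centroid is (2 + phi)/5; after
# subtracting it, every vertex lies at the same circumradius.
c = (2 + phi) / 5
centered = [tuple(x - c for x in v) for v in verts]
radii = [math.sqrt(sum(x * x for x in v)) for v in centered]
assert max(radii) - min(radii) < 1e-12
print(round(radii[0], 6))  # the common circumradius
```

The identity 𝜙² = 𝜙 + 1 is what makes the distance from (2,0,0,0) to (𝜙,𝜙,𝜙,𝜙) equal the other edges: (2 − 𝜙)² + 3𝜙² = 4 − 4𝜙 + 4𝜙² = 8.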
The hyperpyramid has coordinates: Coordinates for the vertices of another origin-centered regular 5-cell with edge length 2 and radius are: Scaling these by to unit-radius and edge length gives: The vertices of a 4-simplex (with edge and radius 1) can be more simply constructed on a hyperplane in 5-space, as (distinct) permutations of (0,0,0,0,1) or (0,1,1,1,1); in these positions it is a facet of, respectively, the 5-orthoplex or the rectified penteract. Compound The compound of two 5-cells in dual configurations can be seen in this A5 Coxeter plane projection, with red and blue 5-cell vertices and edges. This compound has [[3,3,3]] symmetry, order 240. The intersection of these two 5-cells is a uniform bitruncated 5-cell. This compound can be seen as the 4D analogue of the 2D hexagram {6/2} and the 3D compound of two tetrahedra. Related polytopes and honeycombs The pentachoron (5-cell) is the simplest of 9 uniform polychora constructed from the [3,3,3] Coxeter group. It is in the {p,3,3} sequence of regular polychora with a tetrahedral vertex figure: the tesseract {4,3,3} and 120-cell {5,3,3} of Euclidean 4-space, and the hexagonal tiling honeycomb {6,3,3} of hyperbolic space. It is one of three {3,3,p} regular 4-polytopes with tetrahedral cells, along with the 16-cell {3,3,4} and 600-cell {3,3,5}. The order-6 tetrahedral honeycomb {3,3,6} of hyperbolic space also has tetrahedral cells. It is self-dual like the 24-cell {3,4,3}, having a palindromic {3,p,3} Schläfli symbol.
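The 5-space construction is the simplest to verify: the five distinct permutations of (0,0,0,0,1) all lie on the hyperplane x₁ + … + x₅ = 1 and are mutually equidistant (edge length √2, computed here), so they are the vertices of a regular 5-cell. A sketch:

```python
import math
from itertools import combinations

# The five (distinct) permutations of (0,0,0,0,1): the standard-basis
# vectors of 5-space, all lying on the hyperplane x1 + ... + x5 = 1.
verts = [tuple(1 if i == j else 0 for i in range(5)) for j in range(5)]
assert all(sum(v) == 1 for v in verts)  # all in the same hyperplane

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Every pair of vertices is the same distance apart (sqrt(2)), so the
# simplex they span is regular: a regular 5-cell sitting in 4-dimensional
# slice of 5-space, as a facet of the 5-orthoplex.
edge_lengths = {round(dist(p, q), 12) for p, q in combinations(verts, 2)}
assert len(edge_lengths) == 1
print(edge_lengths)
```

The same check applied to the permutations of (0,1,1,1,1) would confirm the second construction mentioned above.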
https://en.wikipedia.org/wiki/Megalosaurus
Megalosaurus
Megalosaurus (meaning "great lizard", from Greek μέγας, megas, meaning 'big', 'tall' or 'great' and σαῦρος, sauros, meaning 'lizard') is an extinct genus of large carnivorous theropod dinosaurs of the Middle Jurassic Epoch (Bathonian stage, 166 million years ago) of southern England. Although fossils from other areas have been assigned to the genus, the only certain remains of Megalosaurus come from Oxfordshire and date to the late Middle Jurassic. The earliest remains of Megalosaurus were described in the 17th century, and were initially interpreted as the remains of elephants or giants. Megalosaurus was named in 1824 by William Buckland, becoming the first genus of (non-bird) dinosaur to be validly named. The type species is M. bucklandii, named in 1827 by Gideon Mantell, after Buckland. In 1842, Megalosaurus was one of three genera on which Richard Owen based his Dinosauria. On Owen's directions a model was made as one of the Crystal Palace Dinosaurs, which greatly increased the public interest in prehistoric reptiles. Over 50 other species would eventually be classified under the genus; at first, this was because so few types of dinosaur had been identified, but the practice continued even into the 20th century after many other dinosaurs had been discovered. Today it is understood that none of these additional species was directly related to M. bucklandii, which is the only true Megalosaurus species. Because a complete skeleton of it has never been found, much is still unclear about its build. The first naturalists who investigated Megalosaurus mistook it for a gigantic lizard in length. In 1842, Owen concluded that it was no longer than . He still thought it was a quadruped, though. Modern scientists were able to obtain a more accurate picture, by comparing Megalosaurus with its direct relatives in the Megalosauridae. Megalosaurus was about long, weighing about . It was bipedal, walking on stout hindlimbs, its horizontal torso balanced by a horizontal tail. 
Its forelimbs were short, though very robust. Megalosaurus had a rather large head, equipped with long curved teeth. It was generally a robust and heavily muscled animal. At the time Megalosaurus lived, Europe formed an island archipelago around the Tethys Ocean, with Megalosaurus inhabiting an island formed by the London–Brabant Massif, where it likely served as the apex predator of its ecosystem, coexisting with other dinosaurs like the large sauropod Cetiosaurus. Discovery and naming Edward Lhuyd's tooth (specimen OU 1328) In 1699, Edward Lhuyd described what he believed to be a fish tooth (called Plectronites), later believed to be part of a belemnite, which was illustrated alongside the holotype tooth of the sauropod "Rutellum impicatum" and another tooth, from a theropod. Later studies found that the theropod tooth, known as specimen 1328 (University of Oxford coll. #1328; lost?) almost certainly was a tooth crown that belonged to an unknown species of Megalosaurus. OU 1328 has since been lost and it was not confidently assigned to Megalosaurus until the tooth was re-described by Delair & Sarjeant (2002). OU 1328 was collected near Caswell, near Witney, Oxfordshire sometime during the 17th century and became the third dinosaur fossil to ever be illustrated, after "Scrotum humanum" in 1677 and "Rutellum impicatum" in 1699. "Scrotum humanum" Megalosaurus may have been the first non-avian dinosaur to be described in the scientific literature. The earliest possible fossil of the genus, from the Taynton Limestone Formation, was the lower part of a femur, discovered in the 17th century. It was originally described by Robert Plot as a thigh bone of a Roman war elephant, and then as a biblical giant. Part of a bone was recovered from the Taynton Limestone Formation of Stonesfield limestone quarry, Oxfordshire in 1676. 
Sir Thomas Pennyson gave the fragment to Robert Plot, Professor of Chemistry at the University of Oxford and first curator of the Ashmolean Museum, who published a description and illustration in his Natural History of Oxfordshire in 1676. It was the first illustration of a dinosaur bone published. Plot correctly identified the bone as the lower extremity of the thighbone or femur of a large animal and he recognised that it was too large to belong to any species known to be living in England. He therefore at first concluded it to be the thigh bone of a Roman war elephant and later that of a giant human, such as those mentioned in the Bible. The bone has since been lost, but the illustration is detailed enough that some have since identified it as that of Megalosaurus. It has also been argued that this possible Megalosaurus bone was given the very first species name ever applied to an extinct dinosaur. Plot's engraving of the Cornwell bone was again used in a book by Richard Brookes in 1763. Brookes, in a caption, called it "Scrotum humanum", apparently comparing its appearance to a pair of "human testicles". However, it is possible that the attribution of this name stemmed from illustrator error, not Richard Brookes. In 1970, paleontologist Lambert Beverly Halstead pointed out that the similarity of Scrotum humanum to a modern species name, a so-called Linnaean "binomen" that has two parts, was not a coincidence. Linnaeus, the founder of modern taxonomy, had in the eighteenth century not merely devised a system for naming living creatures, but also for classifying geological objects. The book by Brookes was all about applying this latter system to curious stones found in England. According to Halstead, Brookes thus had deliberately used binomial nomenclature, and had in fact indicated the possible type specimen of a new biological genus. 
According to the rules of the International Code of Zoological Nomenclature (ICZN), the name Scrotum humanum in principle had priority over Megalosaurus because it was published first. That Brookes understood that the stone did not actually represent a pair of petrified testicles was irrelevant. Merely the fact that the name had not been used in subsequent literature meant that it could be removed from competition for priority, because the ICZN states that if a name has never been considered valid after 1899, it can be made a nomen oblitum, an invalid "forgotten name". In 1993, after the death of Halstead, his friend William A.S. Sarjeant submitted a petition to the International Commission on Zoological Nomenclature to formally suppress the name Scrotum in favour of Megalosaurus. He wrote that the supposed junior synonym Megalosaurus bucklandii should be made a conserved name to ensure its priority. However, the Executive Secretary of the ICZN at the time, Philip K. Tubbs, did not consider the petition to be admissible, concluding that the term "Scrotum humanum", published merely as a label for an illustration, did not constitute the valid creation of a new name, and stated that there was no evidence it was ever intended as such. Furthermore, the partial femur was too incomplete to definitely be referred to Megalosaurus and not a different, contemporary theropod. Buckland's research During the last part of the eighteenth century, the number of fossils in British collections quickly increased. According to a hypothesis published by science historian Robert Gunther in 1925, among them was a partial lower jaw of Megalosaurus. It was discovered about underground in a Stonesfield Slate mine during the early 1790s and was acquired in October 1797 by Christopher Pegge for 10s.6d. and added to the collection of the Anatomy School of Christ Church, Oxford. In the early nineteenth century, more discoveries were made. 
In 1815, John Kidd reported the find of bones of giant tetrapods, again at the Stonesfield quarry. The layers there are currently considered part of the Taynton Limestone Formation, dating to the mid-Bathonian stage of the Jurassic Period. The bones were apparently acquired by William Buckland, Professor of Geology at the University of Oxford and dean of Christ Church. Buckland also studied a lower jaw, according to Gunther the one bought by Pegge. Buckland did not know to what animal the bones belonged but, in 1818, after the Napoleonic Wars, the French comparative anatomist Georges Cuvier visited Buckland in Oxford and realised that they were those of a giant lizard-like creature. Buckland further studied the remains with his friend William Conybeare, who in 1821 referred to them as the "Huge Lizard". In 1822 Buckland and Conybeare, in a joint article to be included in Cuvier's Ossemens, intended to provide scientific names for both gigantic lizard-like creatures known at the time: the remains found near Maastricht would be named Mosasaurus – then seen as a land-dwelling animal – while for the British lizard Conybeare had devised the name "Megalosaurus", from the Greek μέγας, megas, "large". That year the publication failed to occur, but the physician James Parkinson had already announced the name "Megalosaurus" in 1822, illustrating one of the teeth and revealing that the creature was 40 feet long and eight feet high. It is generally considered that the name in 1822 was still a nomen nudum ("naked name"). Buckland, urged on by an impatient Cuvier, continued to work on the subject during 1823, letting his future wife Mary Morland provide drawings of the bones, which were to be the basis of the illustrative lithographs. Finally, on 20 February 1824, during the same meeting of the Geological Society of London in which Conybeare described a very complete specimen of Plesiosaurus, Buckland formally announced Megalosaurus. 
The descriptions of the bones in the Transactions of the Geological Society, in 1824, constitute a valid publication of the name. Megalosaurus was the first non-avian dinosaur genus named; the first whose remains had with certainty been scientifically described was Streptospondylus, in 1808 by Cuvier. By 1824, the material available to Buckland consisted of specimen OUM J13505, a piece of a right lower jaw with a single erupted tooth; OUM J13577, a posterior dorsal vertebra; OUM J13579, an anterior caudal vertebra; OUM J13576, a sacrum of five sacral vertebrae; OUM J13585, a cervical rib; OUM J13580, a rib; OUM J29881, an ilium of the pelvis; OUM J13563, a piece of the pubic bone; OUM J13565, a part of the ischium; OUM J13561, a thigh bone; and OUM J13572, the lower part of a second metatarsal. As he himself was aware, these did not all belong to the same individual; only the sacrum was articulated. Because they represented several individuals, the described fossils formed a syntype series. By modern standards, from these a single specimen has to be selected to serve as the type specimen on which the name is based. In 1990, Ralph Molnar chose the famous dentary (front part of the lower jaw), OUM J13505, as such a lectotype. Because he was unaccustomed to the deep dinosaurian pelvis, much taller than with typical reptiles, Buckland misidentified several bones, interpreting the pubic bone as a fibula and mistaking the ischium for a clavicle. Buckland identified the organism as being a giant animal belonging to the Sauria – the Lizards, at the time seen as including the crocodiles – and he placed it in the new genus Megalosaurus, repeating an estimate by Cuvier that the largest pieces he described indicated an animal 12 metres long in life. Etymology Buckland had not provided a specific name, as was not uncommon in the early nineteenth century, when the genus was still seen as the more essential concept. 
In 1826, Ferdinand von Ritgen gave this dinosaur a complete binomial, Megalosaurus conybeari, which however was not much used by later authors and is now considered a nomen oblitum. A year later, in 1827, Gideon Mantell included Megalosaurus in his geological survey of southeastern England, and assigned the species its current valid binomial name, Megalosaurus bucklandii. Until recently, the form Megalosaurus bucklandi was often used, a variant first published in 1832 by Christian Erich Hermann von Meyer – and sometimes erroneously ascribed to von Ritgen – but the original M. bucklandii has priority. Early reconstructions The first reconstruction was given by Buckland himself. He considered Megalosaurus to be a quadruped. He thought it was an "amphibian", i.e. an animal capable of both swimming in the sea and walking on land. Generally, in his mind Megalosaurus resembled a gigantic lizard, but Buckland already understood from the form of the thigh bone head that the legs were not so much sprawling as held rather upright. In the original description of 1824, Buckland repeated Cuvier's size estimate that Megalosaurus would have been 40 feet long with the weight of a seven foot tall elephant. However, this had been based on the remains present at Oxford. Buckland had also been hurried into naming his new reptile by a visit he had made to the fossil collection of Mantell, who during the lecture announced that he had acquired a fossil thigh bone of enormous magnitude, twice as long as the one just described. Today, this is known to have belonged to Iguanodon, or at least some iguanodontid, but at the time both men assumed this bone belonged to Megalosaurus also. Even taking into account the effects of allometry, heavier animals having relatively stouter bones, Buckland was forced in the printed version of his lecture to estimate the maximum length of Megalosaurus at 60 to 70 feet. 
The existence of Megalosaurus posed some problems for Christian orthodoxy, which typically held that suffering and death had only come into the world through Original Sin, which seemed irreconcilable with the presence of a gigantic devouring reptile during a pre-Adamitic phase of history. Buckland rejected the usual solution, that such carnivores would originally have been peaceful vegetarians, as infantile and claimed in one of the Bridgewater Treatises that Megalosaurus had played a beneficial role in creation by ending the lives of old and ill animals, "to diminish the aggregate amount of animal suffering". Around 1840, it became fashionable in England to espouse the concept of the transmutation of species as part of a general progressive development through time, as expressed in the work of Robert Chambers. In reaction, on 2 August 1841 Richard Owen during a lecture to the British Association for the Advancement of Science claimed that certain prehistoric reptilian groups had already attained the organisational level of present mammals, implying there had been no progress. Owen presented three examples of such higher level reptiles: Iguanodon, Hylaeosaurus and Megalosaurus. For these, the "lizard model" was entirely abandoned: they would have had an upright stance and a high metabolism. This also meant that earlier size estimates had been exaggerated. By simply adding the known length of the vertebrae, instead of extrapolating from a lizard, Owen arrived at a total body length for Megalosaurus of 30 feet. In the printed version of the lecture published in 1842, Owen united the three reptiles into a separate group: the Dinosauria. Megalosaurus was thus one of the three original dinosaurs. In 1852, Benjamin Waterhouse Hawkins was commissioned to build a life-sized concrete model of Megalosaurus for the exhibition of prehistoric animals at the Crystal Palace Park in Sydenham, where it remains to this day. 
Hawkins worked under the direction of Owen and the statue reflected Owen's ideas that Megalosaurus would have been a mammal-like quadruped. The sculpture in Crystal Palace Park shows a conspicuous hump on the shoulders and it has been suggested this was inspired by a set of high vertebral spines acquired by Owen in the early 1850s. Today, they are seen as a separate genus Becklespinax, but Owen referred them to Megalosaurus. The models at the exhibition created a general public awareness for the first time, at least in England, that ancient reptiles had existed. The presumption that carnivorous dinosaurs, like Megalosaurus, were quadrupeds was first challenged by the find of Compsognathus in 1859. That, however, was a very small animal, the significance of which for gigantic forms could be denied. In 1870, near Oxford, the type specimen of Eustreptospondylus was discovered – the first reasonably intact skeleton of a large theropod. It was clearly bipedal. Shortly afterwards, John Phillips created the first public display of a theropod skeleton in Oxford, arranging the known Megalosaurus bones, held by recesses in cardboard sheets, in a more or less natural position. During the 1870s, North American discoveries of large theropods, like Allosaurus, confirmed that they were bipedal. The Oxford University Museum of Natural History display contains most of the specimens from the original description by Buckland. Later finds of Megalosaurus bucklandii The quarries at Stonesfield, which were worked until 1911, continued to produce Megalosaurus bucklandii fossils, mostly single bones from the pelvis and hindlimbs. Vertebrae and skull bones are rare. In 2010, Roger Benson counted a total of 103 specimens from the Stonesfield Slate, from a minimum of seven individuals. It has been contentious whether this material represents just a single taxon. In 2004, Julia Day and Paul Barrett claimed that there were two morphotypes present, based on small differences in the thighbones. 
In 2008 Benson favoured this idea, but in 2010 concluded the differences were illusory. A maxilla fragment, specimen OUM J13506, was assigned to M. bucklandii by Thomas Huxley in 1869. In 1992 Robert Thomas Bakker claimed it represented a member of the Sinraptoridae; in 2007, Darren Naish thought it was a separate species belonging to the Abelisauroidea. In 2010, Benson pointed out that the fragment was basically indistinguishable from other known M. bucklandii maxillae, to which it had in fact not been compared by the other authors. Apart from the finds in the Taynton Limestone Formation, in 1939 Sidney Hugh Reynolds referred remains to Megalosaurus that had been found in the older Chipping Norton Limestone Formation dating from the early Bathonian, about 30 single teeth and bones. Though the age disparity makes it problematic to assume an identity with Megalosaurus bucklandii, in 2009 Benson could not establish any relevant anatomical differences with M. bucklandii among the remains found at one site, the New Park Quarry, and therefore affirmed the reference to that species. However, at another site, the Oakham Quarry, the material contained one bone, an ilium, that was clearly dissimilar. Sometimes trace fossils have been referred to Megalosaurus or to the ichnogenus Megalosauripus. In 1997, a famous group of fossilised footprints (ichnites) was found in a limestone quarry at Ardley, 20 kilometres northeast of Oxford. They were thought to have been made by Megalosaurus and possibly also some left by Cetiosaurus. There are replicas of some of these footprints, set across the lawn of the Oxford University Museum of Natural History. One track was of a theropod accelerating from walking to running. According to Benson, such referrals are unprovable, as the tracks show no traits unique to Megalosaurus. Certainly they should be limited to finds that are of the same age as Megalosaurus bucklandii. 
In 2024 five more sets of tracks were discovered at a nearby Bicester quarry, with one of them showing clear features of large tridactyl theropod feet distinctive of Megalosaurus. Finds from sites outside England, especially in France, have in the nineteenth and twentieth century been referred to M. bucklandii. In 2010 Benson considered these as either clearly different or too fragmentary to establish an identity. Description Since the first finds, many other Megalosaurus bones have been recovered; however, no complete skeleton has yet been found. Therefore, the details of its physical appearance cannot be certain. However, a full osteology of all known material was published in 2010 by Benson. Size and general build Traditionally, most texts, following Owen's estimate of 1841, give a body length of 30 feet or nine metres for Megalosaurus. The lack of an articulated dorsal vertebral series makes it difficult to determine an exact size. David Bruce Norman in 1984 thought Megalosaurus was seven to eight metres long. Gregory S. Paul in 1988 estimated the weight tentatively at 1.1 tonnes, given a thigh bone 76 centimetres long. The trend in the early twenty-first century to limit the material to the lectotype inspired even lower estimates, disregarding outliers of uncertain identity. Paul in 2010 estimated the size of Megalosaurus at in length and . However, the same year Benson claimed that Megalosaurus, though medium-sized, was still among the largest of Middle Jurassic theropods. Specimen NHMUK PV OR 31806, a thigh bone 803 millimetres long, would indicate a body weight of 943 kilogrammes, using the extrapolation method of J.F. Anderson — which method, optimised for mammals, tends to underestimate theropod masses by at least a third. Furthermore, thigh bone specimen OUM J13561 has a length of about 86 centimetres. In general, Megalosaurus had the typical build of a large theropod. It was bipedal, the horizontal torso being balanced by a long horizontal tail. 
The hindlimbs were long and strong with three forward-facing weight-bearing toes, the forelimbs relatively short but exceptionally robust and probably carrying three digits. As a carnivore, it had a large elongated head bearing long dagger-like teeth to slice the flesh of its prey. The skeleton of Megalosaurus is highly ossified, indicating a robust and muscular animal, though the lower leg was not as heavily built as that of Torvosaurus, a close relative. Skull and lower jaws The skull of Megalosaurus is poorly known. The discovered skull elements are generally rather large in relation to the rest of the material. This may be coincidental, or it may indicate that Megalosaurus had an uncommonly large head. The praemaxilla is not known, making it impossible to determine whether the snout profile was curved or rectangular. A rather stubby snout is suggested by the fact that the front branch of the maxilla was short. In the depression around the antorbital fenestra, to the front, a smaller non-piercing hollowing can be seen that is probably homologous to the fenestra maxillaris. The maxilla bears 13 teeth. The teeth are relatively large, with a crown length up to seven centimetres. The teeth are supported from behind by tall, triangular, unfused interdental plates. The cutting edges bear 18 to 20 denticula per centimetre. The tooth formula is probably 4, 13–14/13–14. The jugal bone is pneumatised, pierced by a large foramen from the direction of the antorbital fenestra. It was probably hollowed out by an outgrowth of an air sac in the nasal bone. Such a level of pneumatisation of the jugal is not known from other megalosaurids and might represent a separate autapomorphy. The lower jaw is rather robust. It is also straight in top view, without much expansion at the jaw tip, suggesting that the lower jaws as a pair, the mandibula, were narrow. Several traits identified as autapomorphies in 2008 later transpired to have been the result of damage. 
However, a unique combination of traits is present in the wide longitudinal groove on the outer side (shared with Torvosaurus), the small third dentary tooth and a vascular channel, below the row of interdental plates, that is only closed from the fifth tooth position onwards. The number of dentary teeth was probably 13 or 14, though the preserved damaged specimens show at most 11 tooth sockets. The interdental plates have smooth inner sides, whereas those of the maxilla are vertically grooved; the same combination is shown by Piatnitzkysaurus. The surangular has no bony shelf, or even ridge, on its outer side. There is laterally an oval opening present in front of the jaw joint, a foramen surangulare posterior, but a second foramen surangulare anterior to the front of it is lacking. Vertebral column Although the exact numbers are unknown, the vertebral column of Megalosaurus was probably divided into 10 neck vertebrae, 13 dorsal vertebrae, five sacral vertebrae and 50 to 60 tail vertebrae, as is common for basal Tetanurae. The Stonesfield Slate material contains no neck vertebrae, but a single broken anterior cervical vertebra is known from the New Park Quarry, specimen NHMUK PV R9674. The breakage reveals large internal air chambers. The vertebra is also otherwise heavily pneumatised, with large pleurocoels, pneumatic excavations, on its sides. The rear facet of the centrum is strongly concave. The neck ribs are short. The front dorsal vertebrae are slightly opisthocoelous, with convex front centrum facets and concave rear centrum facets. They are also deeply keeled, with the ridge on the underside representing about 50% of the total centrum height. The front dorsals perhaps have a pleurocoel above the diapophysis, the lower rib joint process. The rear dorsal vertebrae, according to Benson, are not pneumatised. They are slightly amphicoelous, with hollow centrum facets. 
They have secondary joint processes, forming a hyposphene–hypantrum complex, the hyposphene having a triangular transverse cross-section. The height of the dorsal spines of the rear dorsals is unknown, but a high spine on a tail vertebra of the New Park Quarry material, specimen NHMUK PV R9677, suggests the presence of a crest on the hip area. The spines of the five vertebrae of the sacrum form a supraneural plate, fused at the top. The undersides of the sacral vertebrae are rounded, but the second sacral is keeled; normally it is the third or fourth sacral that bears a ridge. The sacral vertebrae seem not to be pneumatised but have excavations at their sides. The tail vertebrae are slightly amphicoelous, with hollow centrum facets on both the front and rear side. They have excavations at their sides and a longitudinal groove on the underside. The neural spines of the tail base are transversely thin and tall, representing more than half of the total vertebral height. Appendicular skeleton The shoulderblade or scapula is short and wide, its length about 6.8 times the minimum width; this is a rare and basal trait within Tetanurae. Its top curves slightly to the rear in side view. On the lower outer side of the blade a broad ridge is present, running from just below the shoulder joint to about mid-length, where it gradually merges with the blade surface. Over about 30% of its length, the middle front edge is thinned, forming a slightly protruding crest. The scapula constitutes about half of the shoulder joint, which is oriented obliquely sideways and downwards. The coracoid is in all known specimens fused with the scapula into a scapulocoracoid, lacking a visible suture. The coracoid as such is an oval bone plate, with its longest side attached to the scapula. It is pierced by a large oval foramen, but the usual boss for the attachment of the upper arm muscles is lacking. The humerus is very robust with strongly expanded upper and lower ends. 
Humerus specimen OUMNH J.13575 has a length of 388 millimetres. Its shaft circumference equals about half of the total humerus length. The humerus head continues to the front and the rear into large bosses, together forming a massive bone plate. On the front outer side of the shaft a large triangular deltopectoral crest is present, the attachment for the Musculus pectoralis major and the Musculus deltoideus. It covers about the upper half of the shaft length, its apex positioned rather low. The ulna is extremely robust, for its absolute size more heavily built than in any other known member of the Tetanurae. The only known specimen, NHMUK PV OR 36585, has a length of 232 millimetres and a minimal shaft circumference of 142 millimetres. The ulna is straight in front view and has a large olecranon, the attachment process for the Musculus triceps brachii. The radius, wrist and hand are unknown. In the pelvis, the ilium is long and low, with a convex upper profile. Its front blade is triangular and rather short; at the front end there is a small drooping point, separated by a notch from the pubic peduncle. The rear blade is roughly rectangular. The outer side of the ilium is concave, serving as an attachment surface for the Musculus iliofemoralis, the main thigh muscle. Above the hip joint, a low vertical ridge with conspicuous vertical grooves is present on this surface. The bottom of the rear blade is excavated by a narrow but deep trough, forming a bony shelf for the attachment of the Musculus caudofemoralis brevis. The outer side of the rear blade does not match the inner side, which thus can be seen as a separate "medial blade" that in side view is visible in two places: in the corner between the outer side and the ischial peduncle, and as a small surface behind the extreme rear tip of the outer side of the rear blade. The pubic bone is straight. 
The pubic bones of both pelvis halves are connected via narrow bony skirts that originated at a rather high position on the rear side and continued downwards to a point low on the front side of the shaft. The ischium is S-shaped in side view, showing at the transition point between the two curvatures a rough boss on the outer side. On the front edge of the ischial shaft an obturator process is present in the form of a low ridge, at its top separated from the shaft by a notch. Lower down, this ridge continues into an exceptionally thick bony skirt at the inner rear side of the shaft, covering over half of its length. Towards the end of the shaft, this skirt gradually merges with it. The shaft eventually ends in a sizeable "foot" with a convex lower profile. The thigh bone is straight in front view. Seen from the same direction, its head is perpendicular to the shaft; seen from above, it is oriented 20° to the front. The greater trochanter is relatively wide and separated by a fissure from the robust lesser trochanter in front of it. At the front base of the lesser trochanter a low accessory trochanter is present. At the lower end of the thigh bone a distinct front, extensor, groove separates the condyles. At the upper inner side of this groove a rough area is present, continuing inwards into a longitudinal ridge, a typical megalosauroid trait. The shinbone, or tibia, is relatively straight, curving slightly inwards. Towards its lower end, the shaft progressively flattens from front to rear, resulting in a generally oval cross-section. For about an eighth of its length the front lower end of the shaft is covered by a vertical branch of the astragalus. Of the foot, only the second, third and fourth metatarsals are known, the bone elements that were connected to the three weight-bearing toes. They are straight and robust, showing ligament pits at their lower sides. 
The third metatarsal has no clear condyles at its lower end, resulting in a more flexible joint, allowing for a modicum of horizontal movement. The top inner side of the third metatarsal carries a unique ridge that fits into a groove along the top outer side of the second metatarsal, causing a tighter connection. Diagnosis For decades after its discovery, Megalosaurus was seen by researchers as the definitive or typical large carnivorous dinosaur. As a result, it began to function as a "wastebasket taxon", and many carnivorous dinosaurs, large and small, from Europe and elsewhere were assigned to the genus. This slowly changed during the 20th century, when it became common to restrict the genus to fossils found in the Middle Jurassic of England. Further restriction occurred in the late 20th and early 21st centuries, with researchers such as Ronan Allain and Dan Chure suggesting that the Stonesfield Slate fossils perhaps belonged to several, possibly not directly related, species of theropod dinosaur. Subsequent research seemed to confirm this hypothesis, and the genus Megalosaurus and species M. bucklandii became generally regarded as limited to the taxon having produced the lectotype, the dentary of the lower jaw. Furthermore, several researchers failed to find any characteristics in that jaw that could be used to distinguish Megalosaurus from its relatives, which would make the genus a nomen dubium. However, a comprehensive study by Roger Benson and colleagues in 2008, and several related analyses published in subsequent years, overturned the previous consensus by identifying several autapomorphies, or unique distinguishing characteristics, in the lower jaw that could be used to separate Megalosaurus from other megalosaurids. Various distinguishing traits of the lower jaw have been established. The longitudinal groove on the outer surface of the dentary is wide. The third tooth socket of the dentary is not enlarged. 
Seen from above, the dentary is straight without an expanded jaw tip. The interdental plates, reinforcing the teeth from behind, of the lower jaw are tall. Benson also concluded it would be most parsimonious to assume that the Stonesfield Slate material represents a single species. If so, several additional distinctive traits can be observed in other parts of the skeleton. The low vertical ridge on the outer side of the ilium, above the hip joint, shows parallel vertical grooves. The bony skirts between the shafts of the ischia are thick and touch each other, forming an almost flat surface. There is a boss present on the lower outer side of the ischium shaft with a rough surface. The underside of the second sacral vertebra has an angular longitudinal keel. A ridge on the upper side of the third metatarsal connects to a groove in the side of the second metatarsal. The middle of the front edge of the scapula forms a thin crest. Phylogeny In 1824, Buckland assigned Megalosaurus to the Sauria, assuming within the Sauria a close affinity with modern lizards, more than with crocodiles. In 1842, Owen made Megalosaurus one of the first three genera placed in the Dinosauria. In 1850, Prince Charles Lucien Bonaparte coined a separate family Megalosauridae with Megalosaurus as the type genus. For a long time, the precise relationships of Megalosaurus remained vague. It was seen as a "primitive" member of the Carnosauria, the group in which most large theropods were united. In the late 20th century the new method of cladistics made it possible for the first time to calculate exactly how closely various taxa were related to each other. In 2012, Matthew Carrano et al. recovered Megalosaurus as the sister taxon of Torvosaurus within the Megalosaurinae. Paleobiology Megalosaurus lived in what is now Europe during the Bathonian stage of the Middle Jurassic (about 168–166 million years ago). 
Repeated descriptions during the nineteenth and early twentieth century of Megalosaurus hunting Iguanodon (another of the earliest dinosaurs named) through the forests that then covered the continent are now known to be inaccurate, because Iguanodon skeletons are found in much younger Early Cretaceous formations. The only specimens belonging to Megalosaurus bucklandii are from the Lower/Middle Bathonian of Oxfordshire and Gloucestershire. No material from outside the Bathonian formations of England can be referred to Megalosaurus. Other roughly contemporaneous dinosaur species known from the Bathonian of Britain include the theropods Cruxicheiros (a large-sized taxon), Iliosuchus (a dubious taxon only known from fragmentary remains), the small tyrannosauroid Proceratosaurus, and other indeterminate theropods known from teeth, suggested to include dromaeosaurs, troodontids, and therizinosaurs; indeterminate ornithischians primarily known from teeth, including heterodontosaurids, stegosaurs, and ankylosaurs; and the sauropods Cardiodon (only known from teeth) and Cetiosaurus. Megalosaurus may have hunted stegosaurs and sauropods. Benson in 2010 concluded from its size and common distribution that Megalosaurus was the apex predator of its habitat. He saw the absence of Cetiosaurus on the French Armorican Massif as an indication that Megalosaurus too did not live on that island and was limited to the London-Brabant Massif, a tectonic high that during this period formed an island landmass including parts of southern Britain and adjacent areas of northern France, the Netherlands, Belgium and western Germany, suggested to be comparable in size to Cuba. It has been questioned why the dinosaurs of the island did not experience insular dwarfism, as would be expected for an island of this size. 
A possible explanation for this is that the island remained ecologically connected to the much larger landmass comprising northern Britain (the Scottish Massif), the Fennoscandian Shield and the now submerged region in the North Sea between them. Plant fossils from the Taynton Limestone Formation, from which many Megalosaurus fossils originate, represent the nearshore vegetation; they are largely dominated by conifers (including the living family Araucariaceae and the extinct family Cheirolepidiaceae) as well as the extinct seed plant group Bennettitales, suggesting a probably seasonally dry environment that included mangroves. Paleopathology A Megalosaurus rib figured in 1856 and 1884 by Sir Richard Owen has a pathological swollen spot near the base of its capitular process. The swollen spot appears to have been caused by a healed fracture and is located at the point where it would have articulated with its vertebra. Species and synonyms During the later nineteenth century, Megalosaurus was seen as the typical carnivorous dinosaur. If remains were found that were not deemed sufficiently distinct to warrant a separate genus, often single teeth, these were classified under Megalosaurus, which thus began to function as a wastebasket taxon, a sort of default genus. Eventually, Megalosaurus contained more species than any other non-avian dinosaur genus, most of them of dubious validity. During the twentieth century, this practice was gradually discontinued; but scientists discovering theropods that had been mistakenly classified under a different animal group in older literature still felt themselves forced to rename them, again choosing Megalosaurus as the default generic name. Species named in the 19th century In 1857, Joseph Leidy renamed Deinodon horridus (Leidy, 1856), a species based on teeth, into Megalosaurus horridus, the "frightening one". 
In 1858, Friedrich August Quenstedt named Megalosaurus cloacinus, based on a probable Late Triassic theropod tooth found near Bebenhausen, specimen SMNH 52457. It is a nomen dubium. In 1869 Eugène Eudes-Deslongchamps named Megalosaurus insignis, the "significant", based on a theropod tooth found near La Hève in Normandy that was 12 centimetres long, a third longer than the teeth of M. bucklandii. The name at first remained a nomen nudum, but a description was provided in 1870 by Gustave Lennier. Today, it is considered a nomen dubium, an indeterminate member of the Theropoda, the specimen having been destroyed by bombardment in 1944. In 1870, Jean-Baptiste Greppin named Megalosaurus meriani based on specimen MH 350, a premaxillary tooth found near Moutier and part of the collection of Peter Merian. Today, this is either referred to Amanzia, Ceratosaurus or seen as a nomen dubium, an indeterminate member of the Ceratosauria. In 1871, Emanuel Bunzel named remains found near Schnaitheim Megalosaurus schnaitheimi. It is a nomen nudum, the fossils possibly belonging to Dakosaurus maximus. In 1876, J. Henry, a science teacher at Besançon, in a published dissertation named four Late Triassic possible dinosaur teeth found near Moissey Megalosaurus obtusus, "the blunt one". It is a nomen dubium, perhaps a theropod or some indeterminate predatory archosaur. In 1881, Harry Govier Seeley named two possible theropod teeth found in Austria Megalosaurus pannoniensis. The specific name refers to Pannonia. It is a nomen dubium, possibly an indeterminate member of the Dromaeosauridae or Tyrannosauroidea. In 1883, Seeley named Megalosaurus bredai, based on a thigh bone, specimen NHMUK PV OR 42997, found near Maastricht, the Netherlands. The specific name honours Jacob Gijsbertus Samuël van Breda. In 1932, this was made a separate genus Betasuchus by Friedrich von Huene. 
In 1882, Henri-Émile Sauvage named remains found at Louppy-le-Château, teeth and vertebrae from the Early Cretaceous, Megalosaurus superbus, "the proud one". In 1923, this became the genus Erectopus. In 1884/1885, Wilhelm Barnim Dames, based on specimen UM 84, a tooth from the Early Cretaceous, named Megalosaurus dunkeri, the specific name honouring Wilhelm Dunker. In 1923, this was made a separate genus Altispinax. In 1885, Joseph Henri Ferdinand Douvillé renamed Dakosaurus gracilis Quenstedt 1885 into Megalosaurus gracilis. Today the renaming is generally rejected. In 1889, Richard Lydekker named Megalosaurus oweni, the specific name honouring Owen, based on a series of metatarsals from the Early Cretaceous, specimen BMNH R?2556?. In 1991, this was made a separate genus Valdoraptor. In 1892, Edward Drinker Cope renamed Ceratosaurus nasicornis Marsh 1884 into Megalosaurus nasicornis. This had been largely motivated by a desire to annoy his rival Othniel Charles Marsh and the name has found no acceptance. In 1896, Charles Jean Julien Depéret named Megalosaurus crenatissimus, "the much crenelated", based on remains from the Late Cretaceous found in Madagascar. In 1955 this was made a separate genus Majungasaurus. The generic name Laelaps, used by Cope to denote a theropod, had been preoccupied by a mite. Marsh had therefore provided the replacement name Dryptosaurus, but Henry Fairfield Osborn, a partisan of Cope, rejected this replacement and thus in 1898 renamed Laelaps aquilunguis Cope 1866 into Megalosaurus aquilunguis. Species named in the 20th century In 1901 Baron Franz Nopcsa renamed Laelaps trihedrodon Cope 1877 into Megalosaurus trihedrodon. In the same publication Nopcsa renamed Poekilopleuron valens Leidy 1870 into Megalosaurus valens; this probably represents fossil material of Allosaurus. In 1902, Nopcsa named Megalosaurus hungaricus based on two teeth found in Transylvania, then part of the Kingdom of Hungary. The specimens, MAFI ob. 
3106, were later lost. It represents an indeterminate theropod. In 1903, Louis Dollo named Megalosaurus lonzeensis based on a manual claw found near Lonzee in Belgium. He had first reported this claw in 1883, and as a result some sources by mistake indicate this year as the date of the naming. It perhaps represents a member of the Noasauridae, or an indeterminate member of the Coelurosauria. In 1907 or 1908, von Huene renamed Streptospondylus cuvieri, based on a presently lost partial vertebra, into Megalosaurus cuvieri. This is today seen as a nomen dubium, an indeterminate member of the Tetanurae. In 1909, Richard Lydekker named Megalosaurus woodwardi, based on a maxilla with tooth, specimen NHMUK PV OR 41352. This is today seen as a nomen dubium, an indeterminate member of the Theropoda. In 1910, Arthur Smith Woodward named Megalosaurus bradleyi based on a skull from the Middle Jurassic, the specific name honouring the collector F. Lewis Bradley. In 1926, this was made a separate genus Proceratosaurus. In 1920, Werner Janensch named Megalosaurus ingens, "the enormous", based on specimen MB R 1050, a 12-centimetre-long tooth from German East Africa. It possibly represents a large member of the Carcharodontosauridae; Carrano et al. saw it as an indeterminate member of the Tetanurae. M. ingens is now seen as a specimen of Torvosaurus. In 1923, von Huene renamed Poekilopleuron bucklandii Eudes-Deslongchamps 1838 into Megalosaurus poikilopleuron. Today, the genus Poekilopleuron is generally seen as valid. In the same publication, von Huene named two additional Megalosaurus species. The first was Megalosaurus parkeri, its specific name honouring William Kitchen Parker and based on a pelvis, leg bones and vertebrae from the Late Cretaceous. This was made the separate genus Metriacanthosaurus in 1964. 
The second was Megalosaurus nethercombensis, named after its provenance from Nethercombe and based on two dentaries, leg bones, a pelvis and vertebrae from the Middle Jurassic, which von Huene himself in 1932 made the separate genus Magnosaurus. In 1925, Depéret, based on two teeth from Algeria, named Megalosaurus saharicus. In 1931/1932 this was made the separate genus Carcharodontosaurus. In 1956 von Huene by mistake named the same species as Megalosaurus africanus, intending to base it on remains from Morocco but actually referring to the Algerian teeth; this implies that M. africanus is a junior objective synonym of M. saharicus. In 1926, von Huene named Megalosaurus lydekkeri, its specific name honouring Richard Lydekker, based on NHMUK OR 41352, i.e. the same specimen that had already been made the holotype of M. woodwardi (Lydekker, 1909). This implies that M. lydekkeri is a junior objective synonym of M. woodwardi. It is likewise seen as a nomen dubium. In the same publication von Huene named Megalosaurus terquemi based on three teeth found near Hettingen, its specific name honouring Olry Terquem. It is seen as a nomen dubium, the fossil material probably representing some member of the Phytosauria or some other archosaur. In 1932, a work by von Huene mentioned a Megalosaurus (Magnosaurus) woodwardi, a synonym of Magnosaurus woodwardi named in the same book. Because its type specimen differs from that of the earlier Megalosaurus woodwardi (Lydekker, 1909), the two names are not synonyms. In 1954 Samuel Welles named Megalosaurus wetherilli. This species is exceptional in being based on a rather complete skeleton, found in Arizona, from the Early Jurassic. Its specific name honours John Wetherill. In 1970, Welles made this the separate genus Dilophosaurus. In 1955, Albert-Félix de Lapparent named Megalosaurus mersensis based on a series of 23 vertebrae found near Tizi n'Juillerh in a layer of the El Mers Formation of Morocco. This probably represents a member of the Mesosuchia. 
In 1956, Alfred Sherwood Romer renamed Aggiosaurus nicaeensis Ambayrac 1913, based on a lower jaw found near Nice, on the authority of von Huene into Megalosaurus nicaeensis. Originally it had been considered to be some crocodilian; present opinion confirms this. In 1957, de Lapparent named Megalosaurus pombali based on three teeth found near Pombal in the Jurassic of Portugal. Today it is seen as a nomen dubium, an indeterminate member of the Theropoda. In 1965, Oskar Kuhn renamed Zanclodon silesiacus Jaekel 1910 into Megalosaurus? silesiacus. It is a nomen dubium based on the tooth of some indeterminate predatory Triassic archosaur, found in Silesia, perhaps a theropod. In 1966, Guillermo del Corro named Megalosaurus inexpectatus, named "the unexpected" as it was discovered on a sauropod site with remains of Chubutisaurus, based on specimen MACN 18.172, a tooth found in Argentina. It might represent a member of the Carcharodontosauridae. In 1970, Rodney Steel named two Megalosaurus species. Firstly, he renamed Iliosuchus incognitus Huene 1932 into Megalosaurus incognitus. Secondly, he renamed Nuthetes destructor Owen 1854 into Megalosaurus destructor. Both genera are today seen as not identical to Megalosaurus. Michael Waldman in 1974 renamed Sarcosaurus andrewsi Huene 1932 into Megalosaurus andrewsi. Indeed, some scientists today do not regard Sarcosaurus andrewsi as directly related to the type species of Sarcosaurus, S. woodi. In the same publication Waldman named Megalosaurus hesperis, "the western one", based on skull fragments from the Middle Jurassic. In 2008 this was made the separate genus Duriavenator. Del Corro in 1974 named Megalosaurus chubutensis, based on specimen MACN 18.189, a tooth found in Chubut Province. It is a nomen dubium, a possible carcharodontosaurid, or a very large abelisaurid. In 1985, Zhao Xijin named two Megalosaurus species found in Tibet. 
He had earlier mentioned these species in an unpublished dissertation of 1983, implying they initially were invalid nomina ex dissertatione. However, his 1985 publication did not contain descriptions, so the names are still nomina nuda. The first species was Megalosaurus "dapukaensis", named for the Dapuka Group. In the second edition of The Dinosauria, it was by mistake spelled as Megalosaurus cachuensis. The second species was Megalosaurus "tibetensis". In 1987/1988, Monique Vianey-Liaud renamed Massospondylus rawesi (Lydekker, 1890), based on specimen NHMUK R4190, a tooth from the Maastrichtian of India, into Megalosaurus rawesi. This is a nomen dubium, a possible member of the Abelisauridae. In 1988, Gregory S. Paul renamed Torvosaurus tanneri Galton & Jensen 1979 into Megalosaurus tanneri. The change has found no acceptance. In 1973, Anatoly Konstantinovich Rozhdestvensky had renamed Poekilopleuron schmidti Kiprijanow 1883 into a Megalosaurus sp. However, as it is formally impossible to change a named species into an unnamed one, George Olshevsky in 1991 used the new combination Megalosaurus schmidti. It is a chimaera. In 1993, Ernst Probst and Raymund Windolf by mistake renamed Plateosaurus ornatus Huene 1905 into Megalosaurus ornatus by mentioning the latter name in a species list. This can be seen as a nomen vanum. The same publication listed the ichnospecies Megalosauropus teutonicus Kaever & Lapparent 1974 as a Megalosaurus teutonicus. In 1997, Windolf renamed Saurocephalus monasterii Münster 1846, based on a tooth found near Hannover, into Megalosaurus monasterii. It is a nomen dubium, an indeterminate member of the Theropoda. In 1998, Peter Malcolm Galton renamed Zanclodon cambrensis Newton 1899, based on a left lower jaw, specimen BGS 6532, found at Bridgend, into ?Megalosaurus cambrensis because it was not a basal sauropodomorph. It is a senior synonym of Gressylosaurus cambrensis Olshevsky 1991. 
The specific name refers to Cambria, the Latin name of Wales. It probably represents a member of the Coelophysoidea, or some other predatory archosaur. Species list The complex naming history can be summarised in a formal species list. The naming authors are directly mentioned behind the name. If the name has been changed, they are placed in parentheses and the authors of the changed name are mentioned behind them. The list also indicates whether a name has been insufficiently described (nomen nudum), is not taxonomically identifiable at the generic level (nomen dubium), or has fallen out of use (nomen oblitum). Reclassifications under a different genus are mentioned behind the "=" sign; if the reclassification is today considered valid, it is listed under Reassigned species.
https://en.wikipedia.org/wiki/Rammed%20earth
Rammed earth
Rammed earth is a technique for constructing foundations, floors, and walls using compacted natural raw materials such as earth, chalk, lime, or gravel. It is an ancient method that has been revived recently as a sustainable building method. Under its French name of pisé it is also a material for sculptures, usually small and made in molds. It has been especially used in Central Asia and Tibetan art, and sometimes in China. Edifices formed of rammed earth are found on every continent except Antarctica, in a range of environments including temperate, wet, semiarid desert, montane, and tropical regions. The availability of suitable soil and a building design appropriate for local climatic conditions are two factors that make its use favourable. The French term "pisé de terre" or "terre pisé" was sometimes used in English for architectural uses, especially in the 19th century. Building process Making rammed earth involves compacting a damp mixture of subsoil that has suitable proportions of sand, gravel, clay, silt, and stabilizer, if any, into a formwork (an externally supported frame or mold). Historically, additives such as lime or animal blood were used to stabilize it. The soil mix is poured into the formwork in shallow layers and then compacted to approximately 50% of its original volume. The soil is compacted iteratively, in batches or courses, so as to gradually erect the wall up to the top of the formwork. Tamping was historically done by hand with a long ramming pole, but modern construction systems can employ pneumatically powered tampers. Once a wall is complete, it is strong enough for the formwork to be removed immediately. Immediate removal is necessary if a surface texture is to be applied, e.g. by wire brushing, carving, or mold impression, because the walls become too hard to work after approximately one hour. The compressive strength of rammed earth increases as it cures. Cement-stabilized rammed earth is cured for a minimum period of 28 days. 
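The 50% compaction figure implies simple construction arithmetic for how many loose lifts a wall of a given height requires. A minimal sketch; the wall height and loose-lift depth below are hypothetical illustrative values, not figures from the article:

```python
import math

# Article's figure: each course compacts to ~50% of its loose volume,
# so a lift of loose mix yields half that depth of finished wall.
COMPACTION_RATIO = 0.5

def lifts_needed(wall_height_m: float, loose_lift_m: float) -> tuple[int, float]:
    """Return (number of lifts, total depth of loose mix placed)."""
    compacted_per_lift = loose_lift_m * COMPACTION_RATIO
    n = math.ceil(wall_height_m / compacted_per_lift)
    return n, n * loose_lift_m

# Hypothetical example: a 2.4 m wall rammed in 0.20 m loose lifts
n, loose_total = lifts_needed(2.4, 0.20)  # 24 lifts, 4.8 m of loose mix placed
```

The halving of each course is why rammed-earth work consumes roughly twice the wall's finished volume in loose mix.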
In modern rammed earth buildings, the walls are constructed on top of conventional footings or a reinforced concrete slab base. The construction of an entire wall begins with a temporary frame, the "formwork", which is usually made of wood or plywood, as a mold for each wall section's desired shape and dimensions. The form must be durable and well-braced, and the two opposing faces must be clamped together to prevent bulging or deformation caused by the large compressing forces. Formwork plays an important role in building rammed earth walls. Historically, wooden planks tied with rope were used to build walls. Modern builders use plywood and/or steel to build formwork. Characteristics The compressive strength of rammed earth is dictated by factors such as soil type, particle size distribution, amount of compaction, moisture content of the mix and the type and amount of stabiliser used. Well-produced cement-stabilised rammed earth walls can reach considerably higher strengths. Higher compressive strength might require more cement, but adding more cement can affect the permeability of the walls. Indeed, properly constructed rammed earth endures for thousands of years, as many ancient structures that are still standing around the world demonstrate. Rammed earth walls are reinforced with rebars in areas of high seismic activity. Adding cement to soil mixtures low in clay can also increase the load-bearing capacity of rammed-earth edifices. The United States Department of Agriculture observed in 1925 that rammed-earth structures endure indefinitely and can be constructed for less than two-thirds of the cost of standard frame houses. Rammed earth works require at least one skilled person for quality control. All other workers can be unskilled or semi-skilled. One significant benefit of rammed earth is its high thermal mass: like brick or concrete, it absorbs heat during the day and releases it at night. 
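The day-to-night buffering can be estimated with the standard one-dimensional periodic-conduction model, in which a daily surface-temperature wave reaches depth x with a time lag of x / sqrt(2·α·ω). A rough sketch; the wall thickness and thermal diffusivity below are assumed illustrative values, not figures taken from this article:

```python
import math

def thermal_lag_hours(thickness_m: float, diffusivity_m2_s: float,
                      period_s: float = 86_400.0) -> float:
    """Time lag of a sinusoidal surface-temperature wave after propagating
    through a slab: lag = x / sqrt(2 * alpha * omega)."""
    omega = 2.0 * math.pi / period_s                    # daily angular frequency
    speed = math.sqrt(2.0 * diffusivity_m2_s * omega)   # thermal wave speed, m/s
    return thickness_m / speed / 3600.0

# Assumed values: 0.35 m wall, alpha ~ 5e-7 m^2/s (plausible for compacted earth)
lag = thermal_lag_hours(0.35, 5e-7)  # on the order of half a day
```

Under these assumptions the lag comes out near half a day, which is the regime in which daytime heat is re-released overnight.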
This action moderates daily temperature variations and reduces the need for air conditioning and heating. In colder climates, rammed-earth walls can be insulated by inserting insulation such as styrofoam or rigid fibreglass panels between internal and external layers of rammed earth. Depending on the type and content of the binder, the material must also be protected from heavy rain and insulated with vapour barriers. Rammed earth can effectively regulate humidity if unclad walls containing clay are exposed to an internal space; humidity is regulated between 40% and 60%. The material mass and clay content of rammed earth allow an edifice to breathe more than concrete edifices do, which avoids problems of condensation and prevents significant loss of heat. Rammed-earth walls have the colour and texture of natural earth. Moisture-impermeable finishes, such as cement render, are avoided by some builders because they impair the ability of a wall to desorb moisture, a quality necessary to preserve its strength. Blemishes can be repaired using the soil mixture as a plaster and sanded smooth. Wall thickness varies widely based on region and building code: it can be as little as for non-load-bearing walls and up to for load-bearing walls. The thickness and density of rammed-earth walls make them suitable for soundproofing. They are also inherently fireproof, resistant to termite damage, and non-toxic. Environmental effects and sustainability Edifices of rammed earth are more sustainable and environmentally friendly than building techniques that use more cement and other chemicals. Because rammed-earth edifices use locally available materials, they usually have low embodied energy and generate very little waste. The soils used are typically subsoils, which conserves the topsoil for agriculture. When the soil excavated in preparation for a foundation can be used, the cost and energy consumption of transportation are minimal.
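As a rough illustration of why cement content dominates the embodied carbon of stabilised walls (the article later cites about 1.25 tonnes of CO2 per tonne of cement produced), the following sketch estimates the cement-related CO2 of a single wall. The wall dimensions, soil density, and cement fraction are assumed example values, not figures from the article.

```python
def cement_co2_kg(volume_m3, density_kg_m3=2000.0, cement_fraction=0.08,
                  co2_per_kg_cement=1.25):
    """CO2 (kg) attributable to the cement in a stabilised rammed earth wall.

    density_kg_m3 and cement_fraction are illustrative assumptions;
    co2_per_kg_cement reflects the ~1.25 t CO2 per t cement figure.
    """
    wall_mass_kg = volume_m3 * density_kg_m3
    cement_mass_kg = wall_mass_kg * cement_fraction
    return cement_mass_kg * co2_per_kg_cement

# Example: a 10 m long, 2.4 m high, 0.3 m thick wall (7.2 m^3):
print(round(cement_co2_kg(7.2)))  # ~1440 kg of CO2 from the cement alone
```

Halving the cement fraction halves this figure directly, which is why unstabilised or lightly stabilised mixes are favoured when low embodied carbon is the goal.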
Rammed earth is probably the least environmentally detrimental construction material and technique that is readily and commercially available today for constructing solid edifices. Rammed earth has a potentially low manufacturing impact, contingent on the amount of cement used and the proportion of material that is locally sourced; in practice the material is often quarried aggregate rather than "earth". Rammed earth can contribute to the overall energy efficiency of edifices: the density, thickness, and thermal conductivity of rammed earth render it an especially suitable material for passive solar heating. Warmth requires almost 12 hours to be conducted through a wall thick. Mixing cement with the soil can counteract sustainability benefits such as low embodied energy, because the manufacture of cement itself creates 1.25 tonnes of carbon dioxide per tonne of cement produced. Although rammed earth has low greenhouse gas emissions in theory, transportation and the production of cement can add significantly to the overall emissions of modern rammed earth construction. The most basic kind of traditional rammed earth has very low greenhouse gas emissions, but the more engineered and processed variants have the potential for significant emissions. History Evidence of ancient use of rammed earth has been found in Neolithic archaeological sites such as those of the Fertile Crescent, dating to the 9th–7th millennium BC, and of the Yangshao and Longshan cultures in China, dating to 5000 BC. By 2000 BC, rammed-earth architectural techniques (夯土 Hāng tǔ) were commonly used for walls and foundations in China. United States and Canada In the 1800s, rammed earth was popularized in the United States by the book Rural Economy by S. W. Johnson. The technique was used to construct the Borough House Plantation and the Church of the Holy Cross in Stateburg, South Carolina, both of which are National Historic Landmarks. An outstanding example of a rammed-earth edifice in Canada is St.
Thomas Anglican Church in Shanty Bay, Ontario, erected between 1838 and 1841. From the 1920s through the 1940s, rammed-earth construction was studied in the US. South Dakota State College extensively researched rammed earth and constructed almost one hundred weathering walls of it. For over 30 years the college investigated the use of paints and plasters in relation to colloids in soil. In 1943, Clemson Agricultural College of South Carolina published the results of its research on rammed earth in a pamphlet titled "Rammed Earth Building Construction". In 1936, on a homestead near Gardendale, Alabama, the United States Department of Agriculture constructed experimental rammed-earth edifices with architect Thomas Hibben. The houses were inexpensively constructed and were sold to the public along with sufficient land for gardens and small plots for livestock. The project successfully provided homes to low-income families. The US Agency for International Development is working with developing countries to improve the engineering of rammed-earth houses. It also financed the writing of the Handbook of Rammed Earth by Texas A&M University and the Texas Transportation Institute. Interest in rammed earth declined after World War II, when the cost of modern construction materials decreased. Rammed earth came to be considered substandard and is opposed by many contractors, engineers, and tradesmen. The prevailing perception that such materials and techniques perform poorly in regions prone to earthquakes has prevented their use in much of the world. In Chile, for example, rammed earth edifices normally cannot be conventionally insured against damage or even be approved by the government. A notable example of 21st-century use of rammed earth is the façade of the Nk'Mip Desert Cultural Centre in southern British Columbia, Canada. As of 2014 it is the longest rammed earth wall in North America.
20th century China Rammed earth construction was both practically and ideologically important during the rapid construction of the Daqing oil field and the related development of Daqing. The "Daqing Spirit" represented deep personal commitment in pursuing national goals, self-sufficient and frugal living, and urban-rural integrated land use. Daqing's urban-rural landscape was said to embody the ideal communist society described by Karl Marx because it eliminated (1) the gap between town and country, (2) the gap between workers and peasants, and (3) the gap between manual and mental labor. Drawing on the Daqing experience, China encouraged rammed earth construction in the mid-1960s. Starting in 1964, Mao Zedong advocated for a "mass design revolution movement". In the context of the Sino-Soviet split, Mao urged that planners should avoid the use of Soviet-style prefabricated materials and instead embrace the proletarian spirit of on-site construction using rammed earth. The Communist Party promoted the use of rammed earth construction as a low-cost method which was indigenous to China and required little technical skill. During the Third Front campaign to develop strategic industries in China's rugged interior to prepare for potential invasion by the United States or Soviet Union, Planning Commission Director Li Fuchun instructed project leaders to make do with what was available, including building rammed earth housing so that more resources could be directed to production. This policy came to be expressed through the slogan, "First build the factory and afterward housing."
Technology
Building materials
https://en.wikipedia.org/wiki/Herding%20dog
Herding dog
A herding dog, also known as a stock dog or working dog, is a type of dog that either has been trained in herding livestock or belongs to one of the breeds that were developed for herding. A dog specifically trained to herd sheep is known as a sheep dog or shepherd dog, and one trained to herd cattle is known as a cattle dog or cow dog. Herding behavior All herding behavior is modified predatory behavior. Through selective breeding, humans have been able to minimize the dog's natural inclination to treat cattle and sheep as prey while simultaneously maintaining the dog's hunting skills, thereby creating an effective herding dog. Dogs can work other animals in a variety of ways. Some breeds, such as the Australian Cattle Dog, typically nip at the heels of animals (for this reason they are called heelers); the Cardigan and Pembroke Welsh Corgis were historically used in a similar fashion in the cattle droves that moved cattle from Wales to the Smithfield Meat Market in London, but are rarely used for herding today. Other breeds, notably the Border Collie, get in front of the animals and use what is called strong eye to stare down the animals; they are known as headers. The headers or fetching dogs keep livestock in a group; they consistently go to the front or head of the animals to turn or stop their movement. The heelers or driving dogs keep pushing the animals forward, typically staying behind the herd. The Australian Kelpie and Australian Koolie use both these methods and also run along the backs of sheep, so they are said to head, heel, and back. Other types, such as the Australian Shepherd, English Shepherd and Welsh Sheepdog, are moderate- to loose-eyed, working more independently. The New Zealand Huntaway uses its loud, deep bark to muster mobs of sheep.
Belgian Malinois, German Shepherd Dogs and Briards are historically tending dogs, which act as a "living fence", guiding large flocks of sheep to graze while preventing them from eating valuable crops and wandering onto roads. Herding instincts and trainability can be assessed when introducing a dog to livestock or at noncompetitive herding tests. Individuals exhibiting basic herding instincts can be trained to compete in herding trials. Terminology In Australia, New Zealand and the United States, herding dogs are known as working dogs irrespective of their breeding. Some herding breeds work well with any kind of animal; others have been bred for generations to work with specific kinds of animals and have developed physical characteristics or styles of working that enhance their ability to handle those animals. Commonly mustered animals include cattle, sheep, goats and reindeer, although it is not unusual for poultry to be handled by dogs. The term "herding dog" is sometimes erroneously used to describe livestock guardian dogs, whose primary function is to guard flocks and herds from predation and theft, and which lack the herding instinct. Although herding dogs may guard flocks, their primary purpose is to move them; both herding dogs and livestock guardian dogs may be called "sheep dogs". In general terms, when categorizing dog breeds, herding dogs are considered a subcategory of working dogs, but for conformation shows they usually form a separate group. Australia has the world's largest cattle stations and sheep stations, and some of the best-known herding dogs, such as the Koolie, Kelpie, and Red and Blue Heelers, are bred and found there. Origins of herding dogs The creation of herding dog breeds is associated with the development of cattle breeding. Domestication of sheep and goats began in the 8th–7th millennium BC. This process originally began in Western Asia, on the territory of modern Iran and Iraq.
Shepherding was a difficult task: primitive herders did not have horses and moved their cattle to grazing on foot, as horses and donkeys were not yet fully domesticated and obedient enough. Dogs that had previously helped humans in hunting became assistants in livestock maintenance. The main task for dogs in the early stages of cattle breeding was protecting herds from a variety of wild predators, which were very numerous. This function predetermined the characteristics of herding dogs: they had to be strong, fierce, courageous, decisive, able to stand alone against a large predator and, most importantly, ready to defend their herd. The history of the ancestors of herding dogs can be traced back six thousand years; archaeological findings of the joint remains of sheep and dogs date back to 3685 BC. The place of their origin is considered to be the territories of modern Turkey, Iraq and Syria. Shepherd dogs are mentioned in the Old Testament and the writings of Cato the Elder and Varro, and their images are found in works of art created more than two thousand years ago. These dogs were used not only to guard herds but also for military purposes. From the regions of Western Asia, herding spread west and north, following the increase in the number of domestic animals. In Europe, the progenitors of herding dogs appeared in the 6th to 7th centuries BC. According to archaeological research, cattle breeding and agriculture spread across Europe in different ways: along the Danube and Rhine rivers to the territory of modern Germany, northern France and the Netherlands; through the Mediterranean Sea to the Alps; and up the Rhone to central and southwestern France. The development of agriculture, the increasing number of settlements and the foundation of cities led to a decrease in the number of predators.
After the extinction of large predators in most of Europe and Great Britain, with the massive spread of sheep breeding and an increase in the share of cultivated and populated land, the main task of herding dogs became protecting crops, private and protected areas from harm during grazing and the movement of herds. Being medium-sized and mobile, shepherd dogs were more suitable for this work than larger and stronger breeds. Such dogs managed small and large livestock, as well as domestic birds. In addition to the Central European type of shepherd dog, another type emerged, often with thick hair, more suitable for colder areas. These dogs showed not only the ability to manage the herd, but also to protect it. With the spread of reindeer breeding among the northern peoples, hunting spitz-like dogs were "retrained" as shepherds. Most breeds of Central European shepherd dogs – with erect ears and short hair on the head, similar to wolves – were mainly formed in the 16th to 17th centuries; the breeds of curly-haired dogs of the Northern European type were formed later. Physical characteristics During the selection process, the physical characteristics of the dogs were shaped to allow them to do their job in the best possible way. Regardless of the conditions in which herding dogs work and what function they perform, they all share a number of common characteristics. Herding dogs are strong and have a lot of stamina. Their paws are well protected from thorns and sharp stones: the toes are tightly bunched, the paw pads are thick, and the claws are strong. The coat is structured and dense enough to protect against wetting and the temperature extremes common in the region of the breed's origin. All herding dogs have excellent eyesight and hearing. Cattle dog colors are varied and depend on local breeders' preferences, but all herding dogs should have well-pigmented eyelids, lips, nose and paw pads, because pink skin is too delicate and prone to wounds and sunburn.
In the modern world In countries where herding is preserved, herding dogs continue to work for their main purpose and are appreciated as effective and even irreplaceable helpers that can save labor costs and avoid investment in expensive equipment. Economic studies in Australia have shown that herding dogs are worth more than five times their cost, including training and maintenance. Meanwhile, although the popularity and the number of herding dogs are growing, the scope of work for them is narrowing. In the 21st century herding dogs are often chosen as family pets. The collie breeds, including the Bearded Collie and Border Collie, are well known, as are the Australian Kelpie, the Australian Working Kelpie, and the Welsh Corgis. They make good family dogs and are at their best when they have a job to do. These dogs have been bred as working dogs and need to be physically and mentally active. They retain their herding instincts and may sometimes nip at people's heels or bump them in an effort to 'herd' their family, and may need to be trained not to do so. Their activity level and intelligence make them excellent canine athletes. The Australian Shepherd, Shetland Sheepdog, Rough Collie, Smooth Collie and Old English Sheepdog are more popular as family companion dogs. Dogs of herding breeds now often live in urban or suburban neighbourhoods, and their owners need to maintain their physical and mental health, taking into consideration their herding instinct and qualities. The services of dog trainers are in demand, along with training centres for working and sporting herding dogs that offer sheep rental and walks in the pasture. Dogs living in the suburbs and villages can work with small groups of animals or poultry. Sometimes owners even buy a few sheep so that their dogs can enjoy what they were originally bred for.
The combination of quick learning ability, physical strength, endurance and predatory behavior with dedication to the owner and a desire to work has led to the widespread use of large European shepherd dogs for a number of other civil and military jobs. These are the most common police and military dogs, employed in guard, search, rescue and other types of service. The modern world presents people with new tasks, which are successfully solved with the help of dogs. For example, in the United States, legally protected geese often pose serious problems for residents and businesses. There, Border Collies and other strong-eyed herding dogs are used to patrol crops, residential and recreational areas, parks, beaches, golf courses and, above all, airports. Protection from birds with the help of herding dogs has turned out to be the most effective and the only easily implemented method: walking through the patrolled area several times a day, the dogs force the geese to settle in places where they cause less trouble, without harming the birds. All shepherd dogs are born athletes. Their high need for physical and intellectual activity can be met not only by sport herding, but also by other dog sports. Border Collies, with their outstanding athletic qualities, along with Belgian Shepherds and Australian Shepherds, invariably occupy leading positions in agility, flyball, frisbee, dog dancing and obedience. At the same time, in service, sport and show dogs of herding breeds that do not interact with livestock, the herding instinct gradually weakens. Competitive herding The competitive dog sport in which herding dogs move animals around a field, fences, gates, or enclosures as directed by their handlers is called a sheepdog trial, herding test or stockdog trial, depending on the area. Such events are particularly associated with hill farming areas, where sheep range widely on largely unfenced land.
These trials are popular in the United Kingdom, Ireland, South Africa, Chile, Canada, the USA, Australia, New Zealand and other farming nations, and have occasionally even become primetime television fare. In the US, regular events are run by the United States Border Collie Handler's Association, the Australian Shepherd Club of America, the American Kennel Club and many others. The world record price for a working sheep dog was broken in February 2011 at auction at Skipton Market, England, with £6,300 ($10,270) paid for Dewi Fan. The previous record was £5,145 ($8,390). Basic herding dog commands
Come by or just by - go to the left of the stock, or clockwise around them.
Away to me, or just away or way - go to the right of the stock, or counterclockwise around them.
Stand - stop, although when said gently may also mean just to slow down.
Wait, (lie) down, sit or stay - stop, but remain engaged with the stock; do not break off contact by leaving.
Steady or take time - slow down.
Cast - gather the stock into a group. Good working dogs will cast over a large area. This is an attribute rather than a command.
Find - search for stock. A good dog will hold the stock until the shepherd arrives. Some will bark when the stock have been located.
Get out or back - move away from the stock. Used when the dog is working too close to the stock, potentially causing the stock stress. Occasionally used as a reprimand.
Keep away or keep - used by some handlers to set a direction and a distance from the sheep.
Hold - keep stock where they are.
Bark or speak up - bark at stock. Useful when more force is needed, and usually not essential for working cattle and sheep.
Look back - return for a missed animal. Also used after a shed is completed and the sheep have rejoined the flock or packet.
In here or here - go through a gap in the flock. Used when separating stock.
Walk up, walk on or just walk - move in closer to the stock.
That will do - stop working and return to the handler.
These commands may be indicated by a hand movement, whistle or voice. There are many other commands that are also used when working stock, and in general use away from stock. Herding dog commands are generally taught using livestock. Urban owners without access to livestock can teach basic commands through herding games. These are not the only commands used: there are many variations. When whistles are used, each individual dog usually has a different set of commands to avoid confusion when several dogs are being worked at one time.
Biology and health sciences
Dogs
https://en.wikipedia.org/wiki/Lithium%20aluminium%20hydride
Lithium aluminium hydride
Lithium aluminium hydride, commonly abbreviated to LAH, is an inorganic compound with the chemical formula LiAlH4. It is a white solid, discovered by Finholt, Bond and Schlesinger in 1947. This compound is used as a reducing agent in organic synthesis, especially for the reduction of esters, carboxylic acids, and amides. The solid is dangerously reactive toward water, releasing gaseous hydrogen (H2). Some related derivatives have been discussed for hydrogen storage. Properties, structure, preparation LAH is a colourless solid, but commercial samples are usually gray due to contamination. This material can be purified by recrystallization from diethyl ether. Large-scale purifications employ a Soxhlet extractor. Commonly, the impure gray material is used in synthesis, since the impurities are innocuous and can be easily separated from the organic products. The pure powdered material is pyrophoric, but not its large crystals. Some commercial materials contain mineral oil to inhibit reactions with atmospheric moisture, but more commonly it is packed in moisture-proof plastic sacks. LAH reacts violently with water, including atmospheric moisture, to liberate dihydrogen gas. The reaction proceeds according to the following idealized equation: LiAlH4 + 4 H2O → LiOH + Al(OH)3 + 4 H2. This reaction provides a useful method to generate hydrogen in the laboratory. Aged, air-exposed samples often appear white because they have absorbed enough moisture to generate a mixture of the white compounds lithium hydroxide and aluminium hydroxide. Structure LAH crystallizes in the monoclinic space group P21/c. The unit cell has the dimensions a = 4.82, b = 7.81, and c = 7.92 Å, with α = γ = 90° and β = 112°. In the structure, Li+ cations are surrounded by five AlH4− anions, which have tetrahedral molecular geometry. The cations are bonded to one hydrogen atom from each of the surrounding tetrahedral anions, creating a bipyramidal arrangement. At high pressures (>2.2 GPa) a phase transition may occur to give β-LAH.
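Because the hydrolysis is quantitative, the hydrogen yield is easy to estimate from molar masses. A minimal sketch, assuming the idealized stoichiometry LiAlH4 + 4 H2O → LiOH + Al(OH)3 + 4 H2 and the ideal-gas molar volume at STP (22.4 L/mol):

```python
# Estimate the H2 gas released when LiAlH4 reacts with excess water,
# assuming the idealized stoichiometry LiAlH4 + 4 H2O -> LiOH + Al(OH)3 + 4 H2.
M_LIALH4 = 6.94 + 26.98 + 4 * 1.008  # g/mol, from standard atomic masses
V_MOLAR_STP = 22.4                   # L/mol, ideal gas at STP (assumed conditions)

def h2_litres_from_lah(grams_lah):
    """Litres of H2 (at STP) liberated by hydrolysing grams_lah of LiAlH4."""
    mol_lah = grams_lah / M_LIALH4
    mol_h2 = 4 * mol_lah  # 4 mol H2 per mol LiAlH4
    return mol_h2 * V_MOLAR_STP

print(f"{h2_litres_from_lah(1.0):.2f} L")  # roughly 2.36 L of H2 per gram
```

Over two litres of gas per gram of solid is why even small spills of LAH into water are hazardous, and also why the reaction is a convenient laboratory hydrogen source.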
Preparation LAH was first prepared from the reaction between lithium hydride (LiH) and aluminium chloride: 4 LiH + AlCl3 → LiAlH4 + 3 LiCl. In addition to this method, the industrial synthesis entails the initial preparation of sodium aluminium hydride from the elements under high pressure and temperature: Na + Al + 2 H2 → NaAlH4. LiAlH4 is then prepared by a salt metathesis reaction: NaAlH4 + LiCl → LiAlH4 + NaCl, which proceeds in high yield. LiCl is removed by filtration from an ethereal solution of LAH, with subsequent precipitation of LAH to yield a product containing around 1% w/w LiCl. An alternative preparation starts from LiH and metallic Al instead of AlCl3. Catalyzed by a small quantity of (0.2%), the reaction proceeds well using dimethyl ether as solvent. This method avoids the cogeneration of salt. Solubility data LAH is soluble in many ethereal solutions. However, it may spontaneously decompose due to the presence of catalytic impurities; it appears to be more stable in tetrahydrofuran (THF). Thus, THF is preferred over, e.g., diethyl ether despite the lower solubility. Thermal decomposition LAH is metastable at room temperature. During prolonged storage it slowly decomposes to Li3AlH6 (lithium hexahydridoaluminate) and LiH. This process can be accelerated by the presence of catalytic elements, such as titanium, iron or vanadium. When heated, LAH decomposes in a three-step reaction mechanism (R1–R3). R1 is usually initiated by the melting of LAH in the temperature range 150–170 °C, immediately followed by decomposition into solid Li3AlH6, although R1 is known to proceed below the melting point of LiAlH4 as well. At about 200 °C, Li3AlH6 decomposes into LiH (R2) and Al, which subsequently convert into LiAl above 400 °C (R3). Reaction R1 is effectively irreversible. R2 is reversible, with an equilibrium pressure of about 0.25 bar at 500 °C. R1 and R2 can occur at room temperature with suitable catalysts. Thermodynamic data The table summarizes thermodynamic data for LAH and reactions involving LAH, in the form of standard enthalpy, entropy, and Gibbs free energy change, respectively.
Applications Use in organic chemistry Lithium aluminium hydride (LAH) is widely used in organic chemistry as a reducing agent. It is more powerful than the related reagent sodium borohydride owing to the weaker Al–H bond compared to the B–H bond. Typically used as a solution in diethyl ether and followed by an acid workup, it converts esters, carboxylic acids, acyl chlorides, aldehydes, and ketones into the corresponding alcohols (see: carbonyl reduction). Similarly, it converts amides, nitro compounds, nitriles, imines, oximes, and organic azides into the corresponding amines (see: amide reduction). It reduces quaternary ammonium cations to the corresponding tertiary amines. Reactivity can be tuned by replacing hydride groups with alkoxy groups. Due to its pyrophoric nature, instability, toxicity, low shelf life and the handling problems associated with its reactivity, it has been replaced in the last decade, both at the small-industrial scale and for large-scale reductions, by the more convenient related reagent sodium bis(2-methoxyethoxy)aluminium hydride, which exhibits similar reactivity but offers higher safety, easier handling and better economics. LAH is most commonly used for the reduction of esters and carboxylic acids to primary alcohols; prior to the advent of LAH this was a difficult conversion involving sodium metal in boiling ethanol (the Bouveault-Blanc reduction). Aldehydes and ketones can also be reduced to alcohols by LAH, but this is usually done using milder reagents such as sodium borohydride; α,β-unsaturated ketones are reduced to allylic alcohols. When epoxides are reduced using LAH, the reagent attacks the less hindered end of the epoxide, usually producing a secondary or tertiary alcohol. Epoxycyclohexanes are reduced to give axial alcohols preferentially. Partial reduction of acid chlorides to give the corresponding aldehyde cannot proceed via LAH, since the latter reduces all the way to the primary alcohol.
Instead, the milder lithium tri-tert-butoxyaluminum hydride, which reacts significantly faster with the acid chloride than with the aldehyde, must be used. For example, when isovaleric acid is treated with thionyl chloride to give isovaleroyl chloride, it can then be reduced via lithium tri-tert-butoxyaluminum hydride to give isovaleraldehyde in 65% yield. Lithium aluminium hydride also reduces alkyl halides to alkanes. Alkyl iodides react the fastest, followed by alkyl bromides and then alkyl chlorides. Primary halides are the most reactive, followed by secondary halides; tertiary halides react only in certain cases. Lithium aluminium hydride does not reduce simple alkenes or arenes. Alkynes are reduced only if an alcohol group is nearby, and alkenes are reduced in the presence of catalytic TiCl4. LAH has also been observed to reduce the double bond in N-allylamides. Inorganic chemistry LAH is widely used to prepare main group and transition metal hydrides from the corresponding metal halides. LAH also reacts with many inorganic ligands to form coordinated alumina complexes associated with lithium ions. LiAlH4 + 4 NH3 → Li[Al(NH2)4] + 4 H2 Hydrogen storage LiAlH4 contains 10.6 wt% hydrogen, making LAH a potential hydrogen storage medium for future fuel cell-powered vehicles. The high hydrogen content, as well as the discovery of reversible hydrogen storage in Ti-doped NaAlH4, have sparked renewed research into LiAlH4 during the last decade. A substantial research effort has been devoted to accelerating the decomposition kinetics by catalytic doping and by ball milling. In order to take advantage of the total hydrogen capacity, the intermediate compound LiH must be dehydrogenated as well. Due to its high thermodynamic stability this requires temperatures in excess of 400 °C, which is not considered feasible for transportation purposes. Accepting LiH + Al as the final product, the hydrogen storage capacity is reduced to 7.96 wt%.
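Both storage figures quoted above follow directly from standard atomic masses. A quick check:

```python
# Verify the hydrogen-storage figures for LiAlH4 from standard atomic masses.
M_LI, M_AL, M_H = 6.94, 26.98, 1.008
M_LIALH4 = M_LI + M_AL + 4 * M_H  # ~37.95 g/mol

total_h = 4 * M_H / M_LIALH4 * 100   # all four hydrogens released
usable_h = 3 * M_H / M_LIALH4 * 100  # stopping at LiH + Al keeps one H bound

print(f"total:  {total_h:.1f} wt%")   # 10.6 wt%
print(f"usable: {usable_h:.2f} wt%")  # 7.97 wt% (the text quotes 7.96)
```

The small discrepancy in the second figure (7.97 vs. 7.96) comes from rounding in the atomic masses; the point is that releasing only three of the four hydrogen atoms costs about a quarter of the theoretical capacity.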
Another problem related to hydrogen storage is recycling back to LiAlH4, which, owing to its relatively low stability, requires an extremely high hydrogen pressure in excess of 10,000 bar. Cycling only reaction R2 — that is, using Li3AlH6 as starting material — would store 5.6 wt% hydrogen in a single step (vs. two steps for NaAlH4, which stores about the same amount of hydrogen). However, attempts at this process have not been successful so far. Other tetrahydridoaluminiumates A variety of salts analogous to LAH are known. NaH can be used to efficiently produce sodium aluminium hydride (NaAlH4) by metathesis in THF: LiAlH4 + NaH → NaAlH4 + LiH Potassium aluminium hydride (KAlH4) can be produced similarly in diglyme as a solvent: LiAlH4 + KH → KAlH4 + LiH The reverse, i.e., production of LAH from either sodium aluminium hydride or potassium aluminium hydride, can be achieved by reaction with LiCl or lithium hydride in diethyl ether or THF: NaAlH4 + LiCl → LiAlH4 + NaCl KAlH4 + LiCl → LiAlH4 + KCl "Magnesium alanate" (Mg(AlH4)2) arises similarly using MgBr2: 2 LiAlH4 + MgBr2 → Mg(AlH4)2 + 2 LiBr Red-Al (or SMEAH, NaAlH2(OC2H4OCH3)2) is synthesized by reacting sodium aluminium tetrahydride (NaAlH4) with 2-methoxyethanol: NaAlH4 + 2 CH3OCH2CH2OH → NaAlH2(OCH2CH2OCH3)2 + 2 H2
Physical sciences
Hydride salts
Chemistry
https://en.wikipedia.org/wiki/Collie
Collie
Collies form a distinctive type of herding dog, including many related landraces and standardized breeds. The type originated in Scotland and Northern England. Collies are medium-sized, fairly lightly built dogs with pointed snouts. Many types have a distinctive white color over the shoulders. Collies are very active and agile, and most types have a very strong herding instinct. Collie breeds have spread through many parts of the world (especially North America and Australia), and have diversified into many varieties, sometimes mixed with other dog types. Some collie breeds have remained working dogs for herding cattle, sheep, and other livestock, while others are kept as pets, show dogs or for dog sports, in which they display great agility, stamina and trainability. While the American Kennel Club has a breed it calls "collie", collie dogs are in fact a distinctive type of herding dog inclusive of many related landraces and formal breeds. There are usually major distinctions between show dogs and those bred for herding trials or dog sports: the latter typically display great agility, stamina, trainability and, most importantly, intelligence. Common use of the unmodified name "collie" in some areas is limited largely to certain breeds – the name means Rough Collie by default in parts of the United States, and Border Collie by default in many rural parts of Great Britain. Many collie dog types do not actually include "collie" in their name – for example the Welsh Sheepdog. Name The exact origin of the name collie is uncertain; it may derive from the Scots word for 'coal'. Alternatively it may come from the related word coaly, referring to the black-faced mountain sheep of Scotland. The collie name usually refers to dogs of Scottish origin which have spread into many other parts of the world, often being called sheepdog or shepherd dog elsewhere.
Iris Combe, in her book, “Border Collies,” says that in old Gaelic “collie” was the rural term for anything useful — a “collie dog” was a useful dog. Description Appearance Collies are generally medium-sized, light- to medium-boned dogs. Cattle-herding types are stockier than sheep-herding types. The fur may be short or long, and the tail may be smooth, feathered, or bushy. In the 1800s, the occasional naturally bob-tailed dog would occur. The tail can be carried low with an upward swirl, or may be carried higher but never over the back. Each breed can vary in coloration, with the usual base colors being black, black-and-tan, red, red-and-tan, or white with a colored head (with or without other body coloration) of sable, black-and-tan, blue merle, or sable merle. They often have white along with the main color, usually under the belly and chest, over the shoulders, and on parts of the face and legs, but sometimes leaving only the head colored – or white may be absent (unusual) or limited to the chest and toes (as in the Australian Kelpie). Merle coloration may also be present over any of the other color combinations, even in landrace types. The most widespread patterns include sable, black and white, black and tan and tricolour (black-and-tan and white). Temperament Collies range in trainability from the "average" to very biddable. The Border Collie is the breed most in need of a "job" to stimulate its brain, lest it become anxious and hyperactive, while many other collie breeds fit well into an active family lifestyle (though all collie types still require some mental stimulation). Collie-type breeds are also known for their sensitivity and awareness of emotions in people; they may require gentler handling than other types of dogs. Working type temperaments A working member of a collie breed, such as the Border Collie, is an energetic and agile dog with great stamina.
When in fit, working condition they are able to run all day without tiring, even over very rough or steep ground. Working collies display a keen intelligence for the job at hand and are instinctively highly motivated. They are often intensely loyal. Dogs of collie type or derivation occupy four of the first sixteen ranks in Stanley Coren's The Intelligence of Dogs, with the Border Collie being first. These characteristics generally make working strains suitable for agility; in addition to herding work they are well suited to active sports such as sheepdog trials, flyball, disc dog and dog agility. Working strains have strong herding instincts, and some individuals can be single-minded to the point of obsessiveness. Collies can compete in herding events. Border Collies are used as search dogs in mountain rescue in Britain. They are particularly useful for searching large areas of hillside and avalanche debris. H. MacInnes believed that dark coated dogs are less prone to snow blindness. Show and pet type temperaments Certain types of collie (for example Rough Collies, Smooth Collies, Shetland Sheepdogs and some strains of Border Collie and other breeds) have been bred for many generations as pets and for the sport of conformation showing, not as herding dogs. All collie dog breeds have proved to be highly trainable, gentle, loyal, intelligent, and well suited as pets. Their gentleness and devotion also make them quite compatible with children. They are often more suitable as watchdogs than as guard dogs, though the individual personalities of these dogs vary. The temperament of these breeds has been featured in literature, film, and popular television programs. The novels of Albert Payson Terhune, which were very popular in the United States during the 1920s and 1930s, celebrated the temperament and companionship of his early AKC collies. 
More famously, the temperament and intelligence of the Rough Collie were exaggerated to mythic proportions in the character Lassie, which has been the subject of many films, books, and television shows from 1938 to the present. The Lassie character was featured in a book titled Lassie Come Home by Eric P. Knight. Knight's collie "Tootsie" was the inspiration for the book, which was a collection of stories based on her and other collie legends he collected from talking to friends and neighbors. One such story was most likely the documented tale of "Silverton Bobbie", the Oregon collie who crossed the US to get to his owners. While the dogs who played Lassie on-screen were from AKC lines, the actual Tootsie looked nothing like them, although she did come from a collie breeder. Health Some collie breeds (especially the Rough Collie, Smooth Collie, and the Australian Shepherd) are affected by a genetic defect, a mutation within the MDR1 gene, formerly known as "ivermectin sensitivity", but now known to cause lowered tolerance to a wide variety of different veterinary drugs. Approximately 70% of collies are affected, making them very sensitive to some drugs, such as Ivermectin, as well as to some antibiotics, opioids including loperamide, and steroids – over 100 drugs in total. The MDR1 status of individual dogs can be easily tested for. In addition, the digestive system of these breeds is relatively fragile; compared with similar medium and large dogs, they are more easily upset by food, which can lead to vomiting, abnormal excretion, or gastrointestinal disease. Breeders therefore need to ensure strict food hygiene, fresh ingredients, and rich nutrition. The Verband für das Deutsche Hundewesen (The German Kennel Club) encourages breed clubs to test all breeding stock and avoid breeding from affected dogs. Collies may have a genetic disease, named canine cyclic neutropenia, or grey collie syndrome. This is a stem cell disorder.
Puppies with this disorder are quite often mistaken for healthy Blue Merles, even though their colour is a silver grey. Affected puppies rarely live more than 6 months. For a puppy to be affected, both the sire and the dam have to be carriers of the disorder. Canine familial dermatomyositis is an inherited idiopathic condition affecting the skin and muscle and in rare cases the blood vessels. The condition causes dermatitis throughout the body and proceeds to myositis which in severe cases leads to megaesophagus. Collies alongside the Beauceron and Shetland Sheepdog are known to have a predilection to the condition although it has been described in other breeds. Collie eye anomaly is an autosomal recessive condition caused by a mutation in the NHEJ1 gene that affects Collies and related breeds. Collie types and breeds Herding dogs of collie type have long been widespread in Britain, and these can be regarded as a landrace from which a number of other landraces, types, and formal breeds have been derived, both in Britain and elsewhere. Many of them are working herding dogs, but some have been bred for conformation showing and as pets, sometimes losing their working instincts in the course of selection for appearance or for a more subdued temperament. Herding types tend to vary in appearance more than conformation and pet types, as they are bred primarily for their working ability, and appearance is thus of lower importance. Dogs of collie type or ancestry include: Australian Kelpie Developed in Australia from collies originally brought from Scotland and northern England. Erect ears, short-haired, usually black, black-and-tan or red-and-tan, with white limited to chest and toes. Australian Shepherd Derives its name from the sheep imported from Australia in the 19th century, but native to the Western United States. Used as both a drover and guardian of sheep and cattle. Ancestry almost certainly includes British collie types and Basque and Spanish sheepdogs. 
Shaggy mid-length coat in every colour including merle, half-prick ears, bobbed tail, and, notably, eyes of different colour (heterochromia is very common). Bearded Collie Now largely a pet and show breed, but still of the collie type, and some are used as working dogs. The Beardie has a flat, harsh, strong and shaggy outer coat and a soft, furry undercoat. The coat falls naturally to either side without need of a part. Long hair on the cheeks, lower lips, and under the chin forms the beard for which it is known. All Bearded Collies are born black, blue, brown, or fawn, with or without white markings. Some carry a fading gene, and as they mature, the coat lightens, darkening again slightly after one year of age. A puppy born black may become any shade of gray from black to slate to silver. The dogs that are born brown will lighten from chocolate to sandy, and the blues and fawns show shades from dark to light. Dogs without the fading gene stay the color they were when they were born. The white only occurs as a blaze on the face, on the head, on the tip of the tail, on the chest, legs, feet, and around the neck. Tan markings occasionally appear on the eyebrows, inside the ears, on the cheeks, under the root of the tail and on the legs where the white joins the main color. Blue Lacy Grey or red all over, short hair, floppy ears. Derived partly from the English Shepherd, with other non-collie breeds. Border Collie The best-known breed for herding sheep throughout the world. Originally developed in Scotland and Northern England. Not always suitable for herding cattle. Ears semi-erect or floppy, fur silky or fairly long, but short on face and legs; red, black, black-and-tan or merle, all usually with white over shoulders, alternatively mostly white with coloured patches on head. Coat can be either long or short. Cumberland Sheepdog An extinct breed similar to the Border Collie and possibly absorbed into that breed. An ancestor of the Australian Shepherd.
Erect or semi-erect ears, dense fur, black with white only on face and chest. English Shepherd Developed in the U.S. from stock of Farm Collie type originally from Britain. Floppy ears, thick fur, red, black or black-and-tan, with white over shoulders. Not to be confused with the very different Old English Sheepdog. German Coolie Also called Koolie, or German Collie. Developed in Australia, probably from British collies, but may have included dogs from Germany and Spain. Erect ears, short fur, black, red, black-and-tan or merle, often with some white on neck or over shoulders. (Note: the name "German Collie" is also applied to a cross between a German Shepherd and a Border Collie.) Huntaway Developed in New Zealand from a mixture of breeds, probably including some collie – but it is not of the collie type. Larger and more heavily built than most collies, floppy ears, most commonly black-and-tan with little white. Lurcher Not an established breed, but a cross of collie (or other herding dog or terrier) with Greyhound or other sight hound. Traditionally bred for poaching, with the speed of a sight hound but more obedient and less conspicuous. Variable in appearance, but with greyhound build: Floppy ears, tall, slender, with small head, deep chest and "herring gut"; smooth, silky or rough coat, often brindled. McNab Shepherd Developed in the U.S. from Scotch Collies and dogs imported by Basque sheepherders. Variable in size, erect or semi-erect ears, short to medium fur, black or red with some white on face, chest and/or feet. New Zealand Heading Dog Also called New Zealand Eye Dog. Developed in New Zealand from Border Collie heritage and used to bring sheep towards the shepherd, especially with strong eye contact and no barking. Old English Sheepdog Derived from "Shags", hairy herding dogs, themselves derived from "Beards", the ancestors of the Bearded Collie. 
Modern dogs larger than most collies, no tail, floppy ears, long silky hair (including on face), usually grey and white. Not to be confused with the English Shepherd. Scotch Collie Scotch collies are separated into two varieties or breeds: Rough Collie and Smooth Collie. They are a rather different type from other collies: tall, with a long, narrow face, a profuse coat, and semi-erect ears. They are still used for herding as well as for showing. They were developed in the highlands of Scotland, which is why they needed a profuse coat. There are four recognised colors: Sable, tri-color, blue merle, and color headed white. Non-recognized colors are: Bi-black, sable merle, harlequin, red merle, red tricolor, and black and tan. Both the Rough and Smooth Collies are double-coated, with Smooths having a shorter or "smooth" outer coat. There are three different coat types of Rough Collies: Brandwyn (fluffy coats), Parader (flat long coats) and the working type (medium-length coats). Shetland Sheepdog A small show and pet breed developed in England partly from herding dogs originating in Shetland. The original Shetland dogs were not collies, but instead working herding dogs of Spitz type, similar to the Icelandic Sheepdog. However, in the development of the modern Shetland breed these Spitz-type dogs were heavily mixed with the Rough Collie and toy breeds, and now are similar in appearance to a miniature Rough Collie. Very small, nearly erect ears, long silky fur on body, most commonly sable or merle, with white over shoulders. Smithfield Originally a British type, now extinct, used for droving cattle in the south-east of England, especially at the Smithfield Market in London. They were large, strong collies, with white or black-and-white fur, and floppy ears. Occasionally the name is used for modern dogs of a somewhat similar type in Australia.
The name "Smithfield" is used to describe the shaggy Tasmanian farm dog of Bearded Collie type; and is also applied to the Australian Stumpy Tail Cattle Dog and may have contributed to the Australian Koolie. Welsh Sheepdog Landrace herding dog from Wales. Erect or semi-erect ears, short or silky fur, red, black, black-and-tan, or merle, all usually with white over shoulders. Famous collies Blanco, pet of Lyndon Johnson. Kep, pet of Beatrix Potter. He is depicted in the book The Tale of Jemima Puddle-Duck. Lad, pet of Albert Payson Terhune. He is chronicled through several short stories, most famously in the collection Lad, A Dog. Pickles, known for his role in finding the stolen Jules Rimet Trophy in March 1966, four months before the 1966 FIFA World Cup kicked off in England. Pal, who played Lassie (see below). Peter, awarded the Dickin Medal for conspicuous gallantry or devotion to duty while serving in military conflict. Reveille, a Rough Collie, official mascot of Texas A&M University. Rob, awarded the Dickin Medal for conspicuous gallantry or devotion to duty while serving in military conflict. Seamus, pet of Humble Pie front-man Steve Marriott. Seamus' howling was recorded by Pink Floyd, and the resulting song, "Seamus", was released on their album Meddle (1971). Sheila, awarded the Dickin Medal for conspicuous gallantry or devotion to duty while serving in military conflict. Shep, Blue Peter dog. Silverton Bobbie, the Wonder Dog, who in 1923 traveled 2,800 miles from Indiana back home to Silverton, Oregon. Two famous white Collies owned by United States President and Mrs. Calvin Coolidge; a large oil painting of First Lady Mrs. Coolidge with one of their white Collies hangs in the White House. Collies in fiction Lassie was a fictional Rough Collie dog character created by Eric Knight who originally was featured in a short story expanded to novel length called Lassie Come-Home.
The character then went on to star in numerous MGM movies, a long-running classic TV series, and various remakes/spinoffs/revivals. Bessy, a long-running Belgian comics series which also was very successful in French, German and Swedish translations. It also featured a collie, obviously based on Lassie, but in a Wild West setting. Fly and Rex, herding dogs of the movie Babe. The Dog, the Border Collie of the comic strip Footrot Flats. Colleen, a female collie in Road Rovers. Nana, a female Border Collie in Snow Dogs. Shadow, collie from Enid Blyton's book Shadow the Sheepdog. The collie type is not identified in the text, but the illustrations in an early edition look vaguely like a border collie. Fly, the sheep dog featured in Arthur Waterhouse's "Fells" trilogy for children, Raiders of the Fells (1948), Rogues of the Fells (1951) and Fly of the Fells (1957). The collie type is not specified, but the illustrations look rather like a Rough Collie. The eponymous dog from the film Bingo. Flo, a collie in All Dogs Go to Heaven. Murray, the male collie from the TV series Mad About You. A collie in White Fang by Jack London is the mate of the wolfdog White Fang. Courageous Collie Carlo, a Rough Collie from Martha Speaks. Scouty, a blue Border Collie in Strawberry Shortcake's Berry Bitty Adventures. Winona, a collie from My Little Pony: Friendship is Magic. Mackenzie, a Border Collie in Bluey. Roger, a Border Collie in Cats & Dogs 3: Paws Unite!
Biology and health sciences
Dogs
Animals
514355
https://en.wikipedia.org/wiki/Rod%20cell
Rod cell
Rod cells are photoreceptor cells in the retina of the eye that can function in lower light better than the other type of visual photoreceptor, cone cells. Rods are usually found concentrated at the outer edges of the retina and are used in peripheral vision. On average, there are approximately 92 million rod cells (vs ~6 million cones) in the human retina. Rod cells are more sensitive than cone cells and are almost entirely responsible for night vision. However, rods have little role in color vision, which is the main reason why colors are much less apparent in dim light. Structure Rods are a little longer and leaner than cones but have the same basic structure. Opsin-containing disks lie at the end of the cell adjacent to the retinal pigment epithelium, which in turn is attached to the inside of the eye. The stacked-disc structure of the detector portion of the cell allows for very high efficiency. Rods are much more common than cones, with about 120 million rod cells compared to 6 to 7 million cone cells. Like cones, rod cells have a synaptic terminal, an inner segment, and an outer segment. The synaptic terminal forms a synapse with another neuron, usually a bipolar cell or a horizontal cell. The inner and outer segments are connected by a cilium, which lines the distal segment. The inner segment contains organelles and the cell's nucleus, while the rod outer segment (abbreviated to ROS), which is pointed toward the back of the eye, contains the light-absorbing materials. A human rod cell is about 2 microns in diameter and 100 microns long. Rods are not all morphologically the same; in mice, rods close to the outer plexiform synaptic layer display a reduced length due to a shortened synaptic terminal. Function Photoreception In vertebrates, activation of a photoreceptor cell is a hyperpolarization (inhibition) of the cell. When they are not being stimulated, such as in the dark, rod cells and cone cells depolarize and release a neurotransmitter spontaneously. 
This neurotransmitter hyperpolarizes the bipolar cell. Bipolar cells exist between photoreceptors and ganglion cells and act to transmit signals from the photoreceptors to the ganglion cells. As a result of the bipolar cell being hyperpolarized, it does not release its transmitter at the bipolar-ganglion synapse and the synapse is not excited. Activation of photopigments by light sends a signal by hyperpolarizing the rod cell, leading to the rod cell not sending its neurotransmitter, which leads to the bipolar cell then releasing its transmitter at the bipolar-ganglion synapse and exciting the synapse. Depolarization of rod cells (causing release of their neurotransmitter) occurs because in the dark, cells have a relatively high concentration of cyclic guanosine 3'-5' monophosphate (cGMP), which opens ion channels (largely sodium channels, though calcium can enter through these channels as well). The positive charges of the ions that enter the cell down its electrochemical gradient change the cell's membrane potential, cause depolarization, and lead to the release of the neurotransmitter glutamate. Glutamate can depolarize some neurons and hyperpolarize others, allowing photoreceptors to interact in an antagonistic manner. When light hits photoreceptive pigments within the photoreceptor cell, the pigment changes shape. The pigment, called rhodopsin (conopsin is found in cone cells) comprises a large protein called opsin (situated in the plasma membrane), attached to which is a covalently bound prosthetic group: an organic molecule called retinal (a derivative of vitamin A). The retinal exists in the 11-cis-retinal form when in the dark, and stimulation by light causes its structure to change to all-trans-retinal. This structural change causes an increased affinity for the regulatory protein called transducin (a type of G protein). Upon binding to rhodopsin, the alpha subunit of the G protein replaces a molecule of GDP with a molecule of GTP and becomes activated. 
This replacement causes the alpha subunit of the G protein to dissociate from the beta and gamma subunits of the G protein. As a result, the alpha subunit is now free to bind to the cGMP phosphodiesterase (an effector protein). The alpha subunit interacts with the inhibitory PDE gamma subunits and prevents them from blocking catalytic sites on the alpha and beta subunits of PDE, leading to the activation of cGMP phosphodiesterase, which hydrolyzes cGMP (the second messenger), breaking it down into 5'-GMP. Reduction in cGMP allows the ion channels to close, preventing the influx of positive ions, hyperpolarizing the cell, and stopping the release of the neurotransmitter glutamate. The entire process by which light initiates a sensory response is called visual phototransduction. Activation of a single unit of rhodopsin, the photosensitive pigment in rods, can lead to a large reaction in the cell because the signal is amplified. Once activated, rhodopsin can activate hundreds of transducin molecules, each of which in turn activates a phosphodiesterase molecule, which can break down over a thousand cGMP molecules per second. Thus, rods can have a large response to a small amount of light. As the retinal component of rhodopsin is derived from vitamin A, a deficiency of vitamin A causes a deficit in the pigment needed by rod cells. Consequently, fewer rod cells are able to respond sufficiently in darker conditions, and as the cone cells are poorly adapted for sight in the dark, night-blindness can result. Reversion to the resting state Rods make use of three inhibitory mechanisms (negative feedback mechanisms) to allow a rapid return to the resting state after a flash of light. Firstly, rhodopsin kinase (RK) phosphorylates the cytosolic tail of activated rhodopsin on multiple serines, partially inhibiting the activation of transducin.
Also, an inhibitory protein, arrestin, then binds to the phosphorylated rhodopsins to further inhibit rhodopsin activity. While arrestin shuts off rhodopsin, an RGS protein (functioning as a GTPase-activating protein (GAP)) drives the transducin (G-protein) into an "off" state by increasing the rate of hydrolysis of the bound GTP to GDP. When the cGMP concentration falls, the previously open cGMP-sensitive channels close, leading to a reduction in the influx of calcium ions. The associated decrease in the concentration of calcium ions stimulates the calcium ion-sensitive proteins, which then activate the guanylyl cyclase to replenish the cGMP, rapidly restoring it to its original concentration. This opens the cGMP-sensitive channels and causes a depolarization of the plasma membrane. Desensitization When the rods are exposed to a high concentration of photons for a prolonged period, they become desensitized (adapted) to the environment. As rhodopsin is phosphorylated by rhodopsin kinase (a member of the GPCR kinases (GRKs)), it binds with high affinity to arrestin. The bound arrestin can contribute to the desensitization process in at least two ways. First, it prevents the interaction between the G protein and the activated receptor. Second, it serves as an adaptor protein that delivers the receptor to the clathrin-dependent endocytosis machinery (to induce receptor-mediated endocytosis).
However, this convergence comes at a cost to visual acuity (or image resolution) because the pooled information from multiple cells is less distinct than it would be if the visual system received information from each rod cell individually. Rod cells also respond more slowly to light than cones and the stimuli they receive are added over roughly 100 milliseconds. While this makes rods more sensitive to smaller amounts of light, it also means that their ability to sense temporal changes, such as quickly changing images, is less accurate than that of cones. Experiments by George Wald and others showed that rods are most sensitive to wavelengths of light around 498 nm (green-blue), and insensitive to wavelengths longer than about 640 nm (red). This is responsible for the Purkinje effect: as intensity dims at twilight, the rods take over, and before color disappears completely, peak sensitivity of vision shifts towards the rods' peak sensitivity (blue-green).
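The single-photon amplification described earlier (one rhodopsin activating hundreds of transducin molecules, each activating a phosphodiesterase that hydrolyzes over a thousand cGMP molecules per second) can be put into rough numbers. The per-stage figures below are illustrative order-of-magnitude assumptions taken from the text's wording, not measured values:

```python
# Rough sketch of the single-photon amplification cascade in a rod cell.
# Stage counts are illustrative assumptions based on the text's
# "hundreds" and "over a thousand", not experimental measurements.
transducin_per_rhodopsin = 500   # "hundreds" of transducins per rhodopsin
pde_per_transducin = 1           # each transducin activates one PDE
cgmp_per_pde_per_s = 1000        # "over a thousand" cGMP per PDE per second

gain = transducin_per_rhodopsin * pde_per_transducin * cgmp_per_pde_per_s
print(f"~{gain:,} cGMP molecules hydrolyzed per second per photon")
```

Even with conservative stage counts, the overall gain reaches hundreds of thousands of cGMP molecules per absorbed photon, which is why a single photon can produce a measurable electrical response.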
Biology and health sciences
Visual system
Biology
514402
https://en.wikipedia.org/wiki/IUPAC%20nomenclature%20of%20organic%20chemistry
IUPAC nomenclature of organic chemistry
In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry. To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked up. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas. Basic principles In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound. The steps for naming an organic compound are: Identification of the most senior group. If more than one functional group is present, the one with the highest group precedence should be used. Identification of the ring or chain with the maximum number of senior groups. Identification of the ring or chain with the most senior elements (in order: N, P, Si, B, O, S, C). Identification of the parent compound. Rings are senior to chains if composed of the same elements. For cyclic systems: Identification of the parent cyclic ring.
The cyclic system must obey these rules, in order of precedence: It should have the most senior heteroatom (in order: N, O, S, P, Si, B). It should have the maximum number of rings. It should have the maximum number of atoms. It should have the maximum number of heteroatoms. It should have the maximum number of senior heteroatoms (in order: O, S, N, P, Si, B). For chains: Identification of the parent hydrocarbon chain. This chain must obey the following rules, in order of precedence: It should have the maximum length. It should have the maximum number of heteroatoms. It should have the maximum number of senior heteroatoms (in order: O, S, N, P, Si, B). For cyclic systems and chains after previous rules: It should have the maximum number of multiple bonds, then the maximum number of double bonds. It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with the highest group precedence should be used. Identification of the side-chains. Side chains are the carbon chains that are not in the parent chain, but are branched off from it. Identification of the remaining functional groups, if any, and naming them by their ionic prefixes (such as hydroxy for , oxy for , oxyalkane for , etc.). Different side-chains and functional groups will be grouped together in alphabetical order. (The multiplier prefixes di-, tri-, etc. are not taken into consideration for grouping alphabetically. For example, ethyl comes before dihydroxy or dimethyl, as the "e" in "ethyl" precedes the "h" in "dihydroxy" and the "m" in "dimethyl" alphabetically. The "di" is not considered in either case). When both side chains and secondary functional groups are present, they should be written mixed together in one group rather than in two separate groups. Identification of double/triple bonds. Numbering of the chain.
This is done by first numbering the chain in both directions (left to right and right to left), and then choosing the numbering which follows these rules, in order of precedence. Not every rule will apply to every compound; rules can be skipped if they do not apply. Has the lowest-numbered locant (or locants) for heteroatoms. Locants are the numbers on the carbons to which the substituent is directly attached. Has the lowest-numbered locants for the indicated hydrogen. The indicated hydrogen is for some unsaturated heterocyclic compounds. It refers to the hydrogen atoms not attached to atoms with double bonds in the ring system. Has the lowest-numbered locants for the suffix functional group. Has the lowest-numbered locants for multiple bonds ('ene', 'yne'), and hydro prefixes. (The locant of a multiple bond is the number of the adjacent carbon with a lower number). Has the lowest-numbered locants for all substituents cited by prefixes. Has the lowest-numbered locants for substituents in order of citation (for example: in a cyclic ring with only bromine and chlorine functional groups, alphabetically bromo- is cited before chloro- and would receive the lower locant). Numbering of the various substituents and bonds with their locants. If there is more than one of the same type of substituent/double bond, a prefix is added showing how many there are (di – 2, tri – 3, tetra – 4, then as for the number of carbons below with 'a' added at the end). The numbers for that type of side chain will be grouped in ascending order and written before the name of the side-chain. If there are two side-chains with the same alpha carbon, the number will be written twice. Example: 2,2,3-trimethyl-. If there are both double bonds and triple bonds, "en" (double bond) is written before "yne" (triple bond). When the main functional group is a terminal functional group (a group which can exist only at the end of a chain, like formyl and carboxyl groups), there is no need to number it.
Arrangement in this form: Group of side chains and secondary functional groups with numbers made in step 6 + prefix of parent hydrocarbon chain (eth, meth) + double/triple bonds with numbers (or "ane") + primary functional group suffix with numbers.Wherever it says "with numbers", it is understood that between the word and the numbers, the prefix (di-, tri-) is used. Adding of punctuation: Commas are put between numbers (2 5 5 becomes 2,5,5) Hyphens are put between a number and a letter (2 5 5 trimethylheptane becomes 2,5,5-trimethylheptane) Successive words are merged into one word (trimethyl heptane becomes trimethylheptane) Note: IUPAC uses one-word names throughout. This is why all parts are connected. The resulting name appears as: #,#-di<side chain>-#-<secondary functional group>-#-<side chain>-#,#,#-tri<secondary functional group><parent chain prefix><If all bonds are single bonds, use "ane">-#,#-di<double bonds>-#-<triple bonds>-#-<primary functional group> where each "#" represents a number. The group secondary functional groups and side chains may not look the same as shown here, as the side chains and secondary functional groups are arranged alphabetically. The di- and tri- have been used just to show their usage. (di- after #,#, tri- after #,#,#, etc.) Example Here is a sample molecule with the parent carbons numbered: For simplicity, here is an image of the same molecule, where the hydrogens in the parent chain are removed and the carbons are shown by their numbers: Now, following the above steps: The parent hydrocarbon chain has 23 carbons. It is called tricosa-. The functional groups with the highest precedence are the two ketone groups. The groups are on carbon atoms 3 and 9. As there are two, we write 3,9-dione. The numbering of the molecule is based on the ketone groups. When numbering from left to right, the ketone groups are numbered 3 and 9. When numbering from right to left, the ketone groups are numbered 15 and 21. 
3 is less than 15, therefore the ketones are numbered 3 and 9. The lower set of locants is chosen by comparison at the first point of difference, not by summing the locants. The side chains are: an ethyl- at carbon 4, an ethyl- at carbon 8, and a butyl- at carbon 12. Note: the group at carbon atom 15 is not a side chain but a methoxy functional group. There are two ethyl- groups. They are combined to create 4,8-diethyl. The side chains are grouped like this: 12-butyl-4,8-diethyl. (But this is not necessarily the final grouping, as functional groups may be added in between to ensure all groups are listed alphabetically.) The secondary functional groups are: a hydroxy- at carbon 5, a chloro- at carbon 11, a methoxy- at carbon 15, and a bromo- at carbon 18. Grouped with the side chains, this gives 18-bromo-12-butyl-11-chloro-4,8-diethyl-5-hydroxy-15-methoxy. There are two double bonds: one between carbons 6 and 7, and one between carbons 13 and 14. They would be called "6,13-diene", but the presence of the triple bond switches it to 6,13-dien. There is one triple bond, between carbon atoms 19 and 20. It is called 19-yne. The arrangement (with punctuation) is: 18-bromo-12-butyl-11-chloro-4,8-diethyl-5-hydroxy-15-methoxytricosa-6,13-dien-19-yne-3,9-dione. Finally, due to cis–trans isomerism, we have to specify the relative orientation of functional groups around each double bond. For this example, both double bonds are trans isomers, so we have (6E,13E). The final name is (6E,13E)-18-bromo-12-butyl-11-chloro-4,8-diethyl-5-hydroxy-15-methoxytricosa-6,13-dien-19-yne-3,9-dione. Hydrocarbons Alkanes Straight-chain alkanes take the suffix "-ane" and are prefixed depending on the number of carbon atoms in the chain, following standard rules. The first few are meth- (1 carbon), eth- (2), prop- (3), but- (4), pent- (5), hex- (6), hept- (7), oct- (8), non- (9), and dec- (10). For example, the simplest alkane is methane, and the nine-carbon alkane is named nonane. The names of the first four alkanes were derived from methanol, ether, propionic acid and butyric acid, respectively.
The rest are named with a Greek numeric prefix, with the exceptions of nonane which has a Latin prefix, and undecane which has mixed-language prefixes. Cyclic alkanes are simply prefixed with "cyclo-": for example, is cyclobutane (not to be confused with butene) and is cyclohexane (not to be confused with hexene). Branched alkanes are named as a straight-chain alkane with attached alkyl groups. They are prefixed with a number indicating the carbon the group is attached to, counting from the end of the alkane chain. For example, , commonly known as isobutane, is treated as a propane chain with a methyl group bonded to the middle (2) carbon, and given the systematic name 2-methylpropane. However, although the name 2-methylpropane could be used, it is easier and more logical to call it simply methylpropane – the methyl group could not possibly occur on any of the other carbon atoms (that would lengthen the chain and result in butane, not propane) and therefore the use of the number "2" is unnecessary. If there is ambiguity in the position of the substituent, depending on which end of the alkane chain is counted as "1", then numbering is chosen so that the smaller number is used. For example, (isopentane) is named 2-methylbutane, not 3-methylbutane. If there are multiple side-branches of the same size alkyl group, their positions are separated by commas and the group prefixed with multiplier prefixes depending on the number of branches. For example, (neopentane) is named 2,2-dimethylpropane. If there are different groups, they are added in alphabetical order, separated by commas or hyphens. The longest possible main alkane chain is used; therefore 3-ethyl-4-methylhexane instead of 2,3-diethylpentane, even though these describe equivalent structures. The di-, tri- etc. prefixes are ignored for the purpose of alphabetical ordering of side chains (e.g. 3-ethyl-2,4-dimethylpentane, not 2,4-dimethyl-3-ethylpentane). 
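The alphabetization rule just described (multiplying prefixes ignored when ordering side chains) can be sketched as a sort key. The prefix list below is a small assumed subset, and the stripping is deliberately naive (a name like "tridecyl" would be wrongly truncated by this approach).

```python
# Sketch: sort side chains alphabetically while ignoring multiplying prefixes
# such as di- and tri-, as in 3-ethyl-2,4-dimethylpentane ("e" before "m").

MULTIPLIERS = ("di", "tri", "tetra", "penta", "hexa")

def alpha_key(substituent):
    # Strip a leading multiplying prefix, if any, before comparing names.
    for m in MULTIPLIERS:
        if substituent.startswith(m):
            return substituent[len(m):]
    return substituent

print(sorted(["dimethyl", "ethyl"], key=alpha_key))  # -> ['ethyl', 'dimethyl']
```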
Alkenes Alkenes are named for their parent alkane chain with the suffix "-ene" and a numerical root indicating the position of the carbon with the lower number for each double bond in the chain: is but-1-ene. Multiple double bonds take the form -diene, -triene, etc., with the size prefix of the chain taking an extra "a": is buta-1,3-diene. Simple cis and trans isomers may be indicated with a prefixed cis- or trans-: cis-but-2-ene, trans-but-2-ene. However, cis- and trans- are relative descriptors. It is IUPAC convention to describe all alkenes using absolute descriptors of Z- (same side) and E- (opposite sides) with the Cahn–Ingold–Prelog priority rules (see also E–Z notation). Alkynes Alkynes are named using the same system, with the suffix "-yne" indicating a triple bond: ethyne (acetylene), propyne (methylacetylene). Functional groups Haloalkanes and haloarenes In haloalkanes and haloarenes (), halogen functional groups are prefixed with the bonding position and take the form of fluoro-, chloro-, bromo-, iodo-, etc., depending on the halogen. Multiple groups are dichloro-, trichloro-, etc., and dissimilar groups are ordered alphabetically as before. For example, (chloroform) is trichloromethane. The anesthetic halothane () is 2-bromo-2-chloro-1,1,1-trifluoroethane. Alcohols Alcohols () take the suffix "-ol" with a numerical suffix indicating the bonding position: is propan-1-ol. The suffixes -diol, -triol, -tetraol, etc., are used for multiple groups: Ethylene glycol is ethane-1,2-diol. If higher precedence functional groups are present (see order of precedence, below), the prefix "hydroxy" is used with the bonding position: is 2-hydroxypropanoic acid. Ethers Ethers () consist of an oxygen atom between the two attached carbon chains. The shorter of the two chains becomes the first part of the name with the -ane suffix changed to -oxy, and the longer alkane chain becomes the suffix of the name of the ether. Thus, is methoxymethane, and is methoxyethane (not ethoxymethane).
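The shorter-chain rule for simple ethers can be sketched as a toy helper; the chain representation and function name are invented for illustration only.

```python
# Toy sketch: for a simple ether R-O-R', the shorter chain contributes the
# "-oxy" prefix and the longer chain names the parent. Each chain is given
# as a (carbon_count, alkane_name) pair -- an assumed representation.

def ether_name(chain_a, chain_b):
    short, long_ = sorted([chain_a, chain_b])  # tuples sort by carbon count first
    return short[1].replace("ane", "oxy") + long_[1]

print(ether_name((1, "methane"), (2, "ethane")))  # -> methoxyethane
```

The argument order does not matter, since the helper sorts by chain length before naming.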
If the oxygen is not attached to the end of the main alkane chain, then the whole shorter alkyl-plus-ether group is treated as a side-chain and prefixed with its bonding position on the main chain. Thus is 2-methoxypropane. Alternatively, an ether chain can be named as an alkane in which one carbon is replaced by an oxygen, a replacement denoted by the prefix "oxa". For example, could also be called 2-oxabutane, and an epoxide could be called oxacyclopropane. This method is especially useful when both groups attached to the oxygen atom are complex. Aldehydes Aldehydes () take the suffix "-al". If other functional groups are present, the chain is numbered such that the aldehyde carbon is in the "1" position, unless functional groups of higher precedence are present. If a prefix form is required, "oxo-" is used (as for ketones), with the position number indicating the end of a chain: is 3-oxopropanoic acid. If the carbon in the carbonyl group cannot be included in the attached chain (for instance in the case of cyclic aldehydes), the prefix "formyl-" or the suffix "-carbaldehyde" is used: is cyclohexanecarbaldehyde. If an aldehyde is attached to a benzene ring and is the main functional group, the compound is named as a benzaldehyde. Ketones In general ketones () take the suffix "-one" (pronounced own, not won) with a suffixed position number: is pentan-2-one. If a higher precedence suffix is in use, the prefix "oxo-" is used: is 3-oxohexanal. Carboxylic acids In general, carboxylic acids () are named with the suffix -oic acid (etymologically a back-formation from benzoic acid). As with aldehydes, the carboxyl functional group must take the "1" position on the main chain and so the locant need not be stated. For example, (lactic acid) is named 2-hydroxypropanoic acid with no "1" stated.
Some traditional names for common carboxylic acids (such as acetic acid) are in such widespread use that they are retained in IUPAC nomenclature, though systematic names like ethanoic acid are also used. Carboxylic acids attached to a benzene ring are structural analogs of benzoic acid () and are named as one of its derivatives. If there are multiple carboxyl groups on the same parent chain, multiplying prefixes are used: Malonic acid, , is systematically named propanedioic acid. Alternatively, the suffix "-carboxylic acid" can be used in place of "-oic acid", combined with a multiplying prefix if necessary – mellitic acid is benzenehexacarboxylic acid, for example. In the latter case, the carbon atoms in the carboxyl groups do not count as being part of the main chain, a rule that also applies to the prefix form "carboxy-". Citric acid serves as an example: it is formally named 2-hydroxypropane-1,2,3-tricarboxylic acid rather than 3-carboxy-3-hydroxypentanedioic acid. Carboxylates Salts of carboxylic acids are named following the usual cation-then-anion conventions used for ionic compounds in both IUPAC and common nomenclature systems. The name of the carboxylate anion () is derived from that of the parent acid by replacing the "-oic acid" ending with "-oate" or "-carboxylate". For example, , the sodium salt of benzoic acid (), is called sodium benzoate. Where an acid has both a systematic and a common name (like , for example, which is known as both acetic acid and as ethanoic acid), its salts can be named from either parent name. Thus, can be named as potassium acetate or as potassium ethanoate. The prefix form is "carboxylato-". Esters Esters () are named as alkyl derivatives of carboxylic acids. The alkyl (R') group is named first. The part is then named as a separate word based on the carboxylic acid name, with the ending changed from "-oic acid" to "-oate" or "-carboxylate". For example, is methyl pentanoate, and is ethyl 4-methylpentanoate.
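The ester-naming transformation just described (alkyl group as a separate word, "-oic acid" becoming "-oate") can be sketched as a string helper; the function name and inputs are assumptions for illustration.

```python
# Sketch (assumed helper, not an official algorithm): forming an ester name
# from the alkyl group and the parent "-oic acid" name.

def ester_name(alkyl, acid):
    # "-oic acid" -> "-oate"; the alkyl group is named first as a separate word.
    return alkyl + " " + acid.replace("oic acid", "oate")

print(ester_name("methyl", "pentanoic acid"))  # -> methyl pentanoate
```

The same transform handles substituted acids, e.g. "ethyl" with "4-methylpentanoic acid" gives "ethyl 4-methylpentanoate".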
For esters such as ethyl acetate (), ethyl formate () or dimethyl phthalate that are based on common acids, IUPAC recommends use of these established names, called retained names. In these retained names, the ester ending "-oate" changes to "-ate" (as in acetate and formate). Some simple examples, named both ways, are shown in the figure above. If the alkyl group is not attached at the end of the chain, the bond position to the ester group is suffixed before "-yl": may be called butan-2-yl propanoate or butan-2-yl propionate. The prefix form is "oxycarbonyl-" with the (R') group preceding. Acyl groups Acyl groups are named by stripping the "-ic acid" of the corresponding carboxylic acid and replacing it with "-yl." For example, is called ethanoyl-R. Acyl halides Simply add the name of the attached halide to the end of the acyl group. For example, is ethanoyl chloride. An alternate suffix is "-carbonyl halide" as opposed to "-oyl halide". The prefix form is "halocarbonyl-". Acid anhydrides Acid anhydrides () have two acyl groups linked by an oxygen atom. If both acyl groups are the same, then the compound is named by replacing the word acid with anhydride in the carboxylic acid name, and the IUPAC name consists of two words. If the acyl groups are different, then they are named in alphabetical order in the same way, with anhydride replacing acid, and the IUPAC name consists of three words. For example, is called ethanoic anhydride and is called ethanoic propanoic anhydride. Amines Amines () are named for the attached alkane chain with the suffix "-amine" (e.g., methanamine). If necessary, the bonding position is suffixed: propan-1-amine, propan-2-amine. The prefix form is "amino-". For secondary amines (of the form ), the longest carbon chain attached to the nitrogen atom becomes the primary name of the amine; the other chain is prefixed as an alkyl group with location prefix given as an italic N: is N-methylethanamine. Tertiary amines () are treated similarly: is N-ethyl-N-methylpropanamine.
Again, the substituent groups are ordered alphabetically. Amides Amides () take the suffix "-amide" (e.g., methanamide, ethanamide), or "-carboxamide" if the carbon in the amide group cannot be included in the main chain. The prefix form is "carbamoyl-". Amides that have additional substituents on the nitrogen are treated similarly to the case of amines: they are ordered alphabetically with the location prefix N: is N,N-dimethylmethanamide, is N,N-dimethylethanamide. Nitriles Nitriles () are named by adding the suffix "-nitrile" to the longest hydrocarbon chain (including the carbon of the cyano group). They can also be named by replacing the "-oic acid" of the corresponding carboxylic acid with "-carbonitrile." The prefix form is "cyano-." Functional class IUPAC nomenclature may also be used in the form of alkyl cyanides. For example, is called pentanenitrile or butyl cyanide. Cyclic compounds Cycloalkanes and aromatic compounds can be treated as the main parent chain of the compound, in which case the positions of substituents are numbered around the ring structure. For example, the three isomers of xylene, commonly the ortho-, meta-, and para- forms, are 1,2-dimethylbenzene, 1,3-dimethylbenzene, and 1,4-dimethylbenzene. The cyclic structures can also be treated as functional groups themselves, in which case they take the prefix "cycloalkyl-" (e.g. "cyclohexyl-") or for benzene, "phenyl-". The IUPAC nomenclature scheme becomes rapidly more elaborate for more complex cyclic structures, with notation for compounds containing conjoined rings, and many common names such as phenol being accepted as base names for compounds derived from them. Order of precedence of groups When compounds contain more than one functional group, the order of precedence determines which groups are named with prefix or suffix forms. The table below shows common groups in decreasing order of precedence. The highest-precedence group takes the suffix, with all others taking the prefix form.
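The suffix/prefix split just described can be sketched with a toy precedence table; the ranks below are an assumed small subset of the full IUPAC table, for illustration only.

```python
# Illustrative sketch: given functional groups ranked by precedence (lower
# rank = higher precedence; assumed subset of the IUPAC table), the
# highest-precedence group takes the suffix and all others become prefixes.

PRECEDENCE = {"carboxylic acid": 1, "ester": 2, "amide": 3,
              "aldehyde": 4, "ketone": 5, "alcohol": 6, "amine": 7}

def split_suffix_prefixes(groups):
    ranked = sorted(groups, key=PRECEDENCE.get)
    return ranked[0], sorted(ranked[1:])  # (suffix group, alphabetized prefixes)

print(split_suffix_prefixes(["alcohol", "ketone", "aldehyde"]))
# -> ('aldehyde', ['alcohol', 'ketone'])
```

This matches the 3-oxohexanal example above: the aldehyde outranks the ketone, so the ketone is demoted to the "oxo-" prefix.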
However, double and triple bonds only take suffix form (-en and -yn) and are used with other suffixes. Prefixed substituents are ordered alphabetically (excluding any modifiers such as di-, tri-, etc.), e.g. chlorofluoromethane, not fluorochloromethane. If there are multiple functional groups of the same type, either prefixed or suffixed, the position numbers are ordered numerically (thus ethane-1,2-diol, not ethane-2,1-diol). The N position indicator for amines and amides comes before "1", e.g., is N,2-dimethylpropanamine. *Note: These suffixes, in which the carbon atom is counted as part of the preceding chain, are the most commonly used. See individual functional group articles for more details. The order of the remaining functional groups is only needed for substituted benzene and hence is not mentioned here. Common nomenclature – trivial names Common nomenclature uses the older names for some organic compounds instead of using the prefixes for the carbon skeleton above. The pattern can be seen below. Ketones Common names for ketones can be derived by naming the two alkyl or aryl groups bonded to the carbonyl group as separate words followed by the word ketone. Examples: acetone, acetophenone, benzophenone, ethyl isopropyl ketone, diethyl ketone. The first three of these names are still considered to be acceptable IUPAC names. Aldehydes The common name for an aldehyde is derived from the common name of the corresponding carboxylic acid by dropping the word acid and changing the suffix from -ic or -oic to -aldehyde. Examples: formaldehyde, acetaldehyde. Ions The IUPAC nomenclature also provides rules for naming ions. Hydron Hydron is a generic term for a hydrogen cation; protons, deuterons and tritons are all hydrons.
Parent hydride cations Simple cations formed by adding a hydron to a hydride of a halogen, chalcogen or pnictogen are named by adding the suffix "-onium" to the element's root: is ammonium, is oxonium, and H2F+ is fluoronium. Ammonium was adopted instead of nitronium, which commonly refers to . If the cationic center of the hydride is not a halogen, chalcogen or pnictogen then the suffix "-ium" is added to the name of the neutral hydride after dropping any final 'e'. is methanium, is dioxidanium (HO-OH is dioxidane), and is diazanium ( is diazane). Cations and substitution The above cations except for methanium are not, strictly speaking, organic, since they do not contain carbon. However, many organic cations are obtained by substituting another element or some functional group for a hydrogen. The name of each substitution is prefixed to the hydride cation name. If many substitutions by the same functional group occur, then the number is indicated by prefixing with "di-", "tri-" as with halogenation. is trimethyloxonium. is trifluoromethylammonium.
514458
https://en.wikipedia.org/wiki/Wound%20healing
Wound healing
Wound healing refers to a living organism's replacement of destroyed or damaged tissue by newly produced tissue. In undamaged skin, the epidermis (surface, epithelial layer) and dermis (deeper, connective layer) form a protective barrier against the external environment. When the barrier is broken, a regulated sequence of biochemical events is set into motion to repair the damage. This process is divided into predictable phases: blood clotting (hemostasis), inflammation, tissue growth (cell proliferation), and tissue remodeling (maturation and cell differentiation). Blood clotting may be considered to be part of the inflammation stage instead of a separate stage. The wound-healing process is not only complex but fragile, and it is susceptible to interruption or failure leading to the formation of non-healing chronic wounds. Factors that contribute to non-healing chronic wounds are diabetes, venous or arterial disease, infection, and metabolic deficiencies of old age. Wound care encourages and speeds wound healing via cleaning and protection from reinjury or infection. Depending on each patient's needs, it can range from the simplest first aid to entire nursing specialties such as wound, ostomy, and continence nursing and burn center care. Stages Hemostasis (blood clotting): Within the first few minutes of injury, platelets in the blood begin to stick to the injured site. They change into an amorphous shape, more suitable for clotting, and they release chemical signals to promote clotting. This results in the activation of fibrin, which forms a mesh and acts as "glue" to bind platelets to each other. This makes a clot that serves to plug the break in the blood vessel, slowing/preventing further bleeding. Inflammation: During this phase, damaged and dead cells are cleared out, along with bacteria and other pathogens or debris. This happens through the process of phagocytosis, where white blood cells engulf debris and destroy it. 
Platelet-derived growth factors are released into the wound that cause the migration and division of cells during the proliferative phase. Proliferation (growth of new tissue): In this phase, angiogenesis, collagen deposition, granulation tissue formation, epithelialization, and wound contraction occur. In angiogenesis, vascular endothelial cells form new blood vessels. In fibroplasia and granulation tissue formation, fibroblasts grow and form a new, provisional extracellular matrix (ECM) by excreting collagen and fibronectin. Concurrently, re-epithelialization of the epidermis occurs, in which epithelial cells proliferate and 'crawl' atop the wound bed, providing cover for the new tissue. In wound contraction, myofibroblasts decrease the size of the wound by gripping the wound edges and contracting using a mechanism that resembles that in smooth muscle cells. When the cells' roles are close to complete, unneeded cells undergo apoptosis. Maturation (remodeling): During maturation and remodeling, collagen is realigned along tension lines, and cells that are no longer needed are removed by programmed cell death, or apoptosis. Timing and re-epithelialization Timing is important to wound healing. Critically, the timing of wound re-epithelialization can decide the outcome of the healing. If the epithelialization of tissue over a denuded area is slow, a scar will form over many weeks or months; if the epithelialization of a wounded area is fast, the healing will result in regeneration. Early vs cellular phase Wound healing is classically divided into hemostasis, inflammation, proliferation, and remodeling. Although a useful construct, this model involves considerable overlap among individual phases. A complementary model has recently been described where the many elements of wound healing are more clearly delineated.
The importance of this new model becomes more apparent through its utility in the fields of regenerative medicine and tissue engineering (see Research and development section below). In this construct, the process of wound healing is divided into two major phases: the early phase and the cellular phase. The early phase, which begins immediately following skin injury, involves cascading molecular and cellular events leading to hemostasis and formation of an early, makeshift extracellular matrix that provides structural staging for cellular attachment and subsequent cellular proliferation. The cellular phase involves several types of cells working together to mount an inflammatory response, synthesize granulation tissue, and restore the epithelial layer. Subdivisions of the cellular phase are:
Macrophages and inflammatory components (within 1–2 days)
Epithelial-mesenchymal interaction: re-epithelialization (phenotype change within hours, migration begins on day 1 or 2)
Fibroblasts and myofibroblasts: progressive alignment, collagen production, and matrix contraction (between day 4 and day 14)
Endothelial cells and angiogenesis (begins on day 4)
Dermal matrix: elements of fabrication (begins on day 4, lasting 2 weeks) and alteration/remodeling (begins after week 2, lasting weeks to months, depending on wound size)
Inflammatory phase Just before the inflammatory phase is initiated, the clotting cascade occurs in order to achieve hemostasis, or the stopping of blood loss by way of a fibrin clot. Thereafter, various soluble factors (including chemokines and cytokines) are released to attract cells that phagocytise debris, bacteria, and damaged tissue, in addition to releasing signaling molecules that initiate the proliferative phase of wound healing. Clotting cascade When tissue is first wounded, blood comes in contact with collagen, triggering blood platelets to begin secreting inflammatory factors.
Platelets also express sticky glycoproteins on their cell membranes that allow them to aggregate, forming a mass. Fibrin and fibronectin cross-link together and form a plug that traps proteins and particles and prevents further blood loss. This fibrin-fibronectin plug is also the main structural support for the wound until collagen is deposited. Migratory cells use this plug as a matrix to crawl across, and platelets adhere to it and secrete factors. The clot is eventually lysed and replaced with granulation tissue and then later with collagen. Platelets, the cells present in the highest numbers shortly after a wound occurs, release mediators into the blood, including cytokines and growth factors. Growth factors stimulate cells to speed their rate of division. Platelets release other proinflammatory factors like serotonin, bradykinin, prostaglandins, prostacyclins, thromboxane, and histamine, which serve several purposes, including increasing cell proliferation and migration to the area and causing blood vessels to become dilated and porous. In many ways, extravasated platelets in trauma perform a similar function to tissue macrophages and mast cells exposed to microbial molecular signatures in infection: they become activated, and secrete molecular mediators – vasoactive amines, eicosanoids, and cytokines – that initiate the inflammatory process. Vasoconstriction and vasodilation Immediately after a blood vessel is breached, ruptured cell membranes release inflammatory factors like thromboxanes and prostaglandins that cause the vessel to spasm to prevent blood loss and to collect inflammatory cells and factors in the area. This vasoconstriction lasts five to ten minutes and is followed by vasodilation, a widening of blood vessels, which peaks at about 20 minutes post-wounding. Vasodilation is the result of factors released by platelets and other cells. The main factor involved in causing vasodilation is histamine. 
Histamine also causes blood vessels to become porous, allowing the tissue to become edematous because proteins from the bloodstream leak into the extravascular space, which increases its osmolar load and draws water into the area. Increased porosity of blood vessels also facilitates the entry of inflammatory cells like leukocytes into the wound site from the bloodstream. Polymorphonuclear neutrophils Within an hour of wounding, polymorphonuclear neutrophils (PMNs) arrive at the wound site and become the predominant cells in the wound for the first two days after the injury occurs, with especially high numbers on the second day. They are attracted to the site by fibronectin, growth factors, and substances such as kinins. Neutrophils phagocytise debris and kill bacteria by releasing free radicals in what is called a respiratory burst. They also cleanse the wound by secreting proteases that break down damaged tissue. Functional neutrophils at the wound site only have life-spans of around two days, so they usually undergo apoptosis once they have completed their tasks and are engulfed and degraded by macrophages. Other leukocytes to enter the area include helper T cells, which secrete cytokines to cause more T cells to divide and to increase inflammation and enhance vasodilation and vessel permeability. T cells also increase the activity of macrophages. Macrophages One of the roles of macrophages is to phagocytize other expended phagocytes, bacteria and damaged tissue, and they also debride damaged tissue by releasing proteases. Macrophages function in regeneration and are essential for wound healing. They are stimulated by the low oxygen content of their surroundings to produce factors that induce and speed angiogenesis and they also stimulate cells that reepithelialize the wound, create granulation tissue, and lay down a new extracellular matrix. By secreting these factors, macrophages contribute to pushing the wound healing process into the next phase. 
They replace PMNs as the predominant cells in the wound by two days after injury. The spleen contains half the body's monocytes in reserve ready to be deployed to injured tissue. Attracted to the wound site by growth factors released by platelets and other cells, monocytes from the bloodstream enter the area through blood vessel walls. Numbers of monocytes in the wound peak one to one and a half days after the injury occurs. Once they are in the wound site, monocytes mature into macrophages. Macrophages also secrete a number of factors such as growth factors and other cytokines, especially during the third and fourth post-wounding days. These factors attract cells involved in the proliferation stage of healing to the area. In wound healing that results in incomplete repair, scar contraction occurs, bringing varying gradations of structural imperfections, deformities and problems with flexibility. Macrophages may restrain the contraction phase. Scientists have reported that removing the macrophages from a salamander resulted in failure of a typical regeneration response (limb regeneration), instead bringing on a repair (scarring) response.
Thus the reduction of inflammation is frequently a goal in therapeutic settings. Inflammation lasts as long as there is debris in the wound. Thus, if the individual's immune system is compromised and is unable to clear the debris from the wound and/or if excessive detritus, devitalized tissue, or microbial biofilm is present in the wound, these factors may cause a prolonged inflammatory phase and prevent the wound from properly commencing the proliferation phase of healing. This can lead to a chronic wound. Proliferative phase About two or three days after the wound occurs, fibroblasts begin to enter the wound site, marking the onset of the proliferative phase even before the inflammatory phase has ended. As in the other phases of wound healing, steps in the proliferative phase do not occur in a series but rather partially overlap in time. Angiogenesis Also called neovascularization, the process of angiogenesis occurs concurrently with fibroblast proliferation when endothelial cells migrate to the area of the wound. Because the activity of fibroblasts and epithelial cells requires oxygen and nutrients, angiogenesis is imperative for other stages in wound healing, like epidermal and fibroblast migration. The tissue in which angiogenesis has occurred typically looks red (is erythematous) due to the presence of capillaries. Angiogenesis occurs in overlapping phases in response to inflammation: Latent period: During the haemostatic and inflammatory phase of the wound healing process, vasodilation and permeabilisation allow leukocyte extravasation and phagocytic debridement and decontamination of the wound area. Tissue swelling aids later angiogenesis by expanding and loosening the existing collagenous extracellular matrix. Endothelial activation: As the wound macrophages switch from inflammatory to healing mode, they begin to secrete endothelial chemotactic and growth factors to attract adjacent endothelial cells.
Activated endothelial cells respond by retracting and reducing cell junctions, loosening themselves from their embedded endothelium. Characteristically the activated endothelial cells show enlarged nucleoli. Degradation of endothelial basement membrane: The wound macrophages, mast cells and the endothelial cells themselves secrete proteases to break down existing vascular basal lamina. Vascular sprouting: With the breakdown of endothelial basement membrane, detached endothelial cells from pre-existing capillaries and post-capillary venules can divide and migrate chemotactically towards the wound, laying down new vessels in the process. Vascular sprouting can be aided by ambient hypoxia and acidosis in the wound environment, as hypoxia stimulates the endothelial transcription factor, hypoxia inducible factor (HIF), to transactivate angiogenic genes such as VEGF and GLUT1. Sprouted vessels can self-organise into luminal morphologies, and fusion of blind channels gives rise to new capillary networks. Vascular maturation: the endothelium of vessels matures by laying down new endothelial extracellular matrix, followed by basal lamina formation. Lastly the vessel establishes a pericyte layer. Stem cells of endothelial cells, originating from parts of uninjured blood vessels, develop pseudopodia and push through the ECM into the wound site to establish new blood vessels. Endothelial cells are attracted to the wound area by fibronectin found on the fibrin scab and chemotactically by angiogenic factors released by other cells, e.g. from macrophages and platelets when in a low-oxygen environment. Endothelial growth and proliferation are also directly stimulated by hypoxia and the presence of lactic acid in the wound. For example, hypoxia stimulates the endothelial transcription factor, hypoxia-inducible factor (HIF), to transactivate a set of proliferative genes including vascular endothelial growth factor (VEGF) and glucose transporter 1 (GLUT1).
To migrate, endothelial cells need collagenases and plasminogen activator to degrade the clot and part of the ECM. Zinc-dependent metalloproteinases digest basement membrane and ECM to allow cell migration, proliferation and angiogenesis. When macrophages and other growth factor-producing cells are no longer in a hypoxic, lactic acid-filled environment, they stop producing angiogenic factors. Thus, when tissue is adequately perfused, migration and proliferation of endothelial cells is reduced. Eventually blood vessels that are no longer needed die by apoptosis. Fibroplasia and granulation tissue formation Simultaneously with angiogenesis, fibroblasts begin accumulating in the wound site. Fibroblasts begin entering the wound site two to five days after wounding as the inflammatory phase is ending, and their numbers peak at one to two weeks post-wounding. By the end of the first week, fibroblasts are the main cells in the wound. Fibroplasia ends two to four weeks after wounding. As a model, the mechanism of fibroplasia may be conceptualised as a process analogous to angiogenesis (see above); only the cell type involved is fibroblasts rather than endothelial cells. Initially there is a latent phase where the wound undergoes plasma exudation, inflammatory decontamination and debridement. Oedema increases the wound's histologic accessibility for later fibroplastic migration. Second, as inflammation nears completion, macrophages and mast cells release fibroblast growth and chemotactic factors to activate fibroblasts from adjacent tissue. Fibroblasts at this stage loosen themselves from surrounding cells and ECM. Phagocytes further release proteases that break down the ECM of neighbouring tissue, freeing the activated fibroblasts to proliferate and migrate towards the wound. The difference between vascular sprouting and fibroblast proliferation is that the former is enhanced by hypoxia, whilst the latter is inhibited by hypoxia.
The deposited fibroblastic connective tissue matures by secreting ECM into the extracellular space, forming granulation tissue (see below). Lastly, collagen is deposited into the ECM. In the first two or three days after injury, fibroblasts mainly migrate and proliferate, while later, they are the main cells that lay down the collagen matrix in the wound site. These fibroblasts are thought to originate from the adjacent uninjured cutaneous tissue (although new evidence suggests that some are derived from blood-borne, circulating adult stem cells/precursors). Initially, fibroblasts utilize the fibrin cross-linking fibers (well-formed by the end of the inflammatory phase) to migrate across the wound, subsequently adhering to fibronectin. Fibroblasts then deposit ground substance into the wound bed, and later collagen, which they can adhere to for migration. Granulation tissue functions as rudimentary tissue, and begins to appear in the wound during the inflammatory phase, two to five days post wounding, and continues growing until the wound bed is covered. Granulation tissue consists of new blood vessels, fibroblasts, inflammatory cells, endothelial cells, myofibroblasts, and the components of a new, provisional extracellular matrix (ECM). The provisional ECM is different in composition from the ECM in normal tissue and its components originate from fibroblasts. Such components include fibronectin, collagen, glycosaminoglycans, elastin, glycoproteins and proteoglycans. Its main components are fibronectin and hyaluronan, which create a very hydrated matrix and facilitate cell migration. Later this provisional matrix is replaced with an ECM that more closely resembles that found in non-injured tissue. Growth factors (PDGF, TGF-β) and fibronectin encourage proliferation, migration to the wound bed, and production of ECM molecules by fibroblasts. Fibroblasts also secrete growth factors that attract epithelial cells to the wound site.
Hypoxia also contributes to fibroblast proliferation and excretion of growth factors, though too little oxygen will inhibit their growth and deposition of ECM components, and can lead to excessive, fibrotic scarring.
Collagen deposition
One of fibroblasts' most important duties is the production of collagen. Collagen deposition is important because it increases the strength of the wound; before it is laid down, the only thing holding the wound closed is the fibrin-fibronectin clot, which does not provide much resistance to traumatic injury. Also, cells involved in inflammation, angiogenesis, and connective tissue construction attach to, grow and differentiate on the collagen matrix laid down by fibroblasts. Type III collagen and fibronectin generally begin to be produced in appreciable amounts between approximately 10 hours and 3 days after wounding, depending mainly on wound size. Their deposition peaks at one to three weeks. They are the predominant tensile substances until the later phase of maturation, in which they are replaced by the stronger type I collagen. Even as fibroblasts are producing new collagen, collagenases and other factors degrade it. Shortly after wounding, synthesis exceeds degradation, so collagen levels in the wound rise, but later production and degradation become equal, so there is no net collagen gain. This homeostasis signals the onset of the later maturation phase. Granulation gradually ceases and fibroblasts decrease in number in the wound once their work is done. At the end of the granulation phase, fibroblasts begin to commit apoptosis, converting granulation tissue from an environment rich in cells to one that consists mainly of collagen.
Epithelialization
The formation of granulation tissue in an open wound allows the reepithelialization phase to take place, as epithelial cells migrate across the new tissue to form a barrier between the wound and the environment.
Basal keratinocytes from the wound edges and dermal appendages such as hair follicles, sweat glands and sebaceous (oil) glands are the main cells responsible for the epithelialization phase of wound healing. They advance in a sheet across the wound site and proliferate at its edges, ceasing movement when they meet in the middle. In healing that results in a scar, sweat glands, hair follicles and nerves do not form. With the lack of hair follicles, nerves and sweat glands, the wound and the resulting healing scar pose a challenge to the body with regard to temperature control. Keratinocytes migrate without first proliferating. Migration can begin as early as a few hours after wounding. However, epithelial cells require viable tissue to migrate across, so if the wound is deep it must first be filled with granulation tissue. Thus the time of onset of migration is variable and may occur about one day after wounding. Cells on the wound margins proliferate on the second and third day post-wounding in order to provide more cells for migration. If the basement membrane is not breached, epithelial cells are replaced within three days by division and upward migration of cells in the stratum basale, in the same fashion that occurs in uninjured skin. However, if the basement membrane is destroyed at the wound site, reepithelialization must occur from the wound margins and from skin appendages, such as hair follicles and sweat and oil glands, that enter the dermis and are lined with viable keratinocytes. If the wound is very deep, skin appendages may also be destroyed, and migration can only occur from the wound edges. Migration of keratinocytes over the wound site is stimulated by lack of contact inhibition and by chemicals such as nitric oxide. Before they begin to migrate, cells must dissolve their desmosomes and hemidesmosomes, which normally anchor the cells by intermediate filaments in their cytoskeleton to other cells and to the ECM.
Transmembrane receptor proteins called integrins, which are made of glycoproteins and normally anchor the cell to the basement membrane by its cytoskeleton, are released from the cell's intermediate filaments and relocate to actin filaments to serve as attachments to the ECM for pseudopodia during migration. Thus keratinocytes detach from the basement membrane and are able to enter the wound bed. Before they begin migrating, keratinocytes change shape, becoming longer and flatter and extending cellular processes like lamellipodia and wide processes that look like ruffles. Actin filaments and pseudopodia form. During migration, integrins on the pseudopod attach to the ECM, and the actin filaments in the projection pull the cell along. The interaction with molecules in the ECM through integrins further promotes the formation of actin filaments, lamellipodia, and filopodia. Epithelial cells climb over one another in order to migrate. This growing sheet of epithelial cells is often called the epithelial tongue. The first cells to attach to the basement membrane form the stratum basale. These basal cells continue to migrate across the wound bed, and epithelial cells above them slide along as well. The more quickly this migration occurs, the less of a scar there will be. Fibrin, collagen, and fibronectin in the ECM may further signal cells to divide and migrate. Like fibroblasts, migrating keratinocytes use the fibronectin cross-linked with fibrin that was deposited in inflammation as an attachment site to crawl across. As keratinocytes migrate, they move over granulation tissue but stay underneath the scab, thereby separating the scab from the underlying tissue. Epithelial cells have the ability to phagocytize debris such as dead tissue and bacterial matter that would otherwise obstruct their path. Because they must dissolve any scab that forms, keratinocyte migration is best enhanced by a moist environment, since a dry one leads to formation of a bigger, tougher scab. 
To make their way along the tissue, keratinocytes must dissolve the clot, debris, and parts of the ECM in order to get through. They secrete plasminogen activator, which activates plasminogen, turning it into plasmin to dissolve the scab. Cells can only migrate over living tissue, so they must excrete collagenases and proteases like matrix metalloproteinases (MMPs) to dissolve damaged parts of the ECM in their way, particularly at the front of the migrating sheet. Keratinocytes also dissolve the basement membrane, using instead the new ECM laid down by fibroblasts to crawl across. As keratinocytes continue migrating, new epithelial cells must be formed at the wound edges to replace them and to provide more cells for the advancing sheet. Proliferation behind migrating keratinocytes normally begins a few days after wounding and occurs at a rate that is 17 times higher in this stage of epithelialization than in normal tissues. Until the entire wound area is resurfaced, the only epithelial cells to proliferate are at the wound edges. Growth factors, stimulated by integrins and MMPs, cause cells to proliferate at the wound edges. Keratinocytes themselves also produce and secrete factors, including growth factors and basement membrane proteins, which aid both in epithelialization and in other phases of healing. Growth factors are also important for the innate immune defense of skin wounds by stimulation of the production of antimicrobial peptides and neutrophil chemotactic cytokines in keratinocytes. Keratinocytes continue migrating across the wound bed until cells from either side meet in the middle, at which point contact inhibition causes them to stop migrating. When they have finished migrating, the keratinocytes secrete the proteins that form the new basement membrane. Cells reverse the morphological changes they underwent in order to begin migrating; they reestablish desmosomes and hemidesmosomes and become anchored once again to the basement membrane. 
Basal cells begin to divide and differentiate in the same manner as they do in normal skin to reestablish the strata found in reepithelialized skin.
Contraction
Contraction is a key phase of wound healing with repair. If contraction continues for too long, it can lead to disfigurement and loss of function. Thus there is a great interest in understanding the biology of wound contraction, which can be modelled in vitro using the collagen gel contraction assay or the dermal equivalent model. Contraction commences approximately a week after wounding, when fibroblasts have differentiated into myofibroblasts. In full thickness wounds, contraction peaks at 5 to 15 days post wounding. Contraction can last for several weeks and continues even after the wound is completely reepithelialized. A large wound can become 40 to 80% smaller after contraction. Wounds can contract at a speed of up to 0.75 mm per day, depending on how loose the tissue in the wounded area is. Contraction usually does not occur symmetrically; rather most wounds have an 'axis of contraction' which allows for greater organization and alignment of cells with collagen. At first, contraction occurs without myofibroblast involvement. Later, fibroblasts, stimulated by growth factors, differentiate into myofibroblasts. Myofibroblasts, which are similar to smooth muscle cells, are responsible for contraction. Myofibroblasts contain the same kind of actin as that found in smooth muscle cells. Myofibroblasts are attracted by fibronectin and growth factors and they move along fibronectin linked to fibrin in the provisional ECM in order to reach the wound edges. They form connections to the ECM at the wound edges, and they attach to each other and to the wound edges by desmosomes. Also, at an adhesion called the fibronexus, actin in the myofibroblast is linked across the cell membrane to molecules in the extracellular matrix like fibronectin and collagen.
Myofibroblasts have many such adhesions, which allow them to pull the ECM when they contract, reducing the wound size. In this part of contraction, closure occurs more quickly than in the first, myofibroblast-independent part. As the actin in myofibroblasts contracts, the wound edges are pulled together. Fibroblasts lay down collagen to reinforce the wound as myofibroblasts contract. The contraction stage in proliferation ends as myofibroblasts stop contracting and commit apoptosis. The breakdown of the provisional matrix leads to a decrease in hyaluronic acid and an increase in chondroitin sulfate, which gradually triggers fibroblasts to stop migrating and proliferating. These events signal the onset of the maturation stage of wound healing.
Maturation and remodeling
When the levels of collagen production and degradation equalize, the maturation phase of tissue repair is said to have begun. During maturation, type III collagen, which is prevalent during proliferation, is replaced by type I collagen. Originally disorganized collagen fibers are rearranged, cross-linked, and aligned along tension lines. The onset of the maturation phase may vary extensively, depending on the size of the wound and whether it was initially closed or left open, ranging from approximately three days to three weeks. The maturation phase can last for a year or longer, similarly depending on wound type. As the phase progresses, the tensile strength of the wound increases. Collagen will reach approximately 20% of its tensile strength after three weeks, increasing to 80% after 12 months. The maximum scar strength is 80% of that of unwounded skin. Since activity at the wound site is reduced, the scar loses its red appearance as blood vessels that are no longer needed are removed by apoptosis.
The phases of wound healing normally progress in a predictable, timely manner; if they do not, healing may progress inappropriately to either a chronic wound such as a venous ulcer or pathological scarring such as a keloid scar.
Factors affecting wound healing
Many factors controlling the efficacy, speed, and manner of wound healing fall under two types: local and systemic factors.
Local factors
Moisture; keeping a wound moist rather than dry makes wound healing more rapid, with less pain and less scarring
Mechanical factors
Oedema
Ionizing radiation
Faulty technique of wound closure
Ischemia and necrosis
Foreign bodies. Sharp, small foreign bodies can penetrate the skin leaving little surface wound but causing internal injury and internal bleeding. For a glass foreign body, "frequently, an innocent skin wound disguises the extensive nature of the injuries beneath". First-degree nerve injury requires a few hours to a few weeks to recover. If a foreign body passes by a nerve and causes first-degree nerve injury during entry, then the sensation of the foreign body or pain due to internal wounding may be delayed by a few hours to a few weeks after entry. A sudden increase in pain during the first few weeks of wound healing could be a sign of a recovered nerve reporting internal injuries rather than a newly developed infection.
Low oxygen tension
Perfusion
Systemic factors
Inflammation
Diabetes – Individuals with diabetes demonstrate reduced capability in the healing of acute wounds. Additionally, diabetic individuals are susceptible to developing chronic diabetic foot ulcers, a serious complication of diabetes which affects 15% of people with diabetes and accounts for 84% of all diabetes-related lower leg amputations. The impaired healing abilities of diabetics with diabetic foot ulcers and/or acute wounds involve multiple pathophysiological mechanisms.
This impaired healing involves hypoxia, fibroblast and epidermal cell dysfunction, impaired angiogenesis and neovascularization, high levels of metalloproteases, damage from reactive oxygen species and AGEs (advanced glycation end-products), decreased host immune resistance, and neuropathy.
Nutrients – Malnutrition or nutritional deficiencies have a recognizable impact on wound healing after trauma or surgical intervention. Nutrients including proteins, carbohydrates, arginine, glutamine, polyunsaturated fatty acids, vitamin A, vitamin C, vitamin E, magnesium, copper, zinc and iron all play significant roles in wound healing. Fats and carbohydrates provide the majority of the energy required for wound healing. Glucose is the most prominent source of fuel and is used to create cellular ATP, providing energy for angiogenesis and the deposition of new tissues. As the nutritional needs of each patient and their associated wound are complex, it is suggested that tailored nutritional support would benefit both acute and chronic wound healing.
Metabolic diseases
Immunosuppression
Connective tissue disorders
Smoking – Smoking delays wound repair, notably in the proliferative and inflammatory phases. It also increases the likelihood of certain complications such as wound rupture, wound and flap necrosis, decrease in wound tensile strength and infection. Passive smoking also impairs a proper wound healing process.
Age – Increased age (over 60 years) is a risk factor for impaired wound healing. It is recognized that, in older adults of otherwise overall good health, the effects of aging cause a temporal delay in healing but no major impairment with regard to the quality of healing. Delayed wound healing in patients of increasing age is associated with an altered inflammatory response; for example, delayed T-cell infiltration of the wound with alterations in the production of chemokines, and reduced macrophage phagocytic capacity.
Alcohol – Alcohol consumption impairs wound healing and also increases the chances of infection. Alcohol affects the proliferative phase of healing. A single unit of alcohol has a negative effect on re-epithelialization, wound closure, collagen production and angiogenesis.
In the 2000s, the first mathematical models of the healing process appeared, based on simplified assumptions and on a system of differential equations solved through MATLAB. The models show that the "rate of the healing process" appears to be "highly influenced by the activity and size of the injury itself as well as the activity of the healing agent."
Research and development
Up until about 2000, the classic paradigm of wound healing, involving stem cells restricted to organ-specific lineages, had never been seriously challenged. Since then, the notion of adult stem cells having cellular plasticity, or the ability to differentiate into non-lineage cells, has emerged as an alternative explanation. To be more specific, hematopoietic progenitor cells (which give rise to mature cells in the blood) may have the ability to de-differentiate back into hematopoietic stem cells and/or transdifferentiate into non-lineage cells, such as fibroblasts.
Stem cells and cellular plasticity
Multipotent adult stem cells have the capacity to be self-renewing and give rise to different cell types. Stem cells give rise to progenitor cells, which are cells that are not self-renewing but can generate several types of cells. The extent of stem cell involvement in cutaneous (skin) wound healing is complex and not fully understood. Stem cell injection leads to wound healing primarily through stimulation of angiogenesis. It is thought that the epidermis and dermis are reconstituted by mitotically active stem cells that reside at the apex of rete ridges (basal stem cells, or BSC), the bulge of hair follicles (hair follicular stem cells, or HFSC), and the papillary dermis (dermal stem cells).
Moreover, bone marrow may also contain stem cells that play a major role in cutaneous wound healing. In rare circumstances, such as extensive cutaneous injury, self-renewal subpopulations in the bone marrow are induced to participate in the healing process, whereby they give rise to collagen-secreting cells that seem to play a role during wound repair. These two self-renewal subpopulations are (1) bone marrow-derived mesenchymal stem cells (MSC) and (2) hematopoietic stem cells (HSC). Bone marrow also harbors a progenitor subpopulation (endothelial progenitor cells, or EPC) that, in the same type of setting, is mobilized to aid in the reconstruction of blood vessels. Moreover, it is thought that extensive injury to skin also promotes the early trafficking of a unique subclass of leukocytes (circulating fibrocytes) to the injured region, where they perform various functions related to wound healing.
Wound repair versus regeneration
An injury is an interruption of morphology and/or functionality of a given tissue. After injury, structural tissue heals with incomplete or complete regeneration. Tissue without an interruption to the morphology almost always completely regenerates. An example of complete regeneration without an interruption of the morphology is non-injured tissue, such as skin. Non-injured skin has a continued replacement and regeneration of cells which always results in complete regeneration. There is a subtle distinction between 'repair' and 'regeneration'. Repair, or incomplete regeneration, refers to the physiologic adaptation of an organ after injury in an effort to re-establish continuity without regard to exact replacement of lost/damaged tissue. True tissue regeneration, or complete regeneration, refers to the replacement of lost/damaged tissue with an 'exact' copy, such that both morphology and functionality are completely restored.
Though mammals are capable of spontaneous complete regeneration after injury, they usually do not completely regenerate. An example of a tissue regenerating completely after an interruption of morphology is the endometrium; after the process of breakdown via the menstruation cycle, the endometrium heals with complete regeneration. In some instances, after a tissue breakdown, such as in skin, a regeneration closer to complete regeneration may be induced by the use of biodegradable (collagen-glycosaminoglycan) scaffolds. These scaffolds are structurally analogous to the extracellular matrix (ECM) found in normal/un-injured dermis. Fundamental conditions required for tissue regeneration often oppose conditions that favor efficient wound repair, including inhibition of (1) platelet activation, (2) inflammatory response, and (3) wound contraction. In addition to providing support for fibroblast and endothelial cell attachment, biodegradable scaffolds inhibit wound contraction, thereby allowing the healing process to proceed towards a more-regenerative/less-scarring pathway. Pharmaceutical agents have been investigated which may be able to turn off myofibroblast differentiation. A new way of thinking derives from the notion that heparan sulfates are key players in tissue homeostasis: the process that makes the tissue replace dead cells with identical cells. In wound areas, tissue homeostasis is lost as the heparan sulfates are degraded, preventing the replacement of dead cells by identical cells. Heparan sulfate analogues cannot be degraded by any known heparanases and glycanases, and bind to the free heparan sulfate binding spots on the ECM, thereby preserving normal tissue homeostasis and preventing scarring. Whether repair or regeneration occurs may also depend on hypoxia-inducible factor 1-alpha (HIF-1a). In normal circumstances, after injury HIF-1a is degraded by prolyl hydroxylases (PHDs).
Scientists found that the simple up-regulation of HIF-1a via PHD inhibitors regenerates lost or damaged tissue in mammals that have a repair response, and that the continued down-regulation of HIF-1a results in healing with a scarring response in mammals with a previous regenerative response to the loss of tissue. The act of regulating HIF-1a can either turn off or turn on the key process of mammalian regeneration.
Scarless wound healing
Scarless wound healing is a concept based on the healing or repair of the skin (or other tissue/organs) after injury with the aim of healing with subjectively and relatively less scar tissue than normally expected. Scarless healing is sometimes confused with the concept of scar-free healing, which is wound healing that results in absolutely no scar (free of scarring). However, they are different concepts. The reverse of scarless wound healing is scarification (wound healing to scar more). Historically, certain cultures have considered scarification attractive; however, this is generally not the case in modern western society, in which many patients are turning to plastic surgery clinics with unrealistic expectations. Depending on scar type, treatment may be invasive (intralesional steroid injections, surgery) and/or conservative (compression therapy, topical silicone gel, brachytherapy, photodynamic therapy). Clinical judgment is necessary to successfully balance the potential benefits of the various treatments available against the likelihood of a poor response and possible complications resulting from these treatments. Many of these treatments may only have a placebo effect, and the evidence base for the use of many current treatments is poor. Since the 1960s, comprehension of the basic biologic processes involved in wound repair and tissue regeneration has expanded due to advances in cellular and molecular biology.
Currently, the principal goals in wound management are to achieve rapid wound closure with a functional tissue that has minimal aesthetic scarring. However, the ultimate goal of wound healing biology is to induce a more perfect reconstruction of the wound area. Scarless wound healing only occurs in mammalian foetal tissues, and complete regeneration is limited to lower vertebrates, such as salamanders, and invertebrates. In adult humans, injured tissue is repaired by collagen deposition, collagen remodelling and eventual scar formation, whereas foetal wound healing is believed to be more of a regenerative process with minimal or no scar formation. Therefore, foetal wound healing can be used to provide an accessible mammalian model of an optimal healing response in adult human tissues. Clues as to how this might be achieved come from studies of wound healing in embryos, where repair is fast and efficient and results in essentially perfect regeneration of any lost tissue. The concept of scarless wound healing has a long history. In print, the antiquated concept of scarless healing was raised in the early 20th century, appearing in a paper published in the London Lancet. The process involved cutting at a surgical slant to the skin surface, rather than at a right angle to it; the process was described in various newspapers.
Cancer
After inflammation, restoration of normal tissue integrity and function is preserved by feedback interactions between diverse cell types, mediated by adhesion molecules and secreted cytokines. Disruption of normal feedback mechanisms in cancer threatens tissue integrity and enables a malignant tumor to escape the immune system. An example of the importance of the wound healing response within tumors is illustrated in work by Howard Chang and colleagues at Stanford University studying breast cancers.
Oral collagen supplements
Preliminary results are promising for the short- and long-term use of oral collagen supplements for wound healing and skin aging. Oral collagen supplements also increase skin elasticity, hydration, and dermal collagen density. Collagen supplementation is generally safe, with no reported adverse events. Further studies are needed to elucidate medical use in skin barrier diseases such as atopic dermatitis and to determine optimal dosing regimens.
Wound dressings
Modern wound dressing to aid in wound repair has undergone considerable research and development in recent years. Scientists aim to develop wound dressings which have the following characteristics:
Provide wound protection
Remove excess exudate
Possess antimicrobial properties
Maintain a humid environment
Have high permeability to oxygen
Are easily removed from the wound site
Possess non-anaphylactic characteristics
Cotton gauze dressings have been the standard of care, despite their dry properties, which can adhere to wound surfaces and cause discomfort upon removal. Recent research has set out to improve cotton gauze dressings to bring them closer to modern wound dressing properties, by coating cotton gauze wound dressing with a chitosan/Ag/ZnO nanocomposite. These updated dressings provide increased water absorbency and improved antibacterial efficacy.
Wound cleansing
Dirt or dust on the surface of the wound, bacteria, tissue that has died, and fluid from the wound may be cleaned away. The evidence supporting the most effective technique is not clear, and there is insufficient evidence to conclude whether cleaning wounds is beneficial for promoting healing, or whether wound cleaning solutions (polyhexamethylene biguanide, aqueous hydrogen peroxide, etc.) are better than sterile water or saline solutions for helping venous leg ulcers heal. It is uncertain whether the choice of cleaning solution or method of application makes any difference to venous leg ulcer healing.
Simulating wound healing from a growth perspective
Considerable effort has been devoted to understanding the physical relationships governing wound healing and subsequent scarring, with mathematical models and simulations developed to elucidate these relationships. The growth of tissue around the wound site is a result of the migration of cells and collagen deposition by these cells. The alignment of collagen describes the degree of scarring; a basket-weave orientation of collagen is characteristic of normal skin, whereas aligned collagen fibers lead to significant scarring. It has been shown that the growth of tissue and the extent of scar formation can be controlled by modulating the stress at a wound site. The growth of tissue can be simulated using the aforementioned relationships from a biochemical and biomechanical point of view. The biologically active chemicals that play an important role in wound healing are modeled with Fickian diffusion to generate concentration profiles. The balance equation for open systems when modeling wound healing incorporates mass growth due to cell migration and proliferation. Here the following equation is used: D_t ρ0 = Div(R) + R0, where ρ0 represents mass density, R represents a mass flux (from cell migration), and R0 represents a mass source (from cell proliferation, division, or enlargement). Relationships like these can be incorporated into agent-based models, where the sensitivity to single parameters such as initial collagen alignment, cytokine properties, and cell proliferation rates can be tested.
Wound closure intentions
Successful wound healing is dependent on various cell types, molecular mediators and structural elements.
Primary intention
Primary intention is the healing of a clean wound without tissue loss. In this process, wound edges are brought together so that they are adjacent to each other (re-approximated). Wound closure is performed with sutures (stitches), staples, or adhesive tape or glue.
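The growth balance equation described above, D_t ρ0 = Div(R) + R0, can be explored with a minimal numerical sketch. The sketch below is illustrative only, not a model from the literature: it assumes a one-dimensional wound, a Fickian flux R = -D grad(ρ) standing in for cell migration, and a logistic source term standing in for cell proliferation. All parameter values (D, s, grid size) are arbitrary choices for demonstration.

```python
import numpy as np

# Toy 1-D sketch of the growth balance equation D_t rho0 = Div(R) + R0.
# Assumptions (not from the source text): Fickian flux R = -D * grad(rho)
# for cell migration, and a logistic source R0 = s * rho * (1 - rho)
# for cell proliferation. Parameters are illustrative only.

def heal_1d(n=101, steps=2000, D=0.1, s=0.5, dx=1.0, dt=0.1):
    rho = np.zeros(n)           # tissue density across the wound; 0 = empty
    rho[0] = rho[-1] = 1.0      # intact tissue at both wound edges
    for _ in range(steps):
        # central-difference Laplacian (diffusive migration flux)
        lap = (np.roll(rho, 1) - 2 * rho + np.roll(rho, -1)) / dx**2
        # explicit Euler step: migration + logistic proliferation
        rho += dt * (D * lap + s * rho * (1 - rho))
        rho[0] = rho[-1] = 1.0  # hold the wound edges intact (Dirichlet BC)
    return rho

rho = heal_1d()
# tissue density grows inward from the edges toward the wound centre
print(rho.min(), rho.max())
```

With these parameters the explicit scheme is stable (D·dt/dx² = 0.01), and the density front advances from both wound edges until the domain fills, mimicking closure; a real model would add collagen alignment and mechanical stress, as the text notes.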
Primary intention can only be implemented when the wound is precise and there is minimal disruption to the local tissue and the epithelial basement membrane, e.g. surgical incisions. This process is faster than healing by secondary intention. There is also less scarring associated with primary intention, as there are no large tissue losses to be filled with granulation tissue, though some granulation tissue will form. Examples of primary intention include: well-repaired lacerations, well-reduced bone fractures, and healing after flap surgery. Early removal of dressings from clean or clean-contaminated wounds does affect primary healing of wounds.
Secondary intention
Secondary intention is implemented when primary intention is not possible because of significant tissue damage or loss, usually due to the wound having been created by major trauma. The wound is allowed to granulate. The surgeon may pack the wound with gauze or use a drainage system. Granulation results in a broader scar. The healing process can be slow due to the presence of drainage from infection. Wound care must be performed daily to encourage wound debris removal and allow granulation tissue formation. The use of antibiotics or antiseptics for surgical wound healing by secondary intention is controversial. Examples: gingivectomy, gingivoplasty, tooth extraction sockets, poorly reduced fractures, burns, severe lacerations, pressure ulcers. There is insufficient evidence that the choice of dressings or topical agents affects the secondary healing of wounds. There is a lack of evidence for the effectiveness of negative pressure wound therapy in wound healing by secondary intention.
Tertiary intention (delayed primary closure)
The wound is initially cleaned, debrided and observed, typically for 4 or 5 days, before closure. The wound is purposely left open. Examples: healing of wounds by use of tissue grafts. If the wound edges are not reapproximated immediately, delayed primary wound healing transpires.
This type of healing may be desired in the case of contaminated wounds. By the fourth day, phagocytosis of contaminated tissues is well underway, and the processes of epithelization, collagen deposition, and maturation are occurring. Foreign materials are walled off by macrophages that may metamorphose into epithelioid cells, which are encircled by mononuclear leukocytes, forming granulomas. Usually the wound is closed surgically at this juncture, and if the "cleansing" of the wound is incomplete, chronic inflammation can ensue, resulting in prominent scarring. Overview of involved growth factors Following are the main growth factors involved in wound healing: Complications of wound healing The major complications are many: Deficient scar formation: results in wound dehiscence or rupture of the wound due to inadequate formation of granulation tissue. Excessive scar formation: hypertrophic scar, keloid, desmoid. Exuberant granulation (proud flesh). Deficient contraction (in skin grafts) or excessive contraction (in burns). Others: dystrophic calcification, pigmentary changes, painful scars, incisional hernia. Other complications can include infection and Marjolin's ulcer. Biologics, skin substitutes, biomembranes and scaffolds Advancements in the clinical understanding of wounds and their pathophysiology have commanded significant biomedical innovations in the treatment of acute, chronic, and other types of wounds. Many biologics, skin substitutes, biomembranes and scaffolds have been developed to facilitate wound healing through various mechanisms. These include a number of products under trade names such as Epicel, Laserskin, Transcyte, Dermagraft, AlloDerm/Strattice, Biobrane, Integra, Apligraf, OrCel, GraftJacket and PermaDerm.
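The open-system mass balance and Fickian diffusion described in the simulation section above can be sketched numerically. The following one-dimensional finite-difference sketch treats the flux term as Fickian diffusion and adds a proliferation source inside a hypothetical wound region; all parameter values are illustrative assumptions, not taken from any published model:

```python
# Minimal 1-D sketch of the open-system mass balance used in wound-healing
# models: d(rho0)/dt = Div(R) + R0, with a Fickian flux R = D * grad(rho0).
# All parameters (D, source rate, grid size) are invented for illustration.

def simulate_density(n=50, steps=2000, D=0.1, dx=1.0, dt=0.1, r0=0.01):
    """Explicit finite-difference update of mass density rho on a 1-D domain.

    The hypothetical wound occupies the middle third of the domain; cells
    proliferate there (source term r0) and migrate outward (diffusive flux).
    """
    rho = [0.0] * n
    wound = range(n // 3, 2 * n // 3)           # assumed wound region
    for _ in range(steps):
        new = rho[:]
        for i in range(1, n - 1):
            # divergence of the Fickian flux (central difference)
            flux_div = D * (rho[i - 1] - 2 * rho[i] + rho[i + 1]) / dx**2
            source = r0 if i in wound else 0.0  # proliferation only in wound
            new[i] = rho[i] + dt * (flux_div + source)
        rho = new
    return rho

profile = simulate_density()
# Density builds up in the wound centre and decays toward the domain edges.
```

With these arbitrary parameters the density accumulates in the wound region and spreads outward by diffusion, which is the qualitative behaviour the balance equation describes; an agent-based model would replace the fixed source term with individual cell rules.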
https://en.wikipedia.org/wiki/Sebastinae
Sebastinae
Sebastinae is a subfamily of marine fish belonging to the family Scorpaenidae in the order Scorpaeniformes. Their common names include rockfishes, rock perches, ocean perches, sea perches, thornyheads, scorpionfishes, sea ruffes and rockcods. Despite the latter name, they are not closely related to the cods in the genus Gadus, nor to the rock cod, Lotella rhacina. Taxonomy Sebastinae, or Sebastidae, was first formally recognised as a grouping in 1873 by the German naturalist Johann Jakob Kaup. Some authorities recognise this grouping as a family distinct from Scorpaenidae. FishBase, a finfish database generated by a consortium of academic institutions, does, but the United States Federal government's Integrated Taxonomic Information System and the 5th Edition of Fishes of the World do not, the latter classifying it as a subfamily of the Scorpaenidae. Tribes and genera Sebastinae is divided into two tribes and seven genera: Tribe Sebastini Kaup, 1873 Helicolenus Goode & Bean, 1896 Hozukius Matsubara, 1934 Sebastes Cuvier, 1829 Sebastiscus Jordan & Starks, 1904 Tribe Sebastolobini Matsubara, 1943 Adelosebastes Eschmeyer, T. Abe & Nakano, 1979 Sebastolobus Gill, 1881 Trachyscorpia Ginsburg, 1953 Characteristics Sebastinae species have a compressed body, with the head typically having ridges and spines. The gill membranes are not attached to the isthmus. There is a venom gland in the spines of the dorsal, anal and pelvic fins. The largest species is the shortraker rockfish (Sebastes borealis), while the smallest is Sebastes koreanus. Distribution and habitat Sebastinae rockfishes are found in the Pacific, Indian and Atlantic Oceans; most species of the largest genus, the ovoviviparous Sebastes (over 100 species), occur in the North Pacific. They can be found in marine and brackish waters.
https://en.wikipedia.org/wiki/Filefish
Filefish
The filefish (Monacanthidae) are a diverse family of tropical to subtropical tetraodontiform marine fish, which are also known as foolfish, leatherjackets or shingles. They live in the Atlantic, Pacific and Indian Oceans. Filefish are closely related to triggerfish, pufferfish and trunkfish. The filefish family comprises approximately 102 species in 27 genera. More than half of the species are found in Australian waters, with 58 species in 23 genera. Their laterally compressed bodies and rough, sandpapery skin inspired the filefish's common name. Description Appearing very much like their close relatives the triggerfish, filefish are rhomboid-shaped, with beautifully elaborate cryptic patterns. Deeply keeled bodies give a false impression of size when the fish are viewed facing the flanks. Filefish have soft, simple fins, with comparatively small pectoral fins and truncated, fan-shaped tail fins; a slender, retractable spine crowns the head. Although there are usually two of these spines, the second spine is greatly reduced, being used only to lock the first spine in the erect position. That gives rise to the family name Monacanthidae, from the Greek monos meaning "one" and akantha meaning "thorn". Some species also have recurved spines on the base of the tail (caudal peduncle). The small terminal mouths of filefish have specialized incisor teeth on the upper and lower jaw. In the upper jaw there are four teeth in the inner series and six in the outer series. In the lower jaw, there are four to six in an outer series only. The snout is tapered and projecting and the eyes are located high on the head. Filefish have rough non-overlapping scales with small spikes, which is why they are called filefish. Although scaled, some filefish have such small scales that they appear scaleless. Like the triggerfish, filefish have small gill openings and greatly elongated pelvic bones, creating a "dewlap" of skin running between the bone's sharply keeled termination and the belly. 
The pelvis is articulated with other bones of the "pelvic girdle" and is capable of moving upwards and downwards in many species to form a large dewlap, which is used to make the fish appear much deeper in the body than is actually the case. Some filefish erect the dorsal spine and pelvis simultaneously to make it more difficult for a predator to remove them from a cave. The largest filefish species is the scrawled filefish (Aluterus scriptus); most species are considerably smaller. There is marked sexual dimorphism in some species, with the sexes possessing different coloration and body shapes, and the males having larger caudal spines and bristles. Habitat and life history Adult filefish are generally shallow-water fish, inhabiting depths of no more than about 30 metres. They may be found in lagoons or associated with seaward reefs and seagrass beds; some species may also enter estuaries. Some species are closely associated with dense mats of sargassum, a particularly ubiquitous seaweed; these filefish, notably the planehead filefish (Stephanolepis hispidus), are also coloured and patterned to match their weedy environments. Whether solitary, in pairs or in small groups depending on the species, filefish are not terribly good swimmers; their small fins confine them to a sluggish gait. Filefish are often observed drifting head-downward amongst stands of seaweed, presumably in an effort to fool predator and prey alike. When threatened, filefish may retreat into crevices in the reef. The feeding habits of filefish vary among the species, with some eating only algae and seagrass; others also eat small benthic invertebrates, such as tunicates, gorgonians, and hydrozoans; and some species eat corals (corallivores). It is the latter two habits which have largely precluded the introduction of filefish into the aquarium hobby.
Filefish spawn at bottom sites prepared and guarded by the males; depending on the species, the brood may be guarded by both parents or by the male alone. The young filefish are pelagic; that is, they frequent open water. Sargassum provides a safe retreat for many species, both fish and weed being at the current's mercy. Juvenile filefish are at risk of predation by tuna and dolphinfish. As food In FAO fisheries statistics, the largest category of filefish landings is Cantherhines spp., with annual landings around 200,000 tonnes in recent years, mostly by China. Landings of threadsail filefish (Stephanolepis cirrhifer) and smooth leatherjacket (Meuschenia scaber) are reported at species level, with the rest recorded as "Filefishes, leatherjackets nei" (nei = not elsewhere included). Threadsail filefish (Stephanolepis cirrhifer) is a popular snack food in Korea. It is typically dried and made into a sweet and salty jerky called jwipo, which is then roasted before eating. Genera Acanthaluteres Acreichthys Aluterus Amanses Anacanthus Arotrolepis Brachaluteres Cantherhines Cantheschenia Chaetodermis Colurodontis Enigmacanthus Eubalichthys Lalmohania Meuschenia Monacanthus Navodon Nelusetta Oxymonacanthus Paraluteres Paramonacanthus Pervagor Pseudalutarius Pseudomonacanthus Rudarius Scobinichthys Stephanolepis Thamnaconus
https://en.wikipedia.org/wiki/Zinc%20oxide
Zinc oxide
Zinc oxide is an inorganic compound with the formula ZnO. It is a white powder which is insoluble in water. ZnO is used as an additive in numerous materials and products including cosmetics, food supplements, rubbers, plastics, ceramics, glass, cement, lubricants, paints, sunscreens, ointments, adhesives, sealants, pigments, foods, batteries, ferrites, fire retardants, semiconductors, and first-aid tapes. Although it occurs naturally as the mineral zincite, most zinc oxide is produced synthetically. History Early humans probably used zinc compounds in processed and unprocessed forms, as paint or medicinal ointment; however, their composition is uncertain. The use of pushpanjan, probably zinc oxide, as a salve for eyes and open wounds is mentioned in the Indian medical text the Charaka Samhita, thought to date from 500 BC or before. Zinc oxide ointment is also mentioned by the Greek physician Dioscorides (1st century AD). Galen suggested treating ulcerating cancers with zinc oxide, as did Avicenna in his The Canon of Medicine. The Romans produced considerable quantities of brass (an alloy of zinc and copper) as early as 200 BC by a cementation process in which copper was reacted with zinc oxide. The zinc oxide is thought to have been produced by heating zinc ore in a shaft furnace. This liberated metallic zinc as a vapor, which then ascended the flue and condensed as the oxide. This process was described by Dioscorides in the 1st century AD. Zinc oxide has also been recovered from zinc mines at Zawar in India, dating from the second half of the first millennium BC. From the 12th to the 16th century, zinc and zinc oxide were recognized and produced in India using a primitive form of the direct synthesis process. From India, zinc manufacturing moved to China in the 17th century.
In 1743, the first European zinc smelter was established in Bristol, United Kingdom. Around 1782, Louis-Bernard Guyton de Morveau proposed replacing lead white pigment with zinc oxide. The main usage of zinc oxide (zinc white) was in paints and as an additive to ointments. Zinc white was accepted as a pigment in oil paintings by 1834, but it did not mix well with oil. This problem was solved by optimizing the synthesis of ZnO. In 1845, Edme-Jean Leclaire in Paris was producing the oil paint on a large scale; by 1850, zinc white was being manufactured throughout Europe. The success of zinc white paint was due to its advantages over the traditional white lead: zinc white is essentially permanent in sunlight, it is not blackened by sulfur-bearing air, it is non-toxic, and it is more economical. Because zinc white is so "clean", it is valuable for making tints with other colors, but it makes a rather brittle dry film when unmixed with other colors. For example, during the late 1890s and early 1900s, some artists used zinc white as a ground for their oil paintings. These paintings developed cracks over time. In recent times, most zinc oxide has been used in the rubber industry to resist corrosion. In the 1970s, the second largest application of ZnO was photocopying. High-quality ZnO produced by the "French process" was added to photocopying paper as a filler. This application was soon displaced by titanium dioxide. Chemical properties Pure ZnO is a white powder. However, in nature, it occurs as the rare mineral zincite, which usually contains manganese and other impurities that confer a yellow to red color. Crystalline zinc oxide is thermochromic, changing from white to yellow when heated in air and reverting to white on cooling. This color change is caused by a small loss of oxygen to the environment at high temperatures to form the non-stoichiometric Zn1+xO, where at 800 °C, x = 0.00007. Zinc oxide is an amphoteric oxide.
It is nearly insoluble in water, but it will dissolve in most acids, such as hydrochloric acid: ZnO + 2 HCl → ZnCl2 + H2O. Solid zinc oxide will also dissolve in alkalis to give soluble zincates: ZnO + 2 NaOH + H2O → Na2[Zn(OH)4]. ZnO reacts slowly with fatty acids in oils to produce the corresponding carboxylates, such as oleate or stearate. When mixed with a strong aqueous solution of zinc chloride, ZnO forms cement-like products best described as zinc hydroxy chlorides. This cement was used in dentistry. ZnO also forms a cement-like material when treated with phosphoric acid; related materials are used in dentistry. A major component of the zinc phosphate cement produced by this reaction is hopeite, Zn3(PO4)2·4H2O. ZnO decomposes into zinc vapor and oxygen at around 1975 °C at standard oxygen pressure. In a carbothermic reaction, heating with carbon converts the oxide into zinc vapor at a much lower temperature (around 950 °C): ZnO + C → Zn (vapor) + CO. Physical properties Structure Zinc oxide crystallizes in two main forms, hexagonal wurtzite and cubic zincblende. The wurtzite structure is most stable at ambient conditions and thus most common. The zincblende form can be stabilized by growing ZnO on substrates with a cubic lattice structure. In both cases, the zinc and oxide centers are tetrahedral, the most characteristic geometry for Zn(II). ZnO converts to the rocksalt motif at relatively high pressures of about 10 GPa. The hexagonal and zincblende polymorphs have no inversion symmetry (reflection of a crystal relative to any given point does not transform it into itself). This and other lattice symmetry properties result in the piezoelectricity of the hexagonal and zincblende ZnO, and the pyroelectricity of hexagonal ZnO. The hexagonal structure has a point group 6mm (Hermann–Mauguin notation) or C6v (Schoenflies notation), and the space group is P63mc or C6v4.
The lattice constants are a = 3.25 Å and c = 5.2 Å; their ratio c/a ~ 1.60 is close to the ideal value for a hexagonal cell, c/a = 1.633. As in most group II-VI materials, the bonding in ZnO is largely ionic (Zn2+O2−), with the corresponding radii of 0.074 nm for Zn2+ and 0.140 nm for O2−. This property accounts for the preferential formation of wurtzite rather than the zinc blende structure, as well as the strong piezoelectricity of ZnO. Because of the polar Zn−O bonds, zinc and oxygen planes are electrically charged. To maintain electrical neutrality, those planes reconstruct at the atomic level in most related materials, but not in ZnO – its surfaces are atomically flat, stable and exhibit no reconstruction. However, studies using wurtzoid structures explained the origin of surface flatness and the absence of reconstruction at ZnO wurtzite surfaces, in addition to the origin of charges on ZnO planes. Mechanical properties ZnO is a wide-band-gap semiconductor of the II-VI semiconductor group. The native doping of the semiconductor due to oxygen vacancies or zinc interstitials is n-type. ZnO is a relatively soft material with an approximate hardness of 4.5 on the Mohs scale. Its elastic constants are smaller than those of relevant III-V semiconductors, such as GaN. The high heat capacity and heat conductivity, low thermal expansion and high melting temperature of ZnO are beneficial for ceramics. The E2 optical phonon in ZnO exhibits an unusually long lifetime of 133 ps at 10 K. Among the tetrahedrally bonded semiconductors, it has been stated that ZnO has the highest piezoelectric tensor, or at least one comparable to that of GaN and AlN. This property makes it a technologically important material for many piezoelectric applications, which require a large electromechanical coupling. Therefore, ZnO in the form of thin films has been one of the most studied and used resonator materials for thin-film bulk acoustic resonators.
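The lattice-constant ratio quoted above can be checked with a few lines of arithmetic; this is a quick numerical sanity check, not part of the original article:

```python
import math

# Wurtzite ZnO lattice constants from the text (in angstroms).
a = 3.25
c = 5.2

ratio = c / a               # measured axial ratio
ideal = math.sqrt(8 / 3)    # ideal hexagonal close-packed ratio

print(round(ratio, 3), round(ideal, 3))  # 1.6 1.633
```

The computed ratio of 1.60 against the ideal 1.633 matches the figures given in the text, confirming the slight compression of the ZnO cell along the c axis.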
Electrical and optical properties Favourable properties of zinc oxide include good transparency, high electron mobility, wide band gap, and strong room-temperature luminescence. Those properties make ZnO valuable for a variety of emerging applications: transparent electrodes in liquid crystal displays, energy-saving or heat-protecting windows, and electronics as thin-film transistors and light-emitting diodes. ZnO has a relatively wide direct band gap of ~3.3 eV at room temperature. Advantages associated with a wide band gap include higher breakdown voltages, ability to sustain large electric fields, lower electronic noise, and high-temperature and high-power operation. The band gap of ZnO can further be tuned to ~3–4 eV by its alloying with magnesium oxide or cadmium oxide. Due to this large band gap, there have been efforts to create visibly transparent solar cells utilising ZnO as a light absorbing layer. However, these solar cells have so far proven highly inefficient. Most ZnO has n-type character, even in the absence of intentional doping. Nonstoichiometry is typically the origin of n-type character, but the subject remains controversial. An alternative explanation has been proposed, based on theoretical calculations, that unintentional substitutional hydrogen impurities are responsible. Controllable n-type doping is easily achieved by substituting Zn with group-III elements such as Al, Ga, In or by substituting oxygen with group-VII elements chlorine or iodine. Reliable p-type doping of ZnO remains difficult. This problem originates from low solubility of p-type dopants and their compensation by abundant n-type impurities. This problem is observed with GaN and ZnSe. Measurement of p-type in "intrinsically" n-type material is complicated by the inhomogeneity of samples. Current limitations to p-doping limit electronic and optoelectronic applications of ZnO, which usually require junctions of n-type and p-type material. 
Known p-type dopants include group-I elements Li, Na, K; group-V elements N, P and As; as well as copper and silver. However, many of these form deep acceptors and do not produce significant p-type conduction at room temperature. Electron mobility of ZnO strongly varies with temperature and has a maximum of ~2000 cm2/(V·s) at 80 K. Data on hole mobility are scarce, with values in the range 5–30 cm2/(V·s). ZnO discs, acting as a varistor, are the active material in most surge arresters. Zinc oxide is noted for its strongly nonlinear optical properties, especially in bulk. The nonlinearity of ZnO nanoparticles can be fine-tuned according to their size. Production For industrial use, ZnO is produced at levels of 10⁵ tons per year by three main processes: Indirect process In the indirect or French process, metallic zinc is melted in a graphite crucible and vaporized at temperatures above 907 °C (typically around 1000 °C). Zinc vapor reacts with the oxygen in the air to give ZnO, accompanied by a drop in its temperature and bright luminescence. Zinc oxide particles are transported into a cooling duct and collected in a bag house. This indirect method was popularized by Edme-Jean Leclaire of Paris in 1844 and is therefore commonly known as the French process. Its product normally consists of agglomerated zinc oxide particles with an average size of 0.1 to a few micrometers. By weight, most of the world's zinc oxide is manufactured via the French process. Direct process The direct or American process starts with diverse contaminated zinc composites, such as zinc ores or smelter by-products. The zinc precursors are reduced (carbothermal reduction) by heating with a source of carbon such as anthracite to produce zinc vapor, which is then oxidized as in the indirect process. Because of the lower purity of the source material, the final product is also of lower quality in the direct process as compared to the indirect one.
Wet chemical process A small amount of industrial production involves wet chemical processes, which start with aqueous solutions of zinc salts, from which zinc carbonate or zinc hydroxide is precipitated. The solid precipitate is then calcined at temperatures around 800 °C. Laboratory synthesis Numerous specialised methods exist for producing ZnO for scientific studies and niche applications. These methods can be classified by the resulting ZnO form (bulk, thin film, nanowire), temperature ("low", that is, close to room temperature, or "high", that is, T ~ 1000 °C), process type (vapor deposition or growth from solution) and other parameters. Large single crystals (many cubic centimeters) can be grown by gas transport (vapor-phase deposition), hydrothermal synthesis, or melt growth. However, because of the high vapor pressure of ZnO, growth from the melt is problematic. Growth by gas transport is difficult to control, leaving the hydrothermal method as the preferred approach. Thin films can be produced by a variety of methods including chemical vapor deposition, metalorganic vapour phase epitaxy, electrodeposition, sputtering, spray pyrolysis, thermal oxidation, sol–gel synthesis, atomic layer deposition, and pulsed laser deposition. Zinc oxide can be produced in bulk by precipitation from zinc compounds, mainly zinc acetate, in various solutions, such as aqueous sodium hydroxide or aqueous ammonium carbonate. Synthetic methods characterized in the literature since the year 2000 aim to produce ZnO particles with high surface area and minimal size distribution, including precipitation, mechanochemical, sol-gel, microwave, and emulsion methods. ZnO nanostructures Nanostructures of ZnO can be synthesized into a variety of morphologies, including nanowires, nanorods, tetrapods, nanobelts, nanoflowers, and nanoparticles. Nanostructures can be obtained with most of the above-mentioned techniques, at certain conditions, and also with the vapor–liquid–solid method.
The synthesis is typically carried out at temperatures of about 90 °C, in an equimolar aqueous solution of zinc nitrate and hexamine, the latter providing the basic environment. Certain additives, such as polyethylene glycol or polyethylenimine, can improve the aspect ratio of the ZnO nanowires. Doping of the ZnO nanowires has been achieved by adding other metal nitrates to the growth solution. The morphology of the resulting nanostructures can be tuned by changing the parameters relating to the precursor composition (such as the zinc concentration and pH) or to the thermal treatment (such as the temperature and heating rate). Aligned ZnO nanowires on pre-seeded silicon, glass, and gallium nitride substrates have been grown using aqueous zinc salts such as zinc nitrate and zinc acetate in basic environments. Pre-seeding substrates with ZnO creates sites for homogeneous nucleation of ZnO crystal during the synthesis. Common pre-seeding methods include in-situ thermal decomposition of zinc acetate crystallites, spin coating of ZnO nanoparticles, and the use of physical vapor deposition methods to deposit ZnO thin films. Pre-seeding can be performed in conjunction with top down patterning methods such as electron beam lithography and nanosphere lithography to designate nucleation sites prior to growth. Aligned ZnO nanowires can be used in dye-sensitized solar cells and field emission devices. Applications The applications of zinc oxide powder are numerous, and the principal ones are summarized below. Most applications exploit the reactivity of the oxide as a precursor to other zinc compounds. For material science applications, zinc oxide has high refractive index, high thermal conductivity, binding, antibacterial and UV-protection properties. 
Consequently, it is added into materials and products including plastics, ceramics, glass, cement, rubber, lubricants, paints, ointments, adhesives, sealants, concrete manufacturing, pigments, foods, batteries, ferrites, and fire retardants. Rubber industry Between 50% and 60% of ZnO use is in the rubber industry. Zinc oxide, along with stearic acid, is used in the sulfur vulcanization of rubber. ZnO additives in the form of nanoparticles are used in rubber as a pigment and to enhance its durability, and have been used in composite rubber materials such as those based on montmorillonite to impart germicidal properties. Ceramic industry The ceramic industry consumes a significant amount of zinc oxide, in particular in ceramic glaze and frit compositions. The relatively high heat capacity, thermal conductivity and high-temperature stability of ZnO, coupled with a comparatively low coefficient of expansion, are desirable properties in the production of ceramics. ZnO affects the melting point and optical properties of glazes, enamels, and ceramic formulations. Zinc oxide, as a low-expansion secondary flux, improves the elasticity of glazes by reducing the change in viscosity as a function of temperature and helps prevent crazing and shivering. By substituting ZnO for BaO and PbO, the heat capacity is decreased and the thermal conductivity is increased. Zinc in small amounts improves the development of glossy and brilliant surfaces. However, in moderate to high amounts, it produces matte and crystalline surfaces. With regard to color, zinc has a complicated influence. Medicine Skin treatment Zinc oxide as a mixture with about 0.5% iron(III) oxide (Fe2O3) is called calamine and is used in calamine lotion, a topical skin treatment. Historically, the name calamine was ascribed to a zinc-containing mineral used in powdered form as medicine, but it was determined in 1803 that ore described as calamine was actually a mixture of the zinc minerals smithsonite and hemimorphite.
Zinc oxide is widely used to treat a variety of skin conditions, including atopic dermatitis, contact dermatitis, itching due to eczema, diaper rash and acne. It is used in products such as baby powder and barrier creams to treat diaper rashes, calamine cream, anti-dandruff shampoos, and antiseptic ointments. It is often combined with castor oil to form an emollient and astringent, zinc and castor oil cream, commonly used to treat infants. It is also a component in tape (called "zinc oxide tape") used by athletes as a bandage to prevent soft tissue damage during workouts. Antibacterial Zinc oxide is used in mouthwash products and toothpastes as an anti-bacterial agent proposed to prevent plaque and tartar formation, and to control bad breath by reducing the volatile gases and volatile sulfur compounds (VSC) in the mouth. Along with zinc oxide or zinc salts, these products also commonly contain other active ingredients, such as cetylpyridinium chloride, xylitol, hinokitiol, essential oils and plant extracts. Powdered zinc oxide has deodorizing and antibacterial properties. ZnO is added to cotton fabric, rubber, oral care products, and food packaging. Enhanced antibacterial action of fine particles compared to bulk material is not exclusive to ZnO and is observed for other materials, such as silver. The mechanism of ZnO's antibacterial effect has been variously described as the generation of reactive oxygen species, the release of Zn2+ ions, and a general disturbance of the bacterial cell membrane by nanoparticles. Sunscreen Zinc oxide is used in sunscreen to absorb ultraviolet light. It is the broadest spectrum UVA and UVB absorber that is approved for use as a sunscreen by the U.S. Food and Drug Administration (FDA), and is completely photostable. When used as an ingredient in sunscreen, zinc oxide blocks both UVA (320–400 nm) and UVB (280–320 nm) rays of ultraviolet light. 
Zinc oxide and the other most common physical sunscreen, titanium dioxide, are considered to be nonirritating, nonallergenic, and non-comedogenic. Zinc from zinc oxide is, however, slightly absorbed into the skin. Many sunscreens use nanoparticles of zinc oxide (along with nanoparticles of titanium dioxide) because such small particles do not scatter light and therefore do not appear white. The nanoparticles are not absorbed into the skin more than regular-sized zinc oxide particles are and are only absorbed into the outermost layer of the skin but not into the body. Dental restoration When mixed with eugenol, zinc oxide eugenol is formed, which has applications as a restorative and prosthodontic in dentistry. Food additive Zinc oxide is added to many food products, including breakfast cereals, as a source of zinc, a necessary nutrient. Zinc may be added to food in the form of zinc oxide nanoparticles, or as zinc sulfate, zinc gluconate, zinc acetate, or zinc citrate. Some foods also include trace amounts of ZnO even if it is not intended as a nutrient. Pigment Zinc oxide (zinc white) is used as a pigment in paints and is more opaque than lithopone, but less opaque than titanium dioxide. It is also used in coatings for paper. Chinese white is a special grade of zinc white used in artists' pigments. The use of zinc white as a pigment in oil painting started in the middle of 18th century. It has partly replaced the poisonous lead white and was used by painters such as Böcklin, Van Gogh, Manet, Munch and others. It is also a main ingredient of mineral makeup (CI 77947). UV absorber Micronized and nano-scale zinc oxide provides strong protection against UVA and UVB ultraviolet radiation, and are consequently used in sunscreens, and also in UV-blocking sunglasses for use in space and for protection when welding, following research by scientists at Jet Propulsion Laboratory (JPL). 
Coatings Paints containing zinc oxide powder have long been utilized as anticorrosive coatings for metals. They are especially effective for galvanized iron. Iron is difficult to protect because its reactivity with organic coatings leads to brittleness and lack of adhesion. Zinc oxide paints retain their flexibility and adherence on such surfaces for many years. ZnO highly n-type doped with aluminium, gallium, or indium is transparent and conductive (transparency ~90%, lowest resistivity ~10−4 Ω·cm). ZnO:Al coatings are used for energy-saving or heat-protecting windows. The coating lets the visible part of the spectrum in but either reflects the infrared (IR) radiation back into the room (energy saving) or does not let the IR radiation into the room (heat protection), depending on which side of the window has the coating. Plastics, such as polyethylene naphthalate (PEN), can be protected by applying a zinc oxide coating. The coating reduces the diffusion of oxygen through PEN. Zinc oxide layers can also be used on polycarbonate in outdoor applications. The coating protects polycarbonate from solar radiation and decreases its oxidation rate and photo-yellowing. Corrosion prevention in nuclear reactors Zinc oxide depleted in 64Zn (the zinc isotope with atomic mass 64) is used for corrosion prevention in nuclear pressurized water reactors. The depletion is necessary because 64Zn is transformed into radioactive 65Zn under irradiation by the reactor neutrons. Methane reforming Zinc oxide (ZnO) is used as a pretreatment step to remove hydrogen sulfide (H2S) from natural gas, following hydrogenation of any sulfur compounds, prior to a methane reformer, since H2S can poison the catalyst. At elevated temperatures, H2S is converted to zinc sulfide and water by the following reaction: H2S + ZnO → H2O + ZnS. Electronics ZnO has a wide direct band gap (3.37 eV or 375 nm at room temperature). Therefore, its most common potential applications are in laser diodes and light-emitting diodes (LEDs).
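The band-gap and wavelength figures just quoted are related by the photon-energy conversion λ(nm) = 1239.84 / E(eV), where 1239.84 eV·nm is the product hc; a quick check (an illustrative calculation, not from the article):

```python
# hc expressed in eV·nm relates a band gap to its optical absorption edge.
HC_EV_NM = 1239.84

def gap_to_wavelength_nm(e_gap_ev):
    """Wavelength of a photon whose energy equals the band gap."""
    return HC_EV_NM / e_gap_ev

# The quoted ~375 nm edge corresponds to a gap of about 3.3 eV;
# a 3.37 eV gap would give roughly 368 nm.
print(round(gap_to_wavelength_nm(3.3)))    # 376
print(round(gap_to_wavelength_nm(3.37)))   # 368
```

Either way, the gap places ZnO emission in the near-ultraviolet, which is what makes it attractive for UV laser diodes and LEDs.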
Moreover, ultrafast nonlinearities and photoconductive functions have been reported in ZnO. Some optoelectronic applications of ZnO overlap with those of GaN, which has a similar band gap (~3.4 eV at room temperature). Compared to GaN, ZnO has a larger exciton binding energy (~60 meV, 2.4 times the room-temperature thermal energy), which results in bright room-temperature emission from ZnO. ZnO can be combined with GaN for LED applications. For instance, a transparent conducting oxide layer and ZnO nanostructures provide better light outcoupling. Other properties of ZnO favorable for electronic applications include its stability to high-energy radiation and its ability to be patterned by wet chemical etching. Radiation resistance makes ZnO a suitable candidate for space applications. Nanostructured ZnO, in both powder and polycrystalline forms, is an effective medium for random lasers, due to its high refractive index and aforementioned light emission properties. Gas sensors Zinc oxide is used in semiconductor gas sensors for detecting airborne compounds such as hydrogen sulfide, nitrogen dioxide, and volatile organic compounds. ZnO is a semiconductor that becomes n-doped by adsorption of reducing compounds, which lowers the electrical resistance measured through the device, in a manner similar to the widely used tin oxide semiconductor gas sensors. It is formed into nanostructures such as thin films, nanoparticles, nanopillars, or nanowires to provide a large surface area for interaction with gases. The sensors are made selective for specific gases by doping or by surface-attaching materials such as catalytic noble metals. Aspirational applications Transparent electrodes Aluminium-doped ZnO layers are used as transparent electrodes. The components Zn and Al are much cheaper and less toxic than the generally used indium tin oxide (ITO). 
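The exciton-binding-energy comparison made earlier ("~60 meV, 2.4 times the room-temperature thermal energy") can be checked directly from the Boltzmann constant. A small sketch; at exactly 300 K the ratio comes out ≈2.3, and the quoted 2.4 presumably assumes kT ≈ 25 meV:

```python
K_B_EV_PER_K = 8.617333e-5  # Boltzmann constant in eV/K

def thermal_energy_mev(temperature_k):
    """Thermal energy kT in meV at the given temperature (K)."""
    return K_B_EV_PER_K * temperature_k * 1000.0

kT_300 = thermal_energy_mev(300)  # ~25.9 meV at room temperature
ratio = 60.0 / kT_300             # ZnO exciton binding energy / kT
print(round(kT_300, 1), round(ratio, 1))  # -> 25.9 2.3
```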
One application which has begun to be commercially available is the use of ZnO as the front contact for solar cells or for liquid crystal displays. Transparent thin-film transistors (TTFT) can be produced with ZnO. As field-effect transistors, they do not need a p–n junction, thus avoiding the p-type doping problem of ZnO. Some of the field-effect transistors even use ZnO nanorods as conducting channels. Piezoelectricity Textile fibers coated in piezoelectric ZnO have been shown capable of forming "self-powered nanosystems" driven by everyday mechanical stress from wind or body movements. Photocatalysis ZnO, at both macro- and nanoscales, could in principle be used as an electrode in photocatalysis, mainly as an anode in green chemistry applications. As a photocatalyst, ZnO reacts when exposed to UV radiation and is used in photodegradation reactions to remove organic pollutants from the environment. It is also used to replace catalysts used in photochemical reactions that would ordinarily require costly or inconvenient reaction conditions with low yields. Other The pointed tips of ZnO nanorods could be used as field emitters. ZnO is a promising anode material for lithium-ion batteries because it is cheap, biocompatible, and environmentally friendly. ZnO has a higher theoretical capacity (978 mAh g⁻¹) than many other transition metal oxides such as CoO (715 mAh g⁻¹), NiO (718 mAh g⁻¹) and CuO (674 mAh g⁻¹). ZnO is also used as an electrode in supercapacitors. Safety As a food additive, zinc oxide is on the U.S. Food and Drug Administration's list of generally recognized as safe substances. Zinc oxide itself is non-toxic; it is hazardous, however, to inhale high concentrations of zinc oxide fumes, such as those generated when zinc or zinc alloys are melted and oxidized at high temperature. This problem occurs while melting brass, because the melting point of brass is close to the boiling point of zinc. 
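The theoretical capacities quoted above follow from Faraday's law, C = nF/(3.6·M) in mAh g⁻¹, where n is the number of electrons exchanged per formula unit and M the molar mass. A hedged sketch: the 2-electron conversion reaction (MO + 2Li → M + Li₂O) reproduces the CoO, NiO, and CuO figures exactly, while ZnO's higher value arises because zinc additionally alloys with lithium (Zn + Li → LiZn), giving roughly 3 electrons per formula unit; this formula yields ≈988 mAh g⁻¹, slightly above the quoted 978, which likely reflects a different electron-count convention:

```python
F = 96485.33  # Faraday constant, C/mol

def theoretical_capacity(n_electrons, molar_mass):
    """Theoretical gravimetric capacity in mAh/g: n*F / (3.6 * M)."""
    return n_electrons * F / (3.6 * molar_mass)

# 2-electron conversion oxides (MO + 2Li -> M + Li2O):
print(round(theoretical_capacity(2, 74.93)))  # CoO -> 715
print(round(theoretical_capacity(2, 74.69)))  # NiO -> 718
print(round(theoretical_capacity(2, 79.55)))  # CuO -> 674
# ZnO: conversion plus Li-Zn alloying, ~3 electrons per formula unit:
print(round(theoretical_capacity(3, 81.38)))  # ZnO -> 988
```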
Inhalation of zinc oxide fumes, which may occur when welding galvanized (zinc-plated) steel, can result in a malady called metal fume fever. In sunscreen formulations that combine zinc oxide with small-molecule UV absorbers, UV light has been shown to cause photodegradation of the small-molecule absorbers and toxicity in embryonic zebrafish assays.
https://en.wikipedia.org/wiki/Dimetrodon
Dimetrodon
Dimetrodon ( or ; ) is an extinct genus of non-mammalian synapsid belonging to the family Sphenacodontidae that lived during the Cisuralian epoch (Early Permian), around 295–272 million years ago. With most species measuring long and weighing , the most prominent feature of Dimetrodon is the large neural spine sail on its back formed by elongated spines extending from the vertebrae. It was an obligate quadruped (it could only walk on four legs) and had a tall, curved skull with large teeth of different sizes set along the jaws. Most fossils have been found in the Southwestern United States, the majority of these coming from a geological deposit called the Red Beds of Texas and Oklahoma. More recently, its fossils have also been found in Germany, and over a dozen species have been named since the genus was first erected in 1878. Dimetrodon is often mistaken for a dinosaur or as a contemporary of dinosaurs in popular culture, but it became extinct some 40 million years before the advent of dinosaurs. Although reptile-like in appearance and physiology, Dimetrodon is much more closely related to mammals than to reptiles, though it is not a direct ancestor of mammals. Dimetrodon is assigned to the "non-mammalian synapsids," a group traditionally – but incorrectly – called "mammal-like reptiles," but now known as stem mammals. This groups Dimetrodon together with mammals in the clade Synapsida, while reptiles are placed in a separate clade, Sauropsida. Single openings in the skull behind each eye, known as temporal fenestrae, and other skull features distinguish Dimetrodon and true mammals from most of the earliest sauropsids. Dimetrodon was probably one of the apex predators of the Cisuralian ecosystems, feeding on fish and tetrapods, including reptiles and amphibians. Smaller Dimetrodon species may have had different ecological roles. The sail of Dimetrodon may have been used to stabilize its spine or to heat and cool its body as a form of thermoregulation. 
Some recent studies argue that the sail would have been ineffective at removing heat from the body, because large species have been discovered with small sails and small species with large sails, essentially ruling out heat regulation as its main purpose. The sail was most likely used in courtship display, including threatening away rivals or showing off to potential mates. Description Dimetrodon was a quadrupedal, sail-backed synapsid. It most likely had a semi-sprawling posture, intermediate between that of a mammal and a lizard, and could also walk in a more upright stance with its body and most or all of its tail off the ground. Most Dimetrodon species ranged in length from and are estimated to have weighed between . The smallest known species, D. teutonis, was about long and weighed . The larger species of Dimetrodon were among the largest predators of the Early Permian, although the closely related Tappenosaurus, known from skeletal fragments in slightly younger rocks, may have been even larger at an estimated long. Although some Dimetrodon species could grow very large, many juvenile specimens are known. Skull A single large opening on either side of the back of the skull links Dimetrodon to mammals and distinguishes it from most of the earliest sauropsids, which either lack openings or have two openings. Features such as ridges on the inside of the nasal cavity and a ridge at the back of the lower jaw are thought to be part of an evolutionary progression from early four-limbed land-dwelling vertebrates to mammals. The skull of Dimetrodon is tall and compressed laterally, or side-to-side. The eye sockets are positioned high and far back in the skull. Behind each eye socket is a single hole called an infratemporal fenestra. An additional hole in the skull, the supratemporal fenestra, can be seen when viewed from above. The back of the skull (the occiput) is oriented at a slight upward angle, a feature that it shares with all other early synapsids. 
The upper margin of the skull slopes downward in a convex arc to the tip of the snout. The tip of the upper jaw, formed by the premaxilla bone, is raised above the part of the jaw formed by the maxilla bone to form a maxillary "step". Within this step is a diastema, a gap in the tooth row. Its skull was more heavily built than a dinosaur's skull. Teeth The size of the teeth varies greatly along the length of the jaws, lending Dimetrodon its name, which means "two measures of tooth" in reference to sets of small and large teeth. One or two pairs of caniniforms (large, pointed, canine-like teeth) extend from the maxilla. Large incisor teeth are also present at the tips of the upper and lower jaws, rooted in the premaxillae and dentary bones. Small teeth are present around the maxillary "step" and behind the caniniforms, becoming smaller further back in the jaw. Many teeth are widest at their midsections and narrow closer to the jaws, giving them the appearance of a teardrop. Teardrop-shaped teeth are unique to Dimetrodon and other closely related sphenacodontids, which helps to distinguish them from other early synapsids. As in many other early synapsids, the teeth of most Dimetrodon species are serrated at their edges. The serrations of Dimetrodon teeth were so fine that they resembled tiny cracks. The dinosaur Albertosaurus had similarly crack-like serrations, but at the base of each serration was a round void, which would have functioned to distribute force over a larger surface area and prevent the stresses of feeding from causing the crack to spread through the tooth. Unlike Albertosaurus, Dimetrodon teeth lacked adaptations that would stop cracks from forming at their serrations. The teeth of D. teutonis lack serrations, but still have sharp edges. A 2014 study suggests that Dimetrodon was in an arms race against its prey. The smaller species, D. milleri, had no tooth serrations because it ate small prey. 
As prey grew larger, several Dimetrodon species started developing serrations on their teeth and increasing in size. For instance, D. limbatus had enamel serrations that helped it cut through flesh (similar to the serrations found on Secodontosaurus). The second-largest species, D. grandis, had denticle serrations similar to those of sharks and theropod dinosaurs, making its teeth even more specialized for slicing through flesh. As Dimetrodon's prey grew larger, the various species responded by growing to larger sizes and developing ever-sharper teeth. The thickness and mass of the teeth of Dimetrodon may also have been an adaptation for increasing dental longevity. Nasal cavity On the inner surface of the nasal section of the skull are ridges called nasoturbinals, which may have supported cartilage that increased the area of the olfactory epithelium, the layer of tissue that detects odors. These ridges are much smaller than those of later synapsids from the Late Permian and Triassic, whose large nasoturbinals are taken as evidence for warm-bloodedness because they may have supported mucous membranes that warmed and moistened incoming air. Thus, the nasal cavity of Dimetrodon is transitional between those of early land vertebrates and mammals. Jaw joint and ear Another transitional feature of Dimetrodon is a ridge at the back of the jaw called the reflected lamina, found on the angular bone; the adjacent articular bone connects to the quadrate bone of the skull to form the jaw joint. In later mammal ancestors, the articular and quadrate separated from the jaw joint, while the articular developed into the malleus bone of the middle ear. The reflected lamina became part of a ring called the tympanic annulus that supports the ear drum in all living mammals. Tail The tail of Dimetrodon makes up a large portion of its total body length and includes around 50 caudal vertebrae. Tails were missing or incomplete in the first described skeletons of Dimetrodon. 
The only caudal vertebrae known were the 11 closest to the hip. Since these first few caudal vertebrae narrow rapidly as they progress farther from the hip, many paleontologists in the late 19th and early 20th centuries thought that Dimetrodon had a very short tail. A largely complete tail of Dimetrodon was not described until 1927. Sail The sail of Dimetrodon is formed by elongated neural spines projecting from the vertebrae. Each spine varies in cross-sectional shape from its base to its tip in what is known as "dimetrodont" differentiation. Near the vertebra body, the spine cross section is laterally compressed into a rectangular shape and, closer to the tip, it takes on a figure-eight shape as a groove runs along either side of the spine. The figure-eight shape is thought to reinforce the spine, preventing bending and fractures. A cross-section of the spine of one specimen of Dimetrodon giganhomogenes is rectangular in shape but preserves figure-eight shaped rings close to its center, indicating that the shape of spines may change as individuals age. The microscopic anatomy of each spine varies from base to tip, indicating where it was embedded in the muscles of the back and where it was exposed as part of a sail. The lower or proximal portion of the spine has a rough surface that would have served as an anchoring point for the epaxial muscles of the back and also has a network of connective tissues called Sharpey's fibers that indicate it was embedded within the body. Higher up on the distal (outer) portion of the spine, the bone surface is smoother. The periosteum, a layer of tissue surrounding the bone, is covered in small grooves that presumably supported the blood vessels that vascularized the sail. The large groove that runs the length of the spine was once thought to be a channel for blood vessels, but since the bone does not contain vascular canals, the sail is not thought to have been as highly vascularized as once thought. 
Some specimens of Dimetrodon preserve deformed areas of the neural spines that appear to be healed-over fractures. The cortical bone that grew over these breaks is highly vascularized, suggesting that soft tissue must have been present on the sail to supply the site with blood vessels. Layered lamellar bone makes up most of the neural spine's cross-sectional area, and contains lines of arrested growth that can be used to determine the age of each individual at death. In many specimens of D. giganhomogenes, the distal portions of spines bend sharply, indicating that the sail would have had an irregular profile in life. Their crookedness suggests that soft tissue may not have extended all the way to the tips of the spines, meaning that the sail's webbing may not have been as extensive as it is commonly imagined. Skin No fossil evidence of Dimetrodon's skin has yet been found. Impressions of the skin of a related animal, Estemmenosuchus, indicate that it would have been smooth and well-provided with glands, but this form of skin may not have applied to Dimetrodon, as its lineage is fairly distant. Dimetrodon also may have had large scutes on the underside of its tail and belly, as other synapsids had these. Evidence from the varanopid Ascendonanus suggests that some early synapsids may have had squamate-like scales. However, some recent studies have placed varanopids taxonomically closer to diapsid reptiles. Classification history Earliest discoveries The earliest discovery of Dimetrodon fossils was a maxilla recovered in 1845 by a man named Donald McLeod, living in the British colony of Prince Edward Island. These fossils were purchased by John William Johnson, a Canadian geologist, and then described by Joseph Leidy in 1854 as the mandible of Bathygnathus borealis, a large carnivore related to Thecodontosaurus, although it was later reclassified as a species of Dimetrodon in 2015, as Dimetrodon borealis. 
First descriptions by Cope Fossils now attributed to Dimetrodon were first studied by American paleontologist Edward Drinker Cope in the 1870s. Cope had obtained the fossils along with those of many other Permian tetrapods from several collectors who had been exploring a group of rocks in Texas called the Red Beds. Among these collectors were Swiss naturalist Jacob Boll, Texas geologist W. F. Cummins, and amateur paleontologist Charles Hazelius Sternberg. Most of Cope's specimens went to the American Museum of Natural History or to the University of Chicago's Walker Museum (most of the Walker fossil collection is now housed in the Field Museum of Natural History). Sternberg sent some of his own specimens to German paleontologist Ferdinand Broili at Munich University, although Broili was not as prolific as Cope in describing specimens. Cope's rival Othniel Charles Marsh also collected some bones of Dimetrodon, which he sent to the Walker Museum. The first use of the name Dimetrodon came in 1878 when Cope named the species Dimetrodon incisivus, Dimetrodon rectiformis, and Dimetrodon gigas in the scientific journal Proceedings of the American Philosophical Society. The first description of a Dimetrodon fossil came a year earlier, though, when Cope named the species Clepsydrops limbatus from the Texas Red Beds. (The name Clepsydrops was first coined by Cope in 1875 for sphenacodontid remains from Vermilion County, Illinois, and was later employed for many sphenacodontid specimens from Texas; many new species of sphenacodontids from Texas were assigned to either Clepsydrops or Dimetrodon in the late 19th and early 20th centuries.) C. limbatus was reclassified as a species of Dimetrodon in 1940, meaning that Cope's 1877 paper was the first record of Dimetrodon. Cope was the first to describe a sail-backed synapsid with the naming of C. natalis in his 1878 paper, although he called the sail a fin and compared it to the crests of the modern basilisk lizard (Basiliscus). 
Sails were not preserved in the specimens of D. incisivus and D. gigas that Cope described in his 1878 paper, but elongated spines were present in the D. rectiformis specimen he described. Cope commented on the purpose of the sail in 1886, writing, "The utility is difficult to imagine. Unless the animal had aquatic habits and swam on its back, the crest or fin must have been in the way of active movements... The limbs are not long enough nor the claws acute enough to demonstrate arboreal habits, as in the existing genus Basilicus, where a similar crest exists." Early 20th century descriptions In the first few decades of the 20th century, American paleontologist E. C. Case authored many studies on Dimetrodon and described several new species. He received funding from the Carnegie Institution for his study of many Dimetrodon specimens in the collections of the American Museum of Natural History and several other museums. Many of these fossils had been collected by Cope but had not been thoroughly described, as Cope was known for erecting new species on the basis of only a few bone fragments. Beginning in the late 1920s, paleontologist Alfred Romer restudied many Dimetrodon specimens and named several new species. In 1940, Romer coauthored a large study with Llewellyn Ivor Price called "Review of the Pelycosauria" in which the species of Dimetrodon named by Cope and Case were reassessed. Most of the species names considered valid by Romer and Price are still used today. New specimens In the decades following Romer and Price's monograph, many Dimetrodon specimens were described from localities outside Texas and Oklahoma. The first was described from the Four Corners region of Utah in 1966 and another was described from Arizona in 1969. In 1975, Olson reported Dimetrodon material from the Washington Formation of Ohio, which has been given a tentative assignment of D. cf. limbatus. A new species of Dimetrodon called D. 
occidentalis (meaning "western Dimetrodon") was named in 1977 from New Mexico. The specimens found in Utah and Arizona probably also belong to D. occidentalis. Before these discoveries, a theory existed that a midcontinental seaway separated what is now Texas and Oklahoma from more western lands during the Early Permian, isolating Dimetrodon to a small region of North America, while a smaller sphenacodontid called Sphenacodon dominated the western area. While this seaway probably did exist, the discovery of fossils outside Texas and Oklahoma shows that its extent was limited and that it was not an effective barrier to the distribution of Dimetrodon. In 2001, a new species of Dimetrodon called D. teutonis was described from the Lower Permian Bromacker locality in the Thuringian Forest of Germany, extending the geographic range of Dimetrodon outside North America for the first time. Species Twenty species of Dimetrodon have been named since the genus was first described in 1878. Many have been synonymized with older named species, and some now belong to different genera. Dimetrodon limbatus Dimetrodon limbatus was first described by Edward Drinker Cope in 1877 as Clepsydrops limbatus. Based on a specimen from the Red Beds of Texas, it was the first known sail-backed synapsid. In 1940, paleontologists Alfred Romer and Llewellyn Ivor Price reassigned C. limbatus to the genus Dimetrodon, making D. limbatus the type species of Dimetrodon. Remains tentatively assigned to this species are also known from Washington County, Ohio, which correspond to a relatively large individual. These remains are slightly older than others assigned to D. 
limbatus from the west, although potential D. limbatus remains from New Mexico may be contemporaneous with it. Dimetrodon incisivus The first use of the name Dimetrodon came in 1878 when Cope named the species Dimetrodon incisivus along with Dimetrodon rectiformis and Dimetrodon gigas. Dimetrodon rectiformis Dimetrodon rectiformis was named alongside Dimetrodon incisivus in Cope's 1878 paper, and was the only one of the three named species to preserve elongated neural spines. In 1907, paleontologist E. C. Case moved D. rectiformis into the species D. incisivus. D. incisivus was later synonymized with the type species Dimetrodon limbatus, making D. rectiformis a synonym of D. limbatus. Dimetrodon semiradicatus Described in 1881 on the basis of upper jaw bones, Dimetrodon semiradicatus was the last species named by Cope. In 1907, E. C. Case synonymized D. semiradicatus with D. incisivus based on similarities in the shape of the teeth and skull bones. D. incisivus and D. semiradicatus are now considered synonyms of D. limbatus. Dimetrodon dollovianus Dimetrodon dollovianus was first described by Edward Drinker Cope in 1888 as Embolophorus dollovianus. In 1903, E. C. Case published a lengthy description of E. dollovianus, which he later referred to Dimetrodon. Dimetrodon grandis Paleontologist E. C. Case named a new species of sail-backed synapsid, Theropleura grandis, in 1907. In 1940, Alfred Romer and Llewellyn Ivor Price reassigned Theropleura grandis to Dimetrodon, erecting the species D. grandis. Dimetrodon gigas In his 1878 paper on fossils from Texas, Cope named Clepsydrops gigas along with the first named species of Dimetrodon, D. limbatus, D. incisivus, and D. rectiformis. Case reclassified C. gigas as a new species of Dimetrodon in 1907. Case also described a very well preserved skull of Dimetrodon in 1904, attributing it to the species Dimetrodon gigas. In 1919, Charles W. Gilmore attributed a nearly complete specimen of Dimetrodon to D. gigas. 
Dimetrodon gigas is now recognized as a synonym of D. grandis. Dimetrodon giganhomogenes Dimetrodon giganhomogenes was named by E. C. Case in 1907 and is still considered a valid species of Dimetrodon. Dimetrodon macrospondylus Dimetrodon macrospondylus was first described by Cope in 1884 as Clepsydrops macrospondylus. In 1907, Case reclassified it as Dimetrodon macrospondylus. Dimetrodon platycentrus Dimetrodon platycentrus was first described by Case in his 1907 monograph. It is now considered a synonym of Dimetrodon macrospondylus. Dimetrodon natalis Paleontologist Alfred Romer erected the species Dimetrodon natalis in 1936, previously described as Clepsydrops natalis. D. natalis was the smallest known species of Dimetrodon at that time, and was found alongside remains of the larger-bodied D. limbatus. Dimetrodon booneorum Dimetrodon booneorum was first described by Alfred Romer in 1937 on the basis of remains from Texas. "Dimetrodon" kempae Dimetrodon kempae was named by Romer in 1937, in the same paper as D. booneorum, D. loomisi, and D. milleri. Dimetrodon kempae was named on the basis of a single humerus and a few vertebrae, and may therefore be a nomen dubium that cannot be distinguished as a unique species of Dimetrodon. In 1940, Romer and Price raised the possibility that D. kempae may not fall within the genus Dimetrodon, preferring to classify it as Sphenacodontidae incertae sedis. Dimetrodon loomisi Dimetrodon loomisi was first described by Alfred Romer in 1937 along with D. booneorum, D. kempae, and D. milleri. Remains have been found in Texas and Oklahoma. Dimetrodon milleri Dimetrodon milleri was described by Romer in 1937. It is one of the smallest species of Dimetrodon in North America and may be closely related to D. occidentalis, another small-bodied species. D. milleri is known from two skeletons, one nearly complete (MCZ 1365) and another less complete but larger (MCZ 1367). D. milleri is the oldest known species of Dimetrodon. 
Besides its small size, D. milleri differs from other species of Dimetrodon in that its neural spines are circular rather than figure-eight shaped in cross-section. Its vertebrae are also shorter in height relative to the rest of the skeleton than those of other Dimetrodon species. The skull is tall and the snout is short relative to the temporal region. Short vertebrae and a tall skull are also seen in the species D. booneorum, D. limbatus and D. grandis, suggesting that D. milleri may be the first of an evolutionary progression between these species. Dimetrodon angelensis Dimetrodon angelensis was named by paleontologist Everett C. Olson in 1962. Specimens of the species were reported from the San Angelo Formation of Texas. It is also the largest species of Dimetrodon. Dimetrodon occidentalis Dimetrodon occidentalis was named in 1977 from New Mexico. Its name means "western Dimetrodon" because it is the only North American species of Dimetrodon known west of Texas and Oklahoma. It was named on the basis of a single skeleton belonging to a relatively small individual. The small size of D. occidentalis is similar to that of D. milleri, suggesting a close relationship. Dimetrodon specimens found in Utah and Arizona probably also belong to D. occidentalis. Dimetrodon teutonis Dimetrodon teutonis was named in 2001 from the Thuringian Forest of Germany and was the first species of Dimetrodon to be described outside North America. It is also the smallest species of Dimetrodon. Species assigned to different genera Dimetrodon cruciger In 1878, Cope published a paper called "The Theromorphous Reptilia" in which he described Dimetrodon cruciger. D. cruciger was distinguished by the small projections that extended from either side of each neural spine like the branches of a tree. In 1886, Cope moved D. cruciger to the genus Naosaurus because he considered its spines so different from those of other Dimetrodon species that the species deserved its own genus. 
Naosaurus would later be synonymized with Edaphosaurus, a genus which Cope named in 1882 on the basis of skulls that evidently belonged to herbivorous animals given their blunt crushing teeth. Dimetrodon longiramus E. C. Case named the species Dimetrodon longiramus in 1907 on the basis of a scapula and elongated mandible from the Belle Plains Formation of Texas. In 1940, Romer and Price recognized that the D. longiramus material belonged to the same taxon as another specimen described by paleontologist Samuel Wendell Williston in 1916, which included a similarly elongated mandible and a long maxilla. Williston did not consider his specimen to belong to Dimetrodon but instead classified it as an ophiacodontid. Romer and Price assigned Case and Williston's specimens to a newly erected genus and species, Secodontosaurus longiramus, that was closely related to Dimetrodon. Phylogenetic classification Dimetrodon is an early member of a group called synapsids, which include mammals and many of their extinct relatives, though it is not an ancestor of any mammal (which appeared millions of years later). It is often mistaken for a dinosaur in popular culture, despite having become extinct some 40 million years (Ma) before the first appearance of dinosaurs in the Triassic period. As a synapsid, Dimetrodon is more closely related to mammals than to dinosaurs or any living reptile. By the early 1900s most paleontologists called Dimetrodon a reptile in accordance with Linnean taxonomy, which ranked Reptilia as a class and Dimetrodon as a genus within that class. Mammals were assigned to a separate class, and Dimetrodon was described as a "mammal-like reptile". Paleontologists theorized that mammals evolved from this group in (what they called) a reptile-to-mammal transition. 
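The relationship stated above, with Dimetrodon grouping with mammals rather than with any living reptile, can be written as a small Newick-style tree. A toy sketch only; the nested-dict layout and simplified taxon names ("Mammalia", "living reptiles") are illustrative, not a reconstruction of any published cladogram:

```python
# Toy nested representation: Dimetrodon sits with mammals in Synapsida,
# while living reptiles fall in Sauropsida. Topology simplified.
tree = {
    "": {  # unnamed amniote root
        "Synapsida": {"Dimetrodon": {}, "Mammalia": {}},
        "Sauropsida": {"living reptiles": {}},
    }
}

def newick(node_name, children):
    """Render a nested-dict clade as a Newick-style string fragment."""
    if not children:
        return node_name
    inner = ",".join(newick(k, v) for k, v in children.items())
    return f"({inner}){node_name}"

(root, kids), = tree.items()
s = newick(root, kids) + ";"
print(s)  # ((Dimetrodon,Mammalia)Synapsida,(living reptiles)Sauropsida);
```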
Phylogenetic taxonomy of Synapsida Under phylogenetic systematics, the descendants of the last common ancestor of Dimetrodon and all living reptiles would include all mammals, because Dimetrodon is more closely related to mammals than to any living reptile. Any clade containing both Dimetrodon and the living reptiles must therefore also contain mammals, so neither Dimetrodon nor any other "mammal-like reptile" can be placed in a reptile clade that excludes mammals. Descendants of the last common ancestor of mammals and reptiles (which appeared around 310 Ma in the Late Carboniferous) are therefore split into two clades: Synapsida, which includes Dimetrodon and mammals, and Sauropsida, which includes living reptiles and all extinct reptiles more closely related to them than to mammals. Within clade Synapsida, Dimetrodon is part of the clade Sphenacodontia, which was first proposed as an early synapsid group in 1940 by paleontologists Alfred Romer and Llewellyn Ivor Price, along with the groups Ophiacodontia and Edaphosauria. All three groups are known from the Late Carboniferous and Early Permian. Romer and Price distinguished them primarily by postcranial features such as the shapes of limbs and vertebrae. Ophiacodontia was considered the most primitive group because its members appeared the most reptilian, and Sphenacodontia was the most advanced because its members appeared the most like a group called Therapsida, which included the closest relatives to mammals. Romer and Price placed another group of early synapsids called varanopids within Sphenacodontia, considering them to be more primitive than other sphenacodonts like Dimetrodon. They thought varanopids and Dimetrodon-like sphenacodonts were closely related because both groups were carnivorous, although varanopids are much smaller and more lizard-like, lacking sails. The modern view of synapsid relationships was proposed by paleontologist Robert R. 
Reisz in 1986, whose study included features mostly found in the skull rather than in the postcranial skeleton. Dimetrodon is still considered a sphenacodont under this phylogeny, but varanopids are now considered more basal synapsids, falling outside clade Sphenacodontia. Within Sphenacodontia is the group Sphenacodontoidea, which in turn contains Sphenacodontidae and Therapsida. Sphenacodontidae is the group containing Dimetrodon and several other sail-backed synapsids like Sphenacodon and Secodontosaurus, while Therapsida includes mammals and their mostly Permian and Triassic relatives. Below is a cladogram of Synapsida, modified from the analysis of Benson (2012). A second cladogram shows the relationships of a few Dimetrodon species, from Brink et al. (2015). Paleobiology Function of neural spines Paleontologists have proposed many ways in which the sail could have functioned in life. Some of the first to think about its purpose suggested that the sail may have served as camouflage among reeds while Dimetrodon waited for prey, or as an actual boat-like sail to catch the wind while the animal was in the water. Another suggestion is that the long neural spines could have stabilized the trunk by restricting up-and-down movement, which would allow for a more efficient side-to-side movement while walking. Thermoregulation In 1940, Alfred Romer and Llewellyn Ivor Price proposed that the sail served a thermoregulatory function, allowing individuals to warm their bodies with the Sun. In the following years, many models were created to estimate the effectiveness of thermoregulation in Dimetrodon. For example, in a 1973 article in the journal Nature, paleontologists C. D. Bramwell and P. B. Fellgett estimated that it took an individual about one and a half hours for its body temperature to rise from . In 1986, Steven C. 
Haack concluded that the warming was slower than previously thought and that the process probably took four hours. Using a model based on a variety of environmental factors and hypothesized physiological aspects of Dimetrodon, Haack found that the sail allowed Dimetrodon to warm faster in the morning and reach a slightly higher body temperature during the day, but that it was ineffective in releasing excess heat and did not allow Dimetrodon to retain a higher body temperature at night. In 1999, a group of mechanical engineers created a computer model to analyze the ability of the sail to regulate body temperature during different seasons, and concluded that the sail was beneficial for capturing and releasing heat at all times in the year. Most of these studies give two thermoregulatory roles for the sail of Dimetrodon: one as a means of warming quickly in the morning, and another as a way to cool down when body temperature becomes high. Dimetrodon and all other Early Permian land vertebrates are assumed to have been cold-blooded or poikilothermic, relying on the sun to maintain a high body temperature. Because of its large size, Dimetrodon had high thermal inertia, meaning that changes in body temperature occurred more slowly in it than in smaller-bodied animals. As temperatures rose in the mornings, the small-bodied prey of Dimetrodon could warm their bodies much faster than could something the size of Dimetrodon. Many paleontologists including Haack have proposed that the sail of Dimetrodon may have allowed it to warm quickly in the morning in order to keep pace with its prey. The sail's large surface area also meant heat could dissipate quickly into the surroundings, useful if the animal needed to release excess heat produced by metabolism or absorbed from the sun. Dimetrodon may have angled its sail away from the sun to cool off or restricted blood flow to the sail to maintain heat at night. In 1986, J. Scott Turner and C. 
Richard Tracy proposed that the evolution of a sail in Dimetrodon was related to the evolution of warm-bloodedness in mammal ancestors. They thought that the sail of Dimetrodon enabled it to be homeothermic, maintaining a constant, albeit low, body temperature. Mammals are also homeothermic, although they differ from Dimetrodon in being endothermic, controlling their body temperature internally through heightened metabolism. Turner and Tracy noted that early therapsids, a more advanced group of synapsids closely related to mammals, had long limbs which can release heat in a manner similar to that of the sail of Dimetrodon. The homeothermy that developed in animals like Dimetrodon may have carried over to therapsids through a modification of body shape, which would eventually develop into the warm-bloodedness of mammals. Recent studies on the sail of Dimetrodon and other sphenacodontids support Haack's 1986 contention that the sail was poorly adapted to releasing heat and maintaining a stable body temperature. The presence of sails in small-bodied species of Dimetrodon such as D. milleri and D. teutonis does not fit the idea that the sail's purpose was thermoregulation because smaller sails are less able to transfer heat and because small bodies can absorb and release heat easily on their own. Moreover, close relatives of Dimetrodon such as Sphenacodon have very low crests that would have been useless as thermoregulatory devices. The large sail of Dimetrodon is thought to have developed gradually from these smaller crests, meaning that over most of the sail's evolutionary history, thermoregulation could not have served an important function. Although the function of its sail remains uncertain, Dimetrodon and other Sphenacodontids were likely to have been whole-body endotherms, characterised by a high energy metabolism (tachymetabolism) and probably a capacity for maintaining a high and stable body temperature. 
This conclusion was part of an amniote-wide study that found tachymetabolic endothermy to have been widespread throughout amniotes, and likely plesiomorphic to both synapsids and sauropsids. For Dimetrodon the evidence was the endothermy-indicative size of the foramina through which blood was delivered to its long bones and the high blood pressure that would have been necessary to supply blood to the tops of the well-vascularised spines supporting the sail. Larger-bodied specimens of Dimetrodon have larger sails relative to their size, an example of positive allometry. Positive allometry may benefit thermoregulation because it means that, as individuals get larger, sail surface area increases faster than would be expected from body size alone. Larger-bodied animals generate a great deal of heat through metabolism, and the amount of heat that must be dissipated from the body surface is significantly greater than in smaller-bodied animals. Effective heat dissipation can be predicted across many different animals with a single relationship between mass and surface area. However, a 2010 study of allometry in Dimetrodon found a different relationship between its sail and body mass: the actual scaling exponent of the sail was much larger than the exponent expected in an animal adapted to heat dissipation. The researchers concluded that the sail of Dimetrodon grew at a much faster rate than was necessary for thermoregulation, and suggested that sexual selection was the primary reason for its evolution.

Sexual selection

The allometric exponent for sail height is similar in magnitude to the scaling of interspecific antler length to shoulder height in cervids. Furthermore, as Bakker (1970) observed in the context of Dimetrodon, many lizard species raise a dorsal ridge of skin during threat and courtship displays, and positively allometric, sexually dimorphic frills and dewlaps are present in extant lizards (Echelle et al. 1978; Christian et al. 1995).
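The allometric argument can be illustrated with a short numerical sketch. The exponents here are assumptions chosen for demonstration (isometric surface area scales as mass^(2/3); the sail is given an arbitrarily steeper exponent of 1), not the values measured in the 2010 study:

```python
# Illustrative sketch of positive allometry; the exponents below are
# assumptions for demonstration, not measurements from the 2010 study.
ISOMETRIC_EXPONENT = 2.0 / 3.0  # surface area ~ mass^(2/3) under isometry
SAIL_EXPONENT = 1.0             # hypothetical positively allometric sail

def power_law_area(mass, exponent, coefficient=1.0):
    """Area predicted by a power law: area = coefficient * mass**exponent."""
    return coefficient * mass ** exponent

# As body mass grows, a positively allometric sail outpaces the area
# expected from isometric scaling alone.
for mass in (8, 64, 512):  # arbitrary masses with exact cube roots
    isometric = power_law_area(mass, ISOMETRIC_EXPONENT)
    sail = power_law_area(mass, SAIL_EXPONENT)
    print(f"mass={mass}: isometric area={isometric:.0f}, sail area={sail:.0f}")
```

With the steeper exponent, the ratio of sail area to isometrically expected area doubles with each eightfold increase in mass, the kind of excess the study's authors argued goes beyond what heat dissipation alone would require.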
There is also evidence of sexual dimorphism both in the robustness of the skeleton and in the relative height of the spines of D. limbatus (Romer and Price 1940).

Sexual dimorphism

Dimetrodon may have been sexually dimorphic, meaning that males and females had slightly different body sizes. Some specimens of Dimetrodon have been hypothesized as males because they have thicker bones, larger sails, longer skulls, and more pronounced maxillary "steps" than others. Based on these differences, the mounted skeletons in the American Museum of Natural History (AMNH 4636) and the Field Museum of Natural History may be males, and the skeletons in the Denver Museum of Nature and Science (MCZ 1347) and the University of Michigan Museum of Natural History may be females.

Paleoecology

Fossils of Dimetrodon are known from the United States (Texas, Oklahoma, New Mexico, Arizona, Utah and Ohio), Canada (Prince Edward Island) and Germany, areas that were part of the supercontinent Euramerica during the Early Permian. Within the United States, almost all material attributed to Dimetrodon has come from three geological groups in north-central Texas and south-central Oklahoma: the Clear Fork Group, the Wichita Group, and the Pease River Group (Nelson, John W., Robert W. Hook, and Dan S. Chaney (2013). "Lithostratigraphy of the Lower Permian (Leonardian) Clear Fork Formation of North-Central Texas", in The Carboniferous-Permian Transition: Bulletin 60, ed. Spencer G. Lucas et al., New Mexico Museum of Natural History and Science, pp. 286–311. Retrieved December 28, 2017). Most fossil finds are part of lowland ecosystems which, during the Permian, would have been vast wetlands. In particular, the Red Beds of Texas preserve a great diversity of fossil tetrapods, or four-limbed vertebrates.
In addition to Dimetrodon, the most common tetrapods in the Red Beds and throughout Early Permian deposits in the southwestern United States are the amphibians Archeria, Diplocaulus, Eryops, and Trimerorhachis, the reptiliomorph Seymouria, the reptile Captorhinus, and the synapsids Ophiacodon and Edaphosaurus. These tetrapods made up a group of animals that paleontologist Everett C. Olson called the "Permo-Carboniferous chronofauna", a fauna that dominated the continental Euramerican ecosystem for several million years. Based on the geology of deposits like the Red Beds, the fauna is thought to have inhabited a well-vegetated lowland deltaic ecosystem.

Food web

Olson made many inferences about the paleoecology of the Texas Red Beds and the role of Dimetrodon within its ecosystem. He proposed several main types of ecosystems in which the earliest tetrapods lived. Dimetrodon belonged to the most primitive ecosystem, which developed from aquatic food webs. In it, aquatic plants were the primary producers and were largely fed upon by fish and aquatic invertebrates. Most land vertebrates fed on these aquatic primary consumers. Dimetrodon was probably the top predator of the Red Beds ecosystem, feeding on a variety of organisms such as the shark Xenacanthus, the aquatic amphibians Trimerorhachis and Diplocaulus, and the terrestrial tetrapods Seymouria and Trematops. Insects are known from the Early Permian Red Beds and were probably involved to some degree in the same food web as Dimetrodon, serving as food for small reptiles like Captorhinus. The Red Beds assemblage also included some of the first large land-living herbivores, such as Edaphosaurus and Diadectes. Feeding primarily on terrestrial plants, these herbivores did not derive their energy from aquatic food webs. According to Olson, the best modern analogue for the ecosystem Dimetrodon inhabited is the Everglades.
The exact lifestyle of Dimetrodon (amphibious to terrestrial) has long been controversial, but bone microanatomy supports a terrestrial lifestyle, which implies that it would have fed mostly on land, on the banks, or in very shallow water. Evidence also exists for Dimetrodon preying on aestivating Diplocaulus during times of drought: in one burrow containing eight juvenile Diplocaulus, three had been partially eaten and bore tooth marks from a Dimetrodon that unearthed and killed them. The only species of Dimetrodon found outside the southwestern United States is D. teutonis from Germany. Its remains were found in the Tambach Formation in a fossil site called the Bromacker locality. The Bromacker's assemblage of Early Permian tetrapods is unusual in that there are few large-bodied synapsids serving the role of top predators. D. teutonis is estimated to have been only in length, too small to prey on the large diadectid herbivores that are abundant in the Bromacker assemblage. It more likely ate small vertebrates and insects. Only three fossils can be attributed to large predators, and they are thought to have been either large varanopids or small sphenacodonts, both of which could potentially have preyed on D. teutonis. In contrast to the lowland deltaic Red Beds of Texas, the Bromacker deposits are thought to represent an upland environment with no aquatic species. It is possible that large-bodied carnivores were not part of the Bromacker assemblage because they were dependent on large aquatic amphibians for food.
Biology and health sciences
Dinosaurs and prehistoric reptiles
https://en.wikipedia.org/wiki/Pelycosaur
Pelycosaur
Pelycosaur is an older term for basal or primitive Late Paleozoic synapsids, excluding the therapsids and their descendants. Previously, the term mammal-like reptile had been used, and pelycosaur was considered an order, but both usages are now thought to be incorrect and outdated. Because it excludes the advanced synapsid group Therapsida, the term is paraphyletic and contrary to modern formal naming practice. Thus the name pelycosaur, like the term mammal-like reptile, had fallen out of favor among scientists by the 21st century, and is only used informally, if at all, in the modern scientific literature. The terms stem mammals, protomammals, and basal or primitive synapsids are used instead where needed.

Etymology

The modern word was created from Greek pelyx, meaning 'basin', and sauros, meaning 'lizard'. The term pelycosaur has been largely abandoned by paleontologists because it does not correspond to a clade. Pelycosauria is a paraphyletic taxon because it excludes the therapsids. For that reason, the term is sometimes avoided by proponents of a strict cladistic approach. Eupelycosauria is used to designate the clade that includes most pelycosaurs, along with the Therapsida and Mammalia. In contrast to "pelycosaurs", Eupelycosauria is a proper monophyletic group. Caseasauria is a pelycosaur side-branch, or clade, that did not leave any descendants.

Evolutionary history

The pelycosaurs appear to have been a group of synapsids with direct ancestral links to the mammals, having differentiated teeth and a developing hard palate. The pelycosaurs appeared during the Late Carboniferous and reached their apex in the early part of the Permian, remaining the dominant land animals for some 40 million years. A few continued into the Capitanian, but they experienced a sharp decline in diversity in the late Kungurian. They were succeeded by the therapsids.
Description Some species were quite large, growing to a length of or more, although most species were much smaller. Well-known pelycosaurs include the genera Dimetrodon, Sphenacodon, Edaphosaurus, and Ophiacodon. Pelycosaur fossils have been found mainly in Europe and North America, although some small, late-surviving forms are known from Russia and South Africa. Unlike lepidosaurian reptiles, pelycosaurs might have lacked reptilian epidermal scales. Fossil evidence from some varanopids shows that parts of the skin were covered in rows of osteoderms, presumably overlain by horny scutes. The belly was covered in rectangular scutes, looking like those present in crocodiles. Parts of the skin not covered in scutes might have had naked, glandular skin like that found in some mammals. Dermal scutes are also found in a diverse number of extant mammals with conservative body types, such as in the tails of some rodents, sengis, moonrats, the opossums, and other marsupials, and as regular dermal armour with underlying bone in the armadillo. At least two pelycosaur clades independently evolved a tall sail, consisting of elongated vertebral spines: the edaphosaurids and the sphenacodontids. In life, this may have been covered by skin, and likely functioned as a thermoregulatory device or as a mating display. Taxonomy In phylogenetic nomenclature, "Pelycosauria" is not used formally, since it does not constitute a group of all organisms descended from some common ancestor (a clade), because the group specifically excludes the therapsids which are descended from pelycosaurs. Instead, it represents a paraphyletic "grade" of basal synapsids leading up to the clade Therapsida. In 1940, the group was reviewed in detail, and every species known at the time described, with many illustrated, in an important monograph by Alfred Sherwood Romer and Llewellyn Price. 
In traditional classification, the order Pelycosauria is paraphyletic in that the therapsids (the "higher" synapsids) emerged from within it. That means Pelycosauria is a grouping of animals that does not contain all descendants of its common ancestor, as is often required by phylogenetic nomenclature. In evolutionary taxonomy, Therapsida is a separate order from Pelycosauria, and mammals (having evolved from therapsids) are separated from both as their own class. This usage has not been recommended by a majority of systematists since the 1990s, but several paleontologists nevertheless continue to use it. The following classification was presented by Benton in 2004.

Order Pelycosauria*
  Suborder Caseasauria
    Family Eothyrididae
    Family Caseidae
  Suborder Eupelycosauria
    Family Varanopidae
    Family Ophiacodontidae
    Family Edaphosauridae
    Infraorder Sphenacodontia
      Family Sphenacodontidae
Order Therapsida
Biology and health sciences
Proto-mammals
Animals
https://en.wikipedia.org/wiki/Injury%20in%20humans
Injury in humans
An injury is any physiological damage to living tissue caused by immediate physical stress. Injuries to humans can occur intentionally or unintentionally and may be caused by blunt trauma, penetrating trauma, burning, toxic exposure, asphyxiation, or overexertion. Injuries can occur in any part of the body, and different symptoms are associated with different injuries. Treatment of a major injury is typically carried out by a health professional and varies greatly depending on the nature of the injury. Traffic collisions are the most common cause of accidental injury and injury-related death among humans. Injuries are distinct from chronic conditions, psychological trauma, infections, or medical procedures, though injury can be a contributing factor to any of these. Several major health organizations have established systems for the classification and description of human injuries. Occurrence Injuries may be intentional or unintentional. Intentional injuries may be acts of violence against others or self-inflicted against one's own person. Accidental injuries may be unforeseeable, or they may be caused by negligence. In order, the most common types of unintentional injuries are traffic accidents, falls, drowning, burns, and accidental poisoning. Certain types of injuries are more common in developed countries or developing countries. Traffic injuries are more likely to kill pedestrians than drivers in developing countries. Scalding burns are more common in developed countries, while open-flame injuries are more common in developing countries. As of 2021, approximately 4.4 million people are killed due to injuries each year worldwide, constituting nearly 8% of all deaths. 3.16 million of these injuries are unintentional, and 1.25 million are intentional. Traffic accidents are the most common form of deadly injury, causing about one-third of injury-related deaths. One-sixth are caused by suicide, and one-tenth are caused by homicide. 
Tens of millions of individuals require medical treatment for nonfatal injuries each year, and injuries are responsible for about 10% of all years lived with disability. Men are twice as likely to be killed through injury as women. In 2013, 367,000 children under the age of five died from injuries, down from 766,000 in 1990.

Classification systems

The World Health Organization (WHO) developed the International Classification of External Causes of Injury (ICECI). Under this system, injuries are classified by mechanism of injury, objects/substances producing injury, place of occurrence, activity when injured, the role of human intent, and additional modules. These codes allow the identification of distributions of injuries in specific populations and case identification for more detailed research on causes and preventive efforts. The United States Bureau of Labor Statistics developed the Occupational Injury and Illness Classification System (OIICS). Under this system, injuries are classified by nature, part of body affected, source and secondary source, and event or exposure. The OIICS was first published in 1992 and has been updated several times since. The Orchard Sports Injury and Illness Classification System (OSIICS), previously OSICS, is used to classify injuries to enable research into specific sports injuries. The injury severity score (ISS) is a medical score to assess trauma severity. It correlates with mortality, morbidity, and hospitalization time after trauma. It is used to define the term major trauma (polytrauma), recognized when the ISS is greater than 15. The AIS Committee of the Association for the Advancement of Automotive Medicine designed and updates the scale.

Mechanisms

Trauma

Traumatic injury is caused by an external object making forceful contact with the body, resulting in a wound. Major trauma is a severe traumatic injury that has the potential to cause disability or death.
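As a concrete illustration of the injury severity score mentioned above: the six-body-region convention, the squaring of the three highest AIS scores, and the cap at the maximum of 75 when any AIS 6 (unsurvivable) injury is present are standard features of the score, but the function below is a simplified sketch, not a clinical tool.

```python
# Simplified sketch of the injury severity score (ISS); not a clinical tool.
# Standard conventions assumed: each of six body regions gets its highest
# AIS score (1-6), and any AIS 6 injury fixes the ISS at its maximum of 75.

def injury_severity_score(region_ais):
    """region_ais maps body-region names to their highest AIS score."""
    scores = sorted(region_ais.values(), reverse=True)
    if scores and scores[0] == 6:          # unsurvivable injury caps the score
        return 75
    return sum(s * s for s in scores[:3])  # three most injured regions

# Hypothetical polytrauma case; an ISS above 15 denotes major trauma.
case = {"head_neck": 4, "chest": 3, "abdomen": 2, "extremities": 1}
iss = injury_severity_score(case)
print(iss, "major trauma" if iss > 15 else "not major trauma")
# 4*4 + 3*3 + 2*2 = 29 -> major trauma
```

Squaring the region scores makes the ISS weight a few severe injuries far more heavily than many minor ones, which is why it correlates with mortality and hospitalization time.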
Serious traumatic injury most often occurs as a result of traffic collisions. Traumatic injury is the leading cause of death in people under the age of 45. Blunt trauma injuries are caused by the forceful impact of an external object. Injuries from blunt trauma may cause internal bleeding and bruising from ruptured capillaries beneath the skin, abrasion from scraping against the superficial epidermis, lacerated tears of the skin or internal organs, or bone fractures. Crush injuries are a severe form of blunt trauma that applies large force to a large area over a longer period of time. Penetrating trauma injuries are caused by external objects entering the tissue of the body through the skin. Low-velocity penetration injuries are caused by sharp objects, such as stab wounds, while high-velocity penetration injuries are caused by ballistic projectiles, such as gunshot wounds or injuries caused by shell fragments. Perforating injuries result in an entry wound and an exit wound, while puncture wounds result only in an entry wound and a cavity in the tissue.

Burns

Burn injury is caused by contact with extreme temperature, chemicals, or radiation. The effects of burns vary depending on their depth and size. Superficial or first-degree burns affect only the epidermis, causing pain for a short period of time. Superficial partial-thickness burns cause weeping blisters and require dressing. Deep partial-thickness burns are dry and less painful due to the burning away of the skin, and they require surgery. Full-thickness or third-degree burns affect the entire dermis and are susceptible to infection. Fourth-degree burns reach deep tissues such as muscles and bones, causing loss of the affected area. Thermal burns are the most common type of burn, caused by contact with excessive heat, including contact with flame, contact with hot surfaces, or scalding burns caused by contact with hot water or steam.
Frostbite is a type of burn caused by contact with excessive cold, causing cellular injury and deep tissue damage through the crystallization of water in the tissue. Friction burns are caused by friction with external objects, resulting in a combined burn and abrasion. Radiation burns are caused by exposure to ionizing radiation. Most radiation burns are sunburns caused by ultraviolet radiation or high exposure to radiation through medical treatments such as repeated radiography or radiation therapy. Electrical burns are caused by contact with electricity as it enters and passes through the body. They are often deeper than other burns, affecting lower tissues as electricity penetrates the skin, and the full extent of electrical burns is often obscured. They also cause extensive destruction of tissue at the entry and exit points. Electrical injuries in the home are often minor, while high-tension power cables cause serious electrical injuries in the workplace. Lightning strikes can also cause severe electrical injuries. Fatal electrical injuries are often caused by tetanic spasm inducing respiratory arrest or by interference with the heart causing cardiac arrest. Chemical burns are caused by contact with corrosive substances such as acids or alkalis. Chemical burns are rarer than most other burns, though many chemicals can damage tissue. The most common chemical-related injuries are those caused by carbon monoxide, ammonia, chlorine, hydrochloric acid, and sulfuric acid. Some chemical weapons, such as white phosphorus, induce chemical burns. Most chemical burns are treated with extensive application of water to remove the chemical contaminant, though some burn-inducing chemicals react with water to create more severe injuries. The ingestion of corrosive substances can cause chemical burns to the larynx and stomach.

Other mechanisms

Toxic injury is caused by the ingestion, inhalation, injection, or absorption of a toxin.
This may occur through an interaction caused by a drug or the ingestion of a poison. Different toxins may cause different types of injuries, and many will cause injury to specific organs. Toxins in gases, dusts, aerosols, and smoke can be inhaled, potentially causing respiratory failure. Respiratory toxins can be released by structural fires, industrial accidents, domestic mishaps, or through chemical weapons. Some toxicants may affect other parts of the body after inhalation, such as carbon monoxide. Asphyxia causes injury to the body from a lack of oxygen. It can be caused by drowning, inhalation of certain substances, strangulation, blockage of the airway, traumatic injury to the airway, apnea, and other means. The most immediate injury caused by asphyxia is hypoxia, which can in turn cause acute lung injury or acute respiratory distress syndrome as well as damage to the circulatory system. The most severe injury associated with asphyxiation is cerebral hypoxia and ischemia, in which the brain receives insufficient oxygen or blood, resulting in neurological damage or death. Specific injuries are associated with water inhalation, including alveolar collapse, atelectasis, intrapulmonary shunting, and ventilation perfusion mismatch. Simple asphyxia is caused by a lack of external oxygen supply. Systemic asphyxia is caused by exposure to a compound that prevents oxygen from being transported or used by the body. This can be caused by azides, carbon monoxide, cyanide, smoke inhalation, hydrogen sulfide, methemoglobinemia-inducing substances, opioids, or other systemic asphyxiants. Ventilation and oxygenation are necessary for treatment of asphyxiation, and some asphyxiants can be treated with antidotes. Injuries of overuse or overexertion can occur when the body is strained through use, affecting the bones, muscles, ligaments, or tendons. Sports injuries are often overuse injuries such as tendinopathy. 
Over-extension of the ligaments and tendons can result in sprains and strains, respectively. Repetitive sedentary behaviors such as extended use of a computer or a physically repetitive occupation may cause a repetitive strain injury. Extended use of brightly lit screens may also cause eye strain. Locations Abdomen Abdominal trauma includes injuries to the stomach, intestines, liver, pancreas, kidneys, gallbladder, and spleen. Abdominal injuries are typically caused by traffic accidents, assaults, falls, and work-related injuries, and physical examination is often unreliable in diagnosing blunt abdominal trauma. Splenic injury can cause low blood volume or blood in the peritoneal cavity. The treatment and prognosis of splenic injuries are dependent on cardiovascular stability. The gallbladder is rarely injured in blunt trauma, occurring in about 2% of blunt abdominal trauma cases. Injuries to the gallbladder are typically associated with injuries to other abdominal organs. The intestines are susceptible to injury following blunt abdominal trauma. The kidneys are protected by other structures in the abdomen, and most injuries to the kidney are a result of blunt trauma. Kidney injuries typically cause blood in the urine. Due to its location in the body, pancreatic injury is relatively uncommon but more difficult to diagnose. Most injuries to the pancreas are caused by penetrative trauma, such as gunshot wounds and stab wounds. Pancreatic injuries occur in under 5% of blunt abdominal trauma cases. The severity of pancreatic injury depends primarily on the amount of harm caused to the pancreatic duct. The stomach is also well protected from injury due to its heavy layering, its extensive blood supply, and its position relative to the rib cage. As with pancreatic injuries, most traumatic stomach injuries are caused by penetrative trauma, and most civilian weapons do not cause long-term tissue damage to the stomach. 
Blunt trauma injuries to the stomach are typically caused by traffic accidents. Ingestion of corrosive substances can cause chemical burns to the stomach. Liver injury is the most common type of organ damage in cases of abdominal trauma. The liver's size and location in the body makes injury relatively common compared to other abdominal organs, and blunt trauma injury to the liver is typically treated with nonoperative management. Liver injuries are rarely serious, though most injuries to the liver are concomitant with other injuries, particularly to the spleen, ribs, pelvis, or spinal cord. The liver is also susceptible to toxic injury, with overdose of paracetamol being a common cause of liver failure. Face Facial trauma may affect the eyes, nose, ears, or mouth. Nasal trauma is a common injury and the most common type of facial injury. Oral injuries are typically caused by traffic accidents or alcohol-related violence, though falls are a more common cause in young children. The primary concerns regarding oral injuries are that the airway is clear and that there are no concurrent injuries to other parts of the head or neck. Oral injuries may occur in the soft tissue of the face, the hard tissue of the mandible, or as dental trauma. The ear is susceptible to trauma in head injuries due to its prominent location and exposed structure. Ear injuries may be internal or external. Injuries of the external ear are typically lacerations of the cartilage or the formation of a hematoma. Injuries of the middle and internal ear may include a perforated eardrum or trauma caused by extreme pressure changes. The ear is also highly sensitive to blast injury. The bones of the ear are connected to facial nerves, and ear injuries can cause paralysis of the face. Trauma to the ear can cause hearing loss. Eye injuries often take place in the cornea, and they have the potential to permanently damage vision. Corneal abrasions are a common injury caused by contact with foreign objects. 
The eye can also be injured by a foreign object remaining in the cornea. Radiation damage can result from exposure to excessive light, often from welding without eye protection or exposure to excessive ultraviolet radiation, such as sunlight. Exposure to corrosive chemicals can permanently damage the eyes, causing blindness if not sufficiently irrigated. The eye is protected from most blunt injuries by the infraorbital margin, but in some cases blunt force may cause an eye to hemorrhage or tear. Overuse of the eyes can cause eye strain, particularly when looking at brightly lit screens for an extended period.

Heart

Cardiac injuries affect the heart and blood vessels. Blunt cardiac injury is a common injury caused by blunt trauma to the heart. It can be difficult to diagnose, and it can have many effects on the heart, including contusions, ruptures, acute valvular disorders, arrhythmia, or heart failure. Penetrative trauma to the heart is typically caused by stab wounds or gunshot wounds. Accidental cardiac penetration can also occur in rare cases from a fractured sternum or rib. Stab wounds to the heart are typically survivable with medical attention, though gunshot wounds to the heart are not. The right ventricle is most susceptible to injury due to its prominent location. The two primary consequences of traumatic injury to the heart are severe hemorrhaging and fluid buildup around the heart.

Musculoskeletal

Musculoskeletal injuries affect the skeleton and the muscular system. Soft tissue injuries affect the skeletal muscles, ligaments, and tendons. Ligament and tendon injuries account for half of all musculoskeletal injuries. Ligament sprains and tendon strains are common injuries that do not require intervention, but the healing process is slow. Physical therapy can be used to assist reconstruction and use of injured ligaments and tendons. Torn ligaments or tendons typically require surgery.
Skeletal muscles are abundant in the body and commonly injured when engaging in athletic activity. Muscle injuries trigger an inflammatory response to facilitate healing. Blunt trauma to the muscles can cause contusions and hematomas. Excessive tensile strength can overstretch a muscle, causing a strain. Strains may present with torn muscle fibers, hemorrhaging, or fluid in the muscles. Severe muscle injuries in which a tear extends across the muscle can cause total loss of function. Penetrative trauma can cause laceration to muscles, which may take an extended time to heal. Unlike contusions and strains, lacerations are uncommon in sports injuries. Traumatic injury may cause various bone fractures depending on the amount of force, direction of the force, and width of the area affected. Pathologic fractures occur when a previous condition weakens the bone until it can be easily fractured. Stress fractures occur when the bone is overused or suffers under excessive or traumatic pressure, often during athletic activity. Hematomas occur immediately following a bone fracture, and the healing process often takes from six weeks to three months to complete, though continued use of the fractured bone will prevent healing. Articular cartilage damage may also affect function of the skeletal system, and it can cause posttraumatic osteoarthritis. Unlike most bodily structures, cartilage cannot be healed once it is damaged. Nervous system Injuries to the nervous system include brain injury, spinal cord injury, and nerve injury. Trauma to the brain causes traumatic brain injury (TBI), causing "long-term physical, emotional, behavioral, and cognitive consequences". Mild TBI, including concussion, often occurs during athletic activity, military service, or as a result of untreated epilepsy, and its effects are typically short-term. 
More severe injuries to the brain cause moderate TBI, which may cause confusion or lethargy, or severe TBI, which may result in a coma or a secondary brain injury. TBI is a leading cause of mortality; approximately half of all trauma-related deaths involve TBI. Non-traumatic injuries to the brain cause acquired brain injury (ABI). This can be caused by stroke, a brain tumor, poison, infection, cerebral hypoxia, drug use, or the secondary effect of a TBI. Injury to the spinal cord is not immediately fatal, but it is associated with concomitant injuries, lifelong medical complications, and a reduction in life expectancy. It may result in complications in several major organ systems and a significant reduction in mobility or paralysis. Spinal shock causes temporary paralysis and loss of reflexes. Unlike most other injuries, damage to the peripheral nerves is not healed through cellular proliferation. Following nerve injury, the nerves undergo degeneration before regenerating, and other pathways can be strengthened or reprogrammed to make up for lost function. The most common form of peripheral nerve injury is stretching, due to the nerves' inherent elasticity. Nerve injuries may also be caused by laceration or compression. Pelvis Injuries to the pelvic area include injuries to the bladder, rectum, colon, and reproductive organs. Traumatic injury to the bladder is rare and often occurs with other injuries to the abdomen and pelvis. The bladder is protected by the peritoneum, and most cases of bladder injury are concurrent with a fracture of the pelvis. Bladder trauma typically causes hematuria, or blood in the urine. Ingestion of alcohol may cause distension of the bladder, increasing the risk of injury. A catheter may be used to extract blood from the bladder in the case of hemorrhaging, though injuries that break the peritoneum typically require surgery. The colon is rarely injured by blunt trauma, with most cases occurring from penetrative trauma through the abdomen.
Rectal injury is less common than injury to the colon, though the rectum is more susceptible to injury following blunt force trauma to the pelvis. Injuries to the male reproductive system are rarely fatal and are typically treatable through grafts and reconstruction. The elastic nature of the scrotum makes it resistant to injury; scrotal injuries account for 1% of traumatic injuries. Trauma to the scrotum may cause damage to the testis or the spermatic cord. Trauma to the penis can cause penile fracture, typically as a result of vigorous intercourse. Injuries to the female reproductive system are often a result of pregnancy and childbirth or sexual activity. They are rarely fatal, but they can produce a variety of complications, such as chronic discomfort, dyspareunia, infertility, or the formation of fistulas. Age can greatly affect the nature of genital injuries in women due to changes in hormone composition. Childbirth is the most common cause of genital injury to women of reproductive age. Many cultures practice female genital mutilation, which is estimated to affect over 125 million women and girls worldwide as of 2018. Tears and abrasions to the vagina are common during sexual intercourse, and these may be exacerbated in instances of non-consensual sexual activity. Respiratory tract Injuries to the respiratory tract affect the lungs, diaphragm, trachea, bronchus, pharynx, or larynx. Tracheobronchial injuries are rare and often associated with other injuries. Bronchoscopy is necessary for an accurate diagnosis of tracheobronchial injury. The neck, including the pharynx and larynx, is highly vulnerable to injury due to its complex, compact anatomy. Injuries to this area can cause airway obstruction. Ingestion of corrosive chemicals can cause chemical burns to the larynx. Inhalation of toxic materials can also cause serious injury to the respiratory tract.
Severe trauma to the chest can cause damage to the lungs, including pulmonary contusions, accumulation of blood, or a collapsed lung. The inflammation response to a lung injury can cause acute respiratory distress syndrome. Injuries to the lungs may cause symptoms ranging from shortness of breath to fatal respiratory failure. Injuries to the lungs are often fatal, and survivors often have a reduced quality of life. Injuries to the diaphragm are uncommon and rarely serious, but blunt trauma to the diaphragm can result in the formation of a hernia over time. Injuries to the diaphragm may present in many ways, including abnormal blood pressure, cardiac arrest, gastrointestinal obstruction, and respiratory insufficiency. Injuries to the diaphragm are often associated with other injuries in the chest or abdomen, and the diaphragm's position between two major cavities of the human body may complicate diagnosis. Skin Most injuries to the skin are minor and do not require specialist treatment. Lacerations of the skin are typically repaired with sutures, staples, or adhesives. The skin is susceptible to burns, and burns to the skin often cause blistering. Abrasive trauma scrapes or rubs off the skin, and severe abrasions require skin grafting to repair. Skin tears involve the removal of the epidermis or dermis through friction or shearing forces, often in vulnerable populations such as the elderly. Skin injuries are potentially complicated by foreign bodies such as glass, metal, or dirt that entered the wound, and skin wounds often require cleaning. Treatment Much of medical practice is dedicated to the treatment of injuries. Traumatology is the study of traumatic injuries and injury repair. Certain injuries may be treated by specialists. Serious injuries sometimes require trauma surgery. Following serious injuries, physical therapy and occupational therapy are sometimes used for rehabilitation. Medication is commonly used to treat injuries.
Emergency medicine during major trauma prioritizes the immediate consideration of life-threatening injuries that can be quickly addressed. The airway is evaluated, clearing bodily fluids with suctioning or creating an artificial airway if necessary. Breathing is evaluated by assessing motion of the chest wall and checking for blood or air in the pleural cavity. Circulation is evaluated to resuscitate the patient, including the application of intravenous therapy. Disability is evaluated by checking for responsiveness and reflexes. Exposure is then used to examine the patient for external injury. Following immediate life-saving procedures, a CT scan is used for a more thorough diagnosis. Further resuscitation may be required, including ongoing blood transfusion, mechanical ventilation, and nutritional support. Pain management is another aspect of injury treatment. Pain serves as an indicator to determine the nature and severity of an injury, but it can also worsen an injury, reduce mobility, and affect quality of life. Analgesic drugs are used to reduce the pain associated with injuries, depending on the person's age, the severity of the injury, and previous medical conditions that may affect pain relief. NSAIDs such as aspirin and ibuprofen are commonly used for acute pain. Opioid medications such as fentanyl, methadone, and morphine are used to treat severe pain in major trauma, but their use is limited due to associated long-term risks such as addiction. Complications Complications may arise as a result of certain injuries, increasing the recovery time, further exacerbating the symptoms, or potentially causing death. The extent of the injury and the age of the injured person may contribute to the likelihood of complications. Infection of wounds is a common complication in traumatic injury, resulting in diagnoses such as pneumonia or sepsis. Wound infection prevents the healing process from taking place and can cause further damage to the body.
A majority of wounds are contaminated with microbes from other parts of the body, and infection takes place when the immune system is unable to address this contamination. The surgical removal of devitalized tissue and the use of topical antimicrobial agents can prevent infection. Hemorrhaging of blood is a common result of injuries, and it can cause several complications. Pooling of blood under the skin can cause a hematoma, particularly after blunt trauma or the suture of a laceration. Hematomas are susceptible to infection and are typically treated with compression, though surgery is necessary in severe cases. Excessive blood loss can cause hypovolemic shock, in which cellular oxygenation can no longer take place. This can cause tachycardia, hypotension, coma, or organ failure. Fluid replacement is often necessary to treat blood loss. Other complications of injuries include cavitation, development of fistulas, and organ failure. Social and psychological aspects Injuries often cause psychological harm in addition to physical harm. Traumatic injuries are associated with psychological trauma and distress, and some victims of traumatic injuries will display symptoms of post-traumatic stress disorder during and after recovery from the injury. The specific symptoms and their triggers vary depending on the nature of the injury. Body image and self-esteem can also be affected by injury. Injuries that cause permanent disabilities, such as spinal cord injuries, can have severe effects on self-esteem. Disfiguring injuries can negatively affect body image, leading to a lower quality of life. Burn injuries in particular can cause dramatic changes in a person's appearance that may negatively affect body image. Severe injury can also cause social harm. Disfiguring injuries may also result in stigma due to scarring or other changes in appearance. Certain injuries may necessitate a change in occupation or prevent employment entirely.
Leisure activities are similarly limited, and athletic activities in particular may be impossible following severe injury. In some cases, the effects of injury may strain personal relationships, such as marriages. Psychological and social variables have been found to affect the likelihood of injuries among athletes. Increased life stress can cause an increase in the likelihood of athletic injury, while social support can decrease the likelihood of injury. Social support also assists in the recovery process after athletic injuries occur.
Biology and health sciences
Injury: General
Health
27360421
https://en.wikipedia.org/wiki/Doctor%27s%20office
Doctor's office
A doctor's office in American English, a doctor's surgery in British English, or a doctor's practice, is a medical facility in which one or more medical doctors, usually general practitioners (GPs), receive and treat patients. Description Doctors' offices are the primary place where ambulatory care is given, and are often the first place a sick person would go for care, except in an emergency, in which case one would go to an emergency department at a hospital. In developed countries, where health services are guaranteed by the state in some form, most medical visits to doctors take place in their offices. In the United States, where this is not the case, many people who cannot afford health insurance or doctor's visits must go either to free or reduced-cost clinics or to an emergency department at a hospital for care, instead of a doctor's office. For healthy people, most visits to doctors' offices revolve around a once-yearly recommended physical examination. This exam usually consists of gathering information such as a patient's blood pressure, heart rate, weight, and height, along with checking for any irregularities or signs of illness around the body. GPs will also ask patients about any mental health problems they may be experiencing, and may refer them to a psychiatrist for further examination if such problems are present. If there are any other health problems that must be addressed by a medical specialist, such as a cardiologist, a referral will be given. The staff of a doctor's office usually consists of nurses, receptionists, and doctors. Sometimes, many doctors of different medical specialties may be housed in one building, allowing easy referrals. Facilities Doctors' offices can range from spartan to luxurious. A basic office usually consists of a waiting room and examination room(s).
Examination rooms usually consist of an examination table, upon which the patient sits or lies down, and various other equipment, depending on the office. Examples of the equipment found in an examination room include:
Biology and health sciences
Health facilities
Health
13146531
https://en.wikipedia.org/wiki/Differentiation%20of%20trigonometric%20functions
Differentiation of trigonometric functions
The differentiation of trigonometric functions is the mathematical process of finding the derivative of a trigonometric function, or its rate of change with respect to a variable. For example, the derivative of the sine function is written sin′(a) = cos(a), meaning that the rate of change of sin(x) at a particular angle x = a is given by the cosine of that angle. All derivatives of circular trigonometric functions can be found from those of sin(x) and cos(x) by means of the quotient rule applied to functions such as tan(x) = sin(x)/cos(x). Knowing these derivatives, the derivatives of the inverse trigonometric functions are found using implicit differentiation. Proofs of derivatives of trigonometric functions Limit of sin(θ)/θ as θ tends to 0 The diagram at right shows a circle with centre O and radius r = 1. Let two radii OA and OB make an arc of θ radians. Since we are considering the limit as θ tends to zero, we may assume θ is a small positive number in the first quadrant, say 0 < θ < π/2. In the diagram, let R1 be the triangle OAB, R2 the circular sector OAB, and R3 the triangle OAC. The area of triangle OAB is (1/2)sin θ. The area of the circular sector OAB is θ/2. The area of the triangle OAC is (1/2)tan θ. Since each region is contained in the next, one has: (1/2)sin θ < θ/2 < (1/2)tan θ. Moreover, since sin θ > 0 in the first quadrant, we may divide through by (1/2)sin θ, giving: 1 < θ/sin θ < 1/cos θ, and hence cos θ < sin(θ)/θ < 1. In the last step we took the reciprocals of the three positive terms, reversing the inequalities. We conclude that for 0 < θ < π/2, the quantity sin(θ)/θ is always less than 1 and always greater than cos(θ). Thus, as θ gets closer to 0, sin(θ)/θ is "squeezed" between a ceiling at height 1 and a floor at height cos θ, which rises towards 1; hence sin(θ)/θ must tend to 1 as θ tends to 0 from the positive side. For the case where θ is a small negative number, −π/2 < θ < 0, we use the fact that sine is an odd function, so sin(θ)/θ = sin(−θ)/(−θ), and the same limit follows. Limit of (cos(θ)-1)/θ as θ tends to 0 The last section enables us to calculate this new limit relatively easily. This is done by employing a simple trick.
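The squeeze argument for the limit of sin(θ)/θ, whose display equations were lost in extraction, can be summarized in a single chain (a standard reconstruction; the labels R1, R2, R3 follow the text):

```latex
\underbrace{\tfrac{1}{2}\sin\theta}_{R_1 = \triangle OAB}
\;\le\;
\underbrace{\tfrac{\theta}{2}}_{R_2 = \text{sector } OAB}
\;\le\;
\underbrace{\tfrac{1}{2}\tan\theta}_{R_3 = \triangle OAC}
\quad\Longrightarrow\quad
\cos\theta \le \frac{\sin\theta}{\theta} \le 1
\quad\Longrightarrow\quad
\lim_{\theta\to 0^{+}} \frac{\sin\theta}{\theta} = 1 .
```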
In this calculation, the sign of θ is unimportant. The trick is to multiply numerator and denominator by cos θ + 1, so that (cos θ − 1)/θ = (cos²θ − 1)/(θ(cos θ + 1)) = −(sin θ/θ)(sin θ/(cos θ + 1)). Using the fact that the limit of a product is the product of limits, and the limit result from the previous section, we find that: lim θ→0 (cos θ − 1)/θ = −1 · (0/2) = 0. Limit of tan(θ)/θ as θ tends to 0 Using the limit for the sine function, the fact that the tangent function is odd, and the fact that the limit of a product is the product of limits, we find: lim θ→0 tan(θ)/θ = lim θ→0 (sin θ/θ)(1/cos θ) = 1 · 1 = 1. Derivative of the sine function We calculate the derivative of the sine function from the limit definition: sin′(θ) = lim δ→0 (sin(θ + δ) − sin θ)/δ. Using the angle addition formula sin(θ + δ) = sin θ cos δ + sin δ cos θ, we have: sin′(θ) = lim δ→0 (sin θ (cos δ − 1)/δ + cos θ (sin δ/δ)). Using the limits for the sine and cosine functions: sin′(θ) = (sin θ) · 0 + (cos θ) · 1 = cos θ. Derivative of the cosine function From the definition of derivative We again calculate the derivative of the cosine function from the limit definition: cos′(θ) = lim δ→0 (cos(θ + δ) − cos θ)/δ. Using the angle addition formula cos(θ + δ) = cos θ cos δ − sin θ sin δ, we have: cos′(θ) = lim δ→0 (cos θ (cos δ − 1)/δ − sin θ (sin δ/δ)). Using the limits for the sine and cosine functions: cos′(θ) = (cos θ) · 0 − (sin θ) · 1 = −sin θ. From the chain rule To compute the derivative of the cosine function from the chain rule, first observe the following three facts: cos θ = sin(π/2 − θ), sin θ = cos(π/2 − θ), and sin′(θ) = cos θ. The first and the second are trigonometric identities, and the third is proven above. Using these three facts, we can write cos θ = sin(π/2 − θ). We can differentiate this using the chain rule. Letting u = π/2 − θ, we have: cos′(θ) = sin′(u) · u′ = cos(π/2 − θ) · (−1) = −sin θ. Therefore, we have proven that cos′(θ) = −sin θ. Derivative of the tangent function From the definition of derivative To calculate the derivative of the tangent function tan θ, we use first principles. By definition: tan′(θ) = lim δ→0 (tan(θ + δ) − tan θ)/δ. Using the well-known angle formula tan(α + β) = (tan α + tan β)/(1 − tan α tan β), we have: tan(θ + δ) − tan θ = tan δ (1 + tan²θ)/(1 − tan θ tan δ). Using the fact that the limit of a product is the product of the limits: tan′(θ) = lim δ→0 (tan δ/δ) · lim δ→0 (1 + tan²θ)/(1 − tan θ tan δ). Using the limit for the tangent function, and the fact that tan δ tends to 0 as δ tends to 0: tan′(θ) = 1 · (1 + tan²θ). We see immediately that: tan′(θ) = 1 + tan²θ = 1/cos²θ = sec²θ. From the quotient rule One can also compute the derivative of the tangent function using the quotient rule.
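The quotient-rule computation, whose displayed steps were lost in extraction, runs in standard form as:

```latex
\frac{d}{d\theta}\tan\theta
= \frac{d}{d\theta}\,\frac{\sin\theta}{\cos\theta}
= \frac{(\cos\theta)(\cos\theta) - (\sin\theta)(-\sin\theta)}{\cos^{2}\theta}
= \frac{\cos^{2}\theta + \sin^{2}\theta}{\cos^{2}\theta}.
```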
The numerator can be simplified to 1 by the Pythagorean identity, giving us 1/cos²θ. Therefore, tan′(θ) = 1/cos²θ = sec²θ. Proofs of derivatives of inverse trigonometric functions The following derivatives are found by setting a variable y equal to the inverse trigonometric function that we wish to take the derivative of. Using implicit differentiation and then solving for dy/dx, the derivative of the inverse function is found in terms of y. To convert dy/dx back into being in terms of x, we can draw a reference triangle on the unit circle, letting θ be y. Using the Pythagorean theorem and the definition of the regular trigonometric functions, we can finally express dy/dx in terms of x. Differentiating the inverse sine function We let y = arcsin(x), where −π/2 ≤ y ≤ π/2, so that sin(y) = x. Taking the derivative with respect to x on both sides and solving for dy/dx: cos(y) · dy/dx = 1, so dy/dx = 1/cos(y). Substituting cos(y) = √(1 − sin²(y)) from above, dy/dx = 1/√(1 − sin²(y)). Substituting sin(y) = x from above, dy/dx = 1/√(1 − x²). Differentiating the inverse cosine function We let y = arccos(x), where 0 ≤ y ≤ π, so that cos(y) = x. Taking the derivative with respect to x on both sides and solving for dy/dx: −sin(y) · dy/dx = 1, so dy/dx = −1/sin(y). Substituting sin(y) = √(1 − cos²(y)) from above, we get dy/dx = −1/√(1 − cos²(y)). Substituting cos(y) = x from above, we get dy/dx = −1/√(1 − x²). Alternatively, once the derivative of arcsin(x) is established, the derivative of arccos(x) follows immediately by differentiating the identity arcsin(x) + arccos(x) = π/2, so that (arccos(x))′ = −(arcsin(x))′ = −1/√(1 − x²). Differentiating the inverse tangent function We let y = arctan(x), where −π/2 < y < π/2, so that tan(y) = x. Taking the derivative with respect to x on both sides and solving for dy/dx: Left side: d(tan y)/dx = sec²(y) · dy/dx = (1 + tan²(y)) · dy/dx, using the Pythagorean identity. Right side: d(x)/dx = 1. Therefore, dy/dx = 1/(1 + tan²(y)). Substituting tan(y) = x from above, we get dy/dx = 1/(1 + x²). Differentiating the inverse cotangent function We let y = arccot(x), where 0 < y < π.
Then cot(y) = x. Taking the derivative with respect to x on both sides and solving for dy/dx: Left side: d(cot y)/dx = −csc²(y) · dy/dx = −(1 + cot²(y)) · dy/dx, using the Pythagorean identity. Right side: d(x)/dx = 1. Therefore, dy/dx = −1/(1 + cot²(y)). Substituting cot(y) = x, dy/dx = −1/(1 + x²). Alternatively, as the derivative of arctan(x) is derived as shown above, then using the identity arctan(x) + arccot(x) = π/2, it follows immediately that (arccot(x))′ = −1/(1 + x²). Differentiating the inverse secant function Using implicit differentiation Let y = arcsec(x), where |x| ≥ 1 and 0 ≤ y ≤ π with y ≠ π/2. Then sec(y) = x, so sec(y)tan(y) · dy/dx = 1, and dy/dx = 1/(sec(y)tan(y)) = 1/(|x|√(x² − 1)). (The absolute value in the expression is necessary, as the product of secant and tangent in the interval of y is always nonnegative, while the radical √(x² − 1) is always nonnegative by definition of the principal square root, so the remaining factor must also be nonnegative, which is achieved by using the absolute value of x.) Using the chain rule Alternatively, the derivative of arcsecant may be derived from the derivative of arccosine using the chain rule. Let y = arcsec(x) = arccos(1/x), where |x| ≥ 1. Then, applying the chain rule to y: dy/dx = −1/√(1 − 1/x²) · (−1/x²) = 1/(|x|√(x² − 1)). Differentiating the inverse cosecant function Using implicit differentiation Let y = arccsc(x), where |x| ≥ 1 and −π/2 ≤ y ≤ π/2 with y ≠ 0. Then csc(y) = x, so −csc(y)cot(y) · dy/dx = 1, and dy/dx = −1/(csc(y)cot(y)) = −1/(|x|√(x² − 1)). (The absolute value in the expression is necessary, as the product of cosecant and cotangent in the interval of y is always nonnegative, while the radical is always nonnegative by definition of the principal square root, so the remaining factor must also be nonnegative, which is achieved by using the absolute value of x.) Using the chain rule Alternatively, the derivative of arccosecant may be derived from the derivative of arcsine using the chain rule. Let y = arccsc(x) = arcsin(1/x), where |x| ≥ 1. Then, applying the chain rule to y: dy/dx = 1/√(1 − 1/x²) · (−1/x²) = −1/(|x|√(x² − 1)).
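As a numerical sanity check on the closed-form inverse derivatives above, one can compare them against a symmetric difference quotient (an illustrative sketch; the helper names are ours, not from the article):

```python
import math

def central_diff(f, x, h=1e-6):
    # Symmetric difference quotient: approximates f'(x) with O(h^2) error.
    return (f(x + h) - f(x - h)) / (2 * h)

# Closed-form derivatives derived in the article.
d_arcsin = lambda x: 1 / math.sqrt(1 - x**2)
d_arctan = lambda x: 1 / (1 + x**2)
d_arcsec = lambda x: 1 / (abs(x) * math.sqrt(x**2 - 1))

# arcsec via the identity arcsec(x) = arccos(1/x) used in the chain-rule proof.
arcsec = lambda x: math.acos(1 / x)

for x in (0.3, 0.7):
    assert math.isclose(central_diff(math.asin, x), d_arcsin(x), rel_tol=1e-5)
    assert math.isclose(central_diff(math.atan, x), d_arctan(x), rel_tol=1e-5)
for x in (1.5, 3.0):
    assert math.isclose(central_diff(arcsec, x), d_arcsec(x), rel_tol=1e-5)
```

The asserts pass silently: at each sample point the difference quotient agrees with the formula derived by implicit differentiation.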
Mathematics
Differential calculus
null
3663717
https://en.wikipedia.org/wiki/Testudo%20%28genus%29
Testudo (genus)
Testudo, the Mediterranean tortoises, are a genus of tortoises found in North Africa, Western Asia, and Europe. Several species are under threat in the wild, mainly from habitat destruction. Background They are small tortoises, ranging in length from 7.0 to 35 cm and in weight from 0.7 to 7.0 kg. Systematics The systematics and taxonomy of Testudo is notoriously problematic. Highfield and Martin commented: Synonymies on Testudo are notoriously difficult to compile with any degree of accuracy. The status of species referred has undergone a great many changes, each change introducing an additional level of complexity and making bibliographic research on the taxa extremely difficult. Most early and not a few later checklists contain a very high proportion of entirely spurious entries, and a considerable number of described species are now considered invalid – either because they are homonyms, non-binomial or for some other reason. Since then, DNA sequence data have increasingly been used in systematics, but in Testudines (turtles and tortoises), their usefulness is limited: in at least some of these, mtDNA is known to evolve more slowly than in most other animals. Paleobiogeographical considerations suggest the rate of evolution of the mitochondrial 12S rRNA gene is 1.0-1.6% per million years for the last dozen million years or so in the present genus, and mtDNA evolution rates have been shown to vary strongly even between different populations of T. hermanni; this restricts sequence choice for molecular systematics and makes the use of molecular clocks questionable. The following extant species in the following subgenera are placed here: Genus Testudo Subgenus Agrionemys Russian tortoise or Horsfield's tortoise, T. horsfieldii Subspecies: Central Asian tortoise, T. horsfieldii horsfieldii Fergana Valley steppe tortoise, T. horsfieldii bogdanovi Kazakhstan steppe tortoise, T. horsfieldii kazakhstanica Turkmenistan steppe tortoise, T.
horsfieldii kuznetzovi Kopet-Dag steppe tortoise, T. horsfieldii rustamovi Subgenus Chersine Hermann's tortoise, T. hermanni Subspecies: Eastern Hermann's tortoise, T. hermanni boettgeri Western Hermann's tortoise, T. hermanni hermanni Subgenus Testudo Spur-thighed tortoise, Greek tortoise or common tortoise, T. graeca Subspecies: Mediterranean spur-thighed tortoise, T. graeca graeca Araxes tortoise, T. graeca armeniaca Buxton's tortoise, T. graeca buxtoni Cyrenaican spur-thighed tortoise, T. graeca cyrenaica Asia Minor tortoise, T. graeca ibera Morocco tortoise, T. graeca marokkensis Nabeul tortoise, T. graeca nabeulensis Souss Valley tortoise, T. graeca soussensis Mesopotamian tortoise, T. graeca terrestris Iranian tortoise, T. graeca zarudnyi Egyptian tortoise or Kleinmann's tortoise, T. kleinmanni Marginated tortoise, T. marginata The first two are more distinct and ancient lineages than the closely related latter three species. Arguably, T. horsfieldii belongs in a new genus (Agrionemys) on the basis of the shape of its carapace and plastron, and its distinctness is supported by DNA sequence analysis. Likewise, a separate genus Eurotestudo has recently been proposed for T. hermanni; these three lineages were distinct by the Late Miocene as evidenced by the fossil record. Whether these splits will eventually be accepted remains to be seen. The genus Chersus has been proposed to unite the Egyptian and marginated tortoises which have certain DNA sequence similarities, but their ranges are (and apparently always were) separated by their closest relative T. graeca and the open sea and thus, chance convergent haplotype sorting would better explain the biogeographical discrepancy. Conversely, the Greek tortoise is widespread and highly diverse. In this and other species, a high number of subspecies has been described, but not all generally accepted, and several (such as the "Negev tortoise" and the "dwarf marginated tortoise") are now considered to be local morphs. 
Some, such as the Tunisian tortoise, have even been separated as a distinct genus Furculachelys, but this is not supported by more recent studies. Mating Testudo spp. are promiscuous creatures and they follow a polyandrous mating system. Mating involves a courtship ritual of mechanical, olfactory and auditory displays elicited from the male to coerce a female into accepting copulation. Courtship displays are very energetically costly for males, especially because females tend to run away from courting males. The male will chase her, exerting more energy and elaborate displays of ramming and biting. Females are able to judge a male's genetic quality through these displays; only healthy males are able to perform costly courting rituals, suggesting endurance rivalry. These are considered honest signals that are then used to influence pre- and post-copulatory choice, as females are the choosy sex. Female mate choice offers no direct benefits (such as access to food or territory or parental care). There are, however, indirect benefits of mating with multiple males. Engaging in a polyandrous mating system offers a female guaranteed fertilization, higher offspring diversity and sperm competition to ensure that eggs are fertilized by a high-quality male. This is consistent with the "good genes" hypothesis, under which females receive indirect benefits for their offspring by mating with a quality male: "a male's contribution to a female's fitness is restricted to [his] genes" (Cutuli, G. et al., 2014). Mating order has no influence on paternity of a clutch, so a female's inclination to mate with multiple males and her ability to store sperm allow for sperm competition and suggest cryptic female choice. However, some species do show size-assortative mating; in T. marginata, for example, large males breed with large females and small males breed with small females. Other species form hierarchies; during male-to-male competition the more aggressive male is considered alpha.
Alpha males are more aggressive with their courting as well and have a higher mounting success rate than beta males. A female's reproductive tract contains sperm storage tubules, and she is capable of storing sperm for up to four years. This sperm remains viable, and when she goes a breeding season without encountering a male she is able to fertilize her eggs with the stored sperm. Storing sperm can also result in multiple-paternity clutches; it is quite common among Testudo spp. females to lay a clutch that has been sired by multiple males, and females can lay one to four clutches in a breeding season. Sexual dimorphism, promiscuity, long-term sperm storage and elaborate courting rituals are factors that affect mate preference, sperm competition and cryptic female choice in the genus Testudo.
Biology and health sciences
Turtles
Animals
3666151
https://en.wikipedia.org/wiki/Mediterranean%20house%20gecko
Mediterranean house gecko
The Mediterranean house gecko (Hemidactylus turcicus) is a species of house gecko native to the Mediterranean region, from which it has spread to many parts of the world including parts of East Africa, South America, the Caribbean, and the Southern and Southeastern United States. It is commonly referred to as the Turkish gecko as represented in its Latin name and also as the moon lizard because it tends to emerge in the evening. A study in Portugal found H. turcicus to be totally nocturnal, with its highest activity around 02:00. It is insectivorous, rarely exceeds in length, has large, lidless eyes with elliptical pupils, and purple or tan-colored skin with black spots, often with stripes on the tail. Its belly or undersides are somewhat translucent. What impact this gecko has on native wildlife in the regions to which it has been introduced is unknown. In many parts of the world, the range of H. turcicus is increasing, and unlike many other reptiles, it appears to be highly resistant to pesticides. The increase may be explained as a consequence of having few predators in places where it has been introduced, and also of its tendency to take shelter in the cracks and unseen areas of human homes, for example inside walls. Reliance on human habitation has thus contributed to the species' proliferation, similar to rodents. In some Eastern Mediterranean countries such as Turkey and Cyprus, harming H. turcicus is taboo due to its benign nature, and it is often kept as a house pet. Description The Mediterranean gecko is a very small lizard generally measuring in length, with sticky toe pads, vertical pupils, and large eyes that lack eyelids. Its snout is rounded, about as long as the distance between the eye and the ear opening, 1.25 to 1.3 times the diameter of the orbit; the forehead is slightly concave; the ear opening is oval, oblique, and nearly half the diameter of the eye. Body and limb sizes are moderate. 
The digits are variable in length, with the inner always well developed; 6 to 8 lamellae are under the inner digits, 8 to 10 are under the fourth finger, and 9 to 11 are under the fourth toe. The head has large granules anteriorly, but posteriorly has minute granules intermixed with round tubercles. The rostrum is four-sided, not twice as broad as deep, with a medial cleft above; the nostril is pierced between the rostrum, the first labial, and three nasals; it has 7 to 10 upper and 6 to 8 lower labials; the mental is large, triangular, and at least twice as long as the adjacent labials; its point is between two large chin-shields, which may be in contact behind it; a smaller chin-shield lies on each side of the larger pair. The upper surface of the body is covered with minute granules intermixed with large tubercles, which are generally larger than the spaces between them, suboval and trihedral in shape, and arranged in 14 or 16 fairly regular, longitudinal series. Abdominal scales are small, smooth, roundish-hexagonal, and imbricate. Males have a short angular series of four to 10 (exceptionally two) preanal pores. The tail is cylindrical, slightly depressed, tapering, and covered above with minute scales and a transverse series of large, keeled tubercles, and covered beneath with a series of large, transversely dilated plates. Its color is light brown or grayish above, with darker spots; many of the tubercles and the lower surfaces are white. They may be completely translucent except for the spotting. Some are darker. They often seek darkness when fleeing. They may be seen alone or in a group of up to five or more together. Geographic distribution Native to the Mediterranean region, the "Med gecko" is one of the most successful species of geckos in the world. It has spread over much of the world and established stable populations far from its native range; it holds no threatened or endangered status.
It can be found in countries with Mediterranean climates, such as Portugal, Spain, France, Italy, Greece, Israel, Malta, southern Bulgaria, North Macedonia, coastal Croatia (except western Istria), Bosnia and Herzegovina, the Adriatic islands, coastal Montenegro, the coastal part of Albania, Cyprus, Turkey, northern Morocco, Algeria, Tunisia, Jordan, Syria, Libya, Egypt, Lebanon, northern Yemen (the Socotra Archipelago), Somalia, Eritrea, Kenya, southern Iran, Iraq, Oman, Qatar, Saudi Arabia, Pakistan, India, the Balearic Islands (Island Addaya Grande), the Canary Islands (introduced to Gran Canaria and Tenerife), Panama, Puerto Rico, Belize, and Cuba. As of 2016, it was known from scattered records in the Southwestern United States, including Arizona, California, Nevada, and New Mexico, and more extensively in the Southern United States, including Alabama, Arkansas, Florida, Georgia, Kansas, Kentucky, Louisiana, Maryland, Mississippi, Missouri, North Carolina, Oklahoma, South Carolina, Texas, and Virginia, being particularly well-established in Gulf Coast states in the east. More recently, records have been published from several localities in Pennsylvania and Tennessee. It was also reported from Indiana in 2019, but it was unknown at that time whether the individual represented an established population. In Mexico, introductions are known from the states of Tamaulipas, Veracruz, Tabasco, Campeche, Yucatán, Baja California, Chihuahua, Coahuila, Sonora, Durango, and Nuevo León. Habitat Mediterranean house geckos inhabit a wide range of habitats, in areas near human presence such as university campuses, cemeteries, coastal regions, and shrublands. In these urban or suburban areas, they are typically seen in the cracks of old brick buildings. They can also be found in other areas such as mountain cliffs and caves. Their nests can be found in trash piles, attics, or under the baseboards of buildings. Behavior Mediterranean house geckos are nocturnal.
They emit a distinctive, high-pitched call somewhat like a squeak or the chirp of a bird, possibly expressing a territorial message. Because of this aggressive behavior, juveniles avoid most interaction with adult geckos. They are voracious predators of moths and small roaches, and are attracted to outdoor lights in search of these prey. They are also attracted by the call of the male decorated cricket (Gryllodes supplicans); although the males are usually safely out of reach in a burrow, female crickets attracted to the male's call can be intercepted and eaten. Reproduction Mediterranean house geckos reach sexual maturity within four months to a year. Male house geckos produce clicking sounds to attract a mate, with the females responding with their own squeaks. They also display copulatory biting, with stronger bites resulting in higher fertilization success. Fertilization is internal. The breeding season typically lasts from April to August each year, and eggs are laid from mid-May to August in an average clutch size of two. Female house geckos experience delayed fertilization and can store sperm in a funnel-shaped organ called the infundibulum for up to five months. Because of this, the exact gestation time is unknown, but it is estimated at around 40–60 days. Neither males nor females have been observed providing any parental care, and males may even bite juveniles. Prey The primary prey of Mediterranean house geckos includes crickets, grasshoppers, cockroaches, spiders, beetles, moths, butterflies, ants, isopods, and snails. These geckos are visual hunters, and prey selection depends on whether the prey is alive or dead; Mediterranean house geckos are more likely to choose living prey over dead.
Biology and health sciences
Lizards and other Squamata
Animals
3666650
https://en.wikipedia.org/wiki/Duttaphrynus%20melanostictus
Duttaphrynus melanostictus
Duttaphrynus melanostictus is commonly called Asian common toad, Asian black-spined toad, Asian toad, black-spectacled toad, common Sunda toad, and Javanese toad. It is probably a complex of more than one true toad species that is widely distributed in South and Southeast Asia. The species grows to about long. Asian common toads breed during the monsoon, and their tadpoles are black. Young toads may be seen in large numbers after monsoon rains finish. Characteristics The top of the head has several bony ridges: along the edge of the snout (canthal ridge), in front of the eye (preorbital), above the eye (supraorbital), behind the eye (postorbital), and a short one between the eye and ear (orbitotympanic). The snout is short and blunt, and the space between the eyes is broader than the upper eyelid width. The ear drum or tympanum is very distinct and is at least two-thirds the diameter of the eye. The first finger is often longer than the second, and the toes are at least half webbed. A warty tubercle is found just before the junction of the thigh and shank (subarticular tubercle), and two moderate ones are on the shank (metatarsus). No skin fold occurs along the tarsus. The "knee" (tarsometatarsal articulation) reaches the tympanum or the eye when the hind leg is held parallel along the side of the body. The dorsal side is covered with spiny warts. The parotoids are prominent, kidney-shaped or elliptical and elongated, and secrete milky white bufotoxin. The dorsal side is yellowish or brownish, and the spines and ridges are black. The underside is unmarked or spotted. Males have a subgular vocal sac and black pads on the inner fingers that help in holding the female during copulation. Ecology and behaviour Asian common toads breed in still and slow-flowing rivers and temporary and permanent ponds and pools. Adults are terrestrial and may be found under ground cover such as rocks, leaf litter, and logs, and are also associated with human habitations.
The larvae are found in still and slow-moving waterbodies. They are often seen at night under street lamps, especially when winged termites swarm. They have been noted to feed on a wide range of invertebrates, including scorpions. Tadpoles grown in sibling groups metamorphosed faster than those kept in mixed groups, and tadpoles have been shown to be able to recognize kin. The 96-hour LC50 of commercial-grade malathion for the tadpoles is 7.5 mg/L, and sublethal levels of exposure can impair swimming. Distribution and habitat Asian common toads occur widely from northern Pakistan through Nepal, Bangladesh, India including the Andaman and Nicobar Islands, Sri Lanka, Myanmar, Thailand, Laos, Vietnam, Cambodia, southern China, Taiwan, Hong Kong and Macau to Malaysia, Singapore, and the Indonesian islands of Sumatra, Java, Borneo, and the Anambas and Natuna Islands. They have been recorded from sea level up to altitude, and live mostly in disturbed lowland habitats, from upper beaches and riverbanks to human-dominated agricultural and urban areas. They are uncommon in closed forests. Introductions Madagascar D. melanostictus arrived in Madagascar in 2011 at the port of Toamasina, and by 2014 was found in a zone around that city. Since its discovery on the east coast, serious concern has developed that if the Asian toad is not eradicated from Madagascar and stronger quarantines are not developed to prevent reinvasion, it could have impacts comparable to those of cane toads in Australia. Because Madagascar's native predators, like Australia's, have been isolated from bufonids since the Jurassic, they are thought to lack the resistance to toad toxins found in the natural varanid and snake predators of D. melanostictus in its native range. One study analyzed the sequences of the Na+/K+-ATPase gene (sodium-potassium pump) in dozens of Malagasy species that may be feeding on D. melanostictus.
All but one of the 77 species tested failed to show evidence of resistance to the toad toxin, which strongly suggests that these alien toads can significantly impact native Malagasy animal life and contribute to the worsening biodiversity crisis in the region. Nevertheless, evidence from one Australian species, the bluetongue lizard (Tiliqua scincoides), raises the possibility that some Malagasy animals do possess resistance to bufotenin, because almost identical cardiac glycosides are produced by native plants of the genus Bryophyllum. Wallacea and West Papua D. melanostictus was introduced to the Indonesian island of Bali in 1958 and Sulawesi in 1974, then subsequently to Ambon, Lombok, Sumba, Sumbawa, Timor, and Indonesian New Guinea at Manokwari on the Vogelkop Peninsula. The species is now common at Sentani in far eastern Papua Province. The absence of resistance to toad toxins in native snake and varanid predators means that these species could suffer severe declines from the inadvertent spread of the Asian common toad via human traffic, and the currently near-threatened New Guinea quoll is almost certain to be further affected in the lower-altitude portion of its range. An unwanted species in Australia The Asian common toad has been detected in Australia at least four times since 2000. It has been described as one of Australia's "10 most unwanted" species, and "potentially more damaging than the cane toad". It may cause serious ecological problems due to "competition with native species, its potential to spread exotic parasites and pathogens and its toxicity". Like cane toads, Asian common toads secrete toxins from glands in their backs to deter predators. These toxins would almost certainly severely affect native predators such as snakes, goannas, and quolls. The recent rate of incursions suggests a high likelihood of establishment in Australia.
Consequently, experts are calling for the Australian government to develop a "high-priority contingency plan" that includes stronger environmental quarantine and surveillance strategies.
Biology and health sciences
Frogs and toads
Animals
2704726
https://en.wikipedia.org/wiki/Piri%20piri
Piri piri
Piri piri ( ), often hyphenated or written as one word, and with variant spellings peri-peri () or pili pili, is a cultivar of Capsicum frutescens developed from the malagueta pepper. It was originally produced by Portuguese explorers in Portugal's former Southern African territories and then spread to other Portuguese domains. Etymology Pilipili in Swahili means "pepper". Other romanizations include pili pili in the Democratic Republic of the Congo and peri peri in Malawi, deriving from various pronunciations of the word in different parts of Bantu-speaking Africa. The peri peri spelling is common in English due to its use in South Africa; however, in Portugal and Portuguese-speaking countries such as Mozambique, where the modern usage of the pepper originates, the spelling piri-piri is used. The Oxford Dictionary of English records piri-piri as a foreign word meaning "a very hot sauce made with red ", and gives its ultimate origin as the word for "pepper" (presumably in the native-African sense) in the Ronga language of southern Mozambique, where Portuguese explorers developed the homonymous cultivar from the malagueta pepper. Plant characteristics Plants are usually very bushy and grow in height to with leaves long and wide. The fruits are generally tapered to a blunt point and measure up to long. The immature pod colour is green; the mature colour is bright red or purple. Some bird's-eye chili varieties measure up to 175,000 Scoville heat units. Cultivation Like all chili peppers, peri-peri is descended from plants from the Americas, but it has grown in the wild in Africa for centuries and is now cultivated commercially in Zambia, Uganda, Malawi, Zimbabwe and Rwanda. It grows mainly in Malawi, Ethiopia, Zambia, South Africa, Ghana, Nigeria, Zimbabwe, Mozambique and Portugal. It is cultivated for both commercial food processing and the pharmaceutical industry. Cultivation of peri-peri is labor-intensive.
Piri-piri sauce Piri-piri sauce was produced by mixing the pepper with condiments that the Portuguese traded with their other territories in Asia, including India. The first sauce may have been produced in any part of Portugal's empire; given the lack of reliable sources confirming that it was first mixed in Mozambique, it can only be said that the sauce was originally produced within the Portuguese Empire, either in its territories in Southern Africa or elsewhere. The sauce is made from piri-piri chilis and is used as a seasoning or marinade. Beyond Portugal and the Southern African region (Angola, Namibia, Mozambique and South Africa), where it is very popular, the sauce is particularly well known in the United Kingdom due to the success of the South African restaurant chain Nando's. Recipes vary from region to region, and sometimes within the same region, depending on intended use (for example, cooking versus seasoning at the table), but the key ingredients are chili and garlic, with an oily or acidic base. Other common ingredients are salt, lemon, spirits (namely whisky), citrus peel, onion, pepper, bay leaves, paprika, pimiento, basil, oregano and tarragon.
Biology and health sciences
Botanical fruits used as culinary vegetables
Plants
2705904
https://en.wikipedia.org/wiki/Anthracotherium
Anthracotherium
Anthracotherium (from , 'coal' and 'beast') is an extinct genus of artiodactyls characterized by having 44 teeth, with five semi-crescentic cusps on the crowns of the upper molars. The genus ranged from the middle Eocene until the early Miocene, and was distributed throughout Eurasia. Material subjectively assigned to Anthracotherium from Pakistan suggests the last species died out soon after the start of the Miocene. Description The genus typifies the family Anthracotheriidae, if only because it is the most thoroughly studied. In many respects, especially the anatomy of the lower jaw, Anthracotherium, like the other members of the family, is allied to the hippopotamus, of which it is probably an ancestral form. The anthracotheres, together with the hippos, are grouped with the cetaceans in the clade Whippomorpha. Anthracotheriinae are characterized by three unambiguous features: the crown height development of the lower canine, and the presence of accessory cristulids from the hypoconulid and posthypocristulid, and labially, on the lower and upper molars. Taking these features into consideration, these anthracotheres have been found to fall into three dietary categories of extant herbivores: leaf browsers, fruit browsers, and grazers. Etymology The genus name stems from the fact that the holotype and other early specimens were originally obtained from the Paleogene (previously known as "Tertiary")-aged lignite beds of Europe. The European Anthracotherium magnum was approximately as large as a pygmy hippo (about 2 m long and weighing up to 250 kg), but there were several smaller species, and the genus also occurs in Egypt, India and North America. Members of the genus Anthracotherium, as well as other members of the family Anthracotheriidae, are known colloquially as anthracotheres.
Biology and health sciences
Other artiodactyla
Animals
2706031
https://en.wikipedia.org/wiki/Bovid%20hybrid
Bovid hybrid
A bovid hybrid is the hybrid offspring of members of two different species of the bovid family. There are 143 extant species of bovid, and the widespread domestication of several species has led to an interest in hybridisation for the purpose of encouraging traits useful to humans, and to preserve declining populations. Bovid hybrids may occur naturally through undirected interbreeding or traditional pastoral practices, or may be the result of modern interventions, sometimes bringing together species from different parts of the world. Bos hybrids The following are examples of hybrids including species or sub-species in the Bos genus. Hybrids between bison and domestic cattle The American bison (Bison bison) and European bison (Bison bonasus) have been hybridized with domestic cattle (Bos taurus). European bison The wisent, or European bison, was originally crossed with cattle in an attempt to reinvigorate the declining wisent population. First-generation hybrid males are sterile, but females may be crossed back to either a wisent or a domestic bull to produce fertile males. Modern wisent herds keep hybrids well isolated from pure wisent. However, since the modern purebred wisent descend from fewer than two dozen individuals, a significant genetic bottleneck has resulted for the purebred wisent. Wisent have also been crossed with domestic cattle to produce the żubroń. These were first bred in Poland in 1847 as hardy, disease-resistant alternatives to domestic cattle. Breeding was discontinued in the 1980s. The few remaining żubroń can be found at Białowieski National Park. Male żubroń are infertile, but the females are fertile. American bison American bison bulls (American "buffalo") have been crossed with domestic cattle to produce beefalo or cattalo. These are variable in type and colour, depending on the breed of cattle used, e.g. Herefords and Charolais (beef cattle), Holsteins (dairy) or Brahmans (humped cattle).
Generally, they are horned with heavy-set forequarters, sloping backs, and lighter hindquarters. Beefalo have been back-crossed to bison and to domestic cattle; some of these resemble pied bison with smooth coats and a maned hump. The aim of hybridisation is to produce high-protein, low-fat and low-cholesterol beef on animals which have "less hump and more rump". Although bison bull-domestic cow crossings are more usual, domestic bull-bison cow crossings have a lower infant mortality rate (cow immune systems can reject hybrid calves). Modern beefalo include fertile bulls, making the beefalo a variety of "improved cattle". Bull and cow cattalos are reported in Wonders of Animal Life, edited by J. A. Hammerton (1930). Hybrids between zebu and other Bos species The zebu (Bos taurus indicus) is the common domestic cow in much of Asia. Zebu have been interbred with other domestic cattle over thousands of years. Some zebu breeds are derived from hybrids between zebu and yak (Bos grunniens), gaur (Bos gaurus), and banteng (Bos javanicus). Zebu breeds have been widely crossed with European cattle. In Brazil, the Canchim breed is 5/8 Charolais and 3/8 Zebu and combines the Charolais' meat quality and yield with the zebu's heat resistance. Charles Darwin also discussed such crosses in The Variation of Animals and Plants Under Domestication. Hybrids between domestic cattle and yaks In India, Nepal, Tibet, and Mongolia, cattle are crossbred with yaks. This gives rise to infertile males, often used as oxen, as well as fertile females, which are bred into cattle breeds and can serve as milk cows. The "Dwarf Lulu" breed of cattle was tested for DNA markers and found to be a mixture of both types of cattle with yak genetics. Hybrids between bison and yaks Yaks can also cross with bison. The hybrid offspring are occasionally kept by farmers in northern Alberta, where the snowy, cold winters necessitate a cold-hardy animal.
American bison have been bred with the domestic Tibetan yak to create the yakalo. Domestic yak bulls mated with bison cows produced fully fertile offspring. Male yak bred with beefalo produced fertile females and sterile males. The yak × bison hybrid is morphologically strongly reminiscent of Bison latifrons. Hybrids between bison Hybrids between species of bison In an attempt to revive the Caucasian wisent (Bison bonasus caucasicus), American bison and European bison were crossbred. Some have argued that these hybrids should be classified as a new subspecies, Bison bonasus montanus. Hybrids between subspecies of American bison A herd of hybrid plains bison (Bison bison bison) × wood bison (Bison bison athabascae) lived wild in the Yukon, Canada. The wood bison is a distinct subspecies that almost became extinct in the 20th century. In an attempt to save the plains bison subspecies, between 1925 and 1928, thousands of plains bison were released into Wood Buffalo Park, a preserve for the wood bison subspecies. They readily interbred and produced a 12,000-strong herd by 1934. Consequently, the wood bison was nearly hybridized into extinction. A small genetically pure herd was recovered from an isolated area in 1959 and is now being kept isolated from introduced plains bison. Recent genetic testing seems to indicate that these wood bison are themselves hybridized with the plains subspecies, though their genetic makeup remains predominantly that of the wood buffalo. Water buffalo-domestic cattle hybrids Water buffalo and domestic cattle are not known to be able to hybridize; in laboratory experiments, the embryos fail around the 8-cell stage. There were suggestions of crossing the beefalo (an American bison-domestic cattle hybrid) to Cape buffalo, although this idea essentially ended when the Cape buffalo was found to have 52 chromosomes (instead of 60 as in cattle and bison), meaning that the hybrid's success would be unlikely.
Hybrids between buffalo Wild water buffalo (Bubalus arnee) and domestic water buffalo (Bubalus bubalis) can interbreed freely, and these may be a single species differentiated only by domestication. Among the African buffalo (Syncerus caffer) subspecies, the Lake Chad buffalo (Syncerus caffer brachyceros) and the African forest buffalo (Syncerus caffer nanus) can interbreed. The main difference between these buffalo is preferred habitat; hybrid zones occur on forest/savannah margins. Genetic testing for hybrids Genetic testing of public herds Mitochondrial DNA testing of the Custer State Park herd has shown that 6% of the animals have bovine DNA traits, and Dr. Derr from Texas A&M University, who led a study into bison genetics, conceded that the "hybrid" animals tested were at least 15–20 generations from the original base stock and contained only 0.003% bovine DNA. This herd was started in 1901 with a relatively small number of animals (30). At the time, all these animals were believed to be pure through analysis of the physical phenotype, but at least one animal with some bovine DNA must have been included in the original herd. Since the herd was formed, no new animals have been introduced, and in this closed genetic pool, bovine DNA influences have not exceeded 6%, despite many generations of animals having passed. It is not yet known why this bovine DNA has not influenced a greater proportion of the herd, nor why a higher percentage of bovine DNA has not survived, but one theory is that animals with bovine influence are not as competitive as full-blood animals and are less likely to become dominant herd bulls. Hybrid bulls would therefore be less likely to reproduce in the wild than pure animals, limiting the spread of bovine DNA within the herd. Not all public herds in the US and Canada have been tested for bovine DNA, but the Elk Island Plains Bison Herd in Canada has been tested as pure.
Other public herds that are believed to be pure include the Yellowstone Park Bison Herd, the Henry Mountains Bison Herd and the Wind Cave Bison Herd. Genetic testing of private herds Since DNA testing for purity has become available, there is a growing movement among bison ranchers to test their herds and cull animals that test positive for bovine DNA. The largest private herd in the world, with over 50,000 animals, is currently undergoing such a program. As similar programs gather momentum among smaller private herds, the level of hybridization among private herds will likely fall to a very low level, as there is no commercial gain to be had from hybridization and both the Canadian and American Bison Associations share the goal of preserving pure bison herds. Obtaining bison with minimal cattle introgression is desirable for the conservation of bison. However, at present, most private herds have yet to be tested for bovine DNA, and the majority of plains bison in North America can be found in these private herds. As these herds have been built from the same original base stock as the public herds, it is possible that up to 6% of some herds may contain bovine DNA. Consequently, out of approximately 500,000 bison in North America, it is possible that up to 30,000 have some bovine DNA. Because many base herds were started pure with feed stock from pure public herds such as Elk Island Park, this may be a high estimate, and the true number of bison containing bovine DNA is likely to be significantly lower.
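The figures quoted above can be sanity-checked with simple arithmetic. The sketch below is a hedged illustration, not part of the cited studies: it assumes each backcross to pure bison halves the bovine ancestry fraction, and that the 6% marker rate scales linearly to the continental population.

```python
# Back-of-the-envelope checks for the bison introgression figures.

def bovine_fraction_after_backcrosses(generations: int) -> float:
    """Expected bovine ancestry fraction after repeated backcrossing to
    pure bison, starting from a 50/50 first-generation hybrid (assumed
    halving model, ignoring selection and drift)."""
    return 0.5 ** generations

# About 15 generations of backcrossing leaves roughly 0.003% bovine DNA,
# consistent with the figure reported for the Custer State Park animals.
pct_after_15 = bovine_fraction_after_backcrosses(15) * 100
print(f"{pct_after_15:.4f}%")  # ~0.0031%

# Scaling the 6% marker rate to ~500,000 North American bison gives the
# 30,000 upper estimate quoted in the text.
total_bison = 500_000
upper_estimate = int(total_bison * 0.06)
print(upper_estimate)  # 30000
```

The first check also illustrates why the per-animal bovine fraction (0.003%) is so much smaller than the proportion of animals carrying any bovine marker (6%): mitochondrial markers persist intact down the maternal line even as nuclear ancestry is diluted by half each generation.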
Biology and health sciences
Hybrids
Animals
8497971
https://en.wikipedia.org/wiki/Military%20helicopter
Military helicopter
A military helicopter is a helicopter that is either specifically designed for, or converted for, use by a military. A military helicopter's mission is a function of its design or conversion. The most common use of military helicopters is airlift, but transport helicopters can be modified or converted to perform other missions such as combat search and rescue (CSAR), medical evacuation (MEDEVAC), serving as an airborne command post, or even armed with weapons for close air support. Specialized military helicopters are intended to conduct specific missions; examples include attack helicopters, observation helicopters and anti-submarine warfare (ASW) helicopters. Types and roles Military helicopters play an integral part in the sea, land and air operations of modern militaries. Generally, manufacturers will develop airframes in different weight/size classes which can be adapted to different roles through the installation of mission-specific equipment. To minimise development costs, the basic airframes can be stretched and shortened, be updated with new engines and electronics, and have the entire mechanical and flight systems mated to new fuselages to create new aircraft. For example, the Bell UH-1 Iroquois (known as the "Huey") has given rise to a number of derivatives through stretching and re-engining, including the Bell AH-1. Modern helicopters have introduced modular systems which allow the same airframe to be configured for different roles; for example, the AgustaWestland AW101 "Merlin" in Royal Navy service can be rapidly configured for ASW or transport missions in hours. To retain flexibility while limiting costs, it is possible to fit an airframe "for but not with" a system; for example, the US Army's Boeing AH-64D Apache variants are all fitted to be able to take the Longbow radar system, but not enough sets have been bought to equip the whole force.
The systems can be fitted to only those airframes that need them, or when finances allow the purchase of enough units. Equipment Most military helicopters are armoured to some extent; however, armour is limited by the installed power and lift capability, and by the limits installed equipment places on useful payload. The most extensive armour is placed around the pilots, engines, transmission, and fuel tanks. Fuel lines, control cables and power to the tail rotor may also be shrouded by Kevlar armour. The most heavily armoured helicopters are attack, assault and special forces helicopters. In transport helicopters, the crew compartment may or may not be fully armoured; a compromise is to give the passengers Kevlar-lined seats but to leave the compartment for the most part unarmoured. Survivability is enhanced by redundancy and the placement of components to protect each other. For example, the Black Hawk family of helicopters uses two engines and can continue to fly on only one (under certain conditions); the engines are separated by the transmission and placed so that if the helicopter is attacked from one flank, the engine on that flank acts to protect the transmission and the engine on the other side from damage. Aviation electronics, or avionics, such as communication radios and navigation aids, are common on most military helicopters. Specialized avionics, such as electronic countermeasures and identification friend or foe systems, are military-specific systems that can also be installed on military helicopters. Other payload or mission systems are installed either permanently or temporarily, based on specific mission requirements: optical and IR cameras for scout helicopters, dunking sonar and search radar for anti-submarine helicopters, extra radio transceivers and computers for helicopters used as airborne command posts.
Armour, fire suppression, and dynamic and electronics systems enhancements are invisible to casual inspection; as a cost-cutting measure, some nations and services have been tempted to use what are essentially commercial helicopters for military purposes. For example, it has been reported that China is carrying out a rapid enlargement of its assault helicopter regiments with the civilian version of the Russian Mil Mi-17. These helicopters, without armour and electronic countermeasures, will function well enough for training exercises and photo opportunities, but would be suicidal to deploy in the assault role in actual combat. The intention of China appears to be to retrofit these helicopters with locally produced electronics and armour when possible, freeing available funds to allow rapid creation of enough regiments to equip each of its Group Armies, allowing a widespread buildup of experience in helicopter operations. Attack Attack helicopters are helicopters used in the anti-tank and close air support roles. The first of the modern attack helicopters was the Vietnam-era Bell AH-1 Cobra, which pioneered the now-classic format of pilot and weapons officer seated in tandem in a narrow fuselage, chin-mounted guns, and rockets and missiles mounted on stub wings. To enable them to find and identify their targets, some modern attack helicopters are equipped with very capable sensors, such as millimeter-wave radar systems. Transport Transport helicopters are used for transporting personnel (troops) and cargo in support of military operations. In larger militaries, these helicopters are often purpose-built for military operations, but commercially available aircraft are also used. The benefit of using helicopters for these operations is that personnel and cargo can be moved to and from locations without requiring a runway for takeoffs and landings.
Cargo is carried either internally or externally, as a slung load suspended from an attachment point underneath the aircraft. Personnel are primarily loaded and unloaded while the helicopter is on the ground. However, when the terrain prevents even helicopters from landing, personnel may also be picked up and dropped off using specialized devices, such as rescue hoists or special rope lines, while the aircraft hovers overhead. Air assault is a military strategy that relies heavily on the use of transport helicopters. An air assault involves a customized assault force that is assembled on the pick-up zone and staged for sequential transport to a landing zone (LZ). The idea is to use the helicopters to transport and land a large number of troops and equipment in a relatively short amount of time, in order to assault and overwhelm an objective near the LZ. The advantage of air assault over an airborne assault is the ability of the helicopters to continually resupply the force during the operation, as well as to transport the personnel and equipment back to their previous location, or to a follow-on location if the mission dictates. Observation The first reconnaissance and observation aircraft were balloons, followed by light airplanes, such as the Taylorcraft L-2 and Fieseler Fi 156. As the first military helicopters became available, their ability both to maneuver and to remain in one location made them ideal for reconnaissance. Initially, observation helicopters were limited to visual observation by the aircrew, and most featured rounded, well-glazed cockpits for maximum visibility. Over time, the human eye was supplemented by optical sensor systems. Today, these include low-light-level television and forward-looking infrared cameras. Often, these are mounted in a stabilised mount along with multi-function lasers capable of acting as laser rangefinders and targeting designators for weapons systems.
By the nature of the mission, the observation helicopter's primary weapons are its sensor suite and communications equipment. Early observation helicopters were effective at calling for artillery fire and airstrikes. With modern sensor suites, they are also able to provide terminal guidance to anti-tank guided weapons, laser-guided bombs and other missiles and munitions fired by other armed aircraft. Observation helicopters may also be armed with combinations of gun and rocket pods and sometimes anti-tank guided missiles or air-to-air missiles, but in smaller quantities than larger attack helicopters. Primarily, these weapons were intended for the counter-reconnaissance fight, to eliminate an enemy's reconnaissance assets, but they can also be used to provide limited direct fire support or close air support. Maritime Among the first practical uses of helicopters, when the Sikorsky R-4 and R-5 became available to British and American forces, was deployment from navy cruisers and battleships, at first supplementing and later replacing catapult-launched observation aircraft. Another niche within the capability of the early helicopters was as plane guard, tasked with the recovery of pilots who had ditched near an aircraft carrier. As helicopter technology matured with increased payload and endurance, anti-submarine warfare (ASW) was added to the helicopter's repertoire. Initially, helicopters operated as weapons delivery systems, attacking with air-launched torpedoes and depth charges based on information provided by the parent vessel and other warships. In the 1960s, the development of the turboshaft engine and transistor technology changed the face of maritime helicopter aviation. The turboshaft engine allowed smaller helicopters, such as the Westland Wasp, to operate from smaller vessels than their reciprocating-engine predecessors.
The introduction of transistors allowed helicopters, such as the Sikorsky SH-3 Sea King, to be equipped with integral dunking sonar, radar and magnetic anomaly detection equipment. The result was an aircraft able to respond more quickly to submarine threats to the fleet without waiting for directions from fleet vessels. Today, maritime helicopters such as the Sikorsky SH-60 Seahawk and the Westland Lynx are designed to be operated from frigates, destroyers and similar-sized vessels. The desire to carry and operate two helicopters from frigate- and destroyer-sized vessels has affected the maximum size of the helicopters and the minimum size of the ships. Increasing miniaturisation of electronics, better engines and modern weapons now allow even the modern, destroyer-based, multi-role helicopter to operate nearly autonomously in the ASW, anti-shipping, transport, SAR and reconnaissance roles. Medium- and large-sized helicopters are operated from carriers and land bases. In the British, Spanish, and Italian navies, the larger helicopters form the main anti-submarine strength of carrier air wings. When operating from shore bases, the helicopters are used as anti-submarine pickets to protect against hostile submarines loitering outside military ports and harbours, their endurance and payload providing advantages over smaller helicopters. Soviet maritime helicopters, operating from the fleet's cruisers, had the additional role of guiding the cruisers' long-range anti-shipping missiles. Maritime helicopters are navalised aircraft built for operation from ships: this includes enhanced protection against salt water corrosion, protection against ingestion of water, and provision for forced ditching at sea. Multi-mission and rescue As helicopters came into military service, they were quickly pressed into use for search and rescue and medical evacuation.
During World War II, Flettner Fl 282s were used in Germany for reconnaissance, and Sikorsky R-4s were used by the United States to rescue downed aircrews and injured personnel in remote areas of the China Burma India Theater, from April 1944 until the war's end. The use of helicopters for rescue during combat increased during the Korean War and the Algerian War. In the Vietnam War the USAF acquired Sikorsky S-61R (Jolly Green Giant) and Sikorsky CH-53 Sea Stallion (Super Jolly Green Giant) helicopters for the combat search and rescue (CSAR) mission. Training Some services use a version of their operational helicopters, usually in the light class, for pilot training. For example, the British have used the Aérospatiale Gazelle both in operations and as a trainer. Some services also have an ab initio phase in training that uses very basic helicopters. The Mexican Navy has acquired a number of the commercially available Robinson R22 and R44 helicopters for this purpose. Utility A utility helicopter is a multi-purpose helicopter. A utility military helicopter can fill roles such as ground attack, air assault, military logistics, medical evacuation, command and control, and troop transport. Tactics and operations While not essential to combat operations, helicopters give a substantial advantage to their operators by acting as a force multiplier. To maximise their impact, helicopters are utilised in a combined arms approach. High intensity warfare High-intensity warfare is characterized by large arrays of conventional armed forces, including mass formations of tanks, with significant air defenses. Helicopter armament and tactics were changed to account for a less-permissive flight environment. Anti-tank missiles, such as the SS.11 and the Aérospatiale SS.12/AS.12, were developed and mounted on French military helicopters. In turn, the United States adapted its BGM-71 TOW for firing from helicopters and eventually developed the AGM-114 Hellfire.
Meanwhile, the Soviet Union adapted the 3M11 Falanga missile for firing from the Mil Mi-24. In the air, attack helicopters armed with anti-tank missiles and one or more unarmed, or lightly armed, scout helicopters operate in concert. The scout helicopter, flying at low level in a nap-of-the-earth approach, attempts both to locate the enemy armoured columns and to map out approaches and ambush positions for the attack helicopters. Late-model scout helicopters include laser designators to guide missiles fired from the attack helicopters. After finding a target, the scout helicopter can designate it and guide the attack helicopters' missiles onto it. The attack helicopters have only to rise from cover briefly to fire their missiles before returning to a concealed location. Later attack helicopters, such as the Mil Mi-28N, the Kamov Ka-52, and the AH-64D Longbow, incorporate sensors and command and control systems that reduce the requirement for scout helicopters. To enhance the combat endurance of these missile-armed helicopters, transport helicopters were used to carry technicians, reloads and fuel to forward locations. Establishing these forward arming and refueling points (FARPs) at pre-arranged locations and times allowed armed or attack helicopters to re-arm and refuel, often with their engines running and the rotors still turning, and to quickly return to the front lines. Low intensity warfare In counter-insurgency (COIN) warfare, the government force establishes its presence in permanent or temporary military bases from which to mount patrols and convoys. The government forces seek to deter the insurgent forces from operating, and to capture or kill those that do. The operation of forces from fixed bases linked by a fixed network of roads becomes a weakness. Emplaced insurgents and local sympathisers may observe such facilities covertly and gather intelligence on the schedules and routes of patrols and convoys.
With this intelligence the insurgents can time their operations to avoid the COIN forces or plan ambushes to engage them, depending on their own tactical situation. Helicopters return a measure of surprise and tactical flexibility to the COIN commander. Patrols need not start and end in the same place (the main entrance of the local compound), nor do supply convoys need to follow the same roads and highways. During the Rhodesian Bush War, the Rhodesian military developed and refined "Fireforce" tactics using small flights of light helicopters, equipped either as gunships to directly attack insurgents with aerial gunfire, as airborne command/observation posts, or as troop transports. Once contact had been established with enemy guerrillas, paratroopers dropped from a Dakota would act as "beaters" to drive the guerrillas into stop groups landed by the helicopters. During the Troubles, the Provisional Irish Republican Army (IRA) became adept at avoiding conventional, fixed roadblocks and patrols. To prevent predictable patterns, patrols, known as Eagle Patrols, were deployed by helicopter and were then able to disrupt the IRA's ability to move personnel and arms. In the aftermath of the American invasion of Iraq, helicopters have been used as aerial supply trucks and troop transports to prevent exposure to ambushes set by the Iraqi insurgency. Due to the cost and complexity of training and support requirements, insurgent forces rarely have access to helicopters. Manufacturers The major Western European helicopter manufacturers are Leonardo S.p.A. (formerly AgustaWestland) and Eurocopter Group. In North America, the three primary manufacturers are Boeing (Boeing Vertol and McDonnell Douglas), Bell Helicopter and Sikorsky Aircraft. In Japan, the three main manufacturers of helicopters are the aviation arms of the Japanese conglomerates Mitsubishi, Kawasaki and Fuji Heavy Industries.
These companies initially followed a business model based on forming strategic partnerships with foreign, usually American, companies, undertaking licensed production of those companies' products whilst building up their own ability to design and manufacture helicopters through a process of workshare and technology transfer. In India, Hindustan Aeronautics Limited is the main helicopter manufacturer for the Indian Armed Forces. In the Soviet planned economic system, the Mil and Kamov OKBs were responsible only for the design of helicopters. A re-organisation of the helicopter industry in Russia created Russian Helicopters, a holding company bringing together Mil, Kamov, and other helicopter manufacturing and maintenance plants.
Technology
Military aviation
https://en.wikipedia.org/wiki/Cartesian%20product
Cartesian product
In mathematics, specifically set theory, the Cartesian product of two sets A and B, denoted A × B, is the set of all ordered pairs (a, b) where a is in A and b is in B. In terms of set-builder notation, that is A × B = {(a, b) | a ∈ A and b ∈ B}. A table can be created by taking the Cartesian product of a set of rows and a set of columns. If the Cartesian product rows × columns is taken, the cells of the table contain ordered pairs of the form (row value, column value). One can similarly define the Cartesian product of n sets, also known as an n-fold Cartesian product, which can be represented by an n-dimensional array, where each element is an n-tuple. An ordered pair is a 2-tuple or couple. More generally still, one can define the Cartesian product of an indexed family of sets. The Cartesian product is named after René Descartes, whose formulation of analytic geometry gave rise to the concept, which is further generalized in terms of direct product. Set-theoretic definition A rigorous definition of the Cartesian product requires a domain to be specified in the set-builder notation. In this case the domain would have to contain the Cartesian product itself. For defining the Cartesian product of the sets A and B, with the typical Kuratowski's definition of a pair (a, b) as {{a}, {a, b}}, an appropriate domain is the set P(P(A ∪ B)), where P denotes the power set. Then the Cartesian product of the sets A and B would be defined as A × B = {x ∈ P(P(A ∪ B)) | ∃a ∈ A ∃b ∈ B : x = (a, b)}. Examples A deck of cards An illustrative example is the standard 52-card deck. The standard playing card ranks {A, K, Q, J, 10, 9, 8, 7, 6, 5, 4, 3, 2} form a 13-element set. The card suits {♠, ♥, ♦, ♣} form a four-element set. The Cartesian product of these sets returns a 52-element set consisting of 52 ordered pairs, which correspond to all 52 possible playing cards. Ranks × Suits returns a set of the form {(A, ♠), (A, ♥), (A, ♦), (A, ♣), (K, ♠), ..., (3, ♣), (2, ♠), (2, ♥), (2, ♦), (2, ♣)}. Suits × Ranks returns a set of the form {(♠, A), (♠, K), (♠, Q), (♠, J), (♠, 10), ..., (♣, 6), (♣, 5), (♣, 4), (♣, 3), (♣, 2)}.
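As a quick check of the counts above, here is a minimal Python sketch using `itertools.product`; the list names and the word names standing in for the suit symbols are illustrative choices:

```python
from itertools import product

ranks = ["A", "K", "Q", "J", "10", "9", "8", "7", "6", "5", "4", "3", "2"]
suits = ["spade", "heart", "diamond", "club"]

# Cartesian product in each order: every ordered pair
ranks_x_suits = list(product(ranks, suits))
suits_x_ranks = list(product(suits, ranks))

print(len(ranks_x_suits))   # 52 ordered pairs, one per playing card
print(ranks_x_suits[0])     # ('A', 'spade')
```

Both orders yield 52 pairs, though the pairs themselves are reversed.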
These two sets are distinct, even disjoint, but there is a natural bijection between them, under which (3, ♣) corresponds to (♣, 3) and so on. A two-dimensional coordinate system The main historical example is the Cartesian plane in analytic geometry. In order to represent geometrical shapes in a numerical way, and extract numerical information from shapes' numerical representations, René Descartes assigned to each point in the plane a pair of real numbers, called its coordinates. Usually, such a pair's first and second components are called its x and y coordinates, respectively (see picture). The set of all such pairs (i.e., the Cartesian product ℝ × ℝ, with ℝ denoting the real numbers) is thus assigned to the set of all points in the plane. Most common implementation (set theory) A formal definition of the Cartesian product from set-theoretical principles follows from a definition of ordered pair. The most common definition of ordered pairs, Kuratowski's definition, is (x, y) = {{x}, {x, y}}. Under this definition, (x, y) is an element of P(P(X ∪ Y)), and X × Y is a subset of that set, where P represents the power set operator. Therefore, the existence of the Cartesian product of any two sets in ZFC follows from the axioms of pairing, union, power set, and specification. Since functions are usually defined as a special case of relations, and relations are usually defined as subsets of the Cartesian product, the definition of the two-set Cartesian product is necessarily prior to most other definitions. Non-commutativity and non-associativity Let A, B, C, and D be sets. The Cartesian product A × B is not commutative, A × B ≠ B × A, because the ordered pairs are reversed unless at least one of the following conditions is satisfied: A is equal to B, or A or B is the empty set. For example, with A = {1, 2} and B = {3, 4}: A × B = {(1, 3), (1, 4), (2, 3), (2, 4)}, while B × A = {(3, 1), (3, 2), (4, 1), (4, 2)}. Strictly speaking, the Cartesian product is not associative (unless one of the involved sets is empty): (A × B) × C ≠ A × (B × C). If for example A = {1}, then (A × A) × A = {((1, 1), 1)} ≠ {(1, (1, 1))} = A × (A × A). Intersections, unions, and subsets The Cartesian product satisfies the following property with respect to intersections (see middle picture): (A ∩ B) × (C ∩ D) = (A × C) ∩ (B × D).
In most cases, the above statement is not true if we replace intersection with union (see rightmost picture). In fact, we have that: (A ∪ B) × (C ∪ D) ⊇ (A × C) ∪ (B × D). For the set difference, we also have the following identity: (A × C) \ (B × D) = [A × (C \ D)] ∪ [(A \ B) × C]. Here are some rules demonstrating distributivity with other operators (see leftmost picture): A × (B ∩ C) = (A × B) ∩ (A × C), A × (B ∪ C) = (A × B) ∪ (A × C), A × (B \ C) = (A × B) \ (A × C), and (A × B)ᶜ = (Aᶜ × Bᶜ) ∪ (Aᶜ × B) ∪ (A × Bᶜ), where Aᶜ denotes the absolute complement of A. Other properties related with subsets are: if A ⊆ B, then A × C ⊆ B × C; if both A and B are nonempty, then A × B ⊆ C × D if and only if A ⊆ C and B ⊆ D. Cardinality The cardinality of a set is the number of elements of the set. For example, defining two sets: A = {a, b} and B = {5, 6}. Both set A and set B consist of two elements each. Their Cartesian product, written as A × B, results in a new set which has the following elements: A × B = {(a, 5), (a, 6), (b, 5), (b, 6)}, where each element of A is paired with each element of B, and where each pair makes up one element of the output set. The number of values in each element of the resulting set is equal to the number of sets whose Cartesian product is being taken; 2 in this case. The cardinality of the output set is equal to the product of the cardinalities of all the input sets. That is, |A × B| = |A| · |B|. In this case, |A × B| = 4. Similarly, |A × B × C| = |A| · |B| · |C|, and so on. The set A × B is infinite if either A or B is infinite, and the other set is not the empty set. Cartesian products of several sets n-ary Cartesian product The Cartesian product can be generalized to the n-ary Cartesian product over n sets X1, ..., Xn as the set X1 × ... × Xn = {(x1, ..., xn) | xi ∈ Xi for every i ∈ {1, ..., n}} of n-tuples. If tuples are defined as nested ordered pairs, it can be identified with (X1 × ... × Xn−1) × Xn. If a tuple is defined as a function on {1, 2, ..., n} that takes its value at i to be the i-th element of the tuple, then the Cartesian product is the set of functions {x : {1, ..., n} → X1 ∪ ... ∪ Xn | x(i) ∈ Xi for every i ∈ {1, ..., n}}. n-ary Cartesian power The Cartesian square of a set X is the Cartesian product X² = X × X. An example is the 2-dimensional plane ℝ² = ℝ × ℝ, where ℝ is the set of real numbers: ℝ² is the set of all points (x, y) where x and y are real numbers (see the Cartesian coordinate system). The n-ary Cartesian power of a set X, denoted Xⁿ, can be defined as Xⁿ = X × X × ... × X (n factors) = {(x1, ..., xn) | xi ∈ X for every i ∈ {1, ..., n}}. An example of this is ℝ³ = ℝ × ℝ × ℝ, with ℝ again the set of real numbers, and more generally ℝⁿ.
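The cardinality rule |A × B| = |A| · |B| and the n-ary Cartesian power can both be checked with a short Python sketch; `itertools.product` implements exactly this construction, and the variable names are illustrative:

```python
from itertools import product

A = {"a", "b"}
B = {5, 6}

AxB = set(product(A, B))
print(sorted(AxB))                    # [('a', 5), ('a', 6), ('b', 5), ('b', 6)]
print(len(AxB) == len(A) * len(B))    # True: |A × B| = |A| · |B| = 4

# n-ary Cartesian power X^n via the repeat= keyword
X, n = {0, 1}, 3
power = set(product(X, repeat=n))
print(len(power) == len(X) ** n)      # True: |X^n| = |X|^n = 8
```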
The n-ary Cartesian power of a set X is isomorphic to the space of functions from an n-element set to X. As a special case, the 0-ary Cartesian power of X may be taken to be a singleton set, corresponding to the empty function with codomain X. Infinite Cartesian products It is possible to define the Cartesian product of an arbitrary (possibly infinite) indexed family of sets. If I is any index set, and {Xi}i∈I is a family of sets indexed by I, then the Cartesian product of the sets in {Xi}i∈I is defined to be ∏i∈I Xi = {f : I → ⋃i∈I Xi | f(i) ∈ Xi for every i ∈ I}, that is, the set of all functions defined on the index set I such that the value of the function at a particular index i is an element of Xi. Even if each of the Xi is nonempty, the Cartesian product may be empty if the axiom of choice, which is equivalent to the statement that every such product is nonempty, is not assumed. ∏i∈I Xi may also be denoted ⨉i∈I Xi. For each j in I, the function πj : ∏i∈I Xi → Xj defined by πj(f) = f(j) is called the j-th projection map. Cartesian power is a Cartesian product where all the factors Xi are the same set X. In this case, ∏i∈I Xi = ∏i∈I X is the set of all functions from I to X, and is frequently denoted X^I. This case is important in the study of cardinal exponentiation. An important special case is when the index set is ℕ, the natural numbers: this Cartesian product is the set of all infinite sequences with the i-th term in its corresponding set Xi. For example, each element of ∏n∈ℕ ℝ = ℝ × ℝ × ... can be visualized as a vector with countably infinite real number components. This set is frequently denoted ℝ^ω, or ℝ^ℕ. Other forms Abbreviated form If several sets are being multiplied together (e.g., X1, X2, X3, ...), then some authors choose to abbreviate the Cartesian product as simply ×Xi. Cartesian product of functions If f is a function from X to A and g is a function from Y to B, then their Cartesian product f × g is a function from X × Y to A × B with (f × g)(x, y) = (f(x), g(y)). This can be extended to tuples and infinite collections of functions. This is different from the standard Cartesian product of functions considered as sets. Cylinder Let A be a set and B ⊆ A.
Then the cylinder of B with respect to A is the Cartesian product B × A of B and A. Normally, A is considered to be the universe of the context and is left away. For example, if B is a subset of the natural numbers ℕ, then the cylinder of B is B × ℕ. Definitions outside set theory Category theory Although the Cartesian product is traditionally applied to sets, category theory provides a more general interpretation of the product of mathematical structures. This is distinct from, although related to, the notion of a Cartesian square in category theory, which is a generalization of the fiber product. Exponentiation is the right adjoint of the Cartesian product; thus any category with a Cartesian product (and a final object) is a Cartesian closed category. Graph theory In graph theory, the Cartesian product of two graphs G and H is the graph denoted by G □ H, whose vertex set is the (ordinary) Cartesian product V(G) × V(H) and such that two vertices (u, v) and (u′, v′) are adjacent in G □ H, if and only if u = u′ and v is adjacent with v′ in H, or v = v′ and u is adjacent with u′ in G. The Cartesian product of graphs is not a product in the sense of category theory. Instead, the categorical product is known as the tensor product of graphs.
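The graph-theoretic adjacency rule above can be sketched directly in Python; the graph representation (a pair of vertex set and edge set of frozensets) and the function name are illustrative choices, not a standard API:

```python
from itertools import product

def graph_cartesian_product(G, H):
    """Cartesian product of simple graphs, each given as (vertices, edges).

    Vertices of the product are ordered pairs; (u, v) and (u2, v2) are
    adjacent iff u == u2 and {v, v2} is an edge of H, or v == v2 and
    {u, u2} is an edge of G.
    """
    (VG, EG), (VH, EH) = G, H
    V = set(product(VG, VH))
    E = {frozenset((p, q))
         for p, q in product(V, V)
         if (p[0] == q[0] and frozenset((p[1], q[1])) in EH)
         or (p[1] == q[1] and frozenset((p[0], q[0])) in EG)}
    return V, E

# K2 (a single edge) producted with itself gives the 4-cycle
K2 = ({0, 1}, {frozenset((0, 1))})
V, E = graph_cartesian_product(K2, K2)
print(len(V), len(E))   # 4 vertices, 4 edges
```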
Mathematics
Set theory
https://en.wikipedia.org/wiki/Conditional%20probability
Conditional probability
In probability theory, conditional probability is a measure of the probability of an event occurring, given that another event (by assumption, presumption, assertion or evidence) is already known to have occurred. This particular method relies on event A occurring with some sort of relationship with another event B. In this situation, the event A can be analyzed by a conditional probability with respect to B. If the event of interest is A and the event B is known or assumed to have occurred, "the conditional probability of A given B", or "the probability of A under the condition B", is usually written as P(A|B) or occasionally P_B(A). This can also be understood as the fraction of probability B that intersects with A, or the ratio of the probabilities of both events happening to the "given" one happening: P(A|B) = P(A ∩ B)/P(B). For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person is sick, then they are much more likely to be coughing. For example, the conditional probability that someone unwell (sick) is coughing might be 75%, in which case we would have that P(Cough) = 5% and P(Cough|Sick) = 75%. Although there is a relationship between A and B in this example, such a relationship or dependence between A and B is not necessary, nor do they have to occur simultaneously. P(A|B) may or may not be equal to P(A), i.e., the unconditional probability or absolute probability of A. If P(A|B) = P(A), then events A and B are said to be independent: in such a case, knowledge about either event does not alter the likelihood of each other. P(A|B) (the conditional probability of A given B) typically differs from P(B|A). For example, if a person has dengue fever, the person might have a 90% chance of being tested as positive for the disease. In this case, what is being measured is that if the event of having dengue has occurred, the probability of testing positive given that it occurred is 90%, simply writing P(Positive|Dengue) = 90%.
Alternatively, if a person is tested as positive for dengue fever, they may have only a 15% chance of actually having this rare disease due to high false positive rates. In this case, the probability of the event of having dengue given that the event of testing positive has occurred is 15%, or P(Dengue|Positive) = 15%. It should be apparent now that falsely equating the two probabilities can lead to various errors of reasoning, which is commonly seen through base rate fallacies. While conditional probabilities can provide extremely useful information, limited information is often supplied or at hand. Therefore, it can be useful to reverse or convert a conditional probability using Bayes' theorem: P(A|B) = P(B|A)P(A)/P(B). Another option is to display conditional probabilities in a conditional probability table to illuminate the relationship between events. Definition Conditioning on an event Kolmogorov definition Given two events A and B from the sigma-field of a probability space, with the unconditional probability of B being greater than zero (i.e., P(B) > 0), the conditional probability of A given B (P(A|B)) is the probability of A occurring if B has or is assumed to have happened. Here, the sample space is restricted or reduced to the set of outcomes in which the conditioning event B has occurred. The conditional probability can be found by the quotient of the probability of the joint intersection of events A and B, that is, P(A ∩ B), the probability at which A and B occur together, and the probability of B: P(A|B) = P(A ∩ B)/P(B). For a sample space consisting of equal likelihood outcomes, the probability of the event A is understood as the fraction of the number of outcomes in A to the number of all outcomes in the sample space. Then, this equation is understood as the fraction of the set A ∩ B to the set B. Note that the above equation is a definition, not just a theoretical result. We denote the quantity P(A ∩ B)/P(B) as P(A|B) and call it the "conditional probability of A given B."
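For a finite sample space of equally likely outcomes, the definition reduces to counting; a minimal Python sketch (the helper name `conditional` and the die events are illustrative):

```python
from fractions import Fraction

def conditional(A, B):
    """P(A | B) for equally likely outcomes: |A ∩ B| / |B|; undefined if P(B) = 0."""
    if not B:
        raise ValueError("P(B) = 0: conditional probability undefined")
    return Fraction(len(A & B), len(B))

# One fair die: A = "even", B = "at most 3"
A = {2, 4, 6}
B = {1, 2, 3}
print(conditional(A, B))   # 1/3, versus the unconditional P(A) = 1/2
```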
As an axiom of probability Some authors, such as de Finetti, prefer to introduce conditional probability as an axiom of probability: P(A ∩ B) = P(A|B)P(B). This equation for a conditional probability, although mathematically equivalent, may be intuitively easier to understand. It can be interpreted as "the probability of B occurring multiplied by the probability of A occurring, provided that B has occurred, is equal to the probability of the A and B occurrences together, although not necessarily occurring at the same time". Additionally, this may be preferred philosophically; under major probability interpretations, such as the subjective theory, conditional probability is considered a primitive entity. Moreover, this "multiplication rule" can be practically useful in computing the probability of A ∩ B and introduces a symmetry with the summation axiom for the Poincaré formula: P(A ∪ B) = P(A) + P(B) − P(A ∩ B). Thus the equations can be combined to find a new representation of P(A ∪ B): P(A ∪ B) = P(A) + P(B) − P(A|B)P(B). As the probability of a conditional event Conditional probability can be defined as the probability of a conditional event A_B, where A_B is the Goodman–Nguyen–Van Fraassen conditional event, constructed from sequences of states or elements of A and B. It can be shown that P(A_B) = P(A ∩ B)/P(B), which meets the Kolmogorov definition of conditional probability. Conditioning on an event of probability zero If P(B) = 0, then according to the definition, P(A|B) is undefined. The case of greatest interest is that of a random variable Y, conditioned on a continuous random variable X resulting in a particular outcome x. The event B = {X = x} has probability zero and, as such, cannot be conditioned on. Instead of conditioning on X being exactly x, we could condition on it being closer than distance ε away from x. The event B = {x − ε < X < x + ε} will generally have nonzero probability and hence, can be conditioned on.
We can then take the limit lim(ε→0) P(A | x − ε < X < x + ε). For example, if two continuous random variables X and Y have a joint density f(X,Y)(x, y), then by L'Hôpital's rule and the Leibniz integral rule, upon differentiation with respect to ε: lim(ε→0) P(Y ∈ U | x − ε < X < x + ε) = ∫_U f(X,Y)(x, y) dy / ∫ f(X,Y)(x, y) dy. The resulting limit is the conditional probability distribution of Y given X = x, and exists when the denominator, the probability density f_X(x), is strictly positive. It is tempting to define the undefined probability P(A | X = x) using this limit, but this cannot be done in a consistent manner. In particular, it is possible to find random variables X and W and values x, w such that the events {X = x} and {W = w} are identical but the resulting limits are not: lim(ε→0) P(A | x − ε ≤ X ≤ x + ε) ≠ lim(ε→0) P(A | w − ε ≤ W ≤ w + ε). The Borel–Kolmogorov paradox demonstrates this with a geometrical argument. Conditioning on a discrete random variable Let X be a discrete random variable and its possible outcomes denoted V. For example, if X represents the value of a rolled die then V is the set {1, 2, 3, 4, 5, 6}. Let us assume for the sake of presentation that X is a discrete random variable, so that each value in V has a nonzero probability. For a value x in V and an event A, the conditional probability is given by P(A | X = x). Writing c(x, A) = P(A | X = x) for short, we see that it is a function of two variables, x and A. For a fixed A, we can form the random variable Y = c(X, A). It represents an outcome of P(A | X = x) whenever a value x of X is observed. The conditional probability of A given X can thus be treated as a random variable Y with outcomes in the interval [0, 1]. From the law of total probability, its expected value is equal to the unconditional probability of A. Partial conditional probability The partial conditional probability P(A | B1 ≡ b1, ..., Bm ≡ bm) is about the probability of event A given that each of the condition events Bi has occurred to a degree bi (degree of belief, degree of experience) that might be different from 100%. Frequentistically, partial conditional probability makes sense, if the conditions are tested in experiment repetitions of appropriate length n.
Such n-bounded partial conditional probability can be defined as the conditionally expected average occurrence of event A in testbeds of length n that adhere to all of the probability specifications Bi ≡ bi. Based on that, partial conditional probability can be defined as the limit of the n-bounded version as n grows, P(A | B1 ≡ b1, ..., Bm ≡ bm) = lim(n→∞) Pⁿ(A | B1 ≡ b1, ..., Bm ≡ bm), where bi·n is a natural number. Jeffrey conditionalization is a special case of partial conditional probability, in which the condition events must form a partition: P(A | B1 ≡ b1, ..., Bm ≡ bm) = Σi bi P(A|Bi). Example Suppose that somebody secretly rolls two fair six-sided dice, and we wish to compute the probability that the face-up value of the first one is 2, given the information that their sum is no greater than 5. Let D1 be the value rolled on dice 1. Let D2 be the value rolled on dice 2. Probability that D1 = 2 Table 1 shows the sample space of 36 combinations of rolled values of the two dice, each of which occurs with probability 1/36, with the numbers displayed in the red and dark gray cells being D1 + D2. D1 = 2 in exactly 6 of the 36 outcomes; thus P(D1 = 2) = 6/36 = 1/6:
{| class="wikitable" style="background:silver; text-align:center; width:300px"
|+ Table 1
! rowspan=2 colspan=2 | +
! colspan=6 | D2
|-
! scope="col" | 1
! scope="col" | 2
! scope="col" | 3
! scope="col" | 4
! scope="col" | 5
! scope="col" | 6
|-
! rowspan=6 scope="row" | D1
! scope="row" | 1
| 2 || 3 || 4 || 5 || 6 || 7
|- style="background: red;"
! scope="row" | 2
| 3 || 4 || 5 || 6 || 7 || 8
|-
! scope="row" | 3
| 4 || 5 || 6 || 7 || 8 || 9
|-
! scope="row" | 4
| 5 || 6 || 7 || 8 || 9 || 10
|-
! scope="row" | 5
| 6 || 7 || 8 || 9 || 10 || 11
|-
! scope="row" | 6
| 7 || 8 || 9 || 10 || 11 || 12
|}
Probability that D1 + D2 ≤ 5 Table 2 shows that D1 + D2 ≤ 5 for exactly 10 of the 36 outcomes, thus P(D1 + D2 ≤ 5) = 10/36 = 5/18:
{| class="wikitable" style="background:silver; text-align:center; width:300px"
|+ Table 2
! rowspan=2 colspan=2 | +
! colspan=6 | D2
|-
! scope="col" | 1
! scope="col" | 2
! scope="col" | 3
! scope="col" | 4
! scope="col" | 5
! scope="col" | 6
|-
! rowspan=6 scope="row" | D1
! scope="row" | 1
| style="background:red;" | 2 || style="background:red;" | 3 || style="background:red;" | 4 || style="background:red;" | 5 || 6 || 7
|-
! scope="row" | 2
| style="background:red;" | 3 || style="background:red;" | 4 || style="background:red;" | 5 || 6 || 7 || 8
|-
! scope="row" | 3
| style="background:red;" | 4 || style="background:red;" | 5 || 6 || 7 || 8 || 9
|-
! scope="row" | 4
| style="background:red;" | 5 || 6 || 7 || 8 || 9 || 10
|-
! scope="row" | 5
| 6 || 7 || 8 || 9 || 10 || 11
|-
! scope="row" | 6
| 7 || 8 || 9 || 10 || 11 || 12
|}
Probability that D1 = 2 given that D1 + D2 ≤ 5 Table 3 shows that for 3 of these 10 outcomes, D1 = 2. Thus, the conditional probability P(D1 = 2 | D1 + D2 ≤ 5) = 3/10 = 0.3:
{| class="wikitable" style="text-align:center; width:300px"
|+ Table 3
! rowspan=2 colspan=2 | +
! colspan=6 | D2
|-
! scope="col" | 1
! scope="col" | 2
! scope="col" | 3
! scope="col" | 4
! scope="col" | 5
! scope="col" | 6
|-
! rowspan=6 scope="row" | D1
! scope="row" | 1
| style="background:silver;" | 2 || style="background:silver;" | 3 || style="background:silver;" | 4 || style="background:silver;" | 5 || 6 || 7
|-
! scope="row" | 2
| style="background:red;" | 3 || style="background:red;" | 4 || style="background:red;" | 5 || 6 || 7 || 8
|-
! scope="row" | 3
| style="background:silver;" | 4 || style="background:silver;" | 5 || 6 || 7 || 8 || 9
|-
! scope="row" | 4
| style="background:silver;" | 5 || 6 || 7 || 8 || 9 || 10
|-
! scope="row" | 5
| 6 || 7 || 8 || 9 || 10 || 11
|-
! scope="row" | 6
| 7 || 8 || 9 || 10 || 11 || 12
|}
Here, in the earlier notation for the definition of conditional probability, the conditioning event B is that D1 + D2 ≤ 5, and the event A is D1 = 2. We have P(A|B) = P(A ∩ B)/P(B) = (3/36)/(10/36) = 3/10, as seen in the tables. Use in inference In statistical inference, the conditional probability is an update of the probability of an event based on new information.
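Returning to the dice example, the conditional probability just computed can be reproduced by enumerating the 36 equally likely outcomes in Python:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))   # all 36 (D1, D2) pairs
B = [r for r in rolls if r[0] + r[1] <= 5]     # sum no greater than 5
A_and_B = [r for r in B if r[0] == 2]          # first die also shows 2

print(Fraction(len(B), len(rolls)))     # 5/18, i.e. P(D1 + D2 <= 5) = 10/36
print(Fraction(len(A_and_B), len(B)))   # 3/10, i.e. P(D1 = 2 | D1 + D2 <= 5)
```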
The new information can be incorporated as follows: Let A, the event of interest, be in the sample space, say (X, P). The occurrence of the event A knowing that event B has or will have occurred, means the occurrence of A as it is restricted to B, i.e. A ∩ B. Without the knowledge of the occurrence of B, the information about the occurrence of A would simply be P(A). The probability of A knowing that event B has or will have occurred, will be the probability of A ∩ B relative to P(B), the probability that B has occurred. This results in P(A|B) = P(A ∩ B)/P(B) whenever P(B) > 0 and 0 otherwise. This approach results in a probability measure that is consistent with the original probability measure and satisfies all the Kolmogorov axioms. This conditional probability measure also could have resulted by assuming that the relative magnitude of the probability of A with respect to X will be preserved with respect to B (cf. the Formal Derivation below). The wording "evidence" or "information" is generally used in the Bayesian interpretation of probability. The conditioning event is interpreted as evidence for the conditioned event. That is, P(A) is the probability of A before accounting for evidence E, and P(A|E) is the probability of A after having accounted for evidence E or after having updated P(A). This is consistent with the frequentist interpretation, which is the first definition given above. Example When Morse code is transmitted, there is a certain probability that the "dot" or "dash" that was received is erroneous. This is often taken as interference in the transmission of a message. Therefore, it is important to consider when sending a "dot", for example, the probability that a "dot" was received. This is represented by: P(dot sent | dot received) = P(dot received | dot sent) P(dot sent) / P(dot received). In Morse code, the ratio of dots to dashes is 3:4 at the point of sending, so the probability of a "dot" and "dash" are P(dot sent) = 3/7 and P(dash sent) = 4/7.
If it is assumed that the probability that a dot is transmitted as a dash is 1/10, and that the probability that a dash is transmitted as a dot is likewise 1/10, then Bayes's rule can be used to calculate P(dot sent | dot received). Now, P(dot received) can be calculated: P(dot received) = P(dot received | dot sent) P(dot sent) + P(dot received | dash sent) P(dash sent) = (9/10)(3/7) + (1/10)(4/7) = 31/70, giving P(dot sent | dot received) = (9/10)(3/7)/(31/70) = 27/31. Statistical independence Events A and B are defined to be statistically independent if the probability of the intersection of A and B is equal to the product of the probabilities of A and B: P(A ∩ B) = P(A)P(B). If P(B) is not zero, then this is equivalent to the statement that P(A|B) = P(A). Similarly, if P(A) is not zero, then P(B|A) = P(B) is also equivalent. Although the derived forms may seem more intuitive, they are not the preferred definition as the conditional probabilities may be undefined, and the preferred definition is symmetrical in A and B. Independence does not refer to a disjoint event. It should also be noted that given the independent event pair [A B] and an event C, the pair is defined to be conditionally independent if the product holds true: P(A ∩ B | C) = P(A|C)P(B|C). This theorem could be useful in applications where multiple independent events are being observed. Independent events vs. mutually exclusive events The concepts of mutually independent events and mutually exclusive events are separate and distinct. The following table contrasts results for the two cases (provided that the probability of the conditioning event is not zero). In fact, mutually exclusive events cannot be statistically independent (unless both of them are impossible), since knowing that one occurs gives information about the other (in particular, that the latter will certainly not occur). Common fallacies These fallacies should not be confused with Robert K. Shope's 1978 "conditional fallacy", which deals with counterfactual examples that beg the question. Assuming conditional probability is of similar size to its inverse In general, it cannot be assumed that P(A|B) ≈ P(B|A). This can be an insidious error, even for those who are highly conversant with statistics.
The relationship between P(A | B) and P(B | A) is given by Bayes' theorem: P(B | A) = P(A | B) P(B) / P(A). That is, P(A | B) ≈ P(B | A) only if P(B)/P(A) ≈ 1, or equivalently, P(A) ≈ P(B). Assuming marginal and conditional probabilities are of similar size In general, it cannot be assumed that P(A) ≈ P(A | B). These probabilities are linked through the law of total probability: P(A) = Σ_n P(A ∩ B_n) = Σ_n P(A | B_n) P(B_n), where the events B_n form a countable partition of the sample space. This fallacy may arise through selection bias. For example, in the context of a medical claim, let S be the event that a sequela (chronic disease) occurs as a consequence of circumstance (acute condition) C. Let H be the event that an individual seeks medical help. Suppose that in most cases, C does not cause S (so that P(S) is low). Suppose also that medical attention is only sought if S has occurred due to C. From experience of patients, a doctor may therefore erroneously conclude that P(S) is high. The actual probability observed by the doctor is P(S | H). Over- or under-weighting priors Not taking prior probability into account, partially or completely, is called base rate neglect. The reverse, insufficient adjustment from the prior probability, is conservatism. Formal derivation Formally, P(A | B) is defined as the probability of A according to a new probability function on the sample space, such that outcomes not in B have probability 0 and that it is consistent with all original probability measures. Let Ω be a discrete sample space with elementary events {ω}, and let P be the probability measure with respect to the σ-algebra of Ω. Suppose we are told that the event B ⊆ Ω has occurred. A new probability distribution (denoted by the conditional notation) is to be assigned on {ω} to reflect this. All events that are not in B will have null probability in the new distribution. For events in B, two conditions must be met: the probability of B is one and the relative magnitudes of the probabilities must be preserved.
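Both fallacies can be made concrete with a small numerical sketch. The prevalence and test accuracies below are illustrative assumptions, not figures from the text; they describe a rare condition D and a fairly accurate test:

```python
# Illustrative assumptions: a rare condition and an accurate test.
p_d = 0.01                 # P(D): prevalence (the prior)
p_pos_given_d = 0.9        # P(+|D): sensitivity
p_pos_given_not_d = 0.05   # P(+|not D): false-positive rate

# Law of total probability: P(+) over the partition {D, not D}.
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Bayes' theorem: P(D|+) is far from P(+|D) because P(D) and P(+) differ.
p_d_given_pos = p_pos_given_d * p_d / p_pos

print(round(p_pos, 4))          # 0.0585
print(round(p_d_given_pos, 3))  # 0.154  (compare with P(+|D) = 0.9)
```

Here P(D | +) ≈ 0.154 even though P(+ | D) = 0.9, and P(D | +) is an order of magnitude larger than the marginal P(D) = 0.01: assuming either pair to be of similar size would be badly wrong.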
The former is required by the axioms of probability, and the latter stems from the fact that the new probability measure has to be the analog of P in which the probability of B is one, and every event that is not in B, therefore, has a null probability. Hence, for some scale factor α, the new distribution must satisfy: (1) ω ∈ B: P(ω | B) = α P(ω); (2) ω ∉ B: P(ω | B) = 0; (3) Σ_{ω ∈ Ω} P(ω | B) = 1. Substituting 1 and 2 into 3 to select α: 1 = Σ_{ω ∈ Ω} P(ω | B) = Σ_{ω ∈ B} α P(ω) = α P(B), so α = 1/P(B). So the new probability distribution is (1) ω ∈ B: P(ω | B) = P(ω)/P(B); (2) ω ∉ B: P(ω | B) = 0. Now for a general event A, P(A | B) = Σ_{ω ∈ A ∩ B} P(ω | B) = Σ_{ω ∈ A ∩ B} P(ω)/P(B) = P(A ∩ B)/P(B).
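This construction is easy to carry out on a small discrete sample space. A sketch in Python, assuming the distribution is given as a dict from outcomes to probabilities (the `condition` helper is hypothetical, written for illustration):

```python
from fractions import Fraction as F

def condition(p, B):
    """Return the distribution conditioned on event B: outcomes outside B get
    probability 0; outcomes inside B are rescaled by alpha = 1/P(B)."""
    p_B = sum(prob for w, prob in p.items() if w in B)
    if p_B == 0:
        raise ValueError("cannot condition on an event of probability zero")
    return {w: (prob / p_B if w in B else F(0)) for w, prob in p.items()}

# A fair six-sided die, conditioned on B = "the roll is even".
die = {w: F(1, 6) for w in range(1, 7)}
q = condition(die, {2, 4, 6})

print(sum(q.values()))            # 1   (normalization: the new measure sums to one)
print(q[2], q[1])                 # 1/3 0
print(sum(q[w] for w in {1, 2}))  # 1/3 = P(A ∩ B)/P(B) for A = {1, 2}
```

The three printed values check the two conditions of the derivation: the conditioned measure is normalized, outcomes outside B get probability zero while relative magnitudes inside B are preserved, and summing over A ∩ B recovers P(A ∩ B)/P(B).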
Insect morphology
Insect morphology is the study and description of the physical form of insects. The terminology used to describe insects is similar to that used for other arthropods due to their shared evolutionary history. Three physical features separate insects from other arthropods: they have a body divided into three regions (called tagmata) (head, thorax, and abdomen), three pairs of legs, and mouthparts located outside of the head capsule. This position of the mouthparts divides them from their closest relatives, the non-insect hexapods, which include Protura, Diplura, and Collembola. There is enormous variation in body structure amongst insect species. Individuals can range from 0.3 mm (fairyflies) to 30 cm across (great owlet moth); have no eyes or many; well-developed wings or none; and legs modified for running, jumping, swimming, or even digging. These modifications allow insects to occupy almost every ecological niche except the deep ocean. This article describes the basic insect body and some variations of the different body parts; in the process, it defines many of the technical terms used to describe insect bodies. Anatomy summary Insects, like all arthropods, have no interior skeleton; instead, they have an exoskeleton, a hard outer layer made mostly of chitin that protects and supports the body. The insect body is divided into three parts: the head, thorax, and abdomen. The head is specialized for sensory input and food intake; the thorax, which is the anchor point for the legs and wings (if present), is specialized for locomotion; and the abdomen is for digestion, respiration, excretion, and reproduction. Although the general function of the three body regions is the same across all insect species, there are major differences in basic structure, with wings, legs, antennae, and mouthparts being variable from group to group. 
External Exoskeleton The insect's outer skeleton, the cuticle, consists of two layers: the epicuticle, a thin, waxy, water-resistant outer layer that lacks chitin, and the layer under it, called the procuticle. The procuticle is chitinous and much thicker than the epicuticle and itself has two layers: the outer is the exocuticle, while the inner is the endocuticle. The tough and flexible endocuticle is built from numerous layers of fibrous chitin and proteins, crisscrossing each other in a sandwich pattern, while the exocuticle is rigid and sclerotized. The exocuticle is greatly reduced in many soft-bodied insects, especially in the larval stages (e.g., caterpillars). Chemically, chitin is a long-chain polymer of N-acetylglucosamine, a derivative of glucose. In its unmodified form, chitin is translucent, pliable, and resilient. In arthropods, however, it is often modified, becoming embedded in a hardened proteinaceous matrix, which forms much of the exoskeleton. In its pure form it is leathery, but when encrusted in calcium carbonate, it becomes much harder. The difference between the unmodified and modified forms is evident when comparing the body wall of a caterpillar (unmodified) to a beetle (modified). From the embryonic stages, a layer of columnar or cuboidal epithelial cells gives rise to the external cuticle and an internal basement membrane. The majority of the insect's cuticle material is held in the endocuticle. The cuticle provides muscular support and acts as a protective shield as the insect develops. However, since it cannot grow, the external sclerotized part of the cuticle is periodically shed in a process called "molting". As the time for molting approaches, most of the exocuticle material is reabsorbed. In molting, the old cuticle first separates from the epidermis (apolysis). Enzymatic molting fluid is then released between the old cuticle and the epidermis, which separates the exocuticle by digesting the endocuticle and sequestering its material for the new cuticle.
When the new cuticle has formed sufficiently, the epicuticle and reduced exocuticle are shed in ecdysis. The four principal regions of an insect body segment are the tergum or dorsal, sternum or ventral, and the two pleura or laterals. Hardened plates in the exoskeleton are called sclerites, which are subdivisions of the major regions – tergites, sternites, and pleurites, for respective regions tergum, sternum, and pleuron. Head The head in most insects is enclosed in a hard, heavily sclerotized, exoskeletal head capsule. The main exception is in those species whose larvae are not fully sclerotized, mainly some holometabola; but even most unsclerotized or weakly sclerotized larvae tend to have well-sclerotized head capsules, for example, the larvae of Coleoptera and Hymenoptera. The larvae of Cyclorrhapha however, tend to have hardly any head capsule at all. The head capsule bears most of the sensory organs, including the antennae, ocelli, and compound eyes, along with the mouthparts. In the adult insect, the head capsule appears unsegmented, though embryological studies show it to consist of six segments that bear the paired head appendages, including the mouthparts, each pair on a specific segment. Each such pair occupies one segment, though not all segments in modern insects bear any visible appendages. Of all the insect orders, Orthoptera displays the greatest variety of features found in the heads of insects, including the sutures and sclerites. Here, the vertex, or the apex (dorsal region), is situated between the compound eyes of insects with hypognathous and opisthognathous heads. In prognathous insects, the vertex is not found between the compound eyes, but rather where the ocelli are normally found. This is because the primary axis of the head is rotated 90° to become parallel to the primary axis of the body. In some species, this region is modified and assumes a different name. 
The ecdysial suture is made of the coronal, frontal, and epicranial sutures plus the ecdysial and cleavage lines, which vary among different species of insects. The ecdysial suture is longitudinally placed on the vertex, separating the epicranial halves of the head to the left and right sides. Depending on the insect, the suture may come in different shapes: like either a Y, U or V. Those diverging lines that make up the ecdysial suture are called the frontal or frontogenal sutures. Not all species of insects have frontal sutures, but in those that do, the sutures split open during ecdysis, which provides an opening for the new instar to emerge from the integument. The frons is that part of the head capsule that lies ventrad or anteriad of the vertex. The frons varies in size relative to the insect, and in many species, the definition of its borders is arbitrary, even in some insect taxa that have well-defined head capsules. In most species, though, the frons is bordered at its anterior by the frontoclypeal or epistomal sulcus above the clypeus. Laterally it is limited by the fronto-genal sulcus, if present, and the boundary with the vertex, by the ecdysial cleavage line, if it is visible. If there is a median ocellus, it generally is on the frons, though in some insects such as many Hymenoptera, all three ocelli appear on the vertex. A more formal definition is that it is the sclerite from which the pharyngeal dilator muscles arise, but in many contexts that too, is not helpful. In the anatomy of some taxa, such as many Cicadomorpha, the front of the head is fairly clearly distinguished and tends to be broad and sub-vertical; that median area commonly is taken to be the frons. The clypeus is a sclerite between the face and labrum, which is dorsally separated from the frons by the frontoclypeal suture in primitive insects. The clypeogenal suture laterally demarcates the clypeus, with the clypeus ventrally separated from the labrum by the clypeolabral suture. 
The clypeus differs in shape and size among species; species of Lepidoptera, for example, have a large clypeus with elongated mouthparts. The cheek or gena forms the sclerotized area on each side of the head below the compound eyes, extending to the gular suture. Like many parts making up the insect's head, the gena varies among species, with its boundaries difficult to establish. In dragonflies and damselflies, it is between the compound eyes, clypeus, and mouthparts. The postgena is the area immediately posteriad, or posterior or lower on the gena, of pterygote insects, and forms the lateral and ventral parts of the occipital arch. The occipital arch is a narrow band forming the posterior edge of the head capsule, arching dorsally over the foramen. The subgenal area is usually narrow, located above the mouthparts; this area also includes the hypostoma and pleurostoma. The vertex extends anteriorly above the bases of the antennae as a prominent, pointed, concave rostrum. The posterior wall of the head capsule is penetrated by a large aperture, the foramen. Through it pass the organ systems, such as the nerve cord, esophagus, salivary ducts, and musculature, connecting the head with the thorax. On the posterior aspect of the head are the occiput, postgena, occipital foramen, posterior tentorial pit, gula, postgenal bridge, hypostomal suture and bridge, and the mandibles, labium, and maxillae. The occipital suture is well defined in species of Orthoptera, but not so much in other orders. Where found, the occipital suture is the arched, horseshoe-shaped groove on the back of the head, ending at the posterior of each mandible. The postoccipital suture is a landmark on the posterior surface of the head, typically near the occipital foramen. In pterygotes, the postocciput forms the extreme posterior, often U-shaped rim of the head, extending to the postoccipital suture. In pterygotes, such as those of Orthoptera, the occipital foramen and the mouth are not separated.
The three types of occipital closures, or points under the occipital foramen that separate the two lower halves of the postgena, are the hypostomal bridge, the postgenal bridge, and the gula. The hypostomal bridge is usually found in insects with hypognathous orientation. The postgenal bridge is found in the adults of species of higher Diptera and aculeate Hymenoptera, while the gula is found in some Coleoptera, Neuroptera, and Isoptera, which typically display prognathous-oriented mouthparts. Compound eyes and ocelli Most insects have one pair of large, prominent compound eyes composed of units called ommatidia (ommatidium, singular), with up to 30,000 in a single compound eye of, for example, large dragonflies. This type of eye gives less resolution than the eyes found in vertebrates, but it gives an acute perception of movement, usually possesses UV and green sensitivity, and may have additional sensitivity peaks in other regions of the visual spectrum. There is often also an ability to detect the E-vector (the plane of polarization) of polarized light. There can also be an additional two or three ocelli, which help detect low light or small changes in light intensity. The image perceived is a combination of inputs from the numerous ommatidia, which are located on a convex surface and thus point in slightly different directions. Compared with simple eyes, compound eyes possess very large view angles and better acuity than the insect's dorsal ocelli, but some stemmata (larval eyes), for example those of sawfly larvae (Tenthredinidae), with an acuity of 4 degrees and very high polarization sensitivity, match the performance of compound eyes. Because the individual lenses are so small, the effects of diffraction impose a limit on the possible resolution that can be obtained (assuming they do not function as phased arrays). This can only be countered by increasing lens size and number.
To see with a resolution comparable to our simple eyes, humans would require compound eyes that would each reach the size of their heads. Compound eyes fall into two groups: apposition eyes, which form multiple inverted images, and superposition eyes, which form a single erect image. Compound eyes grow at their margins with the addition of new ommatidia. Antennae Antennae, sometimes called "feelers", are flexible appendages located on the insect's head which are used for sensing the environment. Insects can feel with their antennae because of the fine hairs (setae) that cover them. However, touch is not the only thing that antennae can detect; numerous tiny sensory structures on the antennae allow insects to sense smells, temperature, humidity, pressure, and even potentially sense themselves in space. Some insects, including bees and some groups of flies, can also detect sound with their antennae. The number of segments in an antenna varies amongst insects, with higher flies having 3-6 segments, while adult cockroaches can have over 140. The general shape of the antennae is also quite variable, but the first segment (the one attached to the head) is always called the scape, and the second segment is called the pedicel. The remaining antennal segments or flagellomeres are called the flagellum. General insect antenna types are shown below: Mouthparts The insect mouthparts consist of the maxilla, labium, and in some species, the mandibles. The labrum is a simple, fused sclerite, often called the upper lip, and moves longitudinally. It is hinged to the clypeus. The mandibles (jaws) are a highly sclerotized pair of structures that move at right angles to the body, used for biting, chewing, and severing food. The maxillae are paired structures that can also move at right angles to the body and possess segmented palps. The labium (lower lip) is the fused structure that moves longitudinally and has a pair of segmented palps. 
The mouthparts and the rest of the head can be articulated in at least three different positions: prognathous, opisthognathous, and hypognathous. In species with prognathous articulation, the head is aligned horizontally with the body and the mouthparts are directed forward, as in species of Formicidae; in the hypognathous type, the head is vertical and the mouthparts are directed downward. An opisthognathous head is positioned diagonally, such as in species of Blattodea and some Coleoptera. The mouthparts vary greatly between insects of different orders, but the two main functional groups are mandibulate and haustellate. Haustellate mouthparts are used for sucking liquids and can be further classified by the presence or absence of stylets; the types include piercing-sucking, sponging, and siphoning. The stylets are needle-like projections used to penetrate plant and animal tissues. The stylets and the feeding tube are formed from the modified mandibles, maxillae, and hypopharynx. Mandibulate mouthparts, among the most common in insects, are used for biting and grinding solid foods. Piercing-sucking mouthparts have stylets and are used to penetrate solid tissue and then suck up liquid food. Sponging mouthparts are used to sponge and suck liquids, and lack stylets (e.g., most Diptera). Siphoning mouthparts lack stylets and are used to suck liquids; they are commonly found among species of Lepidoptera. Mandibulate mouthparts are found in species of Odonata, adult Neuroptera, Coleoptera, Hymenoptera, Blattodea, Orthoptera, and Lepidoptera. However, most adult Lepidoptera have siphoning mouthparts, while their larvae (commonly called caterpillars) have mandibles.
The labrum is raised away from the mandibles by two muscles arising in the head and inserted medially into the anterior margin of the labrum. It is closed against the mandibles in part by two muscles arising in the head and inserted on the posterior lateral margins on two small sclerites, the tormae, and, at least in some insects, by a resilin spring in the cuticle at the junction of the labrum with the clypeus. Until recently, the labrum generally was considered to be associated with the first head segment. However, recent studies of the embryology, gene expression, and nerve supply to the labrum show it is innervated by the tritocerebrum of the brain, which is the fused ganglia of the third head segment. This is formed from the fusion of parts of a pair of ancestral appendages found on the third head segment, showing their relationship. Its ventral, or inner, surface is usually membranous and forms the lobe-like epipharynx, which bears mechanosensilla and chemosensilla. Chewing insects have two mandibles, one on each side of the head. The mandibles are positioned between the labrum and maxillae. The mandibles cut and crush food, and may be used for defense; generally, they have an apical cutting edge, and the more basal molar area grinds the food. They can be extremely hard (around 3 on Mohs, or an indentation hardness of about 30 kg/mm2); thus, many termites and beetles have no physical difficulty in boring through foils made from such common metals as copper, lead, tin, and zinc. The cutting edges are typically strengthened by the addition of zinc, manganese, or rarely, iron, in amounts up to about 4% of the dry weight. They are typically the largest mouthparts of chewing insects, being used to masticate (cut, tear, crush, chew) food items. They open outwards (to the sides of the head) and come together medially. 
In carnivorous, chewing insects, the mandibles can be modified to be more knife-like, whereas in herbivorous chewing insects, they are more typically broad and flat on their opposing faces (e.g., caterpillars). In male stag beetles, the mandibles are modified to such an extent as to not serve any feeding function but are instead used to defend mating sites from other males. In ants, the mandibles also serve a defensive function (particularly in soldier castes). In bull ants, the mandibles are elongated and toothed, used as hunting (and defensive) appendages. Situated beneath the mandibles, paired maxillae manipulate food during mastication. Maxillae can have hairs and "teeth" along their inner margins. At the outer margin, the galea is a cupped or scoop-like structure, which sits over the outer edge of the labium. They also have palps, which are used to sense the characteristics of potential foods. The maxillae occupy a lateral position, one on each side of the head behind the mandibles. The proximal part of the maxilla consists of a basal cardo, which has a single articulation with the head, and a flat plate, the stipes, hinged to the cardo. Both cardo and stipes are loosely joined to the head by a membrane, so they are capable of movement. Distally on the stipes are two lobes, an inner lacinea, and an outer galea, one or both of which may be absent. More laterally on the stipes is a jointed, leglike palp made up of many segments; in Orthoptera, there are five. Anterior and posterior rotator muscles are inserted on the cardo, and ventral adductor muscles arising on the tentorium are inserted on both the cardo and stipes. Arising in the stipes are flexor muscles of the lacinea and galea and another lacineal flexor arises in the cranium, but neither the lacinea nor the galea has an extensor muscle. The palp has levator and depressor muscles arising in the stipes, and each segment of the palp has a single muscle causing flexion of the next segment. 
In mandibulate mouthparts, the labium is a quadrate structure, although it is formed from two fused secondary maxillae. It can be described as the floor of the mouth. With the maxillae, it assists with the manipulation of food during mastication or chewing or, in the unusual case of the dragonfly nymph, extends out to snatch prey back to the head, where the mandibles can eat it. The labium is similar in structure to the maxilla, but with the appendages of the two sides fused by the midline, so they come to form a median plate. The basal part of the labium, equivalent to the maxillary cardines and possibly including a part of the sternum of the labial segment, is called the postmentum. This may be subdivided into a proximal submentum and a distal mentum. Distal to the postmentum, and equivalent to the fused maxillary stipites, is the prementum. The prementum closes the preoral cavity from behind. Terminally, it bears four lobes, two inner glossae and two outer paraglossae, which are collectively known as the ligula. One or both pairs of lobes may be absent, or they may be fused to form a single median process. A palp arises from each side of the prementum, often being three-segmented. The hypopharynx is a median lobe immediately behind the mouth, projecting forwards from the back of the preoral cavity; it is a lobe of uncertain origin, but perhaps associated with the mandibular segment; in apterygotes, earwigs, and nymphal mayflies, the hypopharynx bears a pair of lateral lobes, the superlinguae (singular: superlingua). It divides the cavity into a dorsal food pouch, or cibarium, and a ventral salivarium into which the salivary duct opens. It is commonly found fused to the labium. Most of the hypopharynx is membranous, but the adoral face is sclerotized distally and proximally contains a pair of suspensory sclerites extending upwards to end in the lateral wall of the stomodeum.
Muscles arising on the frons are inserted into these sclerites, which distally are hinged to a pair of lingual sclerites. These, in turn, have inserted into them antagonistic pairs of muscles arising on the tentorium and labium. The various muscles serve to swing the hypopharynx forwards and back, and in the cockroach, two more muscles run across the hypopharynx and dilate the salivary orifice and expand the salivarium. Piercing-sucking Mouthparts can have multiple functions. Some insects combine piercing parts along with sponging ones, which are then used to pierce through the tissues of plants and animals. Female mosquitoes feed on blood (hematophagy), making them disease vectors. The mosquito mouthparts consist of the proboscis, paired mandibles, and maxillae. The maxillae form needle-like structures, called stylets, which are enclosed by the labium. When a mosquito bites, the maxillae penetrate the skin and anchor the mouthparts, thus allowing the other parts to be inserted. The sheath-like labium slides back, and the remaining mouthparts pass through its tip and into the tissue. Then, through the hypopharynx, the mosquito injects saliva, which contains anticoagulants to stop the blood from clotting. Finally, the labrum (upper lip) is used to suck up the blood. Species of the genus Anopheles are characterized by their long palpi (two parts with a widening end), almost reaching the end of the labrum. Siphoning The proboscis is formed from the maxillary galeae and is an adaptation found in some insects for sucking. The muscles of the cibarium or pharynx are strongly developed and form the pump. In Hemiptera and many Diptera, which feed on fluids within plants or animals, some components of the mouthparts are modified for piercing, and the elongated structures are called stylets. The combined tubular structures are referred to as the proboscis, although specialized terminology is used in some groups.
In species of Lepidoptera, it consists of two tubes held together by hooks and separable for cleaning. Each tube is inwardly concave, thus forming a central tube through which moisture is sucked. Suction is effected by the contraction and expansion of a sac in the head. The proboscis is coiled under the head when the insect is at rest and is extended only when feeding. The maxillary palpi are reduced or even vestigial. They are conspicuous and five-segmented in some of the more basal families and are often folded. The shape and dimensions of the proboscis have evolved to give different species wider, and therefore more advantageous, diets. There is an allometric scaling relationship between the body mass of Lepidoptera and the length of the proboscis, from which an interesting adaptive departure is the unusually long-tongued hawk moth Xanthopan morganii praedicta. Charles Darwin predicted the existence and proboscis length of this moth before its discovery, based on his knowledge of the long-spurred Madagascan star orchid Angraecum sesquipedale. Sponging The mouthparts of insects that feed on fluids are modified in various ways to form a tube through which liquid can be drawn into the mouth and usually another through which saliva passes. The muscles of the cibarium or pharynx are strongly developed to form a pump. In nonbiting flies, the mandibles are absent and other structures are reduced; the labial palps have become modified to form the labellum, and the maxillary palps are present, although sometimes short. In Brachycera, the labellum is especially prominent and used for sponging liquid or semiliquid food. The labella are a complex structure consisting of many grooves, called pseudotracheae, which sop up liquids. Salivary secretions from the labella assist in dissolving and collecting food particles so they can be more easily taken up by the pseudotracheae; this is thought to occur by capillary action.
The liquid food is then drawn up from the pseudotracheae through the food channel into the esophagus. The mouthparts of bees are of a chewing and lapping-sucking type. Lapping is a mode of feeding in which liquid or semiliquid food adhering to a protrusible organ, or "tongue", is transferred from substrate to mouth. In the honey bee (Hymenoptera: Apidae: Apis mellifera), the elongated and fused labial glossae form a hairy tongue, which is surrounded by the maxillary galeae and the labial palps to form a tubular proboscis containing a food canal. In feeding, the tongue is dipped into the nectar or honey, which adheres to the hairs, and then is retracted so the adhering liquid is carried into the space between the galeae and labial palps. This back-and-forth glossal movement occurs repeatedly. Movement of liquid to the mouth results from the action of the cibarial pump, facilitated by each retraction of the tongue pushing liquid up the food canal. Thorax The insect thorax has three segments: the prothorax, mesothorax, and metathorax. The anterior segment, closest to the head, is the prothorax; its major features are the first pair of legs and the pronotum. The middle segment is the mesothorax; its major features are the second pair of legs and the anterior wings, if any. The third, the posterior thoracic segment, abutting the abdomen, is the metathorax, which bears the third pair of legs and the posterior wings. Each segment is delineated by an intersegmental suture. Each segment has four basic regions. The dorsal surface is called the tergum (or notum, to distinguish it from the abdominal terga). The two lateral regions are called the pleura (singular: pleuron), and the ventral aspect is called the sternum. In turn, the notum of the prothorax is called the pronotum, the notum of the mesothorax is called the mesonotum, and the notum of the metathorax is called the metanotum.
Continuing with this logic, there are also the mesopleura and metapleura, as well as the mesosternum and metasternum. The tergal plates of the thorax are simple structures in apterygotes and many immature insects but are variously modified in winged adults. The pterothoracic nota each have two main divisions: the anterior, wing-bearing alinotum and the posterior, phragma-bearing postnotum. Phragmata (singular: phragma) are plate-like apodemes that extend inwards below the antecostal sutures, marking the primary intersegmental folds between segments; phragmata provide attachment for the longitudinal flight muscles. Each alinotum (sometimes confusingly referred to as a "notum") may be traversed by sutures that mark the position of internal strengthening ridges and commonly divide the plate into three areas: the anterior prescutum, the scutum, and the smaller posterior scutellum. The lateral pleural sclerites are believed to be derived from the subcoxal segment of the ancestral insect leg. These sclerites may be separate, as in silverfish, or fused into an almost continuous sclerotic area, as in most winged insects. Prothorax The pronotum of the prothorax may be simple in structure and small in comparison with the other nota, but in beetles, mantids, many bugs, and some Orthoptera, the pronotum is expanded, and in cockroaches, it forms a shield that covers part of the head and mesothorax. Pterothorax Because the mesothorax and metathorax hold the wings, they have a combined name called the pterothorax (pteron = wing). The forewing, which goes by different names in different orders (e.g., the tegmina in Orthoptera and elytra in Coleoptera), arises between the mesonotum and the mesopleuron, and the hindwing articulates between the metanotum and metapleuron. The legs arise from the mesopleuron and metapleura. The mesothorax and metathorax each have a pleural suture (mesopleural and metapleural sutures) that runs from the wing base to the coxa of the leg. 
The sclerite anterior to the pleural suture is called the episternum (serially, the mesepisternum and metepisternum). The sclerite posterior to the suture is called the epimeron (serially, the mesepimeron and metepimeron). Spiracles, the external organs of the respiratory system, are found on the pterothorax, usually one between the pro- and mesopleuron, as well as one between the meso- and metapleuron. The ventral view, or sternum, follows the same convention, with the prosternum under the prothorax, the mesosternum under the mesothorax, and the metasternum under the metathorax. The notum, pleura, and sternum of each segment have a variety of different sclerites and sutures, varying greatly from order to order, and they will not be discussed in detail in this section. Wings Most phylogenetically advanced insects have two pairs of wings located on the second and third thoracic segments. Insects are the only invertebrates to have developed flight capability, and this has played an important part in their success. Insect flight is not very well understood, relying as it does on turbulent aerodynamic effects. The primitive insect groups use muscles that act directly on the wing structure. The more advanced groups making up the Neoptera have foldable wings, and their muscles act on the thorax wall and power the wings indirectly. These muscles can contract multiple times for each single nerve impulse, allowing the wings to beat faster than would ordinarily be possible. Insect flight can be rapid, maneuverable, and versatile, possibly due to the changing shape, extraordinary control, and variable motion of the insect wing. Insect orders use different flight mechanisms; for example, the flight of a butterfly can be explained using steady-state, nontransitory aerodynamics and thin airfoil theory. Internal Each of the wings consists of a thin membrane supported by a system of veins.
The membrane is formed by two layers of integument closely apposed, while the veins are formed where the two layers remain separate and the cuticle may be thicker and more heavily sclerotized. Within each of the major veins is a nerve and a trachea, and, since the cavities of the veins are connected with the hemocoel, hemolymph can flow into the wings. As the wing develops, the dorsal and ventral integumental layers become closely apposed over most of their area, forming the wing membrane. The remaining areas form channels, the future veins, in which the nerves and tracheae may occur. The cuticle surrounding the veins becomes thickened and more heavily sclerotized to provide strength and rigidity to the wing. Hairs of two types may occur on the wings: microtrichia, which are small and irregularly scattered, and macrotrichia, which are larger, socketed, and may be restricted to veins. The scales of Lepidoptera and Trichoptera are highly modified macrotrichia. Veins In some minuscule insects, the venation may be reduced. In chalcidoid wasps, for instance, only the subcosta and part of the radius are present. Conversely, an increase in venation may occur by the branching of existing veins to produce accessory veins or by the development of additional, intercalary veins between the original ones, as in the wings of Orthoptera (grasshoppers and crickets). Large numbers of cross-veins are present in some insects, and they may form a reticulum as in the wings of Odonata (dragonflies and damselflies) and at the base of the forewings of Tettigonioidea and Acridoidea (katydids and grasshoppers, respectively). The archedictyon is the name given to a hypothetical scheme of wing venation proposed for the very first winged insect. It is based on a combination of speculation and fossil data. 
Since all winged insects are believed to have evolved from a common ancestor, the archedictyon represents the "template" that has been modified (and streamlined) by natural selection for 200 million years. According to current dogma, the archedictyon contained six to eight longitudinal veins. These veins (and their branches) are named according to a system devised by John Comstock and George Needham—the Comstock-Needham system:
Costa (C) – the leading edge of the wing
Subcosta (Sc) – second longitudinal vein (behind the costa), typically unbranched
Radius (R) – third longitudinal vein, one to five branches reach the wing margin
Media (M) – fourth longitudinal vein, one to four branches reach the wing margin
Cubitus (Cu) – fifth longitudinal vein, one to three branches reach the wing margin
Anal veins (A1, A2, A3) – unbranched veins behind the cubitus
The costa (C) is the leading marginal vein on most insects, although a small vein, the precosta, is sometimes found above the costa. In almost all extant insects, the precosta is fused with the costa; the costa rarely ever branches because it is at the leading edge, which is associated at its base with the humeral plate. The trachea of the costal vein is perhaps a branch of the subcostal trachea. Located after the costa is the subcosta, the third vein when the precosta is counted first, which branches into two separate veins: the anterior and posterior. The base of the subcosta is associated with the distal end of the neck of the first axillary. The fourth vein in this counting is the radius, which is branched into five separate veins. The radius is generally the strongest vein of the wing. Toward the middle of the wing, it forks into a first undivided branch (R1) and a second branch, called the radial sector (Rs), which subdivides dichotomously into four distal branches (R2, R3, R4, R5). Basally, the radius is flexibly united with the anterior end of the second axillary (2Ax). The fifth vein of the wing is the media.
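The Comstock-Needham scheme listed above is essentially a small naming table, which can be summarized in code. The following is an illustrative sketch only (not from any entomology library); the dictionary records each longitudinal vein's abbreviation, name, and the maximum number of branches that reach the wing margin, per the list above.

```python
# Sketch of the Comstock-Needham longitudinal veins as data:
# abbreviation -> (name, maximum branches reaching the wing margin).
COMSTOCK_NEEDHAM = {
    "C":  ("costa", 1),       # leading edge of the wing, unbranched
    "Sc": ("subcosta", 1),    # typically unbranched
    "R":  ("radius", 5),      # R1 plus the radial sector branches R2-R5
    "M":  ("media", 4),       # up to M1-M4
    "Cu": ("cubitus", 3),     # up to three branches reach the margin
    "A":  ("anal veins", 3),  # A1, A2, A3, unbranched
}

def branch_labels(abbrev):
    """Conventional numbered branch labels, e.g. 'R' -> ['R1', ..., 'R5']."""
    n = COMSTOCK_NEEDHAM[abbrev][1]
    return [f"{abbrev}{i}" for i in range(1, n + 1)]
```

For example, `branch_labels("R")` yields the five radial labels R1 through R5 used in the prose that follows.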
In the archetype pattern, the media forks into two main branches: a media anterior (MA), which divides into two distal branches (MA1, MA2), and a median sector, or media posterior (MP), which has four terminal branches (M1, M2, M3, M4). In most modern insects, the media anterior has been lost, and the usual "media" is the four-branched media posterior with the common basal stem. In the Ephemerida, according to present interpretations of the wing venation, both branches of the media are retained, while in Odonata, the persisting media is the primitive anterior branch. The stem of the media is often united with the radius, but when it occurs as a distinct vein, its base is associated with the distal median plate (m') or is continuously sclerotized with the latter. The cubitus, the sixth vein of the wing, is primarily two-branched. The primary forking takes place near the base of the wing, forming the two principal branches (Cu1, Cu2). The anterior branch may break up into several secondary branches, but commonly it forks into two distal branches. The second branch of the cubitus (Cu2) in Hymenoptera, Trichoptera, and Lepidoptera was mistaken by Comstock and Needham for the first anal. Proximally, the main stem of the cubitus is associated with the distal median plate (m') of the wing base. The postcubitus (Pcu) is the first anal of the Comstock and Needham system. The postcubitus, however, has the status of an independent wing vein and should be recognized as such. In nymphal wings, its trachea arises between the cubital trachea and the group of vannal tracheae. In the mature wings of more generalized insects, the postcubitus is always associated proximally with the cubitus and is never intimately connected with the flexor sclerite (3Ax) of the wing base. In Neuroptera, Mecoptera, and Trichoptera, the postcubitus may be more closely associated with the vannal veins, but its base is always free from the latter.
The postcubitus is usually unbranched; primitively, it is two-branched. The vannal veins (1V to nV) are the anal veins immediately associated with the third axillary, and are directly affected by the movement of this sclerite that brings about the flexion of the wings. In number, the vannal veins vary from one to 12, according to the expansion of the vannal area of the wing. The vannal tracheae usually arise from a common tracheal stem in nymphal insects, and the veins are regarded as branches of a single anal vein. Distally, the vannal veins are either simple or branched. The jugal lobe of the wing is often occupied by a network of irregular veins, or it may be entirely membranous; sometimes it contains one or two distinct, small veins, the first jugal vein, or vena arcuata (1J), and the second jugal vein, or vena cardinalis (2J). All the veins of the wing are subject to secondary forking and union by cross-veins. In some orders of insects, the cross-veins are so numerous that the whole venational pattern becomes a close network of branching veins and cross-veins. Ordinarily, however, a definite number of cross-veins having specific locations occurs. Cross-veins are named for the veins they connect:
C-Sc cross-veins – run between the costa and subcosta
R cross-veins – run between adjacent branches of the radius
R-M cross-veins – run between the radius and media
M-Cu cross-veins – run between the media and cubitus
The more constant cross-veins are the humeral cross-vein (h) between the costa and subcosta, the radial cross-vein (r) between R and the first fork of Rs, the sectorial cross-vein (s) between the two forks of Rs, the median cross-vein (m-m) between M2 and M3, and the mediocubital cross-vein (m-cu) between the media and the cubitus.
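The naming convention implied by labels such as m-m and m-cu is mechanical: a cross-vein is written as the lower-cased abbreviations of the two veins it connects, joined by a hyphen (the humeral, radial, and sectorial cross-veins h, r, and s are conventional exceptions). A hypothetical one-line helper makes the pattern explicit:

```python
def cross_vein_name(anterior, posterior):
    """Name a cross-vein from the two connected veins' abbreviations,
    e.g. media + cubitus -> 'm-cu' (the mediocubital cross-vein)."""
    return f"{anterior.lower()}-{posterior.lower()}"
```

So `cross_vein_name("M", "Cu")` gives "m-cu", and `cross_vein_name("M", "M")` gives "m-m" for a cross-vein between two branches of the media.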
The veins of insect wings are characterized by a convex-concave placement, such as that seen in mayflies (i.e., concave is "down" and convex is "up"), and they alternate regularly in position and in their branching: whenever a vein forks, there is always an interpolated vein of the opposite position between the two branches. A concave vein forks into two concave veins (with the interpolated vein being convex), and the regular alternation of the veins is thus preserved. The veins of the wing appear to fall into an undulating pattern according to whether they tend to fold up or down when the wing is relaxed. The basal shafts of the veins are convex, but each vein forks distally into an anterior convex branch and a posterior concave branch. Thus, the costa and subcosta are regarded as convex and concave branches of a primary first vein, Rs is the concave branch of the radius, the posterior media is the concave branch of the media, Cu1 and Cu2 are respectively convex and concave, while the primitive postcubitus and the first vannal each have an anterior convex branch and a posterior concave branch. The convex or concave nature of the veins has been used as evidence in determining the identities of the persisting distal branches of the veins of modern insects, but it has not been demonstrated to be consistent for all wings. Fields Wing areas are delimited and subdivided by fold lines, along which the wings can fold, and flexion lines, which flex during flight. The fundamental distinction between the flexion lines and the fold lines is often blurred, as fold lines may permit some flexibility, and vice versa. Two constants found in nearly all insect wings are the claval furrow (a flexion line) and the jugal fold (a fold line); these form variable and somewhat unsatisfactory boundaries. Wing folding can be very complicated, with transverse folding occurring in the hindwings of Dermaptera and Coleoptera, and in some insects, the anal area can be folded like a fan.
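The alternation rule described above (each fork producing two branches of the parent's position with an interpolated vein of the opposite position between them) can be modeled directly. The sketch below is purely illustrative, with convex and concave encoded as +1 and -1; it checks that a fork, applied as stated, preserves the alternation across a wing section.

```python
CONVEX, CONCAVE = +1, -1

def fork(veins, i):
    """Replace the vein at index i with its two branches (same position as
    the parent) separated by an interpolated vein of opposite position."""
    s = veins[i]
    return veins[:i] + [s, -s, s] + veins[i + 1:]

def alternates(veins):
    """True if convex and concave veins alternate across the section."""
    return all(a != b for a, b in zip(veins, veins[1:]))

wing = [CONVEX, CONCAVE, CONVEX, CONCAVE]   # an alternating stretch of veins
assert alternates(wing)
wing = fork(wing, 3)        # fork a concave vein: two concave branches
assert alternates(wing)     # plus a convex interpolated vein; rule preserved
```

Forking any vein in an alternating sequence this way leaves the sequence alternating, which is the geometric point of the interpolated vein.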
The four different fields found on insect wings are:
Remigium
Anal area (vannus)
Jugal area
Axillary area
In some Diptera, an additional lobe, the alula, is also present. Most veins and cross-veins occur in the anterior area of the remigium, which is responsible for most of the flight, powered by the thoracic muscles. The posterior portion of the remigium is sometimes called the clavus; the two other posterior fields are the anal and jugal areas. When the vannal fold has the usual position anterior to the group of anal veins, the remigium contains the costal, subcostal, radial, medial, cubital, and postcubital veins. In the flexed wing, the remigium turns posteriorly on the flexible basal connection of the radius with the second axillary, and the base of the mediocubital field is folded medially on the axillary region along the plica basalis (bf) between the median plates (m, m') of the wing base. The vannus is bordered by the vannal fold, which typically occurs between the postcubitus and the first vannal vein. In Orthoptera, it usually has this position. In the forewing of Blattidae, however, the only fold in this part of the wing lies immediately before the postcubitus. In Plecoptera, the vannal fold is posterior to the postcubitus, but proximally it crosses the base of the first vannal vein. In the cicada, the vannal fold lies immediately behind the first vannal vein (1V). These small variations in the actual position of the vannal fold, however, do not affect the unity of action of the vannal veins, controlled by the flexor sclerite (3Ax), in the flexion of the wing. In the hindwings of most Orthoptera, a secondary vena dividens forms a rib in the vannal fold. The vannus is usually triangular in shape, and its veins typically spread out from the third axillary like the ribs of a fan. Some of the vannal veins may be branched, and secondary veins may alternate with the primary veins.
The vannal region is usually best developed in the hindwing, in which it may be enlarged to form a sustaining surface, as in Plecoptera and Orthoptera. The great fan-like expansions of the hindwings of Acrididae are clearly the vannal regions, since their veins are all supported on the third axillary sclerites on the wing bases, though Martynov (1925) ascribes most of the fan areas in Acrididae to the jugal regions of the wings. The true jugum of the acridid wing is represented only by the small membrane (Ju) mesad of the last vannal vein. The jugum is more highly developed in some other Orthoptera, as in the Mantidae. In most of the higher insects with narrow wings, the vannus becomes reduced, and the vannal fold is lost, but even in such cases, the flexed wing may bend along a line between the postcubitus and the first vannal vein. The jugal region, or neala, is a region of the wing that is usually a small membranous area proximal to the base of the vannus strengthened by a few small, irregular vein-like thickenings; but when well developed, it is a distinct section of the wing and may contain one or two jugal veins. When the jugal area of the forewing is developed as a free lobe, it projects beneath the humeral angle of the hindwing and thus serves to yoke the two wings together. In the Jugatae group of Lepidoptera, it bears a long finger-like lobe. The jugal region was termed the neala ("new wing") because it is a secondary and recently developed part of the wing. The axillary region, containing the axillary sclerites, has, in general, the form of a scalene triangle. The base of the triangle (a-b) is the hinge of the wing with the body; the apex (c) is the distal end of the third axillary sclerite; the longer side is anterior to the apex. Point d on the anterior side of the triangle marks the articulation of the radial vein with the second axillary sclerite.
The line between d and c is the plica basalis (bf), or fold of the wing at the base of the mediocubital field. At the posterior angle of the wing base in some Diptera, there is a pair of membranous lobes (squamae, or calypteres) known as the alula. The alula is well developed in the house fly. The outer squama (c) arises from the wing base behind the third axillary sclerite (3Ax) and represents the jugal lobe of other insects (A, D); the larger inner squama (d) arises from the posterior scutellar margin of the tergum of the wing-bearing segment and forms a protective, hood-like canopy over the halter. In the flexed wing, the outer squama of the alula is turned upside down above the inner squama, the latter not being affected by the movement of the wing. In many Diptera, a deep incision of the anal area of the wing membrane behind the single vannal vein sets off a proximal alar lobe distal to the outer squama of the alula. Joints The various movements of the wings, especially in insects that flex their wings horizontally over their backs when at rest, demand a more complicated articular structure at the wing base than a mere hinge of the wing with the body. Each wing is attached to the body by a membranous basal area, but the articular membrane contains several small articular sclerites, collectively known as the pteralia. The pteralia include an anterior humeral plate at the base of the costal vein, a group of axillaries (Ax) associated with the subcostal, radial, and vannal veins, and two less definite median plates (m, m') at the base of the mediocubital area. The axillaries are specifically developed only in wing-flexing insects, where they constitute the flexor mechanism of the wing operated by the flexor muscle arising on the pleuron.
Characteristic of the wing base is also a small lobe on the anterior margin of the articular area proximal to the humeral plate, which, in the forewing of some insects, is developed into a large, flat, scale-like flap, the tegula, overlapping the base of the wing. Posteriorly, the articular membrane often forms an ample lobe between the wing and the body, and its margin is generally thickened and corrugated, giving the appearance of a ligament, the so-called axillary cord, continuous mesally with the posterior marginal scutellar fold of the tergal plate bearing the wing. The articular sclerites, or pteralia, of the wing base of the wing-flexing insects and their relations to the body and the wing veins, shown diagrammatically, are as follows:
Humeral plates
First Axillary
Second Axillary
Third Axillary
Fourth Axillary
Median plates (m, m')
The humeral plate is usually a small sclerite on the anterior margin of the wing base, movable and articulated with the base of the costal vein. Odonata have their humeral plates greatly enlarged, with two muscles arising from the episternum inserted into the humeral plates and two from the edge of the epimeron inserted into the axillary plate. The first axillary sclerite (1Ax) is the anterior hinge plate of the wing base. Its anterior part is supported on the anterior notal wing process of the tergum (ANP); its posterior part articulates with the tergal margin. The anterior end of the sclerite is generally produced as a slender arm, the apex of which (e) is always associated with the base of the subcostal vein (Sc), though it is not united with the latter. The body of the sclerite articulates laterally with the second axillary. The second axillary sclerite (2Ax) is more variable in form than the first axillary, but its mechanical relations are no less definite. It is obliquely hinged to the outer margin of the body of the first axillary, and the radial vein (R) is always flexibly attached to its anterior end (d).
The second axillary presents both a dorsal and ventral sclerotization in the wing base; its ventral surface rests upon the fulcral wing process of the pleuron. The second axillary, therefore, is the pivotal sclerite of the wing base, and it specifically manipulates the radial vein. The third axillary sclerite (3Ax) lies in the posterior part of the articular region of the wing. Its form is highly variable and often irregular, but the third axillary is the sclerite on which is inserted the flexor muscle of the wing (D). Mesally, it articulates anteriorly (f) with the posterior end of the second axillary, and posteriorly (b) with the posterior wing process of the tergum (PNP), or with a small fourth axillary when the latter is present. Distally, the third axillary is prolonged in a process always associated with the bases of the group of veins in the anal region of the wing, here termed the vannal veins (V). The third axillary, therefore, is usually the posterior hinge plate of the wing base and is the active sclerite of the flexor mechanism, which directly manipulates the vannal veins. The contraction of the flexor muscle (D) revolves the third axillary on its mesal articulations (b, f), and thereby lifts its distal arm; this movement produces the flexion of the wing. The fourth axillary sclerite is not a constant element of the wing base. When present, it is usually a small plate intervening between the third axillary and the posterior notal wing process and is probably a detached piece of the latter. The median plates (m, m') are also sclerites that are not so definitely differentiated as specific plates as are the three principal axillaries, but they are important elements of the flexor apparatus. They lie in the median area of the wing base distal to the second and third axillaries and are separated from each other by an oblique line (bf), which forms a prominent convex fold during flexion of the wing. 
The proximal plate (m) is usually attached to the distal arm of the third axillary and perhaps should be regarded as a part of the latter. The distal plate (m') is less constantly present as a distinct sclerite and may be represented by a general sclerotization of the base of the mediocubital field of the wing. When the veins of this region are distinct at their bases, they are associated with the outer median plate. Coupling, folding, and other features In many insect species, the forewing and hindwing are coupled together, which improves the aerodynamic efficiency of flight. The most common coupling mechanism (e.g., Hymenoptera and Trichoptera) is a row of small hooks on the forward margin of the hindwing, or "hamuli", which lock onto the forewing, keeping them held together (hamulate coupling). In some other insect species (e.g., Mecoptera, Lepidoptera, and some Trichoptera), the jugal lobe of the forewing covers a portion of the hindwing (jugal coupling), or the margins of the forewing and hindwing overlap broadly (amplexiform coupling), or the hindwing bristles, or frenulum, hook under the retaining structure, or retinaculum, on the forewing (frenate coupling). When at rest, the wings are held over the back in most insects, which may involve longitudinal folding of the wing membrane and sometimes also transverse folding. Folding may sometimes occur along the flexion lines. Though fold lines may be transverse, as in the hindwings of beetles and earwigs, they are normally radial to the base of the wing, allowing adjacent sections of a wing to be folded over or under each other. The commonest fold line is the jugal fold of most Neoptera, situated just behind the third anal vein (3A) on the forewings; it is sometimes also present on the hindwings.
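The wing-coupling mechanisms described above form a small classification, which can be laid out as a lookup table. This is an illustrative sketch only; the mechanism names and the example orders are those given in the text (amplexiform and frenate coupling are left without example orders because the text names none).

```python
# Sketch: coupling mechanism -> (description, example orders from the text).
WING_COUPLING = {
    "hamulate":    ("row of hooks (hamuli) on the hindwing's forward margin "
                    "locks onto the forewing",
                    ["Hymenoptera", "Trichoptera"]),
    "jugal":       ("jugal lobe of the forewing covers part of the hindwing",
                    ["Mecoptera", "Lepidoptera", "Trichoptera"]),
    "amplexiform": ("forewing and hindwing margins overlap broadly", []),
    "frenate":     ("hindwing bristles (the frenulum) hook under the "
                    "retinaculum on the forewing", []),
}

def orders_using(mechanism):
    """Example orders recorded for a given coupling mechanism."""
    return WING_COUPLING[mechanism][1]
```

For instance, `orders_using("hamulate")` returns the two example orders with hamuli.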
Where the anal area of the hindwing is large, as in Orthoptera and Blattodea, the whole of this part may be folded under the anterior part of the wing along a vannal fold a little posterior to the claval furrow. In addition, in Orthoptera and Blattodea, the anal area is folded like a fan along the veins, the anal veins being convex, at the crests of the folds, and the accessory veins concave. Whereas the claval furrow and jugal fold are probably homologous in different species, the vannal fold varies in position in different taxa. Folding is produced by a muscle arising on the pleuron and inserted into the third axillary sclerite in such a way that, when it contracts, the sclerite pivots about its points of articulation with the posterior notal process and the second axillary sclerite. As a result, the distal arm of the third axillary sclerite rotates upwards and inwards, so that finally its position is completely reversed. The anal veins are articulated with this sclerite in such a way that when it moves they are carried with it and become flexed over the back of the insect. Activity of the same muscle in flight affects the power output of the wing, and so it is also important in flight control. In orthopteroid insects, the elasticity of the cuticle causes the vannal area of the wing to fold along the veins. Consequently, energy is expended in unfolding this region when the wings are moved to the flight position. In general, wing extension probably results from the contraction of muscles attached to the basalar sclerite or, in some insects, to the subalar sclerite. Legs The typical and usual segments of the insect leg are divided into the coxa, one trochanter, the femur, the tibia, the tarsus, and the pretarsus. The coxa, in its more symmetrical form, has the shape of a short cylinder or truncate cone, though commonly it is ovate and may be almost spherical.
The proximal end of the coxa is girdled by a submarginal basicostal suture that forms internally a ridge, or basicosta, and sets off a marginal flange, the coxomarginale, or basicoxite. The basicosta strengthens the base of the coxa and is commonly enlarged on the outer wall to give insertion to muscles; on the mesal half of the coxa, however, it is usually weak and often confluent with the coxal margin. The trochanteral muscles that take their origin in the coxa are always attached distally to the basicosta. The coxa is attached to the body by an articular membrane, the coxal corium, which surrounds its base. Typically, the coxa articulates with the body at a dorsal pleural point and, in many insects, at a ventral sternal point as well; these two articulations are perhaps the primary dorsal and ventral articular points of the subcoxo-coxal hinge. In addition, the insect coxa has often an anterior articulation with the anterior, ventral end of the trochantin, but the trochantinal articulation does not coexist with a sternal articulation. The pleural articular surface of the coxa is borne on a mesal inflection of the coxal wall. If the coxa is movable on the pleural articulation alone, the coxal articular surface is usually inflected to a sufficient depth to give leverage to the abductor muscles inserted on the outer rim of the coxal base. Distally, the coxa bears an anterior and a posterior articulation with the trochanter. The outer wall of the coxa is often marked by a suture extending from the base to the anterior trochanteral articulation. In some insects, the coxal suture falls in line with the pleural suture. In such cases, the coxa appears to be divided into two parts corresponding to the episternum and epimeron of the pleuron. The coxal suture is absent in many insects. The inflection of the coxal wall bearing the pleural articular surface divides the lateral wall of the basicoxite into a prearticular part and a postarticular part, and the two areas often appear as two marginal lobes on the base of the coxa. The posterior lobe is usually the larger and is termed the meron.
The meron may be greatly enlarged by an extension distally in the posterior wall of the coxa; in the Neuroptera, Mecoptera, Trichoptera, and Lepidoptera, the meron is so large that the coxa appears to be divided into an anterior piece, the so-called "coxa genuina," and the meron, but the meron never includes the region of the posterior trochanteral articulation, and the groove delimiting it is always a part of the basicostal suture. A coxa with an enlarged meron has an appearance similar to one divided by a coxal suture falling in line with the pleural suture, but the two conditions are fundamentally quite different and should not be confused. The meron reaches the extreme of its departure from the usual condition in the Diptera. In some of the more generalized flies, as in the Tipulidae, the meron of the middle leg appears as a large lobe of the coxa projecting upward and posteriorly from the coxal base; in higher members of the order, it becomes completely separated from the coxa and forms a plate of the lateral wall of the mesothorax. The trochanter is the basal segment of the telopodite; it is always a small segment in the insect leg, freely movable by a horizontal hinge on the coxa, but more or less fixed to the base of the femur. When movable on the femur, the trochantero-femoral hinge is usually vertical or oblique in a vertical plane, giving a slight movement of production and reduction at the joint, though only a reductor muscle is present. In the Odonata, both nymphs and adults, there are two trochanteral segments, but they are not movable on each other; the second contains the reductor muscle of the femur. The usual single trochanteral segment of insects, therefore, probably represents the two trochanters of other arthropods fused into one apparent segment, since it is not likely that the primary coxotrochanteral hinge has been lost from the leg.
In some of the Hymenoptera, a basal subdivision of the femur simulates a second trochanter, but the insertion of the reductor muscle on its base attests that it belongs to the femoral segment, since, as shown in the odonate leg, the reductor has its origin in the true second trochanter. The femur, the third segment of the insect leg, is usually the longest and strongest part of the limb, but it varies in size from the huge hind femur of leaping Orthoptera to a very small segment such as is present in many larval forms. The volume of the femur is generally correlated with the size of the tibial muscles contained within it, but it is sometimes enlarged and modified in shape for purposes other than accommodating the tibial muscles. The tibia is characteristically a slender segment in adult insects, only a little shorter than the femur or the combined femur and trochanter. Its proximal end forms a more or less distinct head bent toward the femur, a device allowing the tibia to be flexed close against the undersurface of the femur. The terms profemur, mesofemur, and metafemur refer to the femora of the front, middle, and hind legs of an insect, respectively. Similarly, protibia, mesotibia, and metatibia refer to the tibiae of the front, middle, and hind legs. The tarsus of insects corresponds to the penultimate segment of a generalized arthropod limb, which is the segment called the propodite in Crustacea. In adult insects, it is commonly subdivided into two to five subsegments, or tarsomeres, but in the Protura, some Collembola, and most holometabolous insect larvae, it preserves the primitive form of a simple segment. The subsegments of the adult insect tarsus are usually freely movable on one another by inflected connecting membranes, but the tarsus never has intrinsic muscles. The tarsus of adult pterygote insects having fewer than five subsegments is probably specialized by the loss of one or more subsegments or by a fusion of adjoining subsegments.
In the tarsi of Acrididae, the long basal piece is composed of three united tarsomeres, leaving the fourth and the fifth distinct. The basal tarsomere is sometimes conspicuously enlarged and is distinguished as the basitarsus. On the under surfaces of the tarsal subsegments in certain Orthoptera, there are small pads, the tarsal pulvilli, or euplantulae. The tarsus is occasionally fused with the tibia in larval insects, forming a tibiotarsal segment; in some cases, it appears to be eliminated or reduced to a rudiment between the tibia and the pretarsus. For the most part, the femur and tibia are the longest leg segments, but variations in the lengths and robustness of each segment relate to their functions. For example, gressorial and cursorial, or walking and running type insects, respectively, usually have well-developed femora and tibiae on all legs, whereas jumping (saltatorial) insects such as grasshoppers have disproportionately developed metafemora and metatibiae. In aquatic beetles (Coleoptera) and bugs (Hemiptera), the tibiae and/or tarsi of one or more pairs of legs usually are modified for swimming (natatorial) with fringes of long, slender hairs. Many ground-dwelling insects, such as mole crickets (Orthoptera: Gryllotalpidae), nymphal cicadas (Hemiptera: Cicadidae), and scarab beetles (Scarabaeidae), have the tibiae of the forelegs (protibiae) enlarged and modified for digging (fossorial), whereas the forelegs of some predatory insects, such as mantispid lacewings (Neuroptera) and mantids (Mantodea), are specialized for seizing prey, or raptorial. The tibia and basal tarsomere of each hindleg of honey bees are modified for the collection and carriage of pollen. Abdomen The ground plan of the abdomen of an adult insect typically consists of 11–12 segments and is less strongly sclerotized than the head or thorax. Each segment of the abdomen is represented by a sclerotized tergum, sternum, and perhaps a pleurite.
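The functional leg terms used above (gressorial, cursorial, saltatorial, natatorial, fossorial, raptorial) can be collected into a quick-reference table. This is an illustrative sketch only; the example taxa are the ones named in the text.

```python
# Sketch: functional leg term -> function, with examples from the text.
LEG_TYPES = {
    "gressorial":  "walking",
    "cursorial":   "running",
    "saltatorial": "jumping, e.g. the enlarged metafemora and metatibiae "
                   "of grasshoppers",
    "natatorial":  "swimming, e.g. the hair-fringed tibiae and tarsi of "
                   "aquatic beetles and bugs",
    "fossorial":   "digging, e.g. the enlarged protibiae of mole crickets",
    "raptorial":   "seizing prey, e.g. the forelegs of mantids",
}
```

A lookup such as `LEG_TYPES["fossorial"]` then recalls both the function and the classic example.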
Terga are separated from each other and from the adjacent sterna or pleura by a membrane. Spiracles are located in the pleural area. Variation of this ground plan includes the fusion of terga or terga and sterna to form continuous dorsal or ventral shields or a conical tube. Some insects bear a sclerite in the pleural area called a laterotergite. Ventral sclerites are sometimes called laterosternites. During the embryonic stage of many insects and the postembryonic stage of primitive insects, 11 abdominal segments are present. In modern insects, there is a tendency toward reduction in the number of the abdominal segments, but the primitive number of 11 is maintained during embryogenesis. Variation in abdominal segment number is considerable. If the Apterygota are considered to be indicative of the ground plan for pterygotes, confusion reigns: adult Protura have 12 segments, Collembola have 6. The orthopteran family Acrididae has 11 segments, and a fossil specimen of Zoraptera has a 10-segmented abdomen. Generally, the first seven abdominal segments of adults (the pregenital segments) are similar in structure and lack appendages. However, apterygotes (bristletails and silverfish) and many immature aquatic insects have abdominal appendages. Apterygotes possess a pair of styli, rudimentary appendages that are serially homologous with the distal part of the thoracic legs, and, mesally, one or two pairs of protrusible (or exsertile) vesicles on at least some abdominal segments. These vesicles are derived from the coxal and trochanteral endites (inner annulated lobes) of the ancestral abdominal appendages. Aquatic larvae and nymphs may have gills laterally on some to most abdominal segments. The remaining abdominal segments consist of the reproductive and anal parts. External genitalia The organs concerned specifically with mating and the deposition of eggs are known collectively as the external genitalia, although they may be largely internal.
The components of the external genitalia of insects are very diverse in form and often have considerable taxonomic value, particularly among species that appear structurally similar in other respects. The male external genitalia have been used widely to aid in distinguishing species, whereas the female external genitalia may be simpler and less varied. The terminalia of adult female insects include internal structures for receiving the male copulatory organ and his spermatozoa and external structures used for oviposition (egg-laying). Segments 8 and 9 bear the genitalia; segment 10 is visible as a complete segment in many "lower" insects but always lacks appendages. The anal-genital part of the abdomen, known as the terminalia, consists generally of segments 8 or 9 to the abdominal apex. Most female insects have an egg-laying tube, or ovipositor; it is absent in termites, parasitic lice, many Plecoptera, and most Ephemeroptera. Ovipositors take two forms: a true ovipositor, formed from appendages of segments 8 and 9; and a substitutional ovipositor, composed of extensible posterior abdominal segments. Other appendages The terminal abdominal segments have excretory and sensory functions in all insects, besides the reproductive function in adults. The small segment 11 may be represented by an epiproct (usually a dorsal plate or filament above the anus of certain insects); other appendages include: the paraprocts: paired plate-like appendages also derived from the sternum at the side of the tip of the abdomen, often most apparent in certain basal orders such as Odonata; the cerci: a pair of appendages which articulate laterally on segment 11; typically, these are annulated and filamentous but have been modified (e.g. the forceps of earwigs) or reduced in different insect orders; a central caudal filament, prolongation or median appendix dorsalis, which arises from the tip of the epiproct in certain apterygotes, many mayflies (Ephemeroptera), and a few fossil insects. A similar structure in nymphal stoneflies (Plecoptera) is of uncertain homology. 
Internal Nervous system The nervous system of an insect can be divided into a brain and a ventral nerve cord. The head capsule is made up of six fused segments, each with a pair of ganglia, or a cluster of nerve cells outside of the brain. The first three pairs of ganglia are fused into the brain, while the three following pairs are fused into a structure of three pairs of ganglia under the insect's esophagus, called the subesophageal ganglion. The thoracic segments have one ganglion on each side, which are connected into a pair, one pair per segment. This arrangement is also seen in the abdomen but only in the first eight segments. Many species of insects have reduced numbers of ganglia due to fusion or reduction. Some cockroaches have just six ganglia in the abdomen, whereas the wasp Vespa crabro has only two in the thorax and three in the abdomen. Some insects, like the house fly Musca domestica, have all the body ganglia fused into a single large thoracic ganglion. At least a few insects have nociceptors, cells that detect and transmit sensations of pain. This was discovered in 2003 by studying the variation in reactions of larvae of the common fruitfly Drosophila to the touch of a heated probe and an unheated one. The larvae reacted to the touch of the heated probe with a stereotypical rolling behavior that was not exhibited when the larvae were touched by the unheated probe. Although nociception has been demonstrated in insects, there is not a consensus that insects feel pain consciously. Digestive system An insect uses its digestive system for all steps in food processing: digestion, absorption, and feces delivery and elimination. Most of this food is ingested in the form of macromolecules and other complex substances like proteins, polysaccharides, fats, and nucleic acids. 
These macromolecules must be broken down by catabolic reactions into smaller molecules like amino acids and simple sugars before being used by cells of the body for energy, growth, or reproduction. This break-down process is known as digestion. The main structure of an insect's digestive system is a long enclosed tube called the alimentary canal (or gut), which runs lengthwise through the body. The alimentary canal directs food in one direction: from the mouth to the anus. The gut is where almost all of insects' digestion takes place. It can be divided into three sections – the foregut, midgut and hindgut – each of which performs a different process of digestion. In addition to the alimentary canal, insects also have paired salivary glands and salivary reservoirs. These structures usually reside in the thorax, adjacent to the foregut. Foregut The first section of the alimentary canal is the foregut (element 27 in numbered diagram), or stomodaeum. The foregut is lined with a cuticular lining made of chitin and proteins as protection from tough food. The foregut includes the buccal cavity (mouth), pharynx, esophagus, crop, and proventriculus (any part may be highly modified); the crop and proventriculus store food and control its passage onward to the midgut. Here, digestion starts as partially chewed food is broken down by saliva from the salivary glands. As the salivary glands produce fluid and carbohydrate-digesting enzymes (mostly amylases), strong muscles in the pharynx pump fluid into the buccal cavity, lubricating the food like the salivarium does, and aiding blood feeders and xylem and phloem feeders. From there, the pharynx passes food to the esophagus, which may be just a simple tube passing it on to the crop and proventriculus, and then onward to the midgut, as in most insects. 
Alternatively, the foregut may expand into a very enlarged crop and proventriculus, or the crop could just be a diverticulum, or fluid-filled structure, as in some Diptera species. The salivary glands (element 30 in numbered diagram) in an insect's mouth produce saliva. The salivary ducts lead from the glands to the reservoirs and then forward through the head to an opening called the salivarium, located behind the hypopharynx. By moving its mouthparts (element 32 in numbered diagram) the insect can mix its food with saliva. The mixture of saliva and food then travels through the salivary tubes into the mouth, where it begins to break down. Some insects, like flies, have extra-oral digestion. Insects using extra-oral digestion expel digestive enzymes onto their food to break it down. This strategy allows insects to extract a significant proportion of the available nutrients from the food source. Midgut Once food leaves the crop, it passes to the midgut (element 13 in numbered diagram), also known as the mesenteron, where the majority of digestion takes place. Microscopic projections from the midgut wall, called microvilli, increase the surface area of the wall and allow more nutrients to be absorbed; they tend to be close to the origin of the midgut. In some insects, the role of the microvilli and where they are located may vary. For example, specialized microvilli producing digestive enzymes may more likely be near the end of the midgut, and absorption near the origin or beginning of the midgut. In the wingless (apterygote) orders Archaeognatha and Zygentoma (and the entognathous hexapods), the midgut epithelium is derived entirely from yolk cells. In the majority of the flying insects (Neoptera), it is formed through bipolar formation. 
The Palaeoptera (mayflies and dragonflies) show a transition between apterygotes and neopterans, where the middle part of the midgut epithelium is derived from yolk cells and the anterior and posterior parts are formed through bipolar formation. Hindgut In the hindgut (element 16 in numbered diagram), or proctodaeum, undigested food particles are joined by uric acid to form fecal pellets. The rectum absorbs 90% of the water in these fecal pellets, and the dry pellet is then eliminated through the anus (element 17), completing the process of digestion. The uric acid is formed using hemolymph waste products diffused from the Malpighian tubules (element 20). It is then emptied directly into the alimentary canal, at the junction between the midgut and hindgut. The number of Malpighian tubules possessed by a given insect varies between species, ranging from only two tubules in some insects to over 100 tubules in others. Respiratory systems Insect respiration is accomplished without lungs. Instead, the insect respiratory system uses a system of internal tubes and sacs through which gases either diffuse or are actively pumped, delivering oxygen directly to tissues that need it via their trachea (element 8 in numbered diagram). Since oxygen is delivered directly, the circulatory system is not used to carry oxygen, and is therefore greatly reduced. The insect circulatory system has no veins or arteries, and instead consists of little more than a single, perforated dorsal tube that pulses peristaltically. Toward the thorax, the dorsal tube (element 14) divides into chambers and acts like the insect's heart. The opposite end of the dorsal tube is like the aorta of the insect circulating the hemolymph, arthropods' fluid analog of blood, inside the body cavity. Air is taken in through openings on the sides of the abdomen called spiracles. There are many different patterns of gas exchange demonstrated by different groups of insects. 
Gas exchange patterns in insects can range from continuous and diffusive ventilation to discontinuous gas exchange. During continuous gas exchange, oxygen is taken in and carbon dioxide is released in a continuous cycle. In discontinuous gas exchange, however, the insect takes in oxygen while it is active and small amounts of carbon dioxide are released when the insect is at rest. Diffusive ventilation is simply a form of continuous gas exchange that occurs by diffusion rather than by physically taking in the oxygen. Some species of insect that are submerged also have adaptations to aid in respiration. As larvae, many insects have gills that can extract oxygen dissolved in water, while others need to rise to the water surface to replenish air supplies, which may be held or trapped in special structures. Circulatory system The main function of insect blood, or haemolymph, is transport; it bathes the insect's body organs. Making up usually less than 25% of an insect's body weight, it transports hormones, nutrients and wastes, and has a role in osmoregulation, temperature control, immunity, storage (water, carbohydrates and fats) and skeletal function. It also plays an essential part in the moulting process. In some orders, the haemolymph has an additional role in defence against predators: it can contain unpalatable and malodorous chemicals that act as a deterrent. Haemolymph contains molecules, ions and cells; it regulates chemical exchanges between tissues and is enclosed in the insect body cavity, or haemocoel. It is transported around the body by combined heart (posterior) and aorta (anterior) pulsations, which are located dorsally just under the surface of the body. It differs from vertebrate blood in that it does not contain any red blood cells and therefore lacks high oxygen-carrying capacity; it is more similar to the lymph found in vertebrates. 
Body fluids enter through one-way valved ostia, which are openings situated along the length of the combined aorta and heart organ. Pumping of the haemolymph occurs by waves of peristaltic contraction, originating at the body's posterior end, pumping forwards into the dorsal vessel, out via the aorta and then into the head, where it flows out into the haemocoel. The haemolymph is circulated to the appendages unidirectionally with the aid of muscular pumps or accessory pulsatile organs, usually found at the base of the antennae or wings and sometimes in the legs, with pumping rates accelerating with periods of increased activity. Movement of haemolymph is particularly important for thermoregulation in orders such as Odonata, Lepidoptera, Hymenoptera and Diptera. Endocrine system These glands are part of the endocrine system: 1. Neurosecretory cells 2. Corpora cardiaca 3. Prothoracic glands 4. Corpora allata Reproductive system Female Female insects are able to make eggs, receive and store sperm, manipulate sperm from different males, and lay eggs. Their reproductive systems are made up of a pair of ovaries, accessory glands, one or more spermathecae, and ducts connecting these parts. The ovaries make eggs and the accessory glands produce the substances to help package and lay the eggs. Spermathecae store sperm for varying periods of time and, along with portions of the oviducts, can control sperm use. The ducts and spermathecae are lined with a cuticle. The ovaries are made up of a number of egg tubes, called ovarioles, which vary in size and number by species. The number of eggs that the insect is able to make varies with the number of ovarioles, and the rate at which eggs develop is also influenced by ovariole design. In meroistic ovaries, the eggs-to-be divide repeatedly and most of the daughter cells become helper cells for a single oocyte in the cluster. 
In panoistic ovaries, each egg-to-be produced by stem germ cells develops into an oocyte; there are no helper cells from the germ line. Production of eggs by panoistic ovaries tends to be slower than that by meroistic ovaries. Accessory glands or glandular parts of the oviducts produce a variety of substances for sperm maintenance, transport, and fertilization, as well as for protection of eggs. They can produce glue and protective substances for coating eggs or tough coverings for a batch of eggs called oothecae. Spermathecae are tubes or sacs in which sperm can be stored between the time of mating and the time an egg is fertilized. Paternity testing of insects has revealed that some, and probably many, female insects use the spermatheca and various ducts to control or bias sperm used in favor of some males over others. Male The main component of the male reproductive system is the testis, suspended in the body cavity by tracheae and the fat body. The more primitive apterygote insects have a single testis, and in some lepidopterans the two maturing testes are secondarily fused into one structure during the later stages of larval development, although the ducts leading from them remain separate. However, most male insects have a pair of testes, inside of which are sperm tubes or follicles that are enclosed within a membranous sac. The follicles connect to the vas deferens by the vas efferens, and the two tubular vasa deferentia connect to a median ejaculatory duct that leads to the outside. A portion of the vas deferens is often enlarged to form the seminal vesicle, which stores the sperm before they are discharged into the female. The seminal vesicles have glandular linings that secrete nutrients for nourishment and maintenance of the sperm. The ejaculatory duct is derived from an invagination of the epidermal cells during development and, as a result, has a cuticular lining. 
The terminal portion of the ejaculatory duct may be sclerotized to form the intromittent organ, the aedeagus. The remainder of the male reproductive system is derived from embryonic mesoderm, except for the germ cells, or spermatogonia, which descend from the primordial pole cells very early during embryogenesis. The aedeagus can be quite pronounced or minimal. The base of the aedeagus may be the partially sclerotized phallotheca, also called the phallosoma or theca. In some species the phallotheca contains a space, called the endosoma (internal holding pouch), into which the tip of the aedeagus may be withdrawn (retracted). The vas deferens is sometimes drawn into (folded into) the phallotheca together with a seminal vesicle. Internal morphology of different taxa Blattodea Cockroaches are most common in tropical and subtropical climates. Some species are in close association with human dwellings and are widely found around garbage or in the kitchen. Cockroaches are generally omnivorous, with the exception of wood-eating species such as Cryptocercus; these roaches are incapable of digesting cellulose themselves but have symbiotic relationships with various protozoans and bacteria that digest the cellulose, allowing them to extract the nutrients. The similarity of these symbionts in the genus Cryptocercus to those in termites is such that it has been suggested that Cryptocercus is more closely related to termites than to other cockroaches, and current research strongly supports this hypothesis of relationships. All species studied so far carry the obligate mutualistic endosymbiont bacterium Blattabacterium, with the exception of Nocticola australiensis, an Australian cave-dwelling species without eyes, pigment or wings, which recent genetic studies indicate is a very primitive cockroach. Cockroaches, like all insects, breathe through a system of tubes called tracheae. The tracheae of insects are attached to the spiracles, excluding the head. 
Thus cockroaches, like all insects, are not dependent on the mouth and windpipe to breathe. The valves open when the CO2 level in the insect rises to a high level; then the CO2 diffuses out of the tracheae to the outside and fresh O2 diffuses in. Unlike vertebrates, which depend on blood for transporting O2 and CO2, the tracheal system brings the air directly to cells, the tracheal tubes branching continually like a tree until their finest divisions, tracheoles, are associated with each cell, allowing gaseous oxygen to dissolve in the cytoplasm lying across the fine cuticle lining of the tracheole. CO2 diffuses out of the cell into the tracheole. While cockroaches do not have lungs and thus do not actively breathe in the vertebrate lung manner, in some very large species the body musculature may contract rhythmically to forcibly move air out of and into the spiracles; this may be considered a form of breathing. Coleoptera The digestive system of beetles is primarily adapted to plants, upon which most species feed, with the anterior midgut performing most digestion. However, in predatory species (e.g., Carabidae) most digestion occurs in the crop by means of midgut enzymes. In Elateridae species, the predatory larvae defecate enzymes onto their prey, with digestion occurring extraorally. The alimentary canal basically comprises a short narrow pharynx; a widened expansion, the crop; and a poorly developed gizzard. This is followed by a midgut, which varies in dimensions between species and bears a large number of ceca, and a hindgut of varying length. There are typically four to six Malpighian tubules. The nervous system in beetles contains all the types found in insects, varying between different species, from forms in which three thoracic and seven or eight abdominal ganglia can be distinguished, to forms in which all the thoracic and abdominal ganglia are fused into a composite structure. Oxygen is obtained via a tracheal system. 
Air enters a series of tubes along the body through openings called spiracles, and is then taken into increasingly finer fibers. Pumping movements of the body force the air through the system. Some species of diving beetles (Dytiscidae) carry a bubble of air with them whenever they dive beneath the water surface. This bubble may be held under the elytra, or it may be trapped against the body by specialized hairs. The bubble usually covers one or more spiracles so the insect can breathe air from the bubble while submerged. An air bubble provides an insect with only a short-term supply of oxygen, but thanks to its unique physical properties, oxygen will diffuse into the bubble as it is used, displacing the nitrogen (a process called passive diffusion); nevertheless, the volume of the bubble eventually diminishes, and the beetle has to return to the surface. Like other insect species, beetles have hemolymph instead of blood. The open circulatory system of the beetle is driven by a tube-like heart attached to the top inside of the thorax. Different glands specialize in different pheromones produced for finding mates. Pheromones of species of Rutelinae are produced from epithelial cells lining the inner surface of the apical abdominal segments, while the amino acid-based pheromones of Melolonthinae are produced from eversible glands on the abdominal apex. Other species produce different types of pheromones: dermestids produce esters, and species of Elateridae produce fatty-acid-derived aldehydes and acetates. Also as a means of finding a mate, fireflies (Lampyridae) utilize modified fat body cells with transparent surfaces backed with reflective uric acid crystals to produce light biosynthetically, or bioluminescence. The light produced is highly efficient, as it is generated by oxidation of luciferin by the enzyme luciferase in the presence of ATP (adenosine triphosphate) and oxygen, producing oxyluciferin, carbon dioxide, and light. 
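The firefly light reaction just described can be summarized as a single schematic equation (a simplification: the AMP and pyrophosphate products and the intermediate luciferyl adenylate step are inferred from the standard luciferin–luciferase chemistry, not stated in the text):

```latex
\text{luciferin} + \mathrm{ATP} + \mathrm{O_2}
  \xrightarrow{\text{luciferase}}
  \text{oxyluciferin} + \mathrm{AMP} + \mathrm{PP_i} + \mathrm{CO_2} + h\nu
```

The efficiency noted in the text reflects that most of the reaction energy leaves as the photon (hν) rather than as heat.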
A notable number of species have developed special glands that produce chemicals for deterring predators (see Defense and predation). The defensive glands of ground beetles (Carabidae), located at the posterior, produce a variety of hydrocarbons, aldehydes, phenols, quinones, esters, and acids released from an opening at the end of the abdomen. African carabid beetles (e.g., Anthia, some of which used to comprise the genus Thermophilum) employ the same chemical as ants: formic acid. Bombardier beetles have well-developed pygidial glands, like other carabid beetles, that empty from the lateral edges of the intersegmental membranes between the seventh and eighth abdominal segments. Each gland is made of two chambers: the first holds hydroquinones and hydrogen peroxide, while the second holds hydrogen peroxide plus catalases. These chemicals mix and result in an explosive ejection, reaching temperatures of around 100 °C, with hydroquinone oxidized to quinone and hydrogen peroxide decomposed to water and oxygen, the liberated O2 propelling the excretion. Tympanal organs are hearing organs. Such an organ is generally a membrane (tympanum) stretched across a frame backed by an air sac and associated sensory neurones. In the order Coleoptera, tympanal organs have been described in at least two families. Several species of the genus Cicindela in the family Cicindelidae have ears on the dorsal surface of the first abdominal segment beneath the wing; two tribes in the subfamily Dynastinae (Scarabaeidae) have ears just beneath the pronotal shield or neck membrane. The ears of both groups are sensitive to ultrasonic frequencies, with strong evidence that they function to detect the presence of bats via their ultrasonic echolocation. Even though beetles constitute a large order and live in a variety of niches, examples of hearing are surprisingly scarce among species, though it is likely that most are simply undiscovered. Dermaptera The neuroendocrine system is typical of insects. 
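The bombardier beetle's two-chamber chemistry can be sketched as a pair of coupled exothermic reactions (a simplified overall scheme; enzymes in the second chamber drive both the oxidation and the decomposition):

```latex
\mathrm{C_6H_4(OH)_2} + \mathrm{H_2O_2} \longrightarrow \mathrm{C_6H_4O_2} + 2\,\mathrm{H_2O}
\qquad
2\,\mathrm{H_2O_2} \xrightarrow{\text{catalase}} 2\,\mathrm{H_2O} + \mathrm{O_2}
```

The heat of these reactions brings the mixture to around 100 °C, and the liberated O2 provides the propellant for the spray.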
There is a brain, a subesophageal ganglion, three thoracic ganglia, and six abdominal ganglia. Strong neural connections link the neurohemal corpora cardiaca to the brain and frontal ganglion, where the closely related median corpus allatum produces juvenile hormone III in close proximity to the neurohemal dorsal aorta. The digestive system of earwigs is like that of all other insects, consisting of a fore-, mid-, and hindgut, but earwigs lack the gastric caeca that are specialized for digestion in many species of insect. Long, slender (excretory) Malpighian tubules can be found at the junction of the mid- and hindgut. The reproductive system of females consists of paired ovaries, lateral oviducts, a spermatheca, and a genital chamber. The lateral ducts are where the eggs leave the body, while the spermatheca is where sperm is stored. Unlike in other insects, the gonopore, or genital opening, is behind the seventh abdominal segment. The ovaries are primitive in that they are polytrophic (the nurse cells and oocytes alternate along the length of the ovariole). In some species these long ovarioles branch off the lateral duct, while in others, short ovarioles appear around the duct. Diptera The genitalia of female flies are rotated to a varying degree from the position found in other insects. In some flies this is a temporary rotation during mating, but in others it is a permanent torsion of the organs that occurs during the pupal stage. This torsion may lead to the anus being located below the genitals, or, in the case of 360° torsion, to the sperm duct being wrapped around the gut, despite the external organs being in their usual position. When flies mate, the male initially flies on top of the female, facing in the same direction, but then turns around to face in the opposite direction. 
This forces the male to lie on its back in order for its genitalia to remain engaged with those of the female, or the torsion of the male genitals allows the male to mate while remaining upright. This gives flies greater reproductive capacity than most insects, and at a much quicker rate; flies occur in great populations because of their ability to mate effectively within a short period, especially during the mating season. The female lays her eggs as close to the food source as possible, and development is very rapid, allowing the larva to consume as much food as possible in a short period of time before transforming into the adult. The eggs hatch soon after being laid, or the flies are ovoviviparous, with the larvae hatching inside the mother. Larval flies, or maggots, have no true legs, and little demarcation between the thorax and abdomen; in the more derived species, the head is not distinguishable from the rest of the body. Maggots are limbless, or else have small prolegs. The eyes and antennae are reduced or absent, and the abdomen also lacks appendages such as cerci. This lack of features is an adaptation to a food-rich environment, such as within rotting organic matter, or as an endoparasite. The pupae take various forms, and in some cases develop inside a silk cocoon. After emerging from the pupa, the adult fly rarely lives more than a few days, and serves mainly to reproduce and to disperse in search of new food sources. Lepidoptera In the reproductive system of butterflies and moths, the male genitalia are complex and their homologies unclear. In females there are three types of genitalia, depending on the taxa: monotrysian, exoporian, and ditrysian. In the monotrysian type, there is an opening on the fused segments of sterna 9 and 10, which serves for both insemination and oviposition. 
In the exoporian type (in Hepialoidea and Mnesarchaeoidea), there are two separate places for insemination and oviposition, both occurring on the same sterna as in the monotrysian type, 9/10. In most species the genitalia are flanked by two soft lobes, although they may be specialized and sclerotized in some species for ovipositing in areas such as crevices and inside plant tissue. Hormones and the glands that produce them, together called the endocrine system, govern the development of butterflies and moths as they go through their life cycle. The first insect hormone discovered, PTTH (prothoracicotropic hormone), regulates the species' life cycle and diapause. This hormone is stored in the corpora allata and corpora cardiaca. Some glands are specialized to perform certain tasks, such as producing silk or producing saliva in the palpi. While the corpora cardiaca release PTTH, the corpora allata also produce juvenile hormones, and the prothoracic glands produce moulting hormones. In the digestive system, the anterior region of the foregut has been modified to form a pharyngeal sucking pump, needed for the food they eat, which is for the most part liquid. An esophagus follows, leading from the posterior of the pharynx, and in some species forms a crop. The midgut is short and straight, with the hindgut being longer and coiled. Ancestors of lepidopteran species had midgut ceca, although these are lost in current butterflies and moths. Instead, all digestive enzymes other than those for initial digestion are immobilized at the surface of the midgut cells. In larvae, long-necked and stalked goblet cells are found in the anterior and posterior midgut regions, respectively. In insects, the goblet cells excrete positive potassium ions, which are absorbed from leaves ingested by the larvae. 
Most butterflies and moths display the usual digestive cycle; however, species that have a different diet require adaptations to meet these new demands. In the circulatory system, hemolymph, or insect blood, is used to circulate heat in a form of thermoregulation, in which muscle contraction produces heat that is transferred to the rest of the body when conditions are unfavorable. In lepidopteran species, hemolymph is circulated through the veins in the wings by some form of pulsating organ, either by the heart or by the intake of air into the trachea. Air is taken in through spiracles along the sides of the abdomen and thorax, supplying the trachea with oxygen as it goes through the lepidopteran's respiratory system. There are three different groups of tracheae supplying and diffusing oxygen throughout the body: the dorsal, ventral, and visceral. The dorsal tracheae supply oxygen to the dorsal musculature and vessels, while the ventral tracheae supply the ventral musculature and nerve cord, and the visceral tracheae supply the guts, fat bodies, and gonads.
https://en.wikipedia.org/wiki/Scrotum
Scrotum
In most terrestrial mammals, the scrotum (plural: scrotums or scrota; possibly from Latin scortum, meaning "hide" or "skin") or scrotal sac is a part of the external male genitalia located at the base of the penis. It consists of a sac of skin containing the external spermatic fascia, testicles, epididymides, and vasa deferentia. The scrotum will usually tighten when exposed to cold temperatures. The scrotum is homologous to the labia majora in females. Structure In humans, the scrotum is a suspended dual-chambered sac of skin and muscular tissue containing the testicles and the lower part of the spermatic cords. It is located behind the penis and above the perineum. The perineal raphe is a small, vertical ridge of skin that extends from the anus and runs through the middle of the scrotum front to back. The scrotum is also a distention of the perineum and carries some abdominal tissues into its cavity, including the testicular artery, testicular vein, and pampiniform plexus. Nerve supply Blood supply Skin and glands Sebaceous glands Apocrine glands Smooth muscle The skin on the scrotum is more highly pigmented in comparison to the rest of the body. The septum is a connective tissue membrane dividing the scrotum into two cavities. Lymphatic system The scrotal lymph initially drains into the superficial inguinal lymph nodes, which then drain into the deep inguinal lymph nodes. The deep inguinal lymph nodes channel into the common iliac nodes, which ultimately release lymph into the cisterna chyli. Asymmetry One testis is typically lower than the other, which is believed to function to avoid compression in the event of impact; in humans, the left testis is typically lower than the right. An alternative view is that testis descent asymmetry evolved to enable more effective cooling of the testicles. 
Internal structure Additional tissues and organs reside inside the scrotum and are described in more detail in the following articles: Appendix of epididymis Cremaster muscle Dartos fascia Efferent ductules Epididymis Leydig cell Lobules of testis Paradidymis Rete testis Scrotal septum Seminiferous tubule Sertoli cell Spermatic cord Testicle Tunica albuginea of testis Tunica vaginalis Tunica vasculosa testis Vas deferens Development During the fifth week after fertilization, the genital ridge grows behind the peritoneal membrane. By the sixth week, string-like tissues called primary sex cords form within the enlarging genital ridge. Externally, a swelling called the genital tubercle appears over the cloacal membrane. Testosterone secretion starts during week eight, reaches peak levels during week 13 and eventually declines to very low levels by the end of the second trimester. The testosterone causes the masculinization of the labioscrotal folds into the scrotum. The scrotal raphe is formed when the embryonic urethral groove closes by week 12. Scrotal growth and puberty Though the testes and scrotum form early in embryonic life, sexual maturation begins upon entering puberty. The increased secretion of testosterone causes the darkening of the skin and development of pubic hair on the scrotum. Function The scrotum regulates the temperature of the testicles, maintaining it two or three degrees below the core body temperature of about 37 °C. Higher temperatures affect spermatogenesis. Temperature control is accomplished by the smooth muscles of the scrotum moving the testicles either closer to or further away from the abdomen depending on the ambient temperature. This is accomplished by the cremaster muscle in the abdomen and the dartos fascia (muscular tissue under the skin that makes the scrotum appear wrinkly). During sexual arousal, the scrotum will also tighten and thicken in the course of penile erection. 
Having the scrotum and testicles situated outside the abdominal cavity may provide additional advantages. The external scrotum is not affected by abdominal pressure; this may prevent the emptying of the testes before the sperm have matured sufficiently for fertilization. Another advantage is that it protects the testes from the jolts and compressions associated with an active lifestyle. The scrotum may provide some friction during intercourse, helping to enhance the activity. The scrotum is also considered to be an erogenous zone. Society and culture Common slang terms for the scrotum are ballsack, nutsack, and teabag. Some men get piercings in the skin of the scrotum; such a piercing is called a hafada, and a row of hafada piercings is known as a scrotal ladder. Side-to-side or front-to-back piercings that pass through the scrotum are known as transscrotal piercings. Scrotoplasty is a sex reassignment surgery that creates a scrotum for trans men using tissue from the labia majora, or a plastic surgery that repairs or reconstructs the scrotum. Other animals A scrotum is present in all boreoeutherian land mammals except hippopotamuses, rhinoceroses, hedgehogs, moles, pangolins, tapirs, and numerous families of bats and rodents. The anus is separated from the scrotum by the perineum in these mammals. The testicles remain in the body cavity in all other vertebrates, including cloacal animals. Unlike placentals, some male marsupials have a scrotum that is anterior to the penis, which is not homologous to the scrotum of placentals, although there are several marsupial species without an external scrotum. The scrotum is also absent in marine mammals, such as whales, dolphins, and seals, as well as in lineages of other land mammals, such as the afrotherians (elephants, aardvarks, etc.), xenarthrans (armadillos, anteaters, and sloths), and monotremes. Clinical significance A study has indicated that use of a laptop computer positioned on the lap can negatively affect sperm production. 
Diseases and conditions The scrotum and its contents can develop many diseases and can incur injuries. These include: Candidiasis (yeast infection) Sebaceous cyst Epidermal cyst Hydrocele testis Hematocele Molluscum contagiosum Spermatocele Paget's disease of the scrotum Varicocele - enlargement of the pampiniform venous plexus Inguinal hernia Epididymo-orchitis Testicular torsion Pruritus scroti - irritation of the scrotum (itchiness) Genital warts - sexually transmitted infection Testicular cancer Dermatitis Undescended testes Chyloderma - swollen scrotum caused by a lymphatic obstruction Mumps Scabies Herpes - sexually transmitted infection Pubic lice Chancroid (Haemophilus ducreyi) - sexually transmitted infection Chlamydia (Chlamydia trachomatis) - sexually transmitted infection Gonorrhea (Neisseria gonorrhoeae) - sexually transmitted infection Granuloma inguinale (Klebsiella granulomatis) Syphilis (Treponema pallidum) - sexually transmitted infection Scrotal eczema Scrotal psoriasis Riboflavin deficiency Chimney sweeps' carcinoma (scrotal cancer)
Biology and health sciences
Reproductive system
Biology
18842002
https://en.wikipedia.org/wiki/Shampoo
Shampoo
Shampoo is a hair care product, typically in the form of a viscous liquid, that is formulated to be used for cleaning (scalp) hair. Less commonly, it is available in solid bar format. ("Dry shampoo" is a separate product.) Shampoo is used by applying it to wet hair, massaging the product into the hair, roots and scalp, and then rinsing it out. Some users may follow a shampooing with the use of hair conditioner. Shampoo is typically used to remove the unwanted build-up of sebum (natural oils) in the hair without stripping out so much as to make hair unmanageable. Shampoo is generally made by combining a surfactant, most often sodium lauryl sulfate or sodium laureth sulfate, with a co-surfactant, most often cocamidopropyl betaine, in water. The sulfate ingredient acts as a surfactant, trapping oils and other contaminants, similarly to soap. Most shampoos are marketed for human hair; there are also shampoos intended for animals, which may contain insecticides or other medications to treat skin conditions or parasite infestations such as fleas. History Indian subcontinent In the Indian subcontinent, a variety of herbs and their extracts have been used as shampoos since ancient times. The earliest origins of shampoo trace to the Indus Valley Civilization. A very effective early shampoo was made by boiling Sapindus with dried Indian gooseberry (amla) and a selection of other herbs, using the strained extract. Sapindus, also known as soapberries or soapnuts, a tropical tree widespread in India, is called ksuna (Sanskrit: क्षुण) in ancient Indian texts, and its fruit pulp contains saponins, which are a natural surfactant. The extract of soapberries creates a lather which Indian texts called phenaka (Sanskrit: फेनक). It leaves the hair soft, shiny and manageable. Other products used for hair cleansing were shikakai (Acacia concinna), hibiscus flowers, ritha (Sapindus mukorossi) and arappu (Albizzia amara). 
Guru Nanak, the founder and first Guru of Sikhism, made references to the soapberry tree and soap in the 16th century. Cleansing the hair with body massage (champu) during one's daily bath was an indulgence enjoyed by early colonial traders in India. When they returned to Europe, they introduced the newly learned habits, including the hair treatment they called shampoo. The word shampoo entered the English language from the Indian subcontinent during the colonial era. It dates to 1762 and was derived from a Hindi word, itself derived from a Sanskrit root meaning 'to press, knead, or soothe'. Europe Sake Dean Mahomed, an Indian traveller, surgeon, and entrepreneur, is credited with introducing the practice of shampoo or "shampooing" to Britain. In 1814, Mahomed, with his Irish wife Jane Daly, opened the first commercial "shampooing" vapour masseur bath in England, in Brighton. He described the treatment in a local paper as "The Indian Medicated Vapour Bath (type of Turkish bath), a cure to many diseases and giving full relief when everything fails; particularly Rheumatic and paralytic, gout, stiff joints, old sprains, lame legs, aches and pains in the joints". He also published a book on the treatment; this medical work, featuring testimonies from his patients as well as details of the treatment, made him famous. The book acted as a marketing tool for his unique baths in Brighton and capitalised on the early 19th-century trend for seaside spa treatments. During the early stages of shampoo in Europe, English hair stylists boiled shaved soap in water and added herbs to give the hair shine and fragrance. Commercially made shampoo was available from the turn of the 20th century. A 1914 advertisement for Canthrox Shampoo in American Magazine showed young women at camp washing their hair with Canthrox in a lake; magazine advertisements in 1914 by Rexall featured Harmony Hair Beautifier and Shampoo.<ref>Victoria Sherrow, Encyclopedia of hair: a cultural history, 2007 s.v. "Advertising" p. 
7.</ref> In 1900, German perfumer and hair-stylist Josef Wilhelm Rausch developed the first liquid hair washing soap and named it "Champooing" in Emmishofen, Switzerland. Later, in 1919, J.W. Rausch developed an antiseptic chamomile shampooing with a pH of 8.5. In 1927, liquid shampoo was improved for mass production by German inventor Hans Schwarzkopf in Berlin; his name became a shampoo brand sold in Europe. Originally, soap and shampoo were very similar products; both contained the same naturally derived surfactants, a type of detergent. Modern shampoo as it is known today was first introduced in the 1930s with Drene, the first shampoo using synthetic surfactants instead of soap. Indonesia Early shampoos used in Indonesia were made from the husk and straw (merang) of rice. The husks and straws were burned into ash, and the ashes (which have alkaline properties) were mixed with water to form a lather. The ashes and lather were scrubbed into the hair and rinsed out, leaving the hair clean but very dry. Afterwards, coconut oil was applied to the hair in order to moisturize it. Philippines Filipinos traditionally used gugo before commercial shampoos were sold in stores. The shampoo is obtained by soaking and rubbing the bark of the vine gugo (Entada phaseoloides), producing a lather that cleanses the scalp effectively. Gugo is also used as an ingredient in hair tonics. Pre-Columbian North America Certain Native American tribes used extracts from North American plants as hair shampoo; for example, the Costanoans of present-day coastal California used extracts from the coastal woodfern, Dryopteris expansa. Pre-Columbian South America Before quinoa can be eaten, the saponin must be washed from the grain prior to cooking. Pre-Columbian Andean civilizations used this soapy by-product as a shampoo. 
Types Shampoos can be classified into four main categories: deep cleansing shampoos, sometimes marketed under descriptions such as volumizing, clarifying, balancing, oil control, or thickening, which have a slightly higher amount of detergent and create a lot of foam; conditioning shampoos, sometimes marketed under descriptions such as moisturizing, 2-in-1, smoothing, anti-frizz, color care, and hydrating, which contain an ingredient like silicone or polyquaternium-10 to smooth the hair; baby shampoos, sometimes marketed as tear-free, which contain less detergent and produce less foam; and anti-dandruff shampoos, which are medicated to reduce dandruff. Composition Shampoo is generally made by combining a surfactant, most often sodium lauryl sulfate or sodium laureth sulfate, with a co-surfactant, most often cocamidopropyl betaine, in water to form a thick, viscous liquid. Other essential ingredients include salt (sodium chloride), which is used to adjust the viscosity, a preservative and fragrance. Other ingredients are generally included in shampoo formulations to maximize the following qualities: pleasing foam; ease of rinsing; minimal skin and eye irritation; a thick or creamy feeling; pleasant fragrance; low toxicity; good biodegradability; slight acidity (pH less than 7); no damage to hair; and repair of damage already done to hair. Many shampoos are pearlescent. This effect is achieved by the addition of tiny flakes of suitable materials, e.g. glycol distearate, chemically derived from stearic acid, which may have either animal or vegetable origins. Glycol distearate is a wax. Many shampoos also include silicone to provide conditioning benefits. Commonly used ingredients Ammonium chloride Ammonium lauryl sulfate Glycol Sodium laureth sulfate is derived from coconut oils and is used to soften water and create a lather. Hypromellose cellulose ethers are widely used as thickeners, rheology modifiers, emulsifiers and dispersants in shampoo products. 
Sodium lauroamphoacetate is naturally derived from coconut oils and is used as a cleanser and counter-irritant. This is the ingredient that makes the product tear-free. Polysorbate 20 (abbreviated as PEG(20)) is a mild glycol-based surfactant that is used to solubilize fragrance oils and essential oils, meaning it causes liquid to spread across and penetrate the surface of a solid (i.e. hair). Polysorbate 80 (abbreviated as PEG(80)) is a glycol used to emulsify (or disperse) oils in water so the oils do not float on top. PEG-150 distearate is a simple thickener. Citric acid is produced biochemically and is used as an antioxidant to preserve the oils in the product. While it is a severe eye-irritant, the sodium lauroamphoacetate counteracts that property. Citric acid is used to adjust the pH down to approximately 5.5. It is a fairly weak acid, which makes the adjustment easier. Shampoos are usually at pH 5.5 because at slightly acidic pH the cuticle scales on a hair shaft lie flat, making the hair feel smooth and look shiny. Citric acid also has a small amount of preservative action, helping to inhibit bacterial growth. Quaternium-15 is used as a bactericidal and fungicidal preservative. Polyquaternium-10 acts as the conditioning ingredient, providing moisture and fullness to the hair. Di-PPG-2 myreth-10 adipate is a water-dispersible emollient that forms clear solutions with surfactant systems. Chloromethylisothiazolinone, or CMIT, is a powerful biocide and preservative. Benefit claims regarding ingredients In the United States, the Food and Drug Administration (FDA) mandates that shampoo containers accurately list ingredients on the product's container. The government further regulates what shampoo manufacturers can and cannot claim as any associated benefit. Shampoo producers often use these regulations to challenge marketing claims made by competitors, helping to enforce these regulations. 
While such claims may be substantiated, the testing methods and details behind them are not as straightforward. For example, many products are purported to protect hair from damage due to ultraviolet radiation. While the ingredient responsible for this protection does block UV, it is often not present in a high enough concentration to be effective. The North American Hair Research Society has a program to certify functional claims based on third-party testing. Shampoos made for treating medical conditions such as dandruff or itchy scalp are regulated as OTC drugs in the US marketplace. In the European Union, an anti-dandruff claim must be substantiated like any other advertising claim, but dandruff is not considered to be a medical problem. Health risks A number of contact allergens are used as ingredients in shampoos, and contact allergy caused by shampoos is well known. Patch testing can identify ingredients to which patients are allergic, after which a physician can help the patient find a shampoo that is free of the ingredient to which they are allergic. The US bans 11 ingredients from shampoos, Canada bans 587, and the EU bans 1328. Specialized shampoos Dandruff Cosmetic companies have developed shampoos specifically for those who have dandruff. These contain fungicides such as ketoconazole, zinc pyrithione and selenium disulfide, which reduce loose dander by killing fungi like Malassezia furfur. Coal tar and salicylate derivatives are often used as well. Alternatives to medicated shampoos are available for people who wish to avoid synthetic fungicides. Such shampoos often use tea tree oil, essential oils or herbal extracts. Colored hair Many companies have also developed color-protection shampoos suitable for colored hair; some of these shampoos contain gentle cleansers, according to their manufacturers. Shampoos for color-treated hair are a type of moisturizing shampoo. 
Baby Shampoo for infants and young children is formulated so that it is less irritating and usually less prone to produce a stinging or burning sensation if it gets into the eyes. For example, Johnson's Baby Shampoo advertises under the premise of "No More Tears". This is accomplished by one or more of the following formulation strategies: (1) dilution, in case the product comes in contact with eyes after running off the top of the head with minimal further dilution; (2) adjusting pH to that of non-stress tears, approximately 7, which may be a higher pH than that of shampoos which are pH adjusted for skin or hair effects, and lower than that of shampoo made of soap; (3) use of surfactants which, alone or in combination, are less irritating than those used in other shampoos (e.g. sodium lauroamphoacetate); (4) use of nonionic surfactants of the form of polyethoxylated synthetic glycolipids and polyethoxylated synthetic monoglycerides, which counteract the eye sting of other surfactants without producing the anesthetizing effect of alkyl polyethoxylates or alkylphenol polyethoxylates. The distinction in (4) above does not completely surmount the controversy over the use of shampoo ingredients to mitigate eye sting produced by other ingredients, or the use of the products so formulated. The considerations in (3) and (4) frequently result in a much greater multiplicity of surfactants being used in individual baby shampoos than in other shampoos, and the detergency or foaming of such products may be compromised thereby. The monoanionic sulfonated surfactants and viscosity-increasing or foam-stabilizing alkanolamides seen so frequently in other shampoos are much less common in the better baby shampoos. Sulfate-free shampoos Sulfate-free shampoos are composed of natural ingredients and free from both sodium lauryl sulfate and sodium laureth sulfate. These shampoos use alternative surfactants to cleanse the hair. 
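To put these pH figures in perspective (ordinary shampoo adjusted to about pH 5.5, baby shampoo near the tear-neutral pH 7), the difference follows directly from the definition of pH. This is a generic arithmetic illustration, not part of any shampoo formulation standard:

```python
def hydrogen_ion_concentration(ph: float) -> float:
    """[H+] in mol/L, from the definition pH = -log10([H+])."""
    return 10.0 ** -ph

# A pH 5.5 shampoo vs. a pH 7 (tear-neutral) baby shampoo:
ratio = hydrogen_ion_concentration(5.5) / hydrogen_ion_concentration(7.0)
print(round(ratio, 1))  # 31.6 -- roughly 30x the hydrogen-ion concentration
```

Because the pH scale is logarithmic, the seemingly small difference between 5.5 and 7 corresponds to about a thirty-fold difference in acidity.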
Animal Shampoo intended for animals may contain insecticides or other medications for treatment of skin conditions or parasite infestations such as fleas or mange. These must never be used on humans. While some human shampoos may be harmful when used on animals, any human haircare products that contain active ingredients or drugs (such as zinc in anti-dandruff shampoos) are potentially toxic when ingested by animals. Special care must be taken not to use those products on pets. Cats are at particular risk due to their instinctive method of grooming their fur with their tongues. Shampoos that are especially designed to be used on pets, commonly dogs and cats, are normally intended to do more than just clean the pet's coat or skin. Most of these shampoos contain ingredients which act differently and are meant to treat a skin condition or an allergy or to fight against fleas. The main ingredients in pet shampoos can be grouped into insecticides, antiseborrheics, antibacterials, antifungals, emollients, emulsifiers and humectants. Whereas some of these ingredients may be effective in treating some conditions, pet owners are advised to use them according to their veterinarian's indications, because many of them cannot be used on cats or can harm the pet if misused. Generally, insecticidal pet shampoos contain pyrethrin, pyrethroids (such as permethrin, which may not be used on cats) and carbaryl. These ingredients are mostly found in shampoos that are meant to fight against parasite infestations. Antifungal shampoos are used on pets with yeast or ringworm infections. These might contain ingredients such as miconazole, chlorhexidine, povidone iodine, ketoconazole or selenium sulfide (which cannot be used on cats). Bacterial infections in pets are sometimes treated with antibacterial shampoos. They commonly contain benzoyl peroxide, chlorhexidine, povidone iodine, triclosan, ethyl lactate, or sulfur. 
Antipruritic shampoos are intended to provide relief of itching due to conditions such as atopy and other allergies. These usually contain colloidal oatmeal, hydrocortisone, Aloe vera, pramoxine hydrochloride, menthol, diphenhydramine, sulfur or salicylic acid. These ingredients aim to reduce the inflammation, treat the condition and ease the symptoms while providing comfort to the pet. Antiseborrheic shampoos are those especially designed for pets with scales or excessively oily coats. These shampoos are made with sulfur, salicylic acid, refined tar (which cannot be used on cats), selenium sulfide (cannot be used on cats) and benzoyl peroxide. All of these are meant to treat or prevent seborrhea oleosa, a condition characterized by excess oils. Dry scales can be prevented and treated with shampoos that contain sulfur or salicylic acid and which can be used on both cats and dogs. Emollient shampoos are efficient in adding oils to the skin and relieving the symptoms of dry and itchy skin. They usually contain oils such as almond, corn, cottonseed, coconut, olive, peanut, persic, safflower, sesame, lanolin, mineral or paraffin oil. Emollient shampoos are typically used with emulsifiers, as these help distribute the emollients. They include ingredients such as cetyl alcohol, laureth-5, lecithin, PEG-4 dilaurate, stearic acid, stearyl alcohol, carboxylic acid, lactic acid, urea, sodium lactate, propylene glycol, glycerin, or polyvinylpyrrolidone. Although some pet shampoos are highly effective, others may be less effective for some conditions than for others. Also, although natural pet shampoos exist, it has been brought to attention that some of these might cause irritation to the skin of the pet. Natural ingredients that might be potential allergens for some pets include eucalyptus, lemon or orange extracts and tea tree oil. 
By contrast, oatmeal appears to be one of the most widely skin-tolerated ingredients found in pet shampoos. Most ingredients found in a shampoo meant to be used on animals must be safe to ingest, as there is a high likelihood that pets will lick their coats, especially in the case of cats. Pet shampoos which include fragrances, deodorants or colors may harm the skin of the pet by causing inflammation or irritation. Shampoos that do not contain any unnatural additives are known as hypoallergenic shampoos and are increasing in popularity. Solid shampoo bars Solid shampoos or shampoo bars can either be soap-based or use other plant-based surfactants, such as sodium cocoyl isethionate or sodium coco-sulfate combined with oils and waxes. Soap-based shampoo bars are high in pH (alkaline) compared to human hair and scalps, which are slightly acidic. Alkaline pH increases the friction of the hair fibres, which may damage the hair cuticle, making it feel rough and drying out the scalp. Jelly and gel Stiff, non-pourable clear gels to be squeezed from a tube were once popular forms of shampoo, and can be produced by increasing a shampoo's viscosity. This type of shampoo cannot be spilled, but unlike a solid, it can still be lost down the drain by sliding off wet skin or hair. Paste and cream Shampoos in the form of pastes or creams were formerly marketed in jars or tubes. The contents were wet but not completely dissolved. They would apply faster than solids and dissolve quickly. Antibacterial Antibacterial shampoos are often used in veterinary medicine for various conditions, as well as in humans before some surgical procedures. No Poo Movement Closely associated with environmentalism, the "no poo" movement consists of people rejecting the societal norm of frequent shampoo use. Some adherents of the no poo movement use baking soda or vinegar to wash their hair, while others use diluted honey. 
Further methods include the use of raw eggs (potentially mixed with salt water), rye flour, or chickpea flour dissolved in water. Other people use nothing or rinse their hair only with conditioner. Theory In the 1970s, ads featuring Farrah Fawcett and Christie Brinkley asserted that it was unhealthy not to shampoo several times a week. This mindset is reinforced by the greasy feeling of the scalp after a day or two of not shampooing. Using shampoo every day removes sebum, the oil produced by the scalp. This causes the sebaceous glands to produce oil at a higher rate, to compensate for what is lost during shampooing. According to Michelle Hanjani, a dermatologist at Columbia University, a gradual reduction in shampoo use will cause the sebum glands to produce at a slower rate, resulting in less grease in the scalp. Although this approach might seem unappealing to some individuals, many people try alternate shampooing techniques like baking soda and vinegar in order to avoid ingredients used in many shampoos that make hair greasy over time. Whereas the use of baking soda for hair cleansing has been associated with hair damage and skin irritation, likely due to its high pH value and exfoliating properties, honey, egg, rye flour, and chickpea flour hair washes seem gentler for long-term use.
Biology and health sciences
Hygiene products
Health
18842022
https://en.wikipedia.org/wiki/Thoroughbred
Thoroughbred
The Thoroughbred is a horse breed developed for horse racing. Although the word thoroughbred is sometimes used to refer to any breed of purebred horse, it technically refers only to the Thoroughbred breed. Thoroughbreds are considered "hot-blooded" horses that are known for their agility, speed, and spirit. The Thoroughbred, as it is known today, was developed in 17th- and 18th-century England, when native mares were crossbred with imported stallions of Arabian, Barb, and Turkoman breeding. All modern Thoroughbreds can trace their pedigrees to three stallions originally imported into England in the 17th and 18th centuries, and to a larger number of foundation mares of mostly English breeding. During the 18th and 19th centuries, the Thoroughbred breed spread throughout the world; they were imported into North America starting in 1730 and into Australia, Europe, Japan and South America during the 19th century. Millions of Thoroughbreds exist today, and around 100,000 foals are registered each year worldwide. Thoroughbreds are used mainly for racing, but are also bred for other riding disciplines such as show jumping, combined training, dressage, polo, and fox hunting. They are also commonly crossbred to create new breeds or to improve existing ones, and have been influential in the creation of the Quarter Horse, Standardbred, Anglo-Arabian, and various warmblood breeds. Thoroughbred racehorses perform with maximum exertion, which has resulted in high accident rates and health problems such as bleeding from the lungs. Other health concerns include low fertility, abnormally small hearts, and a small hoof-to-body-mass ratio. There are several theories for the reasons behind the prevalence of accidents and health problems in the Thoroughbred breed, and research on the subject is ongoing. Breed characteristics The typical Thoroughbred ranges from 15.2 to 17.0 hands (157 to 173 cm) high, averaging 16 hands (163 cm). They are most often bay, dark bay or brown, chestnut, black, or gray. 
Less common colors recognized in the United States include roan and palomino. White is very rare, but is a recognized color separate from gray. The face and lower legs may be marked with white, but white will generally not appear on the body. Coat patterns that have more than one color on the body, such as Pinto or Appaloosa, are not recognized by mainstream breed registries. Good-quality Thoroughbreds have a well-chiseled head on a long neck, high withers, a deep chest, a short back, good depth of hindquarters, a lean body, and long legs. Thoroughbreds are classified among the "hot-blooded" breeds, which are animals bred for agility and speed and are generally considered spirited and bold. Thoroughbreds born in the Northern Hemisphere are officially considered a year older on the first of January each year; those born in the Southern Hemisphere officially are one year older on the first of August. These artificial dates have been set to enable the standardization of races and other competitions for horses in certain age groups. Terminology The Thoroughbred is a distinct breed of horse, although people sometimes refer to a purebred horse of any breed as a thoroughbred. The term for any horse or other animal derived from a single breed line is purebred. While the term probably came into general use because the English Thoroughbred's General Stud Book was one of the first breed registries created, in modern usage horse breeders consider it incorrect to refer to any animal as a thoroughbred except for horses belonging to the Thoroughbred breed. Nonetheless, breeders of other species of purebred animals may use the two terms interchangeably, though thoroughbred is less often used for describing purebred animals of other species. The term is a proper noun referring to this specific breed, though often not capitalized, especially in non-specialist publications, and outside the US. 
For example, the Australian Stud Book, The New York Times, and the BBC do not capitalize the word. History Beginnings in England Early racing Flat racing existed in England by at least 1174, when four-mile races took place at Smithfield, in London. Racing continued at fairs and markets throughout the Middle Ages and into the reign of King James I of England. It was then that handicapping (a system of adding weight to attempt to equalize each horse's chances of winning), as well as improved training procedures, began to be used. During the reigns of Charles II, William III, Anne, and George I, the foundation of the Thoroughbred was laid. The term "thro-bred" to describe horses was first used in 1713. Charles II, a keen racegoer and owner, and later Anne gave royal support to racing and the breeding of race horses. With royal support, horse racing became popular with the public, and by 1727, a newspaper devoted to racing, the Racing Calendar, was founded. Devoted exclusively to the sport, it recorded race results and advertised upcoming meets. Foundation stallions All modern Thoroughbreds trace back to three stallions imported into England from the Middle East in the late 17th and early 18th centuries: the Byerley Turk (1680s), the Darley Arabian (1704), and the Godolphin Arabian (1729). Other imported stallions were less influential, but still made noteworthy contributions to the breed. These included the Alcock's Arabian, D'Arcy's White Turk, Leedes Arabian, and Curwen's Bay Barb. Another was the Brownlow Turk, who, among other attributes, is thought to be largely responsible for the gray coat color in Thoroughbreds. In all, about 160 stallions have been traced in the historical record as contributing to the creation of the Thoroughbred. The addition of horses of Eastern bloodlines, whether Arabian, Barb, or Turk, to the native English mares ultimately led to the creation of the General Stud Book (GSB) in 1791 and the practice of official registration of horses. 
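The age-standardization rule mentioned under Breed characteristics (Northern Hemisphere foals officially age one year on 1 January, Southern Hemisphere foals on 1 August) can be sketched as a small function. This is an illustrative sketch only; the function and its names are not drawn from any racing authority's systems:

```python
from datetime import date

def official_age(foal_date: date, on_date: date, hemisphere: str) -> int:
    """Official racing age: every horse ages one year on a fixed
    rollover date -- 1 January in the Northern Hemisphere,
    1 August in the Southern Hemisphere."""
    rollover_month = 1 if hemisphere == "north" else 8

    def rollovers_before(d: date) -> int:
        # Number of rollover dates that have occurred up to and including d.
        passed = d.year
        if (d.month, d.day) < (rollover_month, 1):
            passed -= 1
        return passed

    return rollovers_before(on_date) - rollovers_before(foal_date)
```

For example, a foal born in March 2020 in the Northern Hemisphere is officially 1 year old on 1 January 2021, regardless of its actual birthday; a foal of the Southern Hemisphere season born in September 2020 turns 1 on 1 August 2021.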
According to Peter Willett, about 50% of the foundation stallions appear to have been of Arabian bloodlines, with the remainder being evenly divided between Turkoman and Barb breeding. Each of the three major foundation sires was, coincidentally, the ancestor of a grandson or great-great-grandson who was the only male descendant to perpetuate each respective horse's male line: Matchem was the only descendant of his grandsire, the Godolphin Arabian, to maintain a male line to the present; the Byerley Turk's male line was preserved by Herod (or King Herod), a great-great-grandson; and the male line of the Darley Arabian owes its existence to great-great-grandson Eclipse, who was the dominant racehorse of his day and never defeated. One genetic study indicates that 95% of all male Thoroughbreds trace their direct male line (via the Y chromosome) to the Darley Arabian. However, in modern Thoroughbred pedigrees, most horses have more crosses to the Godolphin Arabian (13.8%) than to the Darley Arabian (6.5%) when all lines of descent (maternal and paternal) are considered. Further, as a percentage of contributions to current Thoroughbred bloodlines, Curwen's Bay Barb (4.2%) appears more often than the Byerley Turk (3.3%). The majority of modern Thoroughbreds alive today trace to a total of only 27 or 28 stallions from the 18th and 19th centuries. Foundation mares The mares used as foundation breeding stock came from a variety of breeds, some of which, such as the Irish Hobby, had developed in northern Europe prior to the 13th century. Other mares were of oriental breeding, including Barb, Turk and other bloodlines, although most researchers conclude that the number of Eastern mares imported into England during the 100 years after 1660 was small. The 19th-century researcher Bruce Lowe identified 50 mare "families" in the Thoroughbred breed, later augmented by other researchers to 74. 
However, it is probable that fewer genetically unique mare lines existed than Lowe identified. Recent studies of the mtDNA of Thoroughbred mares indicate that some of the mare lines thought to be genetically distinct may actually have had a common ancestor; in 19 mare lines studied, the haplotypes revealed that they traced to only 15 unique foundation mares, suggesting either a common ancestor for foundation mares thought to be unrelated or recording errors in the GSB. Later development in Britain By the end of the 18th century, the English Classic races had been established. These are the St. Leger Stakes, founded in 1776, The Oaks, founded in 1779, and The Derby in 1780. Later, the 2,000 Guineas Stakes and the 1,000 Guineas Stakes were founded in 1809 and 1814. The 1,000 Guineas and the Oaks are restricted to fillies, but the others are open to racehorses of either sex aged three years. The distances of these races, ranging from one mile to about one and three-quarter miles, led to a change in breeding practices, as breeders concentrated on producing horses that could race at a younger age than in the past and that had more speed. In the early 18th century, the emphasis had been on longer races, up to four miles, that were run in multiple heats. The older style of race favored older horses, but with the change in distances, younger horses became preferred. Selective breeding for speed and racing ability led to improvements in the size of horses and winning times by the middle of the 19th century. Bay Middleton, a winner of the Epsom Derby, stood over 16 hands high, a full hand higher than the Darley Arabian. Winning times had improved to such a degree that many felt further improvement by adding additional Arabian bloodlines was impossible. This was borne out in 1885, when a race was held between a Thoroughbred, Iambic, considered a mid-grade runner, and the best Arabian of the time, Asil. 
Although Iambic was handicapped by carrying more weight than Asil, he still managed to beat Asil by 20 lengths. The improvement of the breed for racing in this way was said by the noted 19th-century racing writer Nimrod to have created "the noblest animal in the creation". An aspect of the modern British breeding establishment is that it breeds not only for flat racing but also for steeplechasing. Up until the end of the 19th century, Thoroughbreds were bred not only for racing but also as saddle horses. Soon after the start of the 20th century, fears that English races would be overrun with American-bred Thoroughbreds because of the closing of US racetracks in the early 1910s led to the Jersey Act of 1913. It prohibited the registration of any horse in the General Stud Book (GSB) that could not show that every one of its ancestors traced to the GSB. This excluded most American-bred horses, because the 100-year gap between the founding of the GSB and the American Stud Book meant that most American-bred horses possessed at least one or two crosses to horses not registered in the GSB. The act was not repealed until 1949, after which a horse was only required to show that all its ancestors to the ninth generation were registered in a recognized stud book. Many felt that the Jersey Act hampered the development of the British Thoroughbred by preventing breeders in the United Kingdom from using new bloodlines developed outside the British Isles. In America The first Thoroughbred horse in the American Colonies was Bulle Rock, imported in 1730. Maryland and Virginia were the centers of Colonial Thoroughbred breeding, along with South Carolina and New York. During the American Revolution, importations of horses from England practically stopped but resumed after the signing of a peace treaty. Two important stallions were imported around the time of the Revolution: Messenger in 1788 and Diomed before that. 
Messenger left little impact on the American Thoroughbred, but is considered a foundation sire of the Standardbred breed. Diomed, who won the Derby Stakes in 1780, had a significant impact on American Thoroughbred breeding, mainly through his son Sir Archy. John F. Wall, a racing historian, said that Sir Archy was the "first outstanding stallion we can claim as native American." He was retired from the racetrack because of a lack of opponents. Medley and Shark, who arrived in the United States before Messenger and Diomed, became important broodmare sires by producing foundation stock, and their daughters and granddaughters were bred primarily to Diomed. After the American Revolution, the center of Thoroughbred breeding and racing in the United States moved west; Kentucky and Tennessee became significant centers. Andrew Jackson, later President of the United States, was a breeder and racer of Thoroughbreds in Tennessee. Match races held in the early 19th century helped to popularize horse racing in the United States. One took place in 1823, on Long Island, New York, between Sir Henry and American Eclipse. Another was a match race between Boston and Fashion in 1838 that featured bets of $20,000 from each side. The last major match races before the American Civil War were both between Lexington and Lecompte. The first was held in 1854 in New Orleans and was won by Lecompte. Lexington's owner then challenged Lecompte's owner to a rematch, held in 1855 in New Orleans and won by Lexington. 
Both of these horses were sons of Boston, a descendant of Sir Archy. Lexington went on to a career as a breeding stallion, leading the sire list by number of winners for sixteen years, fourteen of them in a row. After the American Civil War, the emphasis in American racing moved away from the older style of four-mile (6 km) races in which the horses ran in at least two heats. The new style of racing involved shorter races, not run in heats, over distances starting at five furlongs. This development meant a change in breeding practices, as well as in the age at which horses were raced, with younger horses and sprinters coming to the fore. It was also after the Civil War that the American Thoroughbred returned to England to race. Iroquois became the first American-bred winner of the Epsom Derby in 1881. The success of American-bred Thoroughbreds in England led to the Jersey Act in 1913, which limited the importation of American Thoroughbreds into England. After World War I, breeders in America continued to emphasize speed and early racing age but also imported horses from England, and this trend continued past World War II. After World War II, Thoroughbred breeding remained centered in Kentucky, but California, New York, and Florida also emerged as important racing and breeding centers. Thoroughbreds in the United States have historically been used not only for racing but also to improve other breeds. The early import Messenger was the foundation of the Standardbred, and Thoroughbred blood was also instrumental in the development of the American Quarter Horse. The foundation stallion of the Morgan breed is held by some to have been sired by a Thoroughbred. Between World War I and World War II, the U.S. Army used Thoroughbred stallions in its Remount Service, which was designed to improve the stock of cavalry mounts. 
In Europe Thoroughbreds began to be imported to France in 1817 and 1818, with a number of stallions arriving from England, but initially the sport of horse racing did not prosper there. The first Jockey Club in France was not formed until 1833, and in 1834 the racing and regulation functions were split off to a new society, the Société d'Encouragement pour l'Amélioration des Races de Chevaux en France, better known as the Jockey-Club de Paris. The French Stud Book was founded at the same time by the government. By 1876, French-bred Thoroughbreds were regularly winning races in England, and in that year a French breeder-owner earned the most money on English tracks. World War I almost destroyed French breeding because of war damage and a lack of races. After the war, the premier French race, the Grand Prix, resumed and continues to this day. During World War II, French Thoroughbred breeding did not suffer as it had during the First World War, and was thus able to compete on an equal footing with other countries after the war. Organized racing in Italy started in 1837, when race meets were established in Florence and Naples; a meet in Milan followed in 1842. Modern flat racing came to Rome in 1868. Later importations, including the Derby Stakes winners Ellington (1856) and Melton (1885), came to Italy before the end of the 19th century. Modern Thoroughbred breeding in Italy is mostly associated with the program of Federico Tesio, who began breeding in 1898. Tesio was the breeder of Nearco, one of the dominant sires of Thoroughbreds in the later part of the 20th century. Other countries in Europe have Thoroughbred breeding programs, including Germany, Russia, Poland, and Hungary. In Australia and New Zealand Horses arrived in Australia with the First Fleet in 1788 along with the earliest colonists. 
Although horses of part-Thoroughbred blood were imported into Australia during the late 18th century, it is thought that the first pureblood Thoroughbred was a stallion named Northumberland who was imported from England in 1802 as a coach horse sire. By 1810, the first formal race meets were organized in Sydney, and by 1825 the first mare of proven Thoroughbred bloodlines arrived to join the Thoroughbred stallions already there. In 1825, the Sydney Turf Club, the first true racing club in Australia, was formed. Throughout the 1830s, the Australian colonies began to import Thoroughbreds, almost exclusively for racing purposes, and to improve the local stock. Each colony formed its own racing clubs and held its own races. Gradually, the individual clubs were integrated into one overarching organization, now known as the Australian Racing Board. Thoroughbreds from Australia were imported into New Zealand in the 1840s and 1850s, with the first direct importation from England occurring in 1862. In other areas Thoroughbreds have been exported to many other areas of the world since the breed was created. Oriental horses were imported into South Africa from the late 17th century in order to improve the local stock through crossbreeding. Horse racing was established there in the late 18th and early 19th centuries, and Thoroughbreds were imported in increasing numbers. The first Thoroughbred stallions arrived in Argentina in 1853, but the first mares did not arrive until 1865. The Argentine Stud Book was first published in 1893. Thoroughbreds were imported into Japan from 1895, although it was not until after World War II that Japan began a serious breeding and racing business involving Thoroughbreds. Registration, breeding, and population The number of Thoroughbred foals registered each year in North America varies greatly, chiefly linked to the success of the auction market which in turn depends on the state of the economy. 
The foal crop was over 44,000 in 1990, but declined to roughly 22,500 by 2014. The largest numbers are registered in the states of Kentucky, Florida and California. Australia is the second-largest producer of Thoroughbreds in the world, with almost 30,000 broodmares producing about 18,250 foals annually. Britain produces about 5,000 foals a year, and worldwide there are more than 195,000 active broodmares, or females being used for breeding; 118,000 foals were newly registered in 2006 alone. The Thoroughbred industry is a large agribusiness, generating around $34 billion in revenue annually in the United States and providing about 470,000 jobs through a network of farms, training centers, and race tracks. Unlike many registered breeds today, a horse cannot be registered as a Thoroughbred (with The Jockey Club registry) unless conceived by live cover, the witnessed natural mating of a mare and a stallion. Artificial insemination (AI) and embryo transfer (ET), though commonly used and allowed in many other horse breed registries, cannot be used with Thoroughbreds. One reason is that a greater possibility of error exists in assigning parentage with artificial insemination, and although DNA and blood testing eliminate many of those concerns, artificial insemination still requires more detailed record keeping. The main reason, however, may be economic: a stallion can cover only a limited number of mares by live cover. The practice thus prevents an oversupply of Thoroughbreds, although modern management still allows a stallion to cover more mares in a season than was once thought possible. As an example, in 2008 the Australian stallion Encosta De Lago covered 227 mares. By limiting a stallion to a couple of hundred mares a year rather than the couple of thousand possible with artificial insemination, the rule also preserves the high prices paid for horses of the finest or most popular lineages. 
Concern exists that the closed stud book and tightly regulated population of the Thoroughbred is at risk of a loss of genetic diversity because of the level of inadvertent inbreeding inevitable in such a small population. According to one study, 78% of alleles in the current population can be traced to 30 foundation animals, 27 of which are male. Ten foundation mares account for 72% of maternal (tail-female) lineages, and, as noted above, one stallion appears in 95% of male lines. Thoroughbred pedigrees are generally traced through the maternal line, called the distaff line. The line that a horse comes from is a critical factor in determining the price of a young horse. Value Prices of Thoroughbreds vary greatly, depending on age, pedigree, conformation, and other market factors. In 2007, Keeneland Sales, a United States-based sales company, sold 9,124 horses at auction, with a total value of $814,401,000, which gives an average price of $89,259. As a whole for the United States in 2007, The Jockey Club auction statistics indicated that the average weanling sold for $44,407, the average yearling sold for $55,300, the average sale price for two-year-olds was $61,843, broodmares averaged $70,150, and horses over two and broodmare prospects sold for an average of $53,243. For Europe, the July 2007 Tattersall's Sale sold 593 horses at auction, with a total for the sale of 10,951,300 guineas, for an average of 18,468 guineas. Also in 2007, Doncaster Bloodstock Sales, another British sales firm, sold 2,248 horses for a total value of 43,033,881 guineas, an average of 15,110 guineas per horse. Australian prices at auction during the 2007–2008 racing and breeding season were as follows: 1,223 Australian weanlings sold for a total of $31,352,000, an average of $25,635 each, and 4,903 yearlings sold for a total value of A$372,003,961, an average of A$75,853. 
Another 500 two-year-olds sold for A$13,030,150, an average of A$26,060, and 2,118 broodmares totaled A$107,720,775, an average of A$50,860. Averages, however, can be deceiving. For example, at the 2007 Fall Yearling sale at Keeneland, 3,799 young horses sold for a total of $385,018,600, an average of $101,347 per horse. However, that average reflected a variation that included at least 19 horses that sold for only $1,000 each and 34 that sold for over $1,000,000 apiece. The highest price paid at auction for a Thoroughbred was set in 2006 at $16,000,000 for a two-year-old colt named The Green Monkey. Record prices at auction often grab headlines, though they do not necessarily reflect the animal's future success; in the case of The Green Monkey, injuries limited him to only three career starts before he was retired to stud in 2008, and he never won a race. Conversely, even a highly successful Thoroughbred may be sold by the pound for a few hundred dollars to become horsemeat. The best-known example of this was the 1986 Kentucky Derby winner Ferdinand, who was exported to Japan to stand at stud but was ultimately slaughtered in 2002, presumably for pet food. However, the value of a Thoroughbred may also be influenced by the purse money it wins. In 2007, Thoroughbred racehorses earned a total of $1,217,854,602 in all placings, an average of $16,924 per starter. In addition, the track record of a racehorse may influence its future value as a breeding animal. Stud fees for stallions that enter breeding can range from $2,500 to $500,000 per mare in the United States, and from £2,000 to £75,000 or more in Britain. The record stud fee to date was set in the 1980s, when the fee of the late Northern Dancer reached $1 million. During the 2008 Australian breeding season, seven stallions stood at a stud fee of A$110,000 or more, with the highest fee in the nation at A$302,500. 
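The per-horse averages quoted throughout this section are simple quotients: gross receipts divided by the number of horses sold. A minimal sketch, using only figures quoted above (the helper function name is illustrative, not part of any industry system):

```python
# Sketch: auction "averages" as quoted in this section are gross sales
# divided by head count, rounded to the nearest whole unit of currency.
# The function name is illustrative; the figures come from the text.

def average_price(total_receipts, horses_sold):
    """Return the mean sale price, rounded to the nearest dollar/guinea."""
    return round(total_receipts / horses_sold)

# Keeneland Sales, 2007: 9,124 horses grossing $814,401,000
print(average_price(814_401_000, 9_124))   # 89259, the $89,259 quoted above

# Keeneland Fall Yearling sale, 2007: 3,799 horses grossing $385,018,600
print(average_price(385_018_600, 3_799))   # 101347
```

As the text notes, such means hide enormous skew: the same Fall Yearling sale included horses sold for $1,000 and others sold for over $1,000,000.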
Uses Although the Thoroughbred is primarily bred for racing, the breed is also used for show jumping and combined training because of its athleticism, and many retired and retrained racehorses become fine family riding horses, dressage horses, and youth show horses. The larger horses are sought after for hunter/jumper and dressage competitions, whereas the smaller horses are in demand as polo ponies. Horse racing Thoroughbred horses are primarily bred for racing under saddle at the gallop. Thoroughbreds are often known as either distance runners or sprinters, and their conformation usually reflects what they have been bred to do. Sprinters are usually well muscled, while stayers, or distance runners, tend to be smaller and slimmer. The size of the horse is one consideration for buyers and trainers when choosing a potential racehorse. Although there have been champion racehorses of every height, from Zenyatta, who stood 17.2 hands, to Man o' War and Secretariat, who both stood 16.2 hands, down to Hyperion, who was only 15.1, the best racehorses are generally of average size. Larger horses mature more slowly and have more stress on their legs and feet, predisposing them to lameness. Smaller horses are considered by some to be at a disadvantage due to their shorter stride and a tendency of other horses to bump them, especially in the starting gate. Historically, Thoroughbreds have steadily increased in size: the average height of a Thoroughbred in 1700 was about 13.3 hands; by 1876 this had increased to 15.3. In 2007, there were 71,959 horses who started in races in the United States, and the average Thoroughbred racehorse in the United States and Canada ran 6.33 times that year. In Australia, there were 31,416 horses in training during 2007, and those horses started 194,066 times for A$375,512,579 in prize money. During 2007 in Japan, there were 23,859 horses in training, and those horses started 182,614 times for A$857,446,268 in prize money. 
In Britain, the British Horseracing Authority states there were 8,556 horses in training for flat racing in 2007, and those horses started 60,081 times in 5,659 races. Statistically, fewer than 50% of all racehorses ever win a race, and less than 1% ever win a stakes race such as the Kentucky Derby or Epsom Derby. Any horse who has yet to win a race is known as a maiden. Horses that are finished with their racing careers and not suitable for breeding purposes often become riding horses or other equine companions. A number of agencies exist to help make the transition from the racetrack to another career, or to help find retirement homes for ex-racehorses. Other disciplines In addition to racing, Thoroughbreds compete in eventing, show jumping and dressage at the highest levels of international competition, including the Olympics. They are also used as show hunters, steeplechasers, and in Western riding speed events such as barrel racing. Mounted police divisions employ them in non-competitive work, and recreational riders also use them. Thoroughbreds are one of the most common breeds used for polo in the United States. They are often seen in the fox hunting field as well. Crossbreeding Thoroughbreds are often crossed with horses of other breeds to create new breeds or to enhance or introduce specific qualities into existing ones. They have been influential on many modern riding horse breeds, such as the American Quarter Horse, the Standardbred, and possibly the Morgan, a breed that went on to influence many of the gaited breeds in North America. Other common crosses with the Thoroughbred include crossbreeding with Arabian bloodlines to produce the Anglo-Arabian, and with the Irish Draught to produce the Irish Sport Horse. Thoroughbreds have been foundation bloodstock for various Warmblood breeds due to their refinement and performance capabilities. 
Crossbred horses developed from Thoroughbreds (informally categorized as "hot bloods" because of temperament) crossed on sturdy draft horse breeds (classified as "cold bloods" for their more phlegmatic temperament) are known as "warmbloods", which today are commonly seen in competitive events such as show jumping and dressage. Examples include the Dutch Warmblood, Hanoverian, and Selle Français. Some warmblood registries note the percentage of Thoroughbred breeding, and many warmblood breeds have an open stud book that continues to allow Thoroughbred crossbreeding. Health issues Although Thoroughbreds are seen in the hunter-jumper world and in other disciplines, modern Thoroughbreds are primarily bred for speed, and racehorses have a very high rate of accidents as well as other health problems. One-tenth of all Thoroughbreds suffer orthopedic problems, including fractures. Current estimates indicate that there are 1.5 career-ending breakdowns for every 1,000 horses starting a race in the United States, an average of two horses per day. The state of California reported a particularly high rate of injury, 3.5 per 1,000 starts. Other countries report lower rates of injury, with the United Kingdom having 0.9 injuries per 1,000 starts (1990–1999) and the courses in Victoria, Australia, producing a rate of 0.44 injuries per 1,000 starts (1989–2004). Thoroughbreds also have other health concerns, including a majority of animals prone to bleeding from the lungs (exercise-induced pulmonary hemorrhage), 10% with low fertility, and 5% with abnormally small hearts. Thoroughbreds also tend to have smaller hooves relative to their body mass than other breeds, with thin soles and walls and a lack of cartilage mass, which contributes to foot soreness, the most common source of lameness in racehorses. Selective breeding One argument about the health issues of Thoroughbreds suggests that inbreeding is the culprit. 
It has also been suggested that capability for speed is enhanced in an already swift animal by raising muscle mass, a form of selective breeding that has created animals designed to win horse races. Thus, according to one postulation, the modern Thoroughbred travels faster than its skeletal structure can support. Veterinarian Robert M. Miller states that "We have selectively bred for speeds that the anatomy of the horse cannot always cope with." Poor breeding may be encouraged by the fact that many horses are sent to the breeding shed following an injury. If the injury is linked to a conformational fault, the fault is likely to be passed to the next generation. Additionally, some breeders will have a veterinarian perform straightening procedures on a horse with crooked legs. This can help increase the horse's price at a sale and perhaps help the horse have a sounder racing career, but the genes for poor legs will still be passed on. Excess stress A high accident rate may also occur because Thoroughbreds, particularly in the United States, are first raced as 2-year-olds, well before they are completely mature. Though they may appear full-grown and are in superb muscular condition, their bones are not fully formed. However, catastrophic injury rates are higher in 4- and 5-year-olds than in 2- and 3-year-olds. Some believe that correct, slow training of a young horse (including foals) may actually be beneficial to the overall soundness of the animal. This is because, during the training process, microfractures occur in the leg followed by bone remodeling. If the remodeling is given sufficient time to heal, the bone becomes stronger. If proper remodeling occurs before hard training and racing begins, the horse will have a stronger musculoskeletal system and will have a decreased chance of injury. 
Studies have shown that track surfaces, horseshoes with toe grabs, use of certain legal medications, and high-intensity racing schedules may also contribute to a high injury rate. One promising trend is the development of synthetic surfaces for racetracks, and one of the first tracks to install such a surface, Turfway Park in Florence, Kentucky, saw its rate of fatal breakdowns drop from 24 in 2004–05 to three in the year following Polytrack installation. The material is not perfected, and some areas report problems related to winter weather, but studies are continuing. Medical challenges The level of treatment given to injured Thoroughbreds is often more intensive than for horses of lesser financial value but also controversial, due in part to the significant challenges in treating broken bones and other major leg injuries. Leg injuries that are not immediately fatal still may be life-threatening because a horse's weight must be distributed evenly on all four legs to prevent circulatory problems, laminitis, and other infections. If a horse loses the use of one leg temporarily, there is the risk that other legs will break down during the recovery period because they are carrying an abnormal weight load. While horses periodically lie down for brief periods of time, a horse cannot remain lying in the equivalent of a human's "bed rest" because of the risk of developing sores, internal damage, and congestion. Whenever a racing accident severely injures a well-known horse, such as the major leg fractures that led to the euthanization of 2006 Kentucky Derby winner Barbaro, or 2008 Kentucky Derby runner-up Eight Belles, animal rights groups have denounced the Thoroughbred racing industry. On the other hand, advocates of racing argue that without horse racing, far less funding and incentives would be available for medical and biomechanical research on horses. Although horse racing is hazardous, veterinary science has advanced. 
Previously hopeless cases can now be treated, and earlier detection through advanced imaging techniques like scintigraphy can keep at-risk horses off the track.
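The US injury statistics quoted in this article (1.5 career-ending breakdowns per 1,000 starts, "an average of two horses per day") can be roughly reconciled with the racing-volume figures given earlier (71,959 US starters in 2007, averaging 6.33 starts each). The back-of-the-envelope check below is only a sketch using numbers taken from the text, not an official industry calculation:

```python
# Rough consistency check on the injury figures quoted in this article.
# All inputs come from the text; nothing here is an official statistic.

starters_2007 = 71_959        # US horses that started at least one race, 2007
starts_per_horse = 6.33       # average starts per US/Canadian runner, 2007
breakdown_rate = 1.5 / 1_000  # career-ending breakdowns per start (US estimate)

total_starts = starters_2007 * starts_per_horse       # roughly 455,500 starts
breakdowns_per_year = total_starts * breakdown_rate   # roughly 683 per year
breakdowns_per_day = breakdowns_per_year / 365        # roughly 1.9 per day

print(round(total_starts))           # 455500
print(round(breakdowns_per_day, 1))  # 1.9, consistent with "two horses per day"
```

The estimate of about 1.9 breakdowns per day falls in line with the article's "average of two horses per day", suggesting the quoted rate and volume figures are mutually consistent.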
https://en.wikipedia.org/wiki/Semen
Semen
Semen, also known as seminal fluid, is a bodily fluid that contains spermatozoa. It is secreted by the male gonads (sexual glands) and other sexual organs of male or hermaphroditic animals, and its spermatozoa can fertilize the female ovum. In placental mammals, semen also contains secretions from the male accessory glands and is discharged from the penis through the urethral orifice during ejaculation. In humans, seminal fluid contains several components besides spermatozoa: proteolytic and other enzymes as well as fructose are elements of seminal fluid which promote the survival of spermatozoa and provide a medium through which they can move or "swim". The fluid is adapted to be discharged deep into the vagina, so the spermatozoa can pass into the uterus and form a zygote with an egg. Semen is collected from animals for artificial insemination or cryoconservation of genetic material. Cryoconservation of animal genetic resources is a practice that calls for the collection of semen in efforts for the conservation of a particular breed. Physiology Fertilization Depending on the species, spermatozoa can fertilize ova externally or internally. In external fertilization, the spermatozoa fertilize the ova directly, outside of the female's sexual organs. Female fish, for example, spawn ova into their aquatic environment, where they are fertilized by the semen of the male fish. Internal fertilization occurs inside the female's sexual organs after a male inseminates a female through copulation. In most vertebrates, including amphibians, reptiles, birds and monotreme mammals, copulation is achieved through the physical mating of the cloaca of the male and female. In marsupial and placental mammals, copulation occurs through the vagina. In macropods, semen coagulates and forms a mating plug in the vagina after copulation. 
Human Composition During the process of ejaculation, sperm passes through the ejaculatory ducts and mixes with fluids from the seminal vesicles, the prostate, and the bulbourethral glands to form the semen. The seminal vesicles produce a yellowish viscous fluid rich in fructose and other substances that makes up about 70% of human semen. The prostatic secretion, influenced by dihydrotestosterone, is a whitish (sometimes clear), thin fluid containing proteolytic enzymes, citric acid, acid phosphatase and lipids. The bulbourethral glands secrete a clear fluid into the lumen of the urethra to lubricate it. Sertoli cells, which nurture and support developing spermatocytes, secrete a fluid into the seminiferous tubules that helps transport sperm to the genital ducts. The ductuli efferentes possess cuboidal cells with microvilli and lysosomal granules that modify the ductal fluid by reabsorbing some of it. Once the semen enters the ductus epididymis, the principal cells, which contain pinocytotic vesicles indicating fluid reabsorption, secrete glycerophosphocholine, which most likely inhibits premature capacitation. The accessory genital ducts, the seminal vesicles, the prostate glands, and the bulbourethral glands produce most of the seminal fluid. Seminal plasma of humans contains a complex range of organic and inorganic constituents. The seminal plasma provides a nutritive and protective medium for the spermatozoa during their journey through the female reproductive tract. The normal environment of the vagina is a hostile one (cf. sexual conflict) for sperm cells, as it is very acidic (from the native microflora producing lactic acid), viscous, and patrolled by immune cells. The components of the seminal plasma attempt to compensate for this hostile environment. Basic amines such as putrescine, spermine, spermidine and cadaverine are responsible for the smell and flavor of semen. 
These alkaline bases counteract and buffer the acidic environment of the vaginal canal, and protect DNA inside the sperm from acidic denaturation. The components and contributions of semen are as follows: A 1992 World Health Organization report described normal human semen as having a volume of 2 mL or greater, a pH of 7.2 to 8.0, a sperm concentration of 20×10⁶ spermatozoa/mL or more, a sperm count of 40×10⁶ spermatozoa per ejaculate or more, and motility of 50% or more with forward progression (categories a and b), or 25% or more with rapid progression (category a), within 60 minutes of ejaculation. A 2005 review of the literature found that the average reported physical and chemical properties of human semen were as follows: Appearance and consistency Semen is typically translucent with a white, grey or even yellowish tint. Blood in the semen can cause a pink or reddish colour, known as hematospermia, and may indicate a medical problem which should be evaluated by a doctor if the symptom persists. After ejaculation, the latter part of the ejaculated semen coagulates immediately, forming globules, while the earlier part of the ejaculate typically does not. After a period typically ranging from 15 to 30 minutes, prostate-specific antigen present in the semen causes the decoagulation of the seminal coagulum. It is postulated that the initial clotting helps keep the semen in the vagina, while liquefaction frees the sperm to make their journey to the ova. A 2005 review found that the average reported viscosity of human semen in the literature was 3–7 centipoises (cP), or, equivalently, millipascal-seconds (mPa·s). Quality Semen quality is a measure of the ability of semen to accomplish fertilization. Thus, it is a measure of fertility in a man. It is the sperm in the semen that is the fertile component, and therefore semen quality involves both sperm quantity and sperm quality. Quantity The volume of semen ejaculate varies but is generally about one teaspoonful or less. 
A review of 30 studies concluded that the average was around 3.4 milliliters (mL), with some studies finding amounts as high as 5.0 mL or as low as 2.3 mL. In a study of Swedish and Danish men, a prolonged interval between ejaculations caused an increase in the sperm count of the semen but not an increase in the semen volume. Storage Semen can be stored in diluents such as the Illini Variable Temperature (IVT) diluent, which has been reported to preserve the high fertility of semen for over seven days. The IVT diluent is composed of several salts, sugars and antibacterial agents and is gassed with CO2. Semen cryopreservation can be used for far longer storage durations; for human sperm, the longest reported successful storage with this method is 21 years. Health Infection transmission Semen can transmit many sexually transmitted infections and pathogens, including viruses like HIV and Ebola. Swallowing semen carries no additional risk other than those inherent in fellatio, including the transmission risk for sexually transmitted infections such as human papillomavirus or herpes, especially for people with bleeding gums, gingivitis or open sores. Viruses in semen can survive for a long time once outside the body. Bloodiness The presence of blood in semen, or hematospermia, may be undetectable (visible only microscopically) or visible in the fluid. It may result from inflammation, infection, blockage, or injury of the male reproductive tract, or from a problem within the urethra, testicles, epididymis or prostate. It usually clears up without treatment, or with antibiotics, but if it persists, further semen analysis and other urogenital system tests might be needed to find the cause. Allergy In rare circumstances, humans can develop an allergy to semen, called human seminal plasma sensitivity. It appears as a typical localized or systemic allergic response upon contact with seminal fluid. 
There is no one protein in semen responsible for the reaction. Symptoms can appear after first intercourse or after subsequent intercourse. A semen allergy can be distinguished from a latex allergy by determining if the symptoms disappear with use of a condom. Desensitization treatments are often very successful. Benefits to females Among numerous species in the animal kingdom, females may benefit from absorbing nutrients and proteins from seminal fluid for food, antiviral and antibacterial properties, and enhanced fertilisation. In humans, seminal fluid provides anti-viral activity towards herpes simplex virus and can transfer the anti-microbial peptides cathelicidin and lactoferrin. In birds and mammals, mutualistic bacteria such as Lactobacillus have been detected in the transferred fluid. Society and culture Qigong Qigong and Chinese medicine place great emphasis on a form of energy called 精 (pinyin: jīng, also a morpheme denoting "essence" or "spirit") – which one attempts to develop and accumulate. "Jing" is sexual energy and is considered to dissipate with ejaculation, so masturbation is considered "energy suicide" amongst those who practice this art. According to Qigong theory, energy from many pathways/meridians becomes diverted and transfers itself to the sexual organs during sexual excitement. The ensuing orgasm and ejaculation will then finally expel the energy from the system completely. The Chinese proverb 一滴精,十滴血 (pinyin: yì dī jīng, shí dī xuè, literally: a drop of semen is equal to ten drops of blood) illustrates this point. The scientific term for semen in Chinese is 精液 (pinyin: jīng yè, literally: fluid of essence/jing) and the term for sperm is 精子 (pinyin: jīng zǐ, literally: basic element of essence/jing), two modern terms with classical referents. Indian philosophy In Ayurveda, semen is said to be made from forty drops of blood. It is considered to be the end of the food digestion cycle.
One of the key aspects of Hindu religion is abstinence, called brahmacharya. It can be lifelong, during a specific period, or on specific days. Brahmacharya attaches great importance to semen retention. Many yogic texts also indicate the importance of semen retention, and there are specific asanas and bandhas for it, such as Mula Bandha and Aswini Mudra. Greek philosophy In Ancient Greece, Aristotle remarked on the importance of semen: "For Aristotle, semen is the residue derived from nourishment, that is of blood, that has been highly concocted to the optimum temperature and substance. This can only be emitted by the male as only the male, by nature of his very being, has the requisite heat to concoct blood into semen." According to Aristotle, there is a direct connection between food and semen: "Sperms are the excretion of our food, or to put it more clearly, as the most perfect component of our food." The connection between food and physical growth, on the one hand, and semen, on the other, allows Aristotle to warn against "engag[ing] in sexual activity at too early an age ... [since] this will affect the growth of their bodies. Nourishment that would otherwise make the body grow is diverted to the production of semen. Aristotle is saying that at this stage the body is still growing; it is best for sexual activity to begin when its growth is 'no longer abundant', for when the body is more or less at full height, the transformation of nourishment into semen does not drain the body of needed material." Additionally, "Aristotle tells us that the region round the eyes was the region of the head most fruitful of seed ("most seedy" σπερματικώτατος), pointing to generally recognised effects upon the eyes of sexual indulgence and to practices which imply that seed comes from liquid in the region of the eyes." This may be explained by the belief of the Pythagoreans that "semen is a drop of the brain [τὸ δε σπέρμα εἶναι σταγόνα ἐγκέφαλου]."
Greek Stoic philosophy conceived of the Logos spermatikos ("seminal word") as the principle of active reason that fecundated passive matter. The Jewish philosopher Philo similarly spoke in sexual terms of the Logos as the masculine principle of reason that sowed seeds of virtue in the feminine soul. The Christian Platonist Clement of Alexandria likened the Logos to physical blood as the "substance of the soul", and noted that some held "that the animal semen is substantially foam of its blood". Clement reflected an early Christian view that "the seed ought not be wasted nor scattered thoughtlessly nor sown in a way it cannot grow." Women were believed to have their own version, which was stored in the womb and released during climax. Retention was believed to cause female hysteria. In ancient Greek religion as a whole, semen is considered a form of miasma, and ritual purification was to be practised after its discharge. Reverence In some pre-industrial societies, semen and other body fluids were revered because they were believed to be magical. Blood is an example of such a fluid, but semen was also widely believed to be of supernatural origin and effect and was, as a result, considered holy or sacred. The ancient Sumerians believed that semen was "a divine substance, endowed on humanity by Enki", the god of water. The semen of a god was believed to have magical generative powers. In Sumerian mythology, when Enki's seed was planted in the ground, it caused the spontaneous growth of eight previously nonexistent plants. Enki was believed to have created the Tigris and Euphrates rivers by masturbating and ejaculating into their empty riverbeds. The Sumerians believed that rain was the semen of the sky-god An, which fell from the heavens to inseminate his consort, the earth-goddess Ki, causing her to give birth to all the plants of the earth. The orchid's twin bulbs were thought to resemble the testicles, which is the etymology of the disease orchiditis. 
There was an ancient Roman belief that the flower sprang from the spilled semen of copulating satyrs. In a number of mythologies around the world, semen is often considered analogous to breast milk. In the traditions of Bali, it is considered to be the returning or refunding of the milk of the mother in an alimentary metaphor. The wife feeds her husband who returns to her his semen, the milk of human kindness. Nancy Friday's book, Men in Love – Men's Sexual Fantasies: The Triumph of Love over Rage (1982), suggests that swallowing semen is high on a man's intimacy scale. Espionage When the British Secret Intelligence Service discovered that semen made a good invisible ink, Sir Mansfield George Smith-Cumming noted of his agents that "Every man (is) his own stylo". Ingestion Spiritual The Borborites, also known as the Phibionites, were an early Christian Gnostic sect during the late fourth century AD whose alleged practices involving sacred semen are described by the early Christian heretic-hunter Epiphanius of Salamis in his Panarion. Epiphanius claims that the Borborites had a sacred text called the Greater Questions of Mary, which contained an episode in which, during a post-resurrection appearance, Jesus took Mary Magdalene to the top of a mountain, where he pulled a woman out of his side and engaged in sexual intercourse with her. Then, upon ejaculating, Jesus drank his own semen and told Mary, "Thus we must do, that we may live." Upon hearing this, Mary instantly fainted, to which Jesus responded by helping her up and telling her, "O thou of little faith, wherefore didst thou doubt?" This story was supposedly the basis for the Borborite Eucharist ritual, in which they allegedly engaged in orgies and drank semen and menstrual blood as the "body and blood of Christ" respectively. Bart D. 
Ehrman, a scholar of early Christianity, casts doubt on the accuracy of Epiphanius's summary, commenting that "the details of Epiphanius's description sound very much like what you can find in the ancient rumor mill about secret societies in the ancient world". In some cultures, semen is considered to have special properties associated with masculinity. Several tribes of Papua New Guinea, including the Sambia and the Etoro, believe that semen promotes sexual maturation among the younger men of their tribe. To them, semen possesses the manly nature of the tribal elders, and in order to pass down their authority and powers, younger men of the next generation must fellate their elders and ingest their semen. Prepubescent and postpubescent males are required to engage in this practice. This act may also be associated with the culturally active homosexuality throughout these and other tribes. Semen ingestion has had central importance in some cultures around the world. In Baruya culture, there is a secret ritual in which boys give fellatio to young males and drink their semen, to "re-engender themselves before marriage". Sexual There are several sexual practices involving the ingestion of semen, which may be done with one or more partners. Practices involving the oral intake of semen include: Felching is a sexual practice involving the act of sucking semen out of the anus of one's partner. According to the entry for "felch" in the Oxford English Dictionary, the earliest occurrence of the word in print appears to have been in The Argot of the Homosexual Subculture by Ronald A. Farrell in 1972, although this usage was as a synonym for anilingus. Gokkun is a Japanese term for sexual activity in which a person, usually a woman, consumes the semen of one or more men, often from some kind of container. "Gokkun" can also refer to the sexual act of swallowing semen after performing fellatio or participating in a bukkake.
The word "gokkun" is onomatopoetic, and translates roughly as the English word "gulp", the sound made by swallowing. Cum swapping / snowballing / snowdropping is the sexual practice in which one person takes someone's semen into their mouth and then passes it to the mouth of the person who ejaculated the semen, usually through kissing. The term was originally used only by gay and bisexual men. Researchers who surveyed over 1,200 gay or bisexual men at New York LGBT community events in 2004 found that around 20% said they had engaged in snowballing at least once. Euphemisms A huge variety of euphemisms and dysphemisms have been invented to describe semen. For a list of terms, see sexual slang. Slang terms for semen include cum, jizz, spunk (primarily British English), spooge and/or splooge, load, nut, seed, and love juice. The term cum can also refer to an orgasm (when used as a verb rather than as a noun), while load is derived from the phrase blowing a load, referring to an ejaculation. The term nut originally refers to the testicles, but can be used to refer to both semen and ejaculation.
Computer keyboard
A computer keyboard is a built-in or peripheral input device modeled after the typewriter keyboard which uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. Replacing early punched cards and paper tape technology, interaction via teleprinter-style keyboards has been the main input method for computers since the 1970s, supplemented by the computer mouse since the 1980s, and the touchscreen since the 2000s. Keyboard keys (buttons) typically have a set of characters engraved or printed on them, and each press of a key typically corresponds to a single written symbol. However, producing some symbols may require pressing and holding several keys simultaneously or in sequence. While most keys produce characters (letters, numbers or symbols), other keys (such as the escape key) can prompt the computer to execute system commands. In a modern computer, the interpretation of key presses is generally left to the software: the information sent to the computer, the scan code, tells it only which physical key (or keys) was pressed or released. In normal usage, the keyboard is used as a text entry interface for typing text, numbers, and symbols into application software such as a word processor, web browser or social media app. Touchscreens use virtual keyboards. History Typewriters are the definitive ancestor of all key-based text entry devices, but the computer keyboard as a device for electromechanical data entry and communication largely comes from the utility of two devices: teleprinters (or teletypes) and keypunches. It was through such devices that modern computer keyboards inherited their layouts. As early as the 1870s, teleprinter-like devices were used to simultaneously type and transmit stock market text data from the keyboard across telegraph lines to stock ticker machines to be immediately copied and displayed onto ticker tape.
The teleprinter, in its more contemporary form, was developed from 1907 to 1910 by American mechanical engineer Charles Krum and his son Howard, with early contributions by electrical engineer Frank Pearne. Earlier models were developed separately by individuals such as Royal Earl House and Frederick G. Creed. Earlier, Herman Hollerith developed the first keypunch devices, which soon evolved to include keys for text and number entry akin to normal typewriters by the 1930s. The keyboard on the teleprinter played a strong role in point-to-point and point-to-multipoint communication for most of the 20th century, while the keyboard on the keypunch device played a strong role in data entry and storage for just as long. The development of some of the earliest computers incorporated electric typewriter keyboards: the development of the ENIAC computer incorporated a keypunch device as both the input and paper-based output device, and the BINAC computer made use of an electromechanically controlled typewriter for both data entry onto magnetic tape (instead of paper) and data output. The keyboard remained the primary, most integrated computer peripheral well into the era of personal computing until the introduction of the mouse as a consumer device in 1984. By this time, text-only user interfaces with sparse graphics gave way to comparatively graphics-rich icons on screen. However, keyboards remain central to human-computer interaction to the present though mobile personal computing devices such as smartphones and tablets use a virtual keyboard. Types and standards Different types of keyboards are available and each is designed with a focus on specific features that suit particular needs. 
Today, most full-size keyboards use one of three different mechanical layouts, usually referred to as simply ISO (ISO/IEC 9995-2), ANSI (ANSI-INCITS 154-1988), and JIS (JIS X 6002-1980), referring roughly to the organizations issuing the relevant worldwide, United States, and Japanese standards, respectively. (In fact, the mechanical layouts referred to as "ISO" and "ANSI" comply with the primary recommendations in the named standards, while each of these standards also allows the other.) ANSI standard alphanumeric keyboards have keys on three-quarter-inch (19.05 mm) centers and a specified minimum key travel. Modern keyboard models contain a set number of total keys according to their given standard, described as 101-key, 104-key, 105-key, etc., and sold as "full-size" keyboards. Modern keyboards matching US conventions typically have 104 keys, while the 105-key layout is the norm in the rest of the world. This number is not always followed, and individual keys or whole sections are commonly skipped for the sake of compactness or user preference. The most common choice is to not include the numpad, which can usually be fully replaced by the alphanumeric section; such designs are referred to as "tenkeyless" (or TKL). Laptops and wireless peripherals often lack duplicate keys and ones seldom used. Function and arrow keys are nearly always present. Another factor determining the size of a keyboard is the size and spacing of the keys. The reduction is limited by the practical consideration that the keys must be large enough to be easily pressed by fingers. Alternatively, a tool is used for pressing small keys. Desktop or full-size Desktop computer keyboards include alphabetic characters and numerals (and usually additionally a numeric keypad), typographical symbols and punctuation marks, one or more currency symbols and other special characters, diacritics and a variety of function keys.
The repertoire of glyphs engraved on the keys of a keyboard accords with national conventions and language needs. Computer keyboards are similar to electric-typewriter keyboards but contain additional keys, such as the command key or Windows keys. Laptop-size Keyboards on laptops and notebook computers usually have a shorter travel distance for the keystroke, shorter over travel distance, and a reduced set of keys. They may not have a numeric keypad, and the function keys may be placed in locations that differ from their placement on a standard, full-sized keyboard. The switch mechanism for a laptop keyboard is more likely to be a scissor switch than a rubber dome; this is opposite the trend for full-size keyboards. Flexible keyboards Flexible keyboards are a cross between normal and laptop keyboards: they have the full arrangement of keys of the former and the short key spacing of the latter. Additionally, the flexibility allows the user to fold or roll the keyboard for better storage and transfer. However, for typing, the keyboard must rest on a hard surface. The vast majority of flexible keyboards in the market are made from silicone; this material makes them water- and dust-proof. This is useful in hospitals, where keyboards are subjected to frequent washing, and in other environments that are dirty or must be kept clean. Handheld Handheld ergonomic keyboards are designed to be held like a game controller, and can be used as such, instead of laid out flat on top of a table surface. Typically handheld keyboards hold all the alphanumeric keys and symbols that a standard keyboard would have, though some symbols can only be accessed by pressing two keys at once, one of which acts as a function key, much as the Shift key gives capital letters on a standard keyboard. Handheld keyboards allow the user to move around a room or lean back in a chair while typing, in front of or away from the computer.
Some variations of handheld ergonomic keyboards also include a trackball mouse, combining mouse movement and typing in one handheld device. Thumb-sized Smaller external keyboards have been introduced for devices without a built-in keyboard, such as PDAs and smartphones. Small keyboards are also useful where there is a limited workspace. A thumb keyboard (thumb board) is used in some personal digital assistants such as the Palm Treo and BlackBerry and some Ultra-Mobile PCs such as the OQO. Numeric keyboards contain only numbers, mathematical symbols for addition, subtraction, multiplication, and division, a decimal point, and several function keys. They are often used to facilitate data entry with smaller keyboards that do not have a numeric keypad, commonly those of laptop computers. These keys are collectively known as a numeric pad, numeric keys, or a numeric keypad, and can consist of the following types of keys: arithmetic operators, numbers, arrow keys, navigation keys, Num Lock and the Enter key. Multifunctional Multifunctional keyboards provide additional functions beyond those of a standard keyboard. Many are programmable, configurable computer keyboards and some control multiple PCs, workstations and other information sources, usually in multi-screen work environments. Users have additional key functions as well as the standard functions and can typically use a single keyboard and mouse to access multiple sources. Multifunctional keyboards may feature customised keypads, fully programmable function or soft keys for macros/pre-sets, biometric or smart card readers, trackballs, etc. New-generation multifunctional keyboards feature a touchscreen display to stream video, control audio visual media and alarms, execute application inputs, configure individual desktop environments, etc. Multifunctional keyboards may also permit users to share access to PCs and other information sources. Multiple interfaces (serial, USB, audio, Ethernet, etc.)
are used to integrate external devices. Some multifunctional keyboards are also used to directly and intuitively control video walls. Common environments for multifunctional keyboards are complex, high-performance workplaces for financial traders and control room operators (emergency services, security, air traffic management; industry, utilities management, etc.). Non-standard layout and special-use types One-handed keyboards Many keyboards have been designed for one-handed operation. The first one, a chorded keyboard, was invented by Douglas Engelbart. Other types of one-handed keyboards include the FrogPad, the Half-keyboard, and one-handed Dvorak keyboard layouts designed for one hand typing. Chorded While other keyboards generally associate one action with each key, chorded keyboards associate actions with combinations of key presses. Since there are many combinations available, chorded keyboards can effectively produce more actions on a board with fewer keys. Court reporters' stenotype machines use chorded keyboards to enable them to enter text much faster by typing a syllable with each stroke instead of one letter at a time. The fastest typists (as of 2007) use a stenograph, a kind of chorded keyboard used by most court reporters and closed-caption reporters. Some chorded keyboards are also made for use in situations where fewer keys are preferable, such as on devices that can be used with only one hand, and on small mobile devices that don't have room for larger keyboards. Chorded keyboards are less desirable in many cases because it usually takes practice and memorization of the combinations to become proficient. Virtual Virtual keyboards, sometimes called on-screen keyboards (rarely software keyboards), consist of computer programs that display an image of a keyboard on the screen. Another input device such as a mouse or a touchscreen can be used to operate each virtual key to enter text. 
Virtual keyboards have become very popular in touchscreen-enabled cell phones due to the additional cost and space requirements of other types of hardware keyboards. Microsoft Windows, Mac OS X, and some varieties of Linux include on-screen keyboards that can be controlled with the mouse. In these, the mouse is maneuvered onto an on-screen letter, and clicking the letter enters it at the insertion point. Projection Projection keyboards project an image of keys, usually with a laser, onto a flat surface. The device then uses a camera or infrared sensor to "watch" where the user's fingers move, and will count a key as being pressed when it "sees" the user's finger touch the projected image. Projection keyboards can simulate a full-size keyboard from a very small projector. Because the "keys" are simply projected images, they cannot be felt when pressed. Users of projection keyboards often experience increased discomfort in their fingertips because of the lack of "give" when typing. A flat, non-reflective surface is also required for the keys to be projected. Most projection keyboards are made for use with PDAs and smartphones due to their small form factor. Optical keyboard technology Also known as photo-optical keyboard, light-responsive keyboard, photo-electric keyboard and optical key actuation detection technology. Optical keyboard technology utilizes LEDs and photo sensors to optically detect actuated keys. Most commonly the emitters and sensors are located at the perimeter, mounted on a small PCB. The light is directed from side to side of the keyboard interior, and it can be blocked only by actuated keys. Most optical keyboards require at least 2 beams (most commonly a vertical beam and a horizontal beam) to determine the actuated key.
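The two-beam scheme just described amounts to a lookup: a pressed key interrupts one horizontal beam (its row) and one vertical beam (its column), and the key sits at their intersection. A minimal sketch in Python; the function name and the three-row layout are illustrative, not any real controller's firmware:

```python
# Sketch of two-beam optical key detection: blocked horizontal beams give
# candidate rows, blocked vertical beams give candidate columns, and the
# pressed key lies at an intersection. Illustrative only.

def locate_pressed_keys(blocked_rows, blocked_cols, layout):
    """Return every key at the intersection of a blocked row and column."""
    return [layout[r][c] for r in sorted(blocked_rows)
                         for c in sorted(blocked_cols)]

layout = [["q", "w", "e"],
          ["a", "s", "d"],
          ["z", "x", "c"]]

# Pressing "d" blocks horizontal beam 1 and vertical beam 2:
print(locate_pressed_keys({1}, {2}, layout))  # ['d']
```

Note the inherent ambiguity: two keys pressed on a diagonal block two rows and two columns, yielding four candidate intersections rather than two actual keys. Resolving this is one motivation for the patterned key structures described next.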
Some optical keyboards use a special key structure that blocks the light in a certain pattern, allowing only one beam per row of keys (most commonly a horizontal beam). Key types Alphanumeric Alphabetical, numeric, and punctuation keys are used in the same fashion as a typewriter keyboard to enter their respective symbol into a word processing program, text editor, data spreadsheet, or other program. Many of these keys will produce different symbols when modifier keys or shift keys are pressed. The alphabetic characters become uppercase when the shift key or Caps Lock key is depressed. The numeric characters become symbols or punctuation marks when the shift key is depressed. The alphabetical, numeric, and punctuation keys can also have other functions when they are pressed at the same time as some modifier keys. The Space bar is a horizontal bar in the lowermost row, which is significantly wider than other keys. Like the alphanumeric characters, it is also descended from the mechanical typewriter. Its main purpose is to enter the space between words during typing. It is large enough so that a thumb from either hand can use it easily. Depending on the operating system, when the space bar is used with a modifier key such as the control key, it may have functions such as resizing or closing the current window, half-spacing, or backspacing. In computer games and other applications the key has myriad uses in addition to its normal purpose in typing, such as jumping and adding marks to check boxes. In certain programs for playback of digital video, the space bar is used for pausing and resuming the playback. Modifier keys Modifier keys are special keys that modify the normal action of another key, when the two are pressed in combination. For example, Alt+F4 in Microsoft Windows will close the program in an active window. In contrast, pressing just Alt will probably do nothing, unless assigned a specific function in a particular program.
By themselves, modifier keys usually do nothing. The most widely used modifier keys include the Control key, Shift key and the Alt key. The AltGr key is used to access additional symbols for keys that have three symbols printed on them. On Macintosh and Apple keyboards, the modifier keys are the Option key and the Command key. On Sun Microsystems and Lisp machine keyboards, the Meta key is used as a modifier, and on Windows keyboards there is a Windows key. Compact keyboard layouts often use a Fn key. "Dead keys" allow placement of a diacritic mark, such as an accent, on the following letter (e.g., the Compose key). The enter/return key typically causes a command line, window form or dialog box to operate its default function, which is typically to finish an "entry" and begin the desired process. In word processing applications, pressing the enter key ends a paragraph and starts a new one. Cursor keys Navigation keys or cursor keys include a variety of keys which move the cursor to different positions on the screen. Arrow keys are programmed to move the cursor in a specified direction; page scroll keys, such as the Page Up and Page Down keys, scroll the page up and down. The Home key is used to return the cursor to the beginning of the line where the cursor is located; the End key puts the cursor at the end of the line. The Tab key advances the cursor to the next tab stop. The Insert key is mainly used to switch between overtype mode, in which the cursor overwrites any text that is present on and after its current location, and insert mode, where the cursor inserts a character at its current position, forcing all characters past it one position further. The Delete key discards the character ahead of the cursor's position, moving all following characters one position "back" towards the freed place.
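The Insert, Delete and Backspace behaviour described above can be modelled with a small text buffer. A toy sketch in Python; the class and method names are illustrative, not any real editor's API:

```python
# Toy line buffer modelling insert/overtype modes and the Delete and
# Backspace keys, as described in the text. Illustrative names only.

class LineBuffer:
    def __init__(self, text="", cursor=0):
        self.text, self.cursor = text, cursor
        self.overtype = False            # the Insert key toggles this flag

    def toggle_insert(self):
        self.overtype = not self.overtype

    def type_char(self, ch):
        i = self.cursor
        if self.overtype:                # overwrite the character at the cursor
            self.text = self.text[:i] + ch + self.text[i + 1:]
        else:                            # insert: push later characters right
            self.text = self.text[:i] + ch + self.text[i:]
        self.cursor += 1

    def delete(self):                    # discard the character ahead of the cursor
        i = self.cursor
        self.text = self.text[:i] + self.text[i + 1:]

    def backspace(self):                 # delete the preceding character
        if self.cursor:
            i = self.cursor
            self.text = self.text[:i - 1] + self.text[i:]
            self.cursor -= 1

buf = LineBuffer("cat", cursor=1)
buf.type_char("o")                       # insert mode: "coat"
buf.delete()                             # Delete removes the "a" ahead: "cot"
print(buf.text)                          # cot
```

The asymmetry in the text is visible here: Delete removes the character ahead of the cursor without moving it, while Backspace removes the character behind the cursor and moves it back one position.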
On many notebook computer keyboards the key labeled Delete (sometimes Delete and Backspace are printed on the same key) serves the same purpose as a Backspace key. The Backspace key deletes the preceding character. Lock keys lock part of a keyboard, depending on the settings selected. The lock keys are scattered around the keyboard. Most styles of keyboards have three LEDs indicating which locks are enabled, in the upper right corner above the numeric pad. The lock keys include Scroll lock, Num lock (which allows the use of the numeric keypad), and Caps lock. System commands The SysRq and Print screen commands often share the same key. SysRq was used in earlier computers as a "panic" button to recover from crashes (and it is still used in this sense to some extent by the Linux kernel; see Magic SysRq key). The Print screen command originally captured the entire screen and sent it to the printer; today it usually places a screenshot in the clipboard. Break key The Break key/Pause key no longer has a well-defined purpose. Its origins go back to teleprinter users, who wanted a key that would temporarily interrupt the communications line. The Break key can be used by software in several different ways, such as to switch between multiple login sessions, to terminate a program, or to interrupt a modem connection. In programming, especially old DOS-style BASIC, Pascal and C, Break is used (in conjunction with Ctrl) to stop program execution. In addition to this, Linux and variants, as well as many DOS programs, treat this combination the same as Ctrl+C. On modern keyboards, the break key is usually labeled Pause/Break. In most Windows environments, the key combination Windows key+Pause brings up the system properties. Escape key The escape key (Esc) has a variety of meanings according to operating system, application or both. "Nearly all of the time", it signals Stop, QUIT, or "let me get out of a dialog" (or pop-up window).
It triggers the Stop function in many web browsers. The escape key was part of the standard keyboard of the Teletype Model 33 (introduced in 1964 and used with many early minicomputers). The DEC VT50, introduced July 1974, also had an Esc key. The TECO text editor (ca 1963) and its descendant Emacs (ca 1985) use the Esc key extensively. Historically it also served as a type of shift key, such that one or more following characters were interpreted differently, hence the term escape sequence, which refers to a series of characters, usually preceded by the escape character. On machines running Microsoft Windows, prior to the implementation of the Windows key on keyboards, the typical practice for invoking the "start" button was to hold down the control key and press escape. This process still works in Windows 95, 98, Me, NT 4, 2000, XP, Vista, 7, 8, and 10. Enter key or Return key The 'enter key' and 'return key' are two closely related keys with overlapping and distinct functions dependent on operating system and application. On full-size keyboards, there are two such keys, one among the alphanumeric keys and the other on the numeric keypad. The purpose of the enter key is to confirm what has been typed. The return key is based on the original line feed/carriage return function of typewriters: in many word processors, for example, the return key ends a paragraph; in a spreadsheet, it completes the current cell and moves to the next cell. The shape of the Enter key differs between ISO and ANSI keyboards: on ANSI keyboards the Enter key occupies a single row (usually the third from the bottom), while on ISO keyboards it spans two rows and has an inverted-L shape. Shift key The purpose of the Shift key is to invoke the first alternative function of the key with which it is pressed concurrently. For alphabetic keys, Shift+letter gives the upper-case version of that letter. For other keys, the key is engraved with symbols for both the unshifted and shifted result.
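The Shift behaviour described above — upper case for letters, an engraved alternative for other keys — reduces to a two-symbol table per key. A minimal sketch in Python, using a few US-layout pairs; the table and function name are illustrative and far from complete:

```python
# Sketch of Shift resolution: each key carries an unshifted and a shifted
# symbol. The pairs below follow the US layout; the table is illustrative.

SHIFTED = {"1": "!", "2": "@", "3": "#", ";": ":", "/": "?"}

def resolve(key, shift=False):
    if key.isalpha():                    # letters: Shift selects upper case
        return key.upper() if shift else key
    if shift:                            # other keys: look up the engraved pair
        return SHIFTED.get(key, key)
    return key

print(resolve("a", shift=True))          # A
print(resolve("2", shift=True))          # @
print(resolve("2"))                      # 2
```

A real keyboard driver works the same way in spirit, but over scan codes and per-layout tables rather than characters, and with further levels for AltGr and similar modifiers.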
When used in combination with other control keys, the effect is system- and application-dependent. Menu key The Menu key or Application key is a key found on Windows-oriented computer keyboards. It is used to launch a context menu with the keyboard rather than with the usual right mouse button. The key's symbol is usually a small icon depicting a cursor hovering above a menu. On some Samsung keyboards the cursor in the icon is not present, showing the menu only. This key was created at the same time as the Windows key. This key is normally used when the right mouse button is not present on the mouse. Some Windows public terminals do not have a Menu key on their keyboard to prevent users from right-clicking (however, in many Windows applications, a similar functionality can be invoked with the Shift+F10 keyboard shortcut). Number pad Many, but not all, computer keyboards have a numeric keypad to the right of the alphabetic keyboard, often separated from the other groups of keys such as the function keys and system command keys, which contains numbers, basic mathematical symbols (e.g., addition, subtraction, etc.), and a few function keys. In addition to the row of number keys above the top alphabetic row, most desktop keyboards have a number pad or accounting pad, on the right hand side of the keyboard. While Num Lock is set, the numbers on these keys duplicate the number row; if not, they have alternative functions as engraved. In addition to numbers, this pad has command symbols concerned with calculations such as addition, subtraction, multiplication and division symbols. The Enter key on this pad can serve the function of the equals sign. Miscellaneous On Japanese/Korean keyboards, there may be language input keys for changing the input language.
Some keyboards have power management keys (e.g., power key, sleep key and wake key); Internet keys to access a web browser or e-mail; multimedia keys, such as volume controls; or keys that can be programmed by the user to launch a specified application or a command like minimizing all windows. Multiple layouts It is possible to install multiple keyboard layouts within an operating system and switch between them, either through features implemented within the OS, or through an external application. Microsoft Windows, Linux, and macOS provide support to add keyboard layouts and choose from them. Illumination Keyboards and keypads may be illuminated from inside, especially on equipment for mobile use. Both keyboards built into computers and external ones may support backlighting; external backlit keyboards may have a wired USB connection, or be connected wirelessly and powered by batteries. Illumination facilitates the use of the keyboard or keypad in dark environments. For general productivity, only the keys may be uniformly backlit, without distracting light around the keys. Many gaming keyboards are designed to have an aesthetic as well as functional appeal, with multiple colours, and colour-coded keys to make it easier for gamers to find command keys while playing in a dark room. Many keyboards not otherwise illuminated may have small LED indicator lights in a few important function keys, or elsewhere on the housing, to show whether their function is activated. Technology Key switches In the first electronic keyboards in the early 1970s, the key switches were individual switches inserted into holes in metal frames. These keyboards cost from 80 to 120 USD and were used in mainframe data terminals. The most popular switch types were reed switches (contacts enclosed in a vacuum in a glass capsule, affected by a magnet mounted on the switch plunger). 
In the mid-1970s, lower-cost direct-contact key switches were introduced, but their life in switch cycles was much shorter (rated ten million cycles) because they were open to the environment. This became more acceptable, however, for use in computer terminals at the time, which began to see increasingly shorter model lifespans as they advanced. In 1978, Key Tronic Corporation introduced keyboards with capacitive-based switches, one of the first keyboard technologies not to use self-contained switches. There was simply a sponge pad with a conductive-coated Mylar plastic sheet on the switch plunger, and two half-moon trace patterns on the printed circuit board below. As the key was depressed, the capacitance between the plunger pad and the patterns on the PCB below changed, which was detected by integrated circuits (ICs). These keyboards were claimed to have the same reliability as other "solid-state switch" keyboards such as inductive and Hall-effect types, while being price-competitive with direct-contact keyboards. Prices of $60 for keyboards were achieved, and Key Tronic rapidly became the largest independent keyboard manufacturer. Meanwhile, IBM made their own keyboards, using their own patented technology: keys on older IBM keyboards were made with a "buckling spring" mechanism, in which a coil spring under the key buckles under pressure from the user's finger, triggering a hammer that presses two plastic sheets (membranes) with conductive traces together, completing a circuit. This produces a clicking sound and gives physical feedback for the typist, indicating that the key has been depressed. The first electronic keyboards had a typewriter key travel distance of 0.187 inches (4.75 mm), keytops were a half-inch (12.7 mm) high, and keyboards were about two inches (5 cm) thick. Over time, less key travel was accepted in the market, finally settling at 0.110 inches (2.79 mm). 
Coincident with this, Key Tronic was the first company to introduce a keyboard that was only about one inch thick. Now keyboards measure only about a half-inch thick. Keytops are an important element of keyboards. In the beginning, keyboard keytops had a "dish shape" on top, like typewriters before them. Keyboard key legends must be extremely durable over tens of millions of depressions, since they are subjected to extreme mechanical wear from fingers and fingernails, and to hand oils and creams, so engraving and filling key legends with paint, as was done previously for individual switches, was never acceptable. So, for the first electronic keyboards, the key legends were produced by two-shot (or double-shot, or two-color) molding, where either the key shell or the inside of the key with the key legend was molded first, and then the other color molded second. But, to save cost, other methods were explored, such as sublimation printing and laser engraving, both of which could be used to print a whole keyboard at the same time. Initially, sublimation printing, in which a special ink is printed onto the keycap surface and the application of heat causes the ink molecules to penetrate and commingle with the plastic, had a problem: finger oils caused the molecules to disperse, so a very hard clear coating had to be applied to prevent this. Coincident with sublimation printing, which was first used in high volume by IBM on their keyboards, was IBM's introduction of single-curved-dish keycaps, which facilitated quality printing of key legends by providing a consistently curved surface instead of a dish. But one problem with sublimation or laser printing was that the processes took too long and only dark legends could be printed on light-colored keys. IBM was also unique in using separate shells, or "keycaps", on keytop bases. 
This might have made their manufacturing of different keyboard layouts more flexible, but the real reason was that the plastic material needed for sublimation printing was different from standard ABS keytop plastic. Three final mechanical technologies brought keyboards to where they are today, driving the cost well under $10. "Monoblock" keyboard designs were developed in which individual switch housings were eliminated and a one-piece "monoblock" housing was used instead. This was possible because of molding techniques that could provide very tight tolerances for the switch-plunger holes and guides across the width of the keyboard, so that the key plunger-to-housing clearances were not too tight or too loose, either of which could cause the keys to bind. A second advance was the use of contact-switch membrane sheets under the monoblock. This technology came from flat-panel switch membranes, where the switch contacts are printed inside a top and bottom layer, with a spacer layer in between, so that when pressure is applied to the area above, a direct electrical contact is made. The membrane layers can be printed by very-high-volume, low-cost "reel-to-reel" printing machines, with each keyboard membrane cut and punched out afterwards. Finally, plastic materials played a very important part in the development and progress of electronic keyboards. Until "monoblocks" came along, DuPont's "self-lubricating" Delrin was the only plastic material for keyboard switch plungers that could withstand the beating over tens of millions of cycles of lifetime use. Greasing or oiling switch plungers was undesirable because it would attract dirt over time, which would eventually affect the feel and even bind the key switches (although keyboard manufacturers would sometimes sneak this into their keyboards, especially if they could not control the tolerances of the key plungers and housings well enough to have a smooth key depression feel or prevent binding). 
But Delrin was only available in black and white, and was not suitable for keytops (too soft), so keytops use ABS plastic. However, as plastic molding advanced in maintaining tight tolerances, and as key travel length reduced from 0.187-inch to 0.110-inch (4.75 mm to 2.79 mm), single-part keytop/plungers could be made of ABS, with the keyboard monoblocks also made of ABS. In common use, the term "mechanical keyboard" refers to a keyboard with individual mechanical key switches, each of which contains a fully encased plunger with a spring below it and metallic electrical contacts on one side. The plunger sits on the spring, and the key will often close the contacts when the plunger is pressed halfway; other switches require the plunger to be fully pressed down. The depth at which the plunger must be pressed for the contacts to close is known as the activation distance. Also available are analog keyboards with key switches whose activation distance can be reconfigured through software, optical switches that work by blocking laser beams, and Hall-effect keyboards whose key switches use a magnet to activate a Hall sensor. Some keyboards, called pressure-sensitive, allow varying input according to the distance pressed, analogously to an analog joystick. Control processor Computer keyboards include control circuitry to convert key presses into key codes (usually scancodes) that the computer's electronics can understand. The key switches are connected via the printed circuit board in an electrical X-Y matrix: a voltage is provided sequentially to the Y lines and, when a key is depressed, detected sequentially by scanning the X lines. The first computer keyboards were for mainframe computer data terminals and used discrete electronic parts. The first keyboard microprocessor was introduced in 1972 by General Instrument, but keyboards have been using the single-chip 8048 microcontroller variant since it became available in 1978. 
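The X-Y matrix scan described above can be sketched in a few lines. This is an illustrative simulation only, not real controller firmware: the 4×4 matrix size and the set of closed switches are made-up assumptions.

```python
# Sketch of X-Y keyboard matrix scanning (illustrative 4x4 matrix, not
# real 8048 firmware). Each Y (drive) line is energized in turn; the X
# (sense) lines are then sampled to find which keys are closed.

# Hypothetical hardware state: the set of closed switches as (y, x) pairs.
closed_switches = {(0, 2), (3, 1)}

def read_x_lines(y):
    """Return a 4-bit sample of the X sense lines while drive line y is high."""
    sample = 0
    for x in range(4):
        if (y, x) in closed_switches:
            sample |= 1 << x
    return sample

def scan_matrix():
    """One full scan: return the list of (y, x) key positions found closed."""
    pressed = []
    for y in range(4):          # drive each Y line sequentially
        sample = read_x_lines(y)
        for x in range(4):      # decode which X lines read back high
            if sample & (1 << x):
                pressed.append((y, x))
    return pressed

print(scan_matrix())  # -> [(0, 2), (3, 1)]
```

A real controller repeats this scan continuously, comparing each pass against the previous one to detect presses and releases.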
The keyboard switch matrix is wired to the microcontroller's inputs; it converts the keystrokes to key codes and, for a detached keyboard, sends the codes down a serial cable (the keyboard cord) to the main processor on the computer motherboard. This serial keyboard cable communication is only bi-directional to the extent that the computer's electronics controls the illumination of the caps lock, num lock and scroll lock lights. One test for whether the computer has crashed is pressing the caps lock key: the keyboard sends the key code to the keyboard driver running in the main computer, and if the main computer is operating, it commands the light to turn on. All the other indicator lights work in a similar way. The keyboard driver also tracks the shift, alt and control state of the keyboard. Some lower-quality keyboards have multiple or false key entries due to inadequate electrical designs. These are caused by inadequate keyswitch "debouncing" or by a keyswitch matrix layout that doesn't allow multiple keys to be depressed at the same time, both of which are explained below. When pressing a keyboard key, the key contacts may "bounce" against each other for several milliseconds before they settle into firm contact. When released, they bounce some more until they revert to the uncontacted state. If the computer were watching for each pulse, it would see many keystrokes for what the user thought was just one. To resolve this problem, the processor in a keyboard (or computer) "debounces" the keystrokes, by aggregating them across time to produce one "confirmed" keystroke. Some low-quality keyboards also suffer problems with rollover (that is, when multiple keys are pressed at the same time, or when keys are pressed so fast that multiple keys are down within the same few milliseconds). 
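The debouncing step described above can be sketched as a simple counter-based filter. The sample stream, sampling model and threshold below are illustrative assumptions, not any particular controller's algorithm.

```python
# Counter-based debounce sketch (illustrative; real firmware typically
# samples each switch every millisecond or so). The raw input must read
# the same for STABLE_SAMPLES consecutive samples before the reported
# state is allowed to change.

STABLE_SAMPLES = 4

def debounce(samples):
    """Collapse a noisy 0/1 sample stream into confirmed state changes."""
    confirmed = samples[0]
    run = 0
    events = []
    for raw in samples[1:]:
        if raw != confirmed:
            run += 1
            if run >= STABLE_SAMPLES:   # held long enough: accept new state
                confirmed = raw
                run = 0
                events.append(confirmed)
        else:
            run = 0                     # bounced back: restart the count
    return events

# A press that chatters (0/1 bounce) before settling at 1, then a release:
stream = [0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(debounce(stream))  # -> [1, 0]: one confirmed press, one confirmed release
```

The bounce pulses at the start of the stream never reach the threshold, so the user's single keystroke is reported exactly once.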
Early "solid-state" keyswitch keyboards did not have this problem because the keyswitches are electrically isolated from each other, and early "direct-contact" keyswitch keyboards avoided this problem by having isolation diodes for every keyswitch. These early keyboards had "n-key" rollover, which means any number of keys can be depressed and the keyboard will still recognize the next key depressed. But when three keys are pressed (electrically closed) at the same time in a "direct contact" keyswitch matrix that doesn't have isolation diodes, the keyboard electronics can see a fourth "phantom" key which is the intersection of the X and Y lines of the three keys. Some types of keyboard circuitry will register a maximum number of keys at one time. "Three-key" rollover, also called "phantom key blocking" or "phantom key lockout", will only register three keys and ignore all others until one of the three keys is lifted. This is undesirable, especially for fast typing (hitting new keys before the fingers can release previous keys), and games (designed for multiple key presses). As direct-contact membrane keyboards became popular, the available rollover of keys was optimized by analyzing the most common key sequences and placing these keys so that they do not potentially produce phantom keys in the electrical key matrix (for example, simply placing three or four keys that might be depressed simultaneously on the same X or same Y line, so that a phantom key intersection/short cannot happen), so that blocking a third key usually isn't a problem. But lower-quality keyboard designs and unknowledgeable engineers may not know these tricks, and it can still be a problem in games due to wildly different or configurable layouts in different games. 
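The phantom-key effect described above can be modeled directly: closed switches electrically join their X and Y lines, and every row/column pair joined into the same electrical group then reads as a closed key. This is an illustrative sketch with made-up key positions, not a description of any real keyboard's circuitry.

```python
# Phantom-key ("ghosting") sketch for a direct-contact switch matrix
# with no isolation diodes. A closed switch shorts its row line to its
# column line; any row/column intersection reachable through a chain of
# closed switches then appears closed to the scanning controller.

def apparent_keys(pressed):
    """Return the key set the controller observes, ghosts included."""
    parent = {}

    def find(n):                      # union-find over row/column nodes
        parent.setdefault(n, n)
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    def union(a, b):
        parent[find(a)] = find(b)

    for y, x in pressed:              # each closed switch joins row y to col x
        union(('r', y), ('c', x))

    rows = {y for y, _ in pressed}
    cols = {x for _, x in pressed}
    return {(y, x) for y in rows for x in cols
            if find(('r', y)) == find(('c', x))}

three = {(0, 0), (0, 1), (1, 0)}      # three corners of a rectangle
print(sorted(apparent_keys(three)))   # the fourth corner (1, 1) ghosts
```

Two keys on different rows and columns produce no ghost, which is why careful matrix layouts place commonly combined keys so that they never complete three corners of a rectangle.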
Connection types There are several ways of connecting a keyboard to a system unit (more precisely, to its keyboard controller) using cables, including the standard AT connector commonly found on motherboards, which was eventually replaced by the PS/2 and the USB connection. Prior to the iMac line of systems, Apple used the proprietary Apple Desktop Bus for its keyboard connector. Wireless keyboards have become popular. A wireless keyboard must have a transmitter built in, and a receiver connected to the computer's keyboard port; it communicates either by radio frequency (RF) or infrared (IR) signals. A wireless keyboard may use industry standard Bluetooth radio communication, in which case the receiver may be built into the computer. Wireless keyboards need batteries for power, and may be at risk of data eavesdropping. Wireless solar keyboards charge their batteries from small solar panels using natural or artificial light. The 1984 Apricot Portable is an early example of an IR keyboard. Alternative text-entering methods Optical character recognition (OCR) is preferable to rekeying for converting existing text that is already written down but not in machine-readable format (for example, a Linotype-composed book from the 1940s). In other words, to convert the text from an image to editable text (that is, a string of character codes), a person could re-type it, or a computer could look at the image and deduce what each character is. OCR technology has already reached an impressive state (for example, Google Book Search) and promises more for the future. Speech recognition converts speech into machine-readable text (that is, a string of character codes). This technology has also reached an advanced state and is implemented in various software products. For certain uses (e.g., transcription of medical or legal dictation; journalism; writing essays or novels) speech recognition is starting to replace the keyboard. 
However, the lack of privacy when issuing voice commands and dictation makes this kind of input unsuitable for many environments. Pointing devices can be used to enter text or characters in contexts where using a physical keyboard would be inappropriate or impossible. These accessories typically present characters on a display, in a layout that provides fast access to the more frequently used characters or character combinations. Popular examples of this kind of input are Graffiti, Dasher and on-screen virtual keyboards. Other issues Keystroke logging Unencrypted wireless Bluetooth keyboards are known to be vulnerable to signal theft by placing a covert listening device in the same room as the keyboard to sniff and record Bluetooth packets for the purpose of logging keys typed by the user. Microsoft wireless keyboards 2011 and earlier are documented to have this vulnerability. Keystroke logging (often called keylogging) is a method of capturing and recording user keystrokes. While it is used legally to measure employee productivity on certain clerical tasks, or by law enforcement agencies to find out about illegal activities, it is also used by hackers for various illegal or malicious acts. Hackers use keyloggers as a means to obtain passwords or encryption keys and thus bypass other security measures. Keystroke logging can be achieved by both hardware and software means. Hardware key loggers are attached to the keyboard cable or installed inside standard keyboards. Software keyloggers work on the target computer's operating system and gain unauthorized access to the hardware, hook into the keyboard with functions provided by the OS, or use remote access software to transmit recorded data out of the target computer to a remote location. 
Some hackers also use wireless keylogger sniffers to collect packets of data being transferred from a wireless keyboard and its receiver, and then they crack the encryption key being used to secure wireless communications between the two devices. Anti-spyware applications are able to detect many keyloggers and remove them. Responsible vendors of monitoring software support detection by anti-spyware programs, thus preventing abuse of the software. Enabling a firewall does not stop keyloggers per se, but can possibly prevent transmission of the logged material over the net if properly configured. Network monitors (also known as reverse-firewalls) can be used to alert the user whenever an application attempts to make a network connection. This gives the user the chance to prevent the keylogger from "phoning home" with his or her typed information. Automatic form-filling programs can prevent keylogging entirely by not using the keyboard at all. Historically, most keyloggers could be fooled by alternating between typing the login credentials and typing characters somewhere else in the focus window. Keyboards are also known to emit electromagnetic signatures that can be detected using special spying equipment to reconstruct the keys pressed on the keyboard. Neal O'Farrell, executive director of the Identity Theft Council, told InformationWeek: "More than 25 years ago, a couple of former spooks showed me how they could capture a user's ATM PIN, from a van parked across the street, simply by capturing and decoding the electromagnetic signals generated by every keystroke. They could even capture keystrokes from computers in nearby offices, but the technology wasn't sophisticated enough to focus in on any specific computer." Physical injury The use of any keyboard may cause serious injury (such as carpal tunnel syndrome or other repetitive strain injury) to hands, wrists, arms, neck or back. 
The risks of injuries can be reduced by taking frequent short breaks to get up and walk around a couple of times every hour. As well, users should vary tasks throughout the day, to avoid overuse of the hands and wrists. When inputting at the keyboard, a person should keep the shoulders relaxed with the elbows at the side, with the keyboard and mouse positioned so that reaching is not necessary. The chair height and keyboard tray should be adjusted so that the wrists are straight, and the wrists should not be rested on sharp table edges. Wrist or palm rests should not be used while typing. Some adaptive technology ranging from special keyboards, mouse replacements and pen tablet interfaces to speech recognition software can reduce the risk of injury. Pause software reminds the user to pause frequently. Switching to a much more ergonomic mouse, such as a vertical mouse or joystick mouse may provide relief. By using a touchpad or a stylus pen with a graphic tablet, in place of a mouse, one can lessen the repetitive strain on the arms and hands.
Pond
A pond is a small, still, land-based body of water formed by pooling inside a depression, either naturally or artificially. A pond is smaller than a lake and there are no official criteria distinguishing the two, although defining a pond to be less than in area, less than in depth and with less than 30% of its area covered by emergent vegetation helps in distinguishing the ecology of ponds from those of lakes and wetlands. Ponds can be created by a wide variety of natural processes (e.g. on floodplains as cutoff river channels, by glacial processes, by peatland formation, in coastal dune systems, by beavers), or they can simply be isolated depressions (such as a kettle hole, vernal pool, prairie pothole, or simply natural undulations in undrained land) filled by runoff, groundwater, or precipitation, or all three of these. They can be further divided into four zones: vegetation zone, open water, bottom mud and surface film. The size and depth of ponds often varies greatly with the time of year; many ponds are produced by spring flooding from rivers. Ponds are usually freshwater but may be brackish in nature. Saltwater pools, with a direct connection to the sea to maintain full salinity, may sometimes be called 'ponds' but these are normally regarded as part of the marine environment. They do not support fresh or brackish water-based organisms, and are rather tidal pools or lagoons. Ponds are typically shallow water bodies with varying abundances of aquatic plants and animals. Depth, seasonal water level variations, nutrient fluxes, amount of light reaching the ponds, the shape, the presence of visiting large mammals, the composition of any fish communities and salinity can all affect the types of plant and animal communities present. Food webs are based both on free-floating algae and upon aquatic plants. There is usually a diverse array of aquatic life, with a few examples including algae, snails, fish, beetles, water bugs, frogs, turtles, otters, and muskrats. 
Top predators may include large fish, herons, or alligators. Since fish are a major predator of amphibian larvae, ponds that dry up each year, thereby killing resident fish, provide important refugia for amphibian breeding. Ponds that dry up completely each year are often known as vernal pools. Some ponds are produced by animal activity, including alligator holes and beaver ponds, and these add important diversity to landscapes. Ponds are frequently man-made, or expanded beyond their original depths and bounds, by anthropogenic causes. Apart from their role as highly biodiverse, fundamentally natural, freshwater ecosystems, ponds have had, and still have, many uses, including providing water for agriculture, livestock and communities, aiding in habitat restoration, serving as breeding grounds for local and migrating species, decorative components of landscape architecture, flood control basins, general urbanization, interception basins for pollutants and sources and sinks of greenhouse gases. Classification The technical distinction between a pond and a lake has not been universally standardized. Limnologists and freshwater biologists have proposed formal definitions for pond, in part to include 'bodies of water where light penetrates to the bottom of the waterbody', 'bodies of water shallow enough for rooted water plants to grow throughout', and 'bodies of water which lack wave action on the shoreline'. Each of these definitions is difficult to measure or verify in practice, of limited practical use, and now mostly unused. Accordingly, some organizations and researchers have settled on technical definitions of pond and lake that rely on size alone. Some regions of the United States define a pond as a body of water with a surface area of less than 10 acres (4.0 ha). 
Minnesota, known as the "land of 10,000 lakes", is commonly said to distinguish lakes from ponds, bogs and other water features by this definition, but also says that a lake is distinguished primarily by wave action reaching the shore. Even among organizations and researchers who distinguish lakes from ponds by size alone, there is no universally recognized standard for the maximum size of a pond. The international Ramsar wetland convention sets the upper limit for pond size as 8 hectares (80,000 m2; 20 acres). Researchers for the British charity Pond Conservation (now called Freshwater Habitats Trust) have defined a pond to be 'a man-made or natural waterbody that is between 1 m2 (0.00010 hectares; 0.00025 acres) and 20,000 m2 (2.0 hectares; 4.9 acres) in area, which holds water for four months of the year or more.' Other European biologists have set the upper size limit at 5 hectares (50,000 m2; 12 acres). In North America, even larger bodies of water have been called ponds; for example, Crystal Lake at 33 acres (130,000 m2; 13 ha), Walden Pond in Concord, Massachusetts at 61 acres (250,000 m2; 25 ha), and nearby Spot Pond at 340 acres (140 ha). There are numerous examples in other states, where bodies of water less than 10 acres (40,000 m2; 4.0 ha) are being called lakes. As the case of Crystal Lake shows, marketing purposes can sometimes be the driving factor behind the categorization. In practice, a body of water is called a pond or a lake on an individual basis, as conventions change from place to place and over time. In origin, a pond is a variant form of the word pound, meaning a confining enclosure. In earlier times, ponds were artificial and utilitarian, as stew ponds, mill ponds and so on. The significance of this feature seems, in some cases, to have been lost when the word was carried abroad with emigrants. However, some parts of New England contain "ponds" that are actually the size of a small lake when compared to other countries. 
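The competing size-based definitions quoted above disagree with one another, and a small sketch makes the disagreement concrete. The upper limits below are taken from the text; the helper function itself is purely illustrative, not an official classification tool, and uses area alone (ignoring the Pond Conservation definition's additional area minimum and four-month water requirement).

```python
# Illustrative comparison of the size-based pond definitions quoted in
# the text. Upper limits are in hectares; everything else is a made-up
# helper for demonstration only.

UPPER_LIMITS_HA = {
    "Ramsar convention": 8.0,
    "Pond Conservation (UK)": 2.0,       # also requires >= 1 m^2 and water 4+ months/year
    "Other European biologists": 5.0,
    "Some US regions (10 acres)": 4.0,
}

def verdicts(area_ha):
    """Which of the quoted definitions would call this waterbody a pond?"""
    return {name: area_ha <= limit for name, limit in UPPER_LIMITS_HA.items()}

# Walden Pond is about 25 ha: a "pond" by name, a lake by every quoted definition.
print(verdicts(25))
# A 3 ha waterbody is a pond under every quoted definition except
# Pond Conservation's 2 ha limit.
print(verdicts(3))
```

The exercise shows why, in practice, a body of water ends up being called a pond or a lake by local convention rather than by any single threshold.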
In the United States, natural pools are often called ponds. Ponds for a specific purpose keep the adjective, such as "stock pond", used for watering livestock. The term is also used for temporary accumulation of water from surface runoff (ponded water). There are various regional names for naturally occurring ponds. In Scotland, one of the terms is lochan, which may also apply to a large body of water such as a lake. In the southwestern parts of North America, lakes or ponds that are temporary and dry for most of the year are called playas. These playas are simply shallow depressions in dry areas that may only fill with water on certain occasions, such as excess local drainage, groundwater seepage, or rain. Formation Any depression in the ground which collects and retains a sufficient amount of water can be considered a pond, and as such can be formed by a variety of geological, ecological, and human terraforming events. Natural ponds are those caused by environmental occurrences, which can be glacial, volcanic, fluvial, or even tectonic events. Since the Pleistocene epoch, glacial processes have created most of the ponds in the Northern Hemisphere; an example is the Prairie Pothole Region of North America. When glaciers retreat, they may leave behind uneven ground due to bedrock elastic rebound and sediment outwash plains. These areas may develop depressions that can fill with excess precipitation or seeping groundwater, forming a small pond. Kettle lakes and ponds are formed when ice breaks off from a larger glacier, is eventually buried by the surrounding glacial till, and over time melts. Orogenies and other tectonic uplifting events have created some of the oldest lakes and ponds on the globe. These indentations tend to fill quickly with groundwater if they occur below the local water table. Other tectonic rifts or depressions can fill with precipitation or local mountain runoff, or be fed by mountain streams. 
Volcanic activity can also lead to lake and pond formation through collapsed lava tubes or volcanic cones. Natural floodplains along rivers, as well as landscapes that contain many depressions, may experience spring/rainy season flooding and snow melt. Temporary or vernal ponds are created this way and are important for breeding fish, insects, and amphibians, particularly in large river systems like the Amazon. Some ponds are created solely by animal species such as beavers, bison, alligators and other crocodilians, through damming and nest excavation respectively. In landscapes with organic soils, local fires can create depressions during periods of drought. These tend to fill with small amounts of precipitation until normal water levels return, turning these isolated depressions into open water. Man-made ponds are those created by human intervention for the sake of the local environment, for industrial settings, or for recreational/ornamental use. Uses Many ecosystems are linked by water, and ponds have been found to hold a greater biodiversity of species than larger freshwater lakes or river systems. As such, ponds are habitats for many varieties of organisms including plants, amphibians, fish, reptiles, waterfowl, insects, and even some mammals. Ponds are used as breeding grounds for these species but also as shelter and even drinking/feeding locations for other wildlife. Aquaculture practices lean heavily on artificial ponds in order to grow and care for many different types of fish, whether for human consumption, research, species conservation or recreational sport. In agriculture, treatment ponds can be created to reduce nutrient runoff from reaching local streams or groundwater storages. Pollutants that enter ponds can often be mitigated by natural sedimentation and other biological and chemical activities within the water. As such, waste stabilization ponds are becoming popular low-cost methods for general wastewater treatment. 
They may also provide irrigation reservoirs for struggling farms during times of drought. As urbanization continues to spread, retention ponds are becoming more common in new housing developments. These ponds reduce the risk of flooding and erosion damage from excess storm water runoff in local communities. Experimental ponds are used to test hypotheses in the fields of environmental science, chemistry, aquatic biology, and limnology. Some ponds are the lifeblood of many small villages in arid countries such as those in sub-Saharan Africa, where bathing, sanitation, fishing, socialization, and rituals take place. In the Indian subcontinent, Hindu temple monks care for sacred ponds used for religious practices and for bathing pilgrims alike. In Europe during medieval times, it was typical for many monasteries and castles (small, partly self-sufficient communities) to have fish ponds. These are still common in Europe and in East Asia (notably Japan), where koi may be kept or raised. In Nepal, artificial ponds were essential elements of the ancient drinking water supply system. These ponds were fed with rainwater, water coming in through canals, their own springs, or a combination of these sources. They were designed to retain the water, while at the same time letting some water seep away to feed the local aquifers. Pond biodiversity A defining feature of a pond is the presence of standing water, which provides habitat for a biological community commonly referred to as pond life. Because of this, many ponds and lakes contain large numbers of endemic species that have gone through adaptive radiation to become specialized to their preferred habitat. Familiar examples might include water lilies and other aquatic plants, frogs, turtles, and fish. Often, the entire margin of the pond is fringed by wetland, and these wetlands support the aquatic food web, provide shelter for wildlife, and stabilize the shore of the pond. 
This margin is also known as the littoral zone and contains much of this ecosystem's photosynthetic algae and plants, called macrophytes. Other photosynthetic organisms such as phytoplankton (suspended algae) and periphyton (communities including cyanobacteria, detritus, and other microbes) thrive here and stand as the primary producers of pond food webs. Some grazing animals like geese and muskrats consume the wetland plants directly as a source of food. In many other cases, pond plants decay in the water. Many invertebrates and herbivorous zooplankton then feed on the decaying plants, and these lower-trophic-level organisms provide food for wetland species including fish, dragonflies, and herons, both in the littoral zone and the limnetic zone. The open-water limnetic zone may allow algae to grow, as sunlight still penetrates here. These algae may support yet another food web that includes aquatic insects and other small fish species. A pond, therefore, may have combinations of three different food webs: one based on larger plants, one based upon decayed plants, and one based upon algae, each with its specific upper-trophic-level consumers and predators. Hence, ponds often have many different animal species using the wide array of food sources through biotic interaction. They therefore provide an important source of biological diversity in landscapes. Opposite to long-standing ponds are vernal ponds. These ponds dry up for part of the year and are so called because they are typically at their peak depth in the spring ("vernal" comes from the Latin word for spring). Naturally occurring vernal ponds do not usually have fish, a major higher-trophic-level consumer, as these ponds frequently dry up. The absence of fish is a very important characteristic of these ponds, since it prevents long-chained biotic interactions from establishing. 
Ponds without these competitive predation pressures provide breeding locations and safe havens for endangered or migrating species. Hence, introducing fish to a pond can have seriously detrimental consequences. In some parts of the world, such as California, the vernal ponds have rare and endangered plant species. On the coastal plain, they provide habitat for endangered frogs such as the Mississippi gopher frog. Often groups of ponds in a given landscape - so-called 'pondscapes' - offer especially high biodiversity benefits compared to single ponds. A group of ponds provides a higher degree of habitat complexity and habitat connectivity. Stratification Many ponds undergo a regular yearly process in the same manner as larger lakes if they are deep enough and/or protected from the wind. Abiotic factors such as UV radiation, general temperature, wind speed, water density, and even size all have important roles to play when it comes to the seasonal effects on lakes and ponds. Through spring overturn, summer stratification, autumn turnover, and an inverse winter stratification, ponds adjust their stratification, or vertical zonation of temperature, in response to these influences. These environmental factors affect pond circulation and temperature gradients within the water itself, producing distinct layers: the epilimnion, metalimnion, and hypolimnion. Each zone has varied traits that sustain or harm specific organisms and biotic interactions below the surface, depending on the season. Winter surface ice begins to melt in the spring. This allows the water column to begin mixing, thanks to solar convection and wind velocity. As the pond mixes, an overall constant temperature is reached. As temperatures increase through the summer, thermal stratification takes place. Summer stratification allows the epilimnion to be mixed by winds, keeping a consistent warm temperature throughout this zone. Here, photosynthesis and primary production flourish. 
However, species that need cooler water with higher dissolved oxygen concentrations will favor the lower metalimnion or hypolimnion. Air temperature drops as fall approaches, and a deep mixing layer develops. Autumn turnover results in isothermal lakes with high levels of dissolved oxygen as the water reaches an average colder temperature. Finally, winter stratification occurs inversely to summer stratification as surface ice begins to form yet again. This ice cover remains until solar radiation and convection return in the spring. Due to this constant change in vertical zonation, seasonal stratification causes habitats to grow and shrink accordingly. Certain species are bound to these distinct layers of the water column, where they can thrive and survive with the best efficiency possible. For more information regarding seasonal thermal stratification of ponds and lakes, see "Lake stratification". Conservation and management Ponds provide not only environmental values but also practical benefits to society. One increasingly crucial benefit that ponds provide is their ability to act as greenhouse gas sinks. Most natural lakes and ponds are greenhouse gas sources and aid in the flux of these dissolved compounds. However, manmade farm ponds are becoming significant sinks for gas mitigation and the fight against climate change. These agricultural runoff ponds receive high-pH water from surrounding soils. These highly alkaline drainage ponds act as catalysts for excess carbon dioxide to be converted into forms of carbon that can easily be stored in sediments. When these new drainage ponds are constructed, concentrations of the bacteria that normally break down dead organic matter, such as algae, are low. As a result, the breakdown and release of nitrogen gases such as N2O from these organic materials does not occur, and these gases are thus not added to the atmosphere. This process also works alongside regular denitrification in the anoxic layer of ponds. 
However, not all ponds have the ability to become sinks for greenhouse gases. Most ponds experience eutrophication when faced with excessive nutrient input from fertilizers and runoff. This over-enriches the pond water with nitrogen and results in mass algal blooms and local fish kills. Some farm ponds are not used for runoff control but rather serve as watering and bathing holes for livestock such as cattle or buffalo. As mentioned in the use section, ponds are important hotspots for biodiversity. Sometimes this becomes an issue with invasive or introduced species that disrupt pond ecosystem dynamics such as food-web structure, niche partitioning, and guild assignments. Examples range from introduced fish species such as the common carp, which eats native water plants, or northern snakeheads, which attack breeding amphibians, to aquatic snails that carry infectious parasites that kill other species, and even rapidly spreading aquatic plants like Hydrilla and duckweed that can restrict water flow and cause overbank flooding. Ponds, depending on their orientation and size, can spread their wetland habitats into the local riparian zones or watershed boundaries. Gentle slopes of land into ponds provide an expanse of habitat for wetland plants and wet meadows to expand beyond the limits of the pond. However, the construction of retaining walls, lawns, and other urbanized developments can severely degrade the range of pond habitats and the longevity of the pond itself. Roads and highways act in the same manner, but they also interfere with amphibians and turtles that migrate to and from ponds as part of their annual breeding cycle, and should be kept as far away from established ponds as possible. Because of these factors, gently sloping shorelines with broad expanses of wetland plants not only provide the best conditions for wildlife, but also help protect water quality from sources in the surrounding landscapes. 
It is also beneficial to allow water levels to fall each year during drier periods in order to re-establish these gentle shorelines. Where ponds are artificially constructed in a landscape, they are built to provide wildlife viewing and conservation opportunities, to treat wastewater, for sequestration and pollution containment, or simply for aesthetic purposes. One way to stimulate natural pond conservation and development is through general stream and river restoration. Many small rivers and streams feed into or from local ponds within the same watershed. When these rivers and streams flood and begin to meander, large numbers of natural ponds, including vernal pools and wetlands, develop. Examples Some notable ponds are: Big Pond, Nova Scotia, Canada Christian Pond, Wyoming, United States Walden Pond, Massachusetts, United States, associated with Henry David Thoreau Hampstead Ponds, London Kuttam Pokuna, medieval artificial pond in Anuradhapura, Sri Lanka Rani Pokhari, 17th-century artificial pond in Kathmandu, Nepal Rožmberk Pond, Czech Republic
https://en.wikipedia.org/wiki/Stream
Stream
A stream is a continuous body of surface water flowing within the bed and banks of a channel. Depending on its location or certain characteristics, a stream may be referred to by a variety of local or regional names. Long, large streams are usually called rivers, while smaller, less voluminous and more intermittent streams are known as streamlets, brooks or creeks. The flow of a stream is controlled by three inputs – surface runoff (from precipitation or meltwater), daylighted subterranean water, and surfaced groundwater (spring water). The surface and subterranean water are highly variable between periods of rainfall. Groundwater, on the other hand, has a relatively constant input and is controlled more by long-term patterns of precipitation. The stream encompasses surface, subsurface and groundwater fluxes that respond to geological, geomorphological, hydrological and biotic controls. Streams are important as conduits in the water cycle, instruments in groundwater recharge, and corridors for fish and wildlife migration. The biological habitat in the immediate vicinity of a stream is called a riparian zone. Given the status of the ongoing Holocene extinction, streams play an important corridor role in connecting fragmented habitats and thus in conserving biodiversity. The study of streams and waterways in general is known as surface hydrology and is a core element of environmental geography. Types Brook A brook is a stream smaller than a creek, especially one that is fed by a spring or seep. It is usually small and easily forded. A brook is characterised by its shallowness. Creek A creek or crick: In Australia, Canada, New Zealand and the United States, a (narrow) stream that is smaller than a river; a minor tributary of a river; a brook. Sometimes navigable by watercraft and may be intermittent. 
In the United Kingdom, India, and parts of Maryland and New England, a tidal inlet, typically in a salt marsh or mangrove swamp, or between enclosed and drained former salt marshes or swamps (e.g. Portsbridge Creek separating Portsea Island from the mainland). In these cases, the "stream" is the tidal stream, the course of the seawater through the creek channel at low and high tide. In hydrography, a gut is a small creek; this is seen in proper names in eastern North America from the Mid-Atlantic states (for instance, The Gut in Pennsylvania, Ash Gut in Delaware, and other streams) down into the Caribbean (for instance, Guinea Gut, Fish Bay Gut, Cob Gut, Battery Gut and other rivers and streams in the United States Virgin Islands; Sandy Gut, Bens Gut River, and White Gut River in Jamaica; and many streams and creeks of the Dutch Caribbean). River A river is a large natural stream that is much wider and deeper than a creek and not easily fordable, and may be a navigable waterway. Runnel The linear channel between the parallel ridges or bars on a shoreline beach or river floodplain, or between a bar and the shore. Also called a swale. Tributary A tributary is a contributory stream to a larger stream, or a stream which does not reach a static body of water such as a lake, bay or ocean but joins another river (a parent river). Sometimes also called a branch or fork. Distributary A distributary, or a distributary channel, is a stream that branches off and flows away from a main stream channel, and the phenomenon is known as river bifurcation. Distributaries are common features of river deltas, and are often found where a valleyed stream enters wide flatlands or approaches the coastal plains around a lake or an ocean. They can also occur inland, on alluvial fans, or where a tributary stream bifurcates as it nears its confluence with a larger stream. Common terms for individual river distributaries in English-speaking countries are arm and channel. 
Other names There are a number of regional names for a stream. Northern America Branch is used to name streams in Maryland and Virginia. Creek is common throughout the United States, as well as Australia. Falls is also used to name streams in Maryland, for streams/rivers which have waterfalls on them, even if such falls only have a small vertical drop. Little Gunpowder Falls and the Jones Falls are actually rivers named in this manner, unique to Maryland. Kill in New York, Pennsylvania, Delaware, and New Jersey comes from a Dutch language word meaning "riverbed" or "water channel", and can also be used for the UK meaning of 'creek'. Run in Ohio, Maryland, Michigan, New Jersey, Pennsylvania, Virginia, or West Virginia can be the name of a stream. Run in Florida is the name given to streams coming out of small natural springs. River is used for streams from larger springs like the Silver River and Rainbow River. Stream and brook are used in Midwestern states, Mid-Atlantic states, and New England. United Kingdom Allt is used in the Scottish Highlands. Beck is used in areas between Lincolnshire and Cumbria in areas which were once occupied by the Danes and Norwegians. Bourne or winterbourne is used in the chalk downland of southern England for ephemeral rivers. When permanent, they are chalk streams. Brook. Burn is used in Scotland and North East England. Gill or ghyll is seen in the north of England and Kent and Surrey influenced by Old Norse. The variant "ghyll" is used in the Lake District and appears to have been an invention of William Wordsworth. Nant is used in Wales. Rivulet is a term encountered in Victorian era publications. Syke is used in the Scottish Lowlands and Cumbria for a seasonal stream. Related terminology Bar A shoal that develops in a stream as sediment is deposited as the current slows or is impeded by wave action at the confluence. Bifurcation A fork into two or more streams. 
Channel A depression created by constant erosion that carries the stream's flow. Confluence The point at which two streams merge. If the two tributaries are of approximately equal size, the confluence may be called a fork. Drainage basin (also known as a watershed in the United States) The area of land where water flows into a stream. A large drainage basin, such as that of the Amazon River, contains many smaller drainage basins. Floodplain Lands adjacent to the stream that are subject to flooding when a stream overflows its banks. Headwaters or source The part of a stream or river proximate to its source. The word is most commonly used in the plural where there is no single point source. Knickpoint The point on a stream's profile where a sudden change in stream gradient occurs. Mouth The point at which the stream discharges, possibly via an estuary or delta, into a static body of water such as a lake or ocean. Pool A segment where the water is deeper and slower moving. Rapids A turbulent, fast-flowing stretch of a stream or river. Riffle A segment where the flow is shallower and more turbulent. River A large natural stream, which may be a waterway. Run A somewhat smoothly flowing segment of the stream. Spring The point at which a stream emerges from an underground course through unconsolidated sediments or through caves. A stream can, especially with caves, flow aboveground for part of its course, and underground for part of its course. Stream bed The bottom of a stream. Stream corridor A stream, its floodplains, and the transitional upland fringe. Streamflow The water moving through a stream channel. Stream gauge A site along the route of a stream or river, used for reference marking or water monitoring. Thalweg The river's longitudinal section, or the line joining the deepest point in the channel at each stage from source to mouth. 
Watercourse The channel followed by a stream (a flowing body of water) or the stream itself. In the UK, some aspects of criminal law, such as the Rivers (Prevention of Pollution) Act 1951, specify that a watercourse includes those rivers which are dry for part of the year. In some jurisdictions, owners of land over which the water flows may have the legal right to use or retain some or much of that water. This right may extend to estuaries, rivers, streams, anabranches and canals. Waterfall or cascade The fall of water where the stream goes over a sudden drop called a knickpoint; some knickpoints are formed by erosion when water flows over an especially resistant stratum, followed by one less so. The stream expends kinetic energy in "trying" to eliminate the knickpoint. Wetted perimeter The line on which the stream's surface meets the channel walls. Sources A stream's source depends on the surrounding landscape and its function within larger river networks. While perennial and intermittent streams are typically supplied by smaller upstream waters and groundwater, headwater and ephemeral streams often derive most of their water from precipitation in the form of rain and snow. Most of this precipitated water re-enters the atmosphere by evaporation from soil and water bodies, or by the evapotranspiration of plants. Some of the water proceeds to sink into the earth by infiltration and becomes groundwater, much of which eventually enters streams. Some precipitated water is temporarily locked up in snow fields and glaciers, to be released later by evaporation or melting. The rest of the water flows off the land as runoff, the proportion of which varies according to many factors, such as wind, humidity, vegetation, rock types, and relief. This runoff starts as a thin film called sheet wash, combined with a network of tiny rills, together constituting sheet runoff; when this water is concentrated in a channel, a stream has its birth. 
Some creeks may start from ponds or lakes. Freshwater's primary sources are precipitation and mountain snowmelt. Rivers typically originate in the highlands and are slowly carved by the erosion of mountain snowmelt draining toward lakes or other rivers. Rivers usually flow downhill from their source, eroding as they pass until they reach their base level of erosion. Researchers have offered data-based methods for defining a river's origin; one example comes from researchers at the University of Chinese Academy of Sciences. As an essential marker of the environment in which a river forms, the river source needs an objective, straightforward, and effective method of identification. A calculation model of the river source catchment area based on critical support flow (CSD) was proposed, and the relationship between CSA and CSD with a minimum catchment area was established. Using the model for comparison in two basins in Tibet (Helongqu and Niyang River White Water), the results show that the critical support flow (Qc) of the Helongqu is 0.0028 m3/s. 
Meanwhile, the Qc of the Niyang River White Water is 0.0085 m3/s. The critical support flow can vary with hydrologic and climatic conditions, and Qc in wet areas (the White Water) is larger than in semi-arid regions (the Helongqu). The proposed critical support flow (CSD) concept and model method can be used to determine the hydrographic indicators of river sources in complex geographical areas, and can also reflect the impact of hydrologic climate change on river recharge in different regions. The source of a river or stream (its point of origin) can consist of lakes, swamps, springs, or glaciers. A typical river has several tributaries; each of these may be made up of several other smaller tributaries, so that together this stream and all its tributaries form a drainage network. Although each tributary has its own source, international practice is to take the point farthest from the river mouth as the source of the entire river system, and the length measured from that point as the length of the whole river system. For example, the Nile proper begins at the confluence of the White Nile and the Blue Nile, but the source of the whole river system lies in its upper reaches. If there is no specific designation, "length of the Nile" refers to the river length of the Nile system, rather than to the length of the Nile from the point where it is formed by the confluence of its tributaries. The Nile's source is often cited as Lake Victoria, but the lake has significant feeder rivers. The Kagera River, which flows into Lake Victoria near the Tanzanian town of Bukoba, is the longest feeder, though sources do not agree on which is the Kagera's longest tributary and therefore the Nile's most remote source. 
Characteristics Ranking To qualify as a stream, a body of water must be either recurring or perennial. Recurring (intermittent) streams have water in the channel for at least part of the year. A stream of the first order is a stream which does not have any other recurring or perennial stream feeding into it. When two first-order streams come together, they form a second-order stream. When two second-order streams come together, they form a third-order stream. Streams of lower order joining a higher order stream do not change the order of the higher stream. Gradient The gradient of a stream is a critical factor in determining its character and is entirely determined by its base level of erosion. The base level of erosion is the point at which the stream either enters the ocean, a lake or pond, or enters a stretch in which it has a much lower gradient, and may be specifically applied to any particular stretch of a stream. In geological terms, the stream will erode down through its bed to achieve the base level of erosion throughout its course. If this base level is low, then the stream will rapidly cut through underlying strata and have a steep gradient, and if the base level is relatively high, then the stream will form a flood plain and meander. Profile Typically, streams are said to have a particular elevation profile, beginning with steep gradients, no flood plain, and little shifting of channels, eventually evolving into streams with low gradients, wide flood plains, and extensive meanders. The initial stage is sometimes termed a "young" or "immature" stream, and the later state a "mature" or "old" stream. Meander Meanders are looping changes of direction of a stream caused by the erosion and deposition of bank materials. These are typically serpentine in form. Typically, over time the meanders gradually migrate downstream. 
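The ordering rules described under Ranking (first-order streams have no recurring or perennial tributaries; two streams of equal order combine into the next order; a lower-order tributary leaves a higher-order stream unchanged) amount to the Strahler stream-order scheme and can be sketched as a short function. The function name and the example orders below are illustrative, not part of any standard library:

```python
def stream_order(tributary_orders):
    """Order of a stream given the orders of the recurring/perennial
    streams feeding it, per the ranking rules above.

    No tributaries -> first order. Two (or more) tributaries sharing the
    highest incoming order -> that order plus one. Otherwise the highest
    incoming order is kept unchanged.
    """
    if not tributary_orders:
        return 1
    top = max(tributary_orders)
    # The order increases only when at least two tributaries tie for the top.
    if tributary_orders.count(top) >= 2:
        return top + 1
    return top

# Two first-order streams meet: second order.
print(stream_order([1, 1]))  # 2
# A first-order stream joining a third-order stream leaves it third order.
print(stream_order([1, 3]))  # 3
```

Applied over a whole network from the headwaters down, this reproduces the familiar result that only confluences of equal-order streams raise the order of the main stem.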
If some resistant material slows or stops the downstream movement of a meander, a stream may erode through the neck between two legs of a meander to become temporarily straighter, leaving behind an arc-shaped body of water termed an oxbow lake or bayou. A flood may also cause a meander to be cut through in this way. Stream load The stream load is defined as the solid matter carried by a stream. Streams can carry sediment, or alluvium. The amount of load a stream can carry (capacity), as well as the largest object it can carry (competence), are both dependent on the velocity of the stream. Classification Perennial or not A perennial stream is one which flows continuously all year. Some perennial streams may have year-round continuous flow only in segments of their stream beds during years of normal rainfall. Blue-line streams are perennial streams and are marked on topographic maps with a solid blue line. The word "perennial", attested from the 1640s with the meaning "evergreen", derives from Latin perennis, "lasting through the whole year", from per "through" plus annus "year". The botanical sense of "living for several years" is attested from the 1670s, and the figurative sense of "enduring, everlasting" from 1750. See "biennial" for the vowel shift. Perennial streams have one or more of these characteristics: Direct observation or compelling evidence suggests that there is no interruption in the flow at the ground surface. The existence of one or more specific features of perennial streams, including: Riverbed forms, for example, riffles, pools, runs, gravel bars, other depositional characteristics, bed armor layer. Riverbank erosion and/or scour. Indications of waterborne debris and sediment transport. Defined river or stream bed and banks. The catchment area exceeds . A USGS regression applied to the VHD data layer on the probability of intermittent flow. 
The existence of aquatic organisms that require uninterrupted flow. Baseflow sustained mainly by groundwater recharge, as shown by bank seepage, springs, or other indicators. Channel substrates of high permeability, especially at stratigraphic boundary conditions, though such shallow groundwater also decreases on occasion. Existence of native aquatic organisms which require an undisturbed survival flow. The surrounding topography exhibits features of being formed by fluvial processes. Absence of such characteristics supports classifying a stream as intermittent, "showing interruptions in time or space". Ephemeral stream Generally, streams that flow only during and immediately after precipitation are termed ephemeral. There is no clear demarcation between surface runoff and an ephemeral stream, and some ephemeral streams can be classed as intermittent—flow all but disappearing in the normal course of seasons but ample flow restoring stream presence; such circumstances are documented when stream beds have opened up a path into mines or other underground chambers. According to official U.S. definitions, the channels of intermittent streams are well defined, as opposed to ephemeral streams, which may or may not have a defined channel and rely mainly on storm runoff, as their aquatic bed is above the water table. An ephemeral stream does not have the biological, hydrological, and physical characteristics of a continuous or intermittent stream. The same non-perennial channel might change characteristics from intermittent to ephemeral over its course. Intermittent or seasonal stream Washes can fill up quickly during rains, and there may be a sudden torrent of water after a thunderstorm begins upstream, such as during monsoonal conditions. In the United States, an intermittent or seasonal stream is one that only flows for part of the year and is marked on topographic maps with a line of blue dashes and dots. 
A wash, desert wash, or arroyo is a normally dry streambed in the deserts of the American Southwest, which flows after sufficient rainfall. In Italy, an intermittent stream is termed a torrent. In full flood the stream may or may not be "torrential" in the dramatic sense of the word, but there will be one or more seasons in which the flow is reduced to a trickle or less. Typically torrents have Apennine rather than Alpine sources, and in the summer they are fed by little precipitation and no melting snow. In this case the maximum discharge will be during the spring and autumn. An intermittent stream can also be called a winterbourne in Britain, a wadi in the Arabic-speaking world, or a torrente or rambla (this last of Arabic origin) in Spain and Latin America. In Australia, an intermittent stream is usually called a creek and marked on topographic maps with a solid blue line. Consequential or not There are five generic classifications: Consequent streams are streams whose course is a direct consequence of the original slope of the surface upon which they developed, i.e., streams that follow the slope of the land over which they originally formed. Subsequent streams are streams whose course has been determined by selective headward erosion along weak strata. These streams have generally developed after the original stream. Subsequent streams developed independently of the original relief of the land and generally follow paths determined by the weak rock belts. Resequent streams are streams whose course follows the original relief, but at a lower level than the original slope (e.g., flowing down a course determined by the underlying strata in the same direction). These streams develop later and are generally tributaries of a subsequent stream. Obsequent streams are streams flowing in the opposite direction of the consequent drainage. Insequent streams have an almost random drainage, often forming dendritic patterns. 
These are typically tributaries and have developed by headward erosion on a horizontally stratified belt or on homogeneous rocks. These streams follow courses that apparently were not controlled by the original slope of the surface, its structure or the type of rock. According to interaction with groundwater Gaining: A stream or reach that receives water from groundwater. Losing: A stream or reach of a stream which shows a net loss of water to groundwater or evaporation. Isolated: A stream or channel that neither supplies water to nor removes water from the saturated zone. Perched: A losing or isolated stream that is separated from the groundwater by an unsaturated (vadose) zone. Indicators of a perennial stream Benthic macroinvertebrates "Macroinvertebrate" refers to easily seen invertebrates, larger than 0.5 mm, found in stream and river bottoms. Macroinvertebrates are the larval stages of most aquatic insects, and their presence is a good indicator that the stream is perennial. Larvae of caddisflies, mayflies, stoneflies, and damselflies require a continuous aquatic habitat until they reach maturity. Crayfish and other crustaceans, snails, bivalves (clams), and aquatic worms also indicate the stream is perennial. These require a persistent aquatic environment for survival. Vertebrates Fish and amphibians are secondary indicators in the assessment of a perennial stream because some fish and amphibians can inhabit areas without a persistent water regime. When assessing for fish, all available habitat should be assessed: pools, riffles, root clumps and other obstructions. Fish will seek cover if alerted to human presence, but should be easily observed in perennial streams. Amphibians also indicate a perennial stream and include tadpoles, frogs, salamanders, and newts. These amphibians can be found in stream channels, along stream banks, and even under rocks. Frogs and tadpoles usually inhabit shallow and slow moving waters near the sides of stream banks. 
Frogs will typically jump into water when alerted to human presence. Geological indicators Well-defined river beds composed of riffles, pools, runs, gravel bars, a bed armor layer, and other depositional features, plus well-defined banks due to bank erosion, are good identifiers when assessing for perennial streams. Particle size will help identify a perennial stream. Perennial streams cut through the soil profile, which removes fine and small particles. Assessing areas for relatively coarse material left behind in the stream bed, and finer sediments along the side of the stream or within the floodplain, is a good way to identify a persistent water regime. Hydrological indicators A perennial stream can be identified 48 hours after a storm. Direct storm runoff has usually ceased at this point. If a stream is still flowing and contributing inflow is not observed above the channel, the observed water is likely baseflow. Another perennial stream indication is an abundance of red rust material in a slow-moving wetted channel or stagnant area. This is evidence that iron-oxidizing bacteria are present, indicating persistent expression of oxygen-depleted ground water. In a forested area, leaf and needle litter in the stream channel is an additional indicator. Accumulation of leaf litter does not occur in perennial streams, since such material is continuously flushed away. In the adjacent overbank of a perennial stream, fine sediment may cling to riparian plant stems and tree trunks. Organic debris drift lines or piles may be found within the active overbank area after recent high flow. Importance Streams, headwaters, and streams flowing only part of the year provide many benefits upstream and downstream. They defend against floods, remove contaminants, and recycle potentially dangerous excess nutrients, as well as providing food and habitat for many kinds of fish. 
Such streams also play a vital role in preserving the quality and supply of our drinking water, ensuring a steady flow of water to surface waters and helping to recharge deep aquifers. Their benefits include:
Clean drinking water
Flood and erosion protection
Groundwater recharge
Pollution reduction
Wildlife habitat
Economic importance in fishing, hunting, manufacturing and agriculture
Drainage basins The extent of land drained by a stream is termed its drainage basin (also known in North America as the watershed and, in British English, as a catchment). A basin may also be composed of smaller basins. For instance, the Continental Divide in North America separates the mainly easterly-draining Atlantic Ocean and Arctic Ocean basins from the largely westerly-draining Pacific Ocean basin. The Atlantic Ocean basin may be further subdivided into the Atlantic Ocean and Gulf of Mexico drainages (this delineation is termed the Eastern Continental Divide). Similarly, the Gulf of Mexico basin may be divided into the Mississippi River basin and several smaller basins, such as the Tombigbee River basin. Continuing in this vein, a component of the Mississippi River basin is the Ohio River basin, which in turn includes the Kentucky River basin, and so forth. Crossings Stream crossings are where streams are crossed by roads, pipelines, railways, or any other structure that might restrict the flow of the stream in ordinary or flood conditions. Any structure over or in a stream which limits the movement of fish or other ecological elements may be an issue.
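The nested drainage basins described in the drainage-basins paragraph above form a simple containment hierarchy, which can be sketched as a tree. This is a minimal illustration using only the example basins named in the text:

```python
# Minimal sketch: drainage basins as a containment tree,
# using the example basins named in the text.
basins = {
    "Atlantic Ocean basin": {
        "Gulf of Mexico basin": {
            "Mississippi River basin": {
                "Ohio River basin": {
                    "Kentucky River basin": {},
                },
            },
            "Tombigbee River basin": {},
        },
    },
}

def enclosing_basins(tree, target, path=()):
    """Return the chain of basins enclosing `target`, outermost first."""
    for name, sub in tree.items():
        if name == target:
            return list(path) + [name]
        found = enclosing_basins(sub, target, path + (name,))
        if found:
            return found
    return None

print(enclosing_basins(basins, "Kentucky River basin"))
```

Walking the tree from the root recovers the chain of divides a drop of water crosses on its way to the sea, mirroring the "and so forth" nesting in the paragraph.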
https://en.wikipedia.org/wiki/Sea
Sea
A sea is a large body of salt water. There are particular seas and the sea. The sea commonly refers to the ocean, the interconnected body of seawater that spans most of Earth. Particular seas are either marginal seas, second-order sections of the oceanic sea (e.g. the Mediterranean Sea), or certain large, nearly landlocked bodies of water. The salinity of water bodies varies widely, being lower near the surface and the mouths of large rivers and higher in the depths of the ocean; however, the relative proportions of dissolved salts vary little across the oceans. The most abundant solid dissolved in seawater is sodium chloride. The water also contains salts of magnesium, calcium, potassium, and mercury, among other elements, some in minute concentrations. A wide variety of organisms, including bacteria, protists, algae, plants, fungi, and animals live in various marine habitats and ecosystems throughout the seas. These range vertically from the sunlit surface and shoreline to the great depths and pressures of the cold, dark abyssal zone, and in latitude from the cold waters under polar ice caps to the warm waters of coral reefs in tropical regions. Many of the major groups of organisms evolved in the sea, and life may have started there. The ocean moderates Earth's climate and has important roles in the water, carbon, and nitrogen cycles. The surface of the water interacts with the atmosphere, exchanging properties such as particles, heat, and momentum. Surface currents are produced by winds blowing over the surface of the water: the wind raises wind waves and, through drag, sets up slow but stable circulations of water. Deep-sea currents, known together as the global conveyor belt, carry cold water from near the poles to every ocean and significantly influence Earth's climate.
Tides, the generally twice-daily rise and fall of sea levels, are caused by Earth's rotation and the gravitational effects of the Moon and, to a lesser extent, of the Sun. Tides may have a very high range in bays or estuaries. Submarine earthquakes arising from tectonic plate movements under the oceans can lead to destructive tsunamis, as can volcanoes, huge landslides, or the impact of large meteorites. The seas have been an integral element of human history and culture. Human harnessing and study of the seas has been recorded since ancient times, and is evidenced well into prehistory; the modern scientific study of the sea is called oceanography, maritime space is governed by the law of the sea, and admiralty law regulates human interactions at sea. The seas provide substantial supplies of food for humans, mainly fish, but also shellfish, mammals and seaweed, whether caught by fishermen or farmed underwater. Other human uses of the seas include trade, travel, mineral extraction, power generation, warfare, and leisure activities such as swimming, sailing, and scuba diving. Many of these activities create marine pollution. Definition The sea is the interconnected system of all the Earth's oceanic waters, including the Atlantic, Pacific, Indian, Southern and Arctic Oceans. However, the word "sea" can also be used for many specific, much smaller bodies of seawater, such as the North Sea or the Red Sea. There is no sharp distinction between seas and oceans, though generally seas are smaller, and are often partly (as marginal seas or particularly as a mediterranean sea) or wholly (as inland seas) enclosed by land. An exception is the Sargasso Sea, which has no coastline and lies within a circular current, the North Atlantic Gyre. Seas are generally larger than lakes and contain salt water, though the Sea of Galilee is a freshwater lake. The United Nations Convention on the Law of the Sea states that all of the ocean is "sea".
Legal definition The law of the sea has at its center the definition of the boundaries of the ocean, clarifying its application in marginal seas. Which bodies of water other than the oceanic sea the law applies to remains under negotiation, most notably in the case of the Caspian Sea: the dispute over its status as a "sea" essentially revolves around whether the Caspian is factually an oceanic sea or only a saline body of water, and therefore a sea solely in the common-use sense of the word, like other saltwater lakes called seas. Physical science Earth is the only known planet with seas of liquid water on its surface, although Mars possesses ice caps and similar planets in other solar systems may have oceans. Earth's seas contain about 97.2 percent of its known water and cover approximately 71 percent of its surface. Another 2.15% of Earth's water is frozen, found in the sea ice covering the Arctic Ocean, the ice cap covering Antarctica and its adjacent seas, and various glaciers and surface deposits around the world. The remainder (about 0.65% of the whole) forms underground reservoirs or various stages of the water cycle, containing the freshwater encountered and used by most terrestrial life: vapor in the air, the clouds it slowly forms, the rain falling from them, and the lakes and rivers spontaneously formed as its waters flow again and again to the sea. The scientific study of water and Earth's water cycle is hydrology; hydrodynamics studies the physics of water in motion. The more recent study of the sea in particular is oceanography. This began as the study of the shape of the ocean's currents but has since expanded into a large and multidisciplinary field: it examines the properties of seawater; studies waves, tides, and currents; charts coastlines and maps the seabeds; and studies marine life. The subfield dealing with the sea's motion, its forces, and the forces acting upon it is known as physical oceanography.
Marine biology (biological oceanography) studies the plants, animals, and other organisms inhabiting marine ecosystems. Both are informed by chemical oceanography, which studies the behavior of elements and molecules within the oceans: particularly, at the moment, the ocean's role in the carbon cycle and carbon dioxide's role in the increasing acidification of seawater. Marine and maritime geography charts the shape and shaping of the sea, while marine geology (geological oceanography) has provided evidence of continental drift and the composition and structure of the Earth, clarified the process of sedimentation, and assisted the study of volcanism and earthquakes. Seawater Salinity A characteristic of seawater is that it is salty. Salinity is usually measured in parts per thousand (‰ or per mil), and the open ocean has about 35 g of dissolved solids per litre, a salinity of 35 ‰. The Mediterranean Sea is slightly higher at 38 ‰, while the salinity of the northern Red Sea can reach 41 ‰. In contrast, some landlocked hypersaline lakes have a much higher salinity; for example, the Dead Sea has about 300 g of dissolved solids per litre (300 ‰). While the constituents of table salt (sodium and chloride) make up about 85 percent of the solids in solution, there are also other metal ions such as magnesium and calcium, and negative ions including sulphate, carbonate, and bromide. Despite variations in the levels of salinity in different seas, the relative composition of the dissolved salts is stable throughout the world's oceans. Seawater is too saline for humans to drink safely, as the kidneys cannot excrete urine as salty as seawater. Although the amount of salt in the ocean remains relatively constant on the scale of millions of years, various factors affect the salinity of a body of water. Evaporation and the brine rejected during ice formation (known as "brine rejection") increase salinity, whereas precipitation, sea ice melt, and runoff from land reduce it.
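As a quick numerical sketch of these salinity figures: per mil is grams of dissolved solids per kilogram of seawater, so converting to grams per litre just needs the water's density. The density value below is an assumption for illustration (a typical open-ocean figure), not something stated in the text:

```python
# Salinity in parts per thousand (per mil) is grams of dissolved solids
# per kilogram of seawater; multiplying by density gives grams per litre.
SEAWATER_DENSITY = 1.025  # kg per litre, typical open-ocean value (assumed)

def grams_per_litre(salinity_per_mil, density=SEAWATER_DENSITY):
    """Convert salinity in per mil (g per kg of water) to g per litre."""
    return salinity_per_mil * density

for name, sal in [("Open ocean", 35), ("Mediterranean Sea", 38),
                  ("Northern Red Sea", 41)]:
    print(f"{name}: {sal} per mil ≈ {grams_per_litre(sal):.1f} g per litre")

# Sodium and chloride make up about 85 percent of the dissolved solids:
print(f"NaCl in open-ocean water ≈ {0.85 * grams_per_litre(35):.1f} g per litre")
```

Note that hypersaline waters such as the Dead Sea are considerably denser than 1.025 kg per litre, so this constant only suits open-ocean salinities.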
The Baltic Sea, for example, has many rivers flowing into it, and thus its water could be considered brackish. Meanwhile, the Red Sea is very salty due to its high evaporation rate. Temperature Sea temperature depends on the amount of solar radiation falling on its surface. In the tropics, with the sun nearly overhead, the temperature of the surface layers can rise to over 30 °C, while near the poles the temperature in equilibrium with the sea ice is about −2 °C. There is a continuous circulation of water in the oceans. Warm surface currents cool as they move away from the tropics, and the water becomes denser and sinks. The cold water moves back towards the equator as a deep sea current, driven by changes in the temperature and density of the water, before eventually welling up again towards the surface. Deep seawater is cold in all parts of the globe, remaining within a few degrees of freezing. Seawater with a typical salinity of 35 ‰ has a freezing point of about −1.8 °C. When its temperature becomes low enough, ice crystals form on the surface. These break into small pieces and coalesce into flat discs that form a thick suspension known as frazil. In calm conditions, this freezes into a thin flat sheet known as nilas, which thickens as new ice forms on its underside. In more turbulent seas, frazil crystals join into flat discs known as pancakes. These slide under each other and coalesce to form floes. In the process of freezing, salt water and air are trapped between the ice crystals. Nilas may have a salinity of 12–15 ‰, but by the time the sea ice is one year old, this falls to 4–6 ‰. pH value Seawater is slightly alkaline and had an average pH of about 8.2 over the past 300 million years. More recently, climate change has resulted in an increase of the carbon dioxide content of the atmosphere; about 30–40% of the added CO2 is absorbed by the oceans, forming carbonic acid and lowering the pH (now below 8.1) through a process called ocean acidification.
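Because pH is a logarithmic scale, the seemingly small drop from about 8.2 to 8.1 corresponds to a sizeable rise in hydrogen-ion concentration. A minimal sketch of the arithmetic:

```python
def hydrogen_ion_concentration(ph):
    """pH is -log10 of the hydrogen-ion concentration (mol/L)."""
    return 10 ** (-ph)

before = hydrogen_ion_concentration(8.2)  # pre-industrial average
after = hydrogen_ion_concentration(8.1)   # present-day value from the text
increase = (after - before) / before
print(f"H+ increase: {increase:.0%}")
```

A 0.1 drop in pH multiplies the hydrogen-ion concentration by 10^0.1, roughly a 26 percent increase.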
The extent of further ocean chemistry changes, including ocean pH, will depend on climate change mitigation efforts taken by nations and their governments. Oxygen concentration The amount of oxygen found in seawater depends primarily on the plants growing in it. These are mainly algae, including phytoplankton, with some vascular plants such as seagrasses. In daylight, the photosynthetic activity of these plants produces oxygen, which dissolves in the seawater and is used by marine animals. At night, photosynthesis stops, and the amount of dissolved oxygen declines. In the deep sea, where insufficient light penetrates for plants to grow, there is very little dissolved oxygen. In its absence, organic material is broken down by anaerobic bacteria producing hydrogen sulphide. Climate change is likely to reduce levels of oxygen in surface waters, since the solubility of oxygen in water falls at higher temperatures. Ocean deoxygenation is projected to increase hypoxia by 10%, and triple suboxic waters (oxygen concentrations 98% less than the mean surface concentrations), for each 1 °C of upper-ocean warming. Light The amount of light that penetrates the sea depends on the angle of the sun, the weather conditions and the turbidity of the water. Much light gets reflected at the surface, and red light gets absorbed in the top few metres. Yellow and green light reach greater depths, and blue and violet light penetrate deepest of all, to hundreds of metres in clear water. There is insufficient light for photosynthesis and plant growth beyond a depth of about 200 m. Sea level Over most of geologic time, the sea level has been higher than it is today. The main factor affecting sea level over time is the result of changes in the oceanic crust, with a downward trend expected to continue in the very long term. At the last glacial maximum, some 20,000 years ago, the sea level was about 120 m lower than in present times (2012). For at least the last 100 years, sea level has been rising at an average rate of about 1.8 mm per year.
Most of this rise can be attributed to an increase in the temperature of the sea due to climate change, and the resulting slight thermal expansion of the upper layers of the ocean. Additional contributions, as much as one quarter of the total, come from water sources on land, such as melting snow and glaciers and extraction of groundwater for irrigation and other agricultural and human needs. Waves Wind blowing over the surface of a body of water forms waves that are perpendicular to the direction of the wind. The friction between air and water caused by a gentle breeze on a pond causes ripples to form. A strong blow over the ocean causes larger waves as the moving air pushes against the raised ridges of water. The waves reach their maximum height when the rate at which they are travelling nearly matches the speed of the wind. In open water, when the wind blows continuously, as happens in the Southern Hemisphere in the Roaring Forties, long, organised masses of water called swell roll across the ocean. If the wind dies down, the wave formation is reduced, but already-formed waves continue to travel in their original direction until they meet land. The size of the waves depends on the fetch (the distance that the wind has blown over the water) and on the strength and duration of that wind. When waves meet others coming from different directions, interference between the two can produce broken, irregular seas. Constructive interference can cause individual (unexpected) rogue waves much higher than normal. Most waves are less than 3 m high, and it is not unusual for strong storms to double or triple that height; offshore construction such as wind farms and oil platforms use metocean statistics from measurements in computing the wave forces (due to, for instance, the hundred-year wave) they are designed against. Rogue waves, however, have been documented at heights above 25 m.
The top of a wave is known as the crest, the lowest point between waves is the trough and the distance between the crests is the wavelength. The wave is pushed across the surface of the sea by the wind, but this represents a transfer of energy and not a horizontal movement of water. As waves approach land and move into shallow water, they change their behavior. If approaching at an angle, waves may bend (refraction) or wrap around rocks and headlands (diffraction). When a wave reaches a point where its deepest oscillations contact the seabed, it begins to slow down. This pulls the crests closer together and increases the wave's height, a process called wave shoaling. When the ratio of the wave's height to the water depth increases above a certain limit, it "breaks", toppling over in a mass of foaming water. This rushes in a sheet up the beach before retreating into the sea under the influence of gravity. Tsunami A tsunami is an unusual form of wave caused by an infrequent powerful event such as an underwater earthquake or landslide, a meteorite impact, a volcanic eruption or a collapse of land into the sea. These events can temporarily lift or lower the surface of the sea in the affected area, usually by a few feet. The potential energy of the displaced seawater is turned into kinetic energy, creating a shallow wave, a tsunami, radiating outwards at a velocity proportional to the square root of the depth of the water, which therefore travels much faster in the open ocean than on a continental shelf. In the deep open sea, tsunamis have wavelengths of well over a hundred kilometres, travel at speeds of several hundred kilometres per hour and usually have a height of less than three feet, so they often pass unnoticed at this stage. In contrast, ocean surface waves caused by winds have wavelengths of a few hundred feet, travel far more slowly, and can be several metres high.
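The shallow-water wave speed mentioned above is c = √(g·d), where g is gravitational acceleration and d is the water depth. A small sketch of why a tsunami slows so dramatically on reaching the shelf (the example depths are illustrative, not from the text):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed(depth_m):
    """Shallow-water wave speed c = sqrt(g * d), in metres per second."""
    return math.sqrt(G * depth_m)

# Illustrative depths: deep open ocean vs a continental shelf.
for label, depth in [("open ocean (4,000 m)", 4000),
                     ("continental shelf (100 m)", 100)]:
    c = tsunami_speed(depth)
    print(f"{label}: {c:.0f} m/s ≈ {c * 3.6:.0f} km/h")
```

At 4,000 m depth the wave races along at close to 200 m/s (roughly jet-aircraft speed), while over a 100 m shelf it drops to about 31 m/s, which is why the wavelength shortens and the height grows near shore.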
As a tsunami moves into shallower water its speed decreases, its wavelength shortens and its amplitude increases enormously, behaving in the same way as a wind-generated wave in shallow water but on a vastly greater scale. Either the trough or the crest of a tsunami can arrive at the coast first. In the former case, the sea draws back and leaves subtidal areas close to the shore exposed, which provides a useful warning for people on land. When the crest arrives, it does not usually break but rushes inland, flooding all in its path. Much of the destruction may be caused by the flood water draining back into the sea after the tsunami has struck, dragging debris and people with it. Often several tsunamis are caused by a single geological event and arrive at intervals of between eight minutes and two hours. The first wave to arrive on shore may not be the biggest or most destructive. Currents Wind blowing over the surface of the sea causes friction at the interface between air and sea. Not only does this cause waves to form, but it also makes the surface seawater move in the same direction as the wind. Although winds are variable, in any one place they predominantly blow from a single direction and thus a surface current can be formed. Westerly winds are most frequent in the mid-latitudes while easterlies dominate the tropics. When water moves in this way, other water flows in to fill the gap and a circular movement of surface currents known as a gyre is formed. There are five main gyres in the world's oceans: two in the Pacific, two in the Atlantic and one in the Indian Ocean. Other smaller gyres are found in lesser seas and a single gyre flows around Antarctica. These gyres have followed the same routes for millennia, guided by the topography of the land, the wind direction and the Coriolis effect. The surface currents flow in a clockwise direction in the Northern Hemisphere and anticlockwise in the Southern Hemisphere.
The water moving away from the equator is warm, and that flowing in the reverse direction has lost most of its heat. These currents tend to moderate the Earth's climate, cooling the equatorial region and warming regions at higher latitudes. Global climate and weather forecasts are powerfully affected by the world ocean, so global climate modelling makes use of ocean circulation models as well as models of other major components such as the atmosphere, land surfaces, aerosols and sea ice. Ocean models make use of a branch of physics, geophysical fluid dynamics, that describes the large-scale flow of fluids such as seawater. Surface currents only affect the top few hundred metres of the sea, but there are also large-scale flows in the ocean depths caused by the movement of deep water masses. A main deep ocean current flows through all the world's oceans and is known as the thermohaline circulation or global conveyor belt. This movement is slow and is driven by differences in density of the water caused by variations in salinity and temperature. At high latitudes the water is chilled by the low atmospheric temperature and becomes saltier as sea ice crystallizes out. Both these factors make it denser, and the water sinks. From the deep sea near Greenland, such water flows southwards between the continental landmasses on either side of the Atlantic. When it reaches the Antarctic, it is joined by further masses of cold, sinking water and flows eastwards. It then splits into two streams that move northwards into the Indian and Pacific Oceans. Here it is gradually warmed, becomes less dense, rises towards the surface and loops back on itself. It takes a thousand years for this circulation pattern to be completed. Besides gyres, there are temporary surface currents that occur under specific conditions. When waves meet a shore at an angle, a longshore current is created as water is pushed along parallel to the coastline. 
The water swirls up onto the beach at right angles to the approaching waves but drains away straight down the slope under the effect of gravity. The larger the breaking waves, the longer the beach and the more oblique the wave approach, the stronger is the longshore current. These currents can shift great volumes of sand or pebbles, create spits and make beaches disappear and water channels silt up. A rip current can occur when water piles up near the shore from advancing waves and is funnelled out to sea through a channel in the seabed. It may occur at a gap in a sandbar or near a man-made structure such as a groyne. These strong, fast-flowing currents can form at different places at different stages of the tide and can carry away unwary bathers. Temporary upwelling currents occur when the wind pushes water away from the land and deeper water rises to replace it. This cold water is often rich in nutrients and creates blooms of phytoplankton and a great increase in the productivity of the sea. Tides Tides are the regular rise and fall in water level experienced by seas and oceans in response to the gravitational influences of the Moon and the Sun, and the effects of the Earth's rotation. During each tidal cycle, at any given place the water rises to a maximum height known as "high tide" before ebbing away again to the minimum "low tide" level. As the water recedes, it uncovers more and more of the foreshore, also known as the intertidal zone. The difference in height between the high tide and low tide is known as the tidal range or tidal amplitude. Most places experience two high tides each day, occurring at intervals of about 12 hours and 25 minutes. This is half the 24-hour-50-minute period that it takes for the Earth to make a complete revolution and return the Moon to its previous position relative to an observer. The Moon's mass is some 27 million times smaller than that of the Sun, but it is 400 times closer to the Earth.
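These two ratios explain why the Moon dominates the tides: the tide-raising force falls off as the cube of distance, so being 400 times closer more than offsets the Moon's far smaller mass. A quick check of the arithmetic, using the mass and distance ratios from the text:

```python
# Tidal (tide-raising) force scales as M / d^3.
# Ratios taken from the text: the Moon is ~27 million times less
# massive than the Sun but ~400 times closer to the Earth.
mass_ratio = 1 / 27e6     # Moon's mass relative to the Sun's
distance_ratio = 1 / 400  # Moon's distance relative to the Sun's

moon_vs_sun = mass_ratio / distance_ratio ** 3
print(f"Moon's tidal effect / Sun's ≈ {moon_vs_sun:.2f}")
```

The result, 400³ / 27 million ≈ 2.4, matches the statement that the Moon has more than twice as great an effect on tides as the Sun.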
Tidal force or tide-raising force decreases rapidly with distance, so the Moon has more than twice as great an effect on tides as the Sun. A bulge is formed in the ocean at the place where the Earth is closest to the Moon, because that is where the effect of the Moon's gravity is strongest. On the opposite side of the Earth, the lunar force is at its weakest and this causes another bulge to form. As the Moon orbits the Earth, these ocean bulges move around the Earth with it. The gravitational attraction of the Sun also works on the seas, but its effect on tides is less powerful than that of the Moon, and when the Sun, Moon and Earth are all aligned (full moon and new moon), the combined effect results in the high "spring tides". In contrast, when the Sun is at 90° from the Moon as viewed from Earth, the combined gravitational effect on tides is less, causing the lower "neap tides". A storm surge can occur when high winds pile water up against the coast in a shallow area and this, coupled with a low-pressure system, can raise the surface of the sea at high tide dramatically. Ocean basins The Earth is composed of a magnetic central core, a mostly liquid mantle and a hard rigid outer shell (or lithosphere), which is composed of the Earth's rocky crust and the deeper, mostly solid outer layer of the mantle. On land the crust is known as the continental crust, while under the sea it is known as the oceanic crust. The latter is composed of relatively dense basalt and is some five to ten kilometres (three to six miles) thick. The relatively thin lithosphere floats on the weaker and hotter mantle below and is fractured into a number of tectonic plates. In mid-ocean, magma is constantly being thrust through the seabed between adjoining plates to form mid-oceanic ridges, and here convection currents within the mantle tend to drive the two plates apart.
Parallel to these ridges and nearer the coasts, one oceanic plate may slide beneath another oceanic plate in a process known as subduction. Deep trenches are formed here, and the process is accompanied by friction as the plates grind together. The movement proceeds in jerks which cause earthquakes, heat is produced and magma is forced up, creating underwater mountains, some of which may form chains of volcanic islands near to deep trenches. Near some of the boundaries between the land and sea, the slightly denser oceanic plates slide beneath the continental plates and more subduction trenches are formed. As they grate together, the continental plates are deformed and buckle, causing mountain building and seismic activity. The Earth's deepest trench is the Mariana Trench, which extends for about 2,500 kilometres (1,600 miles) across the seabed. It is near the Mariana Islands, a volcanic archipelago in the West Pacific. Its deepest point is 10.994 kilometres (nearly 7 miles) below the surface of the sea.
Storm waves arrive on shore in rapid succession and are known as destructive waves as the swash moves beach material seawards. Under their influence, the sand and shingle on the beach are ground together and abraded. Around high tide, the power of a storm wave impacting on the foot of a cliff has a shattering effect as air in cracks and crevices is compressed and then expands rapidly with release of pressure. At the same time, sand and pebbles have an erosive effect as they are thrown against the rocks. This tends to undercut the cliff, and normal weathering processes such as the action of frost then follow, causing further destruction. Gradually, a wave-cut platform develops at the foot of the cliff, and this has a protective effect, reducing further wave-erosion. Material worn from the margins of the land eventually ends up in the sea. Here it is subject to attrition as currents flowing parallel to the coast scour out channels and transport sand and pebbles away from their place of origin. Sediment carried to the sea by rivers settles on the seabed, causing deltas to form in estuaries. All these materials move back and forth under the influence of waves, tides and currents. Dredging removes material and deepens channels but may have unexpected effects elsewhere on the coastline. Governments make efforts to prevent flooding of the land by building breakwaters, seawalls, dykes, levees and other sea defences. For instance, the Thames Barrier is designed to protect London from a storm surge, while the failure of the dykes and levees around New Orleans during Hurricane Katrina created a humanitarian crisis in the United States. Water cycle The sea plays a part in the water or hydrological cycle, in which water evaporates from the ocean, travels through the atmosphere as vapour, condenses, falls as rain or snow, thereby sustaining life on land, and largely returns to the sea.
Even in the Atacama Desert, where little rain ever falls, dense clouds of fog known as the camanchaca blow in from the sea and support plant life. In central Asia and other large land masses, there are endorheic basins which have no outlet to the sea, separated from the ocean by mountains or other natural geologic features that prevent the water draining away. The Caspian Sea is the largest one of these. Its main inflow is from the River Volga, there is no outflow and the evaporation of water makes it saline as dissolved minerals accumulate. The Aral Sea in Kazakhstan and Uzbekistan, and Pyramid Lake in the western United States are further examples of large, inland saline water-bodies without drainage. Some endorheic lakes are less salty, but all are sensitive to variations in the quality of the inflowing water. Carbon cycle Oceans contain the greatest quantity of actively cycled carbon in the world and are second only to the lithosphere in the amount of carbon they store. The oceans' surface layer holds large amounts of dissolved organic carbon that is exchanged rapidly with the atmosphere. The deep layer's concentration of dissolved inorganic carbon is about 15 percent higher than that of the surface layer and it remains there for much longer periods of time. Thermohaline circulation exchanges carbon between these two layers. Carbon enters the ocean as atmospheric carbon dioxide dissolves in the surface layers and is converted into carbonic acid, carbonate, and bicarbonate:

CO2 (gas) ⇌ CO2 (aq)
CO2 (aq) + H2O ⇌ H2CO3
H2CO3 ⇌ HCO3− + H+
HCO3− ⇌ CO32− + H+

It can also enter through rivers as dissolved organic carbon and is converted by photosynthetic organisms into organic carbon. This can either be exchanged throughout the food chain or precipitated into the deeper, more carbon-rich layers as dead soft tissue or in shells and bones as calcium carbonate.
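As a rough illustration of where these equilibria settle at seawater pH, the fractions of dissolved CO2, bicarbonate and carbonate can be estimated from the two acid dissociation constants. The pK values below are standard freshwater values at 25 °C, assumed for illustration (seawater's apparent constants are somewhat lower), so this is an approximate sketch only:

```python
# Approximate carbonate speciation from the equilibria above.
# pK1 and pK2 are freshwater values at 25 C, assumed for illustration;
# seawater's apparent constants differ somewhat.
PK1, PK2 = 6.35, 10.33

def speciation(ph):
    """Return fractions of (CO2(aq)+H2CO3, HCO3-, CO3 2-) at a given pH."""
    r1 = 10 ** (ph - PK1)  # [HCO3-] / [CO2]
    r2 = 10 ** (ph - PK2)  # [CO3 2-] / [HCO3-]
    total = 1 + r1 + r1 * r2
    return 1 / total, r1 / total, r1 * r2 / total

co2, hco3, co3 = speciation(8.1)
print(f"CO2: {co2:.1%}, HCO3-: {hco3:.1%}, CO3 2-: {co3:.1%}")
```

Even with these approximate constants, the calculation shows that bicarbonate dominates at seawater pH, which is why most dissolved inorganic carbon in the ocean is stored as HCO3−.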
It circulates in this layer for long periods of time before either being deposited as sediment or being returned to surface waters through thermohaline circulation. Life in the sea The oceans are home to a diverse collection of life forms that use them as a habitat. Since sunlight illuminates only the upper layers, the major part of the ocean exists in permanent darkness. As the different depth and temperature zones each provide habitat for a unique set of species, the marine environment as a whole encompasses an immense diversity of life. Marine habitats range from surface water to the deepest oceanic trenches, including coral reefs, kelp forests, seagrass meadows, tidepools, muddy, sandy and rocky seabeds, and the open pelagic zone. The organisms living in the sea range from whales up to 30 metres long to microscopic phytoplankton and zooplankton, fungi, and bacteria. Marine life plays an important part in the carbon cycle as photosynthetic organisms convert dissolved carbon dioxide into organic carbon, and it is economically important to humans for providing fish for use as food. Life may have originated in the sea, and all the major groups of animals are represented there. Scientists differ as to precisely where in the sea life arose: the Miller-Urey experiments suggested a dilute chemical "soup" in open water, but more recent suggestions include volcanic hot springs, fine-grained clay sediments, or deep-sea "black smoker" vents, all of which would have provided protection from damaging ultraviolet radiation, which was not blocked by the early Earth's atmosphere. Marine habitats Marine habitats can be divided horizontally into coastal and open ocean habitats. Coastal habitats extend from the shoreline to the edge of the continental shelf. Most marine life is found in coastal habitats, even though the shelf area occupies only 7 percent of the total ocean area. Open ocean habitats are found in the deep ocean beyond the edge of the continental shelf.
Alternatively, marine habitats can be divided vertically into pelagic (open water), demersal (just above the seabed) and benthic (sea bottom) habitats. A third division is by latitude: from polar seas with ice shelves, sea ice and icebergs, to temperate and tropical waters. Coral reefs, the so-called "rainforests of the sea", occupy less than 0.1 percent of the world's ocean surface, yet their ecosystems include 25 percent of all marine species. The best-known are tropical coral reefs such as Australia's Great Barrier Reef, but cold water reefs harbour a wide array of species including corals (only six of which contribute to reef formation). Algae and plants Marine primary producers, the plants and microscopic organisms in the plankton, are widespread and essential for the ecosystem. It has been estimated that half of the world's oxygen is produced by phytoplankton. About 45 percent of the sea's primary production of living material is contributed by diatoms. Much larger algae, commonly known as seaweeds, are important locally; Sargassum forms floating drifts, while kelp form seabed forests. Flowering plants in the form of seagrasses grow in "meadows" in sandy shallows, mangroves line the coast in tropical and subtropical regions and salt-tolerant plants thrive in regularly inundated salt marshes. All of these habitats are able to sequester large quantities of carbon and support a biodiverse range of larger and smaller animal life. Light is only able to penetrate the uppermost layers, so this is the only part of the sea where plants can grow. The surface layers are often deficient in biologically active nitrogen compounds. The marine nitrogen cycle consists of complex microbial transformations which include the fixation of nitrogen, its assimilation, nitrification, anammox and denitrification. Some of these processes take place in deep water, so plant growth is higher where there is an upwelling of cold waters, and also near estuaries where land-sourced nutrients are present.
This means that the most productive areas, rich in plankton and therefore also in fish, are mainly coastal. Animals and other marine life There is a broader spectrum of higher animal taxa in the sea than on land; many marine species have yet to be discovered, and the number known to science is expanding annually. Some vertebrates such as seabirds, seals and sea turtles return to the land to breed, but fish, cetaceans and sea snakes have a completely aquatic lifestyle and many invertebrate phyla are entirely marine. In fact, the oceans teem with life and provide many varying microhabitats. One of these is the surface film which, even though tossed about by the movement of waves, provides a rich environment and is home to bacteria, fungi, microalgae, protozoa, fish eggs and various larvae. The pelagic zone contains macro- and microfauna and myriad zooplankton which drift with the currents. Most of the smallest organisms are the larvae of fish and marine invertebrates which liberate eggs in vast numbers because the chance of any one embryo surviving to maturity is so minute. The zooplankton feed on phytoplankton and on each other and form a basic part of the complex food chain that extends through variously sized fish and other nektonic organisms to large squid, sharks, porpoises, dolphins and whales. Some marine creatures make large migrations, either seasonal migrations to other regions of the ocean or daily vertical migrations, often ascending to feed at night and descending to safety by day. Ships can introduce or spread invasive species through the discharge of ballast water or the transport of organisms that have accumulated as part of the fouling community on the hulls of vessels. The demersal zone supports many animals that feed on benthic organisms or seek protection from predators, and the seabed provides a range of habitats on or under the surface of the substrate which are used by creatures adapted to these conditions.
The tidal zone with its periodic exposure to the dehydrating air is home to barnacles, molluscs and crustaceans. The neritic zone has many organisms that need light to flourish. Here, among algal-encrusted rocks live sponges, echinoderms, polychaete worms, sea anemones and other invertebrates. Corals often contain photosynthetic symbionts and live in shallow waters where light penetrates. The extensive calcareous skeletons they extrude build up into coral reefs which are an important feature of the seabed. These provide a biodiverse habitat for reef-dwelling organisms. There is less sea life on the floor of deeper seas but marine life also flourishes around seamounts that rise from the depths, where fish and other animals congregate to spawn and feed. Close to the seabed live demersal fish that feed largely on pelagic organisms or benthic invertebrates. Exploration of the deep sea by submersibles revealed a new world of creatures living on the seabed that scientists had not previously known to exist. Some, like the detritivores, rely on organic material falling to the ocean floor. Others cluster round deep sea hydrothermal vents where mineral-rich flows of water emerge from the seabed, supporting communities whose primary producers are sulphide-oxidising chemoautotrophic bacteria, and whose consumers include specialised bivalves, sea anemones, barnacles, crabs, worms and fish, often found nowhere else. A dead whale sinking to the bottom of the ocean provides food for an assembly of organisms which similarly rely largely on the actions of sulphur-reducing bacteria. Such places support unique biomes where many new microbes and other lifeforms have been discovered. Humans and the sea History of navigation and exploration Humans have travelled the seas since they first built sea-going craft. Mesopotamians were using bitumen to caulk their reed boats and, a little later, were fitting masted sails. By c. 3000 BC, Austronesians on Taiwan had begun spreading into maritime Southeast Asia.
Subsequently, the Austronesian "Lapita" peoples displayed great feats of navigation, reaching out from the Bismarck Archipelago to as far away as Fiji, Tonga, and Samoa. Their descendants continued to travel thousands of miles between tiny islands on outrigger canoes, and in the process they found many new islands, including Hawaii, Easter Island (Rapa Nui), and New Zealand. The Ancient Egyptians and Phoenicians explored the Mediterranean and Red Sea with the Egyptian Hannu reaching the Arabian Peninsula and the African Coast around 2750 BC. In the first millennium BC, Phoenicians and Greeks established colonies throughout the Mediterranean and the Black Sea. Around 500 BC, the Carthaginian navigator Hanno left a detailed periplus of an Atlantic journey that reached at least Senegal and possibly Mount Cameroon. In the early Mediaeval period, the Vikings crossed the North Atlantic and even reached the northeastern fringes of North America. Novgorodians had also been sailing the White Sea since the 13th century or before. Meanwhile, the seas along the eastern and southern Asian coast were used by Arab and Chinese traders. The Chinese Ming Dynasty had a fleet of 317 ships with 37,000 men under Zheng He in the early fifteenth century, sailing the Indian and Pacific Oceans. In the late fifteenth century, Western European mariners started making longer voyages of exploration in search of trade. Bartolomeu Dias rounded the Cape of Good Hope in 1487 and Vasco da Gama reached India via the Cape in 1498. Christopher Columbus sailed from Palos de la Frontera in 1492, attempting to reach the eastern lands of India and Japan by the novel means of travelling westwards. He made landfall instead on an island in the Caribbean Sea and a few years later, the Venetian navigator John Cabot reached Newfoundland. The Italian Amerigo Vespucci, after whom America was named, explored the South American coastline in voyages made between 1497 and 1502, discovering the mouth of the Amazon River.
In 1519 the Portuguese navigator Ferdinand Magellan led the Spanish Magellan-Elcano expedition which would be the first to sail around the world. As for the history of navigational instruments, a compass was first used by the ancient Greeks and Chinese to show where north lies and the direction in which the ship is heading. The latitude (an angle which ranges from 0° at the equator to 90° at the poles) was determined by measuring the angle between the Sun, Moon or a specific star and the horizon by the use of an astrolabe, Jacob's staff or sextant. The longitude (a line on the globe joining the two poles) could only be calculated with an accurate chronometer to show the exact time difference between the ship and a fixed point such as the Greenwich Meridian. In 1759, John Harrison, a clockmaker, designed such an instrument and James Cook used it in his voyages of exploration. Nowadays, the Global Positioning System (GPS), using over thirty satellites, enables accurate navigation worldwide. With regard to maps that are vital for navigation, in the second century, Ptolemy mapped the whole known world from the "Fortunatae Insulae", Cape Verde or Canary Islands, eastward to the Gulf of Thailand. This map was used in 1492 when Christopher Columbus set out on his voyages of discovery. Subsequently, Gerardus Mercator made a practical map of the world in 1538, his map projection conveniently making rhumb lines straight. By the eighteenth century better maps had been made and part of the objective of James Cook on his voyages was to further map the ocean. Scientific study has continued with the depth recordings of the Tuscarora, the oceanic research of the Challenger voyages (1872–1876), the work of the Scandinavian seamen Roald Amundsen and Fridtjof Nansen, the Michael Sars expedition in 1910, the German Meteor expedition of 1925, the Antarctic survey work of Discovery II in 1932, and others since.
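The chronometer method described above reduces to simple arithmetic: the Earth rotates through 360° of longitude in 24 hours, i.e. 15° per hour, so comparing the chronometer's Greenwich time at local solar noon gives the ship's longitude. A minimal sketch (the function name is illustrative, and the idealised 15°-per-hour figure ignores the equation of time):

```python
def longitude_from_time(local_noon_utc_hours):
    """Estimate longitude in degrees (east positive) from the UTC time,
    read off the chronometer, at which the Sun crosses the local meridian.
    The Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour."""
    return (12.0 - local_noon_utc_hours) * 15.0

# Local noon observed at 16:00 UTC puts the ship 60 degrees west of Greenwich.
print(longitude_from_time(16.0))   # -60.0
```

This is why Harrison's accurate clocks mattered: an error of four minutes in the time comparison translates into a full degree of longitude.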
Furthermore, in 1921, the International Hydrographic Organization (IHO) was set up, and it constitutes the world authority on hydrographic surveying and nautical charting. A fourth edition draft of its publication Limits of Oceans and Seas was published in 1986, but so far several naming disputes (such as the one over the Sea of Japan) have prevented its ratification. History of oceanography and deep sea exploration Scientific oceanography began with the voyages of Captain James Cook from 1768 to 1779, describing the Pacific with unprecedented precision from 71 degrees South to 71 degrees North. John Harrison's chronometers supported Cook's accurate navigation and charting on two of these voyages, permanently improving the standard attainable for subsequent work. Other expeditions followed in the nineteenth century, from Russia, France, the Netherlands and the United States as well as Britain. On HMS Beagle, which provided Charles Darwin with ideas and materials for his 1859 book On the Origin of Species, the ship's captain, Robert FitzRoy, charted the seas and coasts and published his four-volume report of the ship's three voyages in 1839. Edward Forbes's 1854 book, Distribution of Marine Life, argued that no life could exist below around . This was proven wrong by the British biologists W. B. Carpenter and C. Wyville Thomson, who in 1868 discovered life in deep water by dredging. Wyville Thomson became chief scientist on the Challenger expedition of 1872–1876, which effectively created the science of oceanography. On her journey round the globe, HMS Challenger discovered about 4,700 new marine species, and made 492 deep sea soundings, 133 bottom dredges, 151 open water trawls and 263 serial water temperature observations. In the southern Atlantic in 1898/1899, Carl Chun on the Valdivia brought many new life forms to the surface from depths of over .
The first observations of deep-sea animals in their natural environment were made in 1930 by William Beebe and Otis Barton who descended to in the spherical steel Bathysphere. This was lowered by cable, but by 1960 a self-powered submersible, the bathyscaphe Trieste designed by Auguste Piccard, took his son Jacques Piccard and Don Walsh to the deepest part of the Earth's oceans, the Mariana Trench in the Pacific, reaching a record depth of about , a feat not repeated until 2012 when James Cameron piloted the Deepsea Challenger to similar depths. An atmospheric diving suit can be worn for deep sea operations, with a new world record being set in 2006 when a US Navy diver descended to in one of these articulated, pressurized suits. At great depths, no light penetrates through the water layers from above and the pressure is extreme. For deep sea exploration it is necessary to use specialist vehicles, either remotely operated underwater vehicles with lights and cameras or crewed submersibles. The battery-operated Mir submersibles have a three-person crew and can descend to . They have viewing ports, 5,000-watt lights, video equipment and manipulator arms for collecting samples, placing probes or pushing the vehicle across the sea bed when the thrusters would stir up excessive sediment. Bathymetry is the mapping and study of the topography of the ocean floor. Methods used for measuring the depth of the sea include single or multibeam echosounders, laser airborne depth sounders and the calculation of depths from satellite remote sensing data. This information is used for determining the routes of undersea cables and pipelines, for choosing suitable locations for siting oil rigs and offshore wind turbines and for identifying possible new fisheries.
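An echosounder, the simplest of the bathymetric methods mentioned above, times a sound pulse's round trip to the seabed and halves it. A minimal sketch, assuming a typical sound speed in seawater of about 1,500 m/s (the real value varies with temperature, salinity and pressure):

```python
SOUND_SPEED_SEAWATER = 1500.0   # m/s, a typical value only

def echo_depth(two_way_travel_time_s, speed=SOUND_SPEED_SEAWATER):
    """Depth in metres from an echosounder ping: the pulse travels to the
    seabed and back, so the one-way distance is half the round trip."""
    return speed * two_way_travel_time_s / 2.0

# A 4-second round trip corresponds to abyssal depths.
print(echo_depth(4.0))   # 3000.0
```

Multibeam systems apply the same calculation to a fan of pulses, which is how swathes of seabed are mapped in a single pass.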
Ongoing oceanographic research includes marine lifeforms, conservation, the marine environment, the chemistry of the ocean, the studying and modelling of climate dynamics, the air-sea boundary, weather patterns, ocean resources, renewable energy, waves and currents, and the design and development of new tools and technologies for investigating the deep. Whereas in the 1960s and 1970s, research could focus on taxonomy and basic biology, in the 2010s, attention has shifted to larger topics such as climate change. Researchers make use of satellite-based remote sensing for surface waters, with research ships, moored observatories and autonomous underwater vehicles to study and monitor all parts of the sea. Law "Freedom of the seas" is a principle in international law dating from the seventeenth century. It stresses freedom to navigate the oceans and disapproves of war fought in international waters. Today, this concept is enshrined in the United Nations Convention on the Law of the Sea (UNCLOS), the third version of which came into force in 1994. Article 87(1) states: "The high seas are open to all states, whether coastal or land-locked." Article 87(1) (a) to (f) gives a non-exhaustive list of freedoms including navigation, overflight, the laying of submarine cables, building artificial islands, fishing and scientific research. The safety of shipping is regulated by the International Maritime Organization. Its objectives include developing and maintaining a regulatory framework for shipping, maritime safety, environmental concerns, legal matters, technical co-operation and maritime security. UNCLOS defines various areas of water. "Internal waters" are on the landward side of a baseline and foreign vessels have no right of passage in these. "Territorial waters" extend to from the coastline and in these waters, the coastal state is free to set laws, regulate use and exploit any resource. 
A "contiguous zone" extending a further 12 nautical miles allows for hot pursuit of vessels suspected of infringing laws in four specific areas: customs, taxation, immigration and pollution. An "exclusive economic zone" extends for from the baseline. Within this area, the coastal nation has sole exploitation rights over all natural resources. The "continental shelf" is the natural prolongation of the land territory to the continental margin's outer edge, or 200 nautical miles from the coastal state's baseline, whichever is greater. Here the coastal nation has the exclusive right to harvest minerals and also living resources "attached" to the seabed. War Control of the sea is important to the security of a maritime nation, and the naval blockade of a port can be used to cut off food and supplies in time of war. Battles have been fought on the sea for more than 3,000 years. In about 1210 B.C., Suppiluliuma II, the king of the Hittites, defeated and burned a fleet from Alashiya (modern Cyprus). In the decisive 480 B.C. Battle of Salamis, the Greek general Themistocles trapped the far larger fleet of the Persian king Xerxes in a narrow channel and attacked vigorously, destroying 200 Persian ships for the loss of 40 Greek vessels. At the end of the Age of Sail, the British Royal Navy, led by Horatio Nelson, broke the power of the combined French and Spanish fleets at the 1805 Battle of Trafalgar. With steam and the industrial production of steel plate came greatly increased firepower in the shape of the dreadnought battleships armed with long-range guns. In 1905, the Japanese fleet decisively defeated the Russian fleet, which had travelled over , at the Battle of Tsushima. Dreadnoughts fought inconclusively in the First World War at the 1916 Battle of Jutland between the Royal Navy's Grand Fleet and the Imperial German Navy's High Seas Fleet. 
In the Second World War, the British victory at the 1940 Battle of Taranto showed that naval air power was sufficient to overcome the largest warships, foreshadowing the decisive sea-battles of the Pacific War including the Battles of the Coral Sea, Midway, the Philippine Sea, and the climactic Battle of Leyte Gulf, in all of which the dominant ships were aircraft carriers. Submarines became important in naval warfare in World War I, when German submarines, known as U-boats, sank nearly 5,000 Allied merchant ships, including the RMS Lusitania, which helped to bring the United States into the war. In World War II, almost 3,000 Allied ships were sunk by U-boats attempting to block the flow of supplies to Britain, but the Allies broke the blockade in the Battle of the Atlantic, which lasted the whole length of the war, sinking 783 U-boats. Since 1960, several nations have maintained fleets of nuclear-powered ballistic missile submarines, vessels equipped to launch ballistic missiles with nuclear warheads from under the sea. Some of these are kept permanently on patrol. Travel Sailing ships or packets carried mail overseas, one of the earliest being the Dutch service to Batavia in the 1670s. These added passenger accommodation, but in cramped conditions. Later, scheduled services were offered but the time journeys took depended much on the weather. When steamships replaced sailing vessels, ocean-going liners took over the task of carrying people. By the beginning of the twentieth century, crossing the Atlantic took about five days and shipping companies competed to own the largest and fastest vessels. The Blue Riband was an unofficial accolade given to the fastest liner crossing the Atlantic in regular service. The Mauretania held the title for twenty years from 1909. The Hales Trophy, another award for the fastest commercial crossing of the Atlantic, was won by the United States in 1952 for a crossing that took three days, ten hours and forty minutes.
The great liners were comfortable but expensive in fuel and staff. The age of the trans-Atlantic liners waned as cheap intercontinental flights became available. In 1958, a regular scheduled air service between New York and Paris taking seven hours doomed the Atlantic ferry service to oblivion. One by one the vessels were laid up, some were scrapped, others became cruise ships for the leisure industry and still others floating hotels. Trade Maritime trade has existed for millennia. The Ptolemaic dynasty had developed trade with India using the Red Sea ports, and in the first millennium BC, the Arabs, Phoenicians, Israelites and Indians traded in luxury goods such as spices, gold, and precious stones. The Phoenicians were noted sea traders and under the Greeks and Romans, commerce continued to thrive. With the collapse of the Roman Empire, European trade dwindled but it continued to flourish among the kingdoms of Africa, the Middle East, India, China and southeastern Asia. From the 16th to the 19th centuries, over a period of 400 years, about 12–13 million Africans were shipped across the Atlantic to be sold as slaves in the Americas as part of the Atlantic slave trade. Large quantities of goods are transported by sea, especially across the Atlantic and around the Pacific Rim. A major trade route passes through the Pillars of Hercules, across the Mediterranean and the Suez Canal to the Indian Ocean and through the Straits of Malacca; much trade also passes through the English Channel. Shipping lanes are the routes on the open sea used by cargo vessels, traditionally making use of trade winds and currents. Over 60 percent of the world's container traffic is conveyed on the top twenty trade routes. Increased melting of Arctic ice since 2007 enables ships to travel the Northwest Passage for some weeks in summertime, avoiding the longer routes via the Suez Canal or the Panama Canal. 
Shipping is supplemented by air freight, a more expensive process mostly used for particularly valuable or perishable cargoes. Seaborne trade carries more than US$4 trillion worth of goods each year. Bulk cargo in the form of liquids, powder or particles is carried loose in the holds of bulk carriers and includes crude oil, grain, coal, ore, scrap metal, sand and gravel. Other cargo, such as manufactured goods, is usually transported within standard-sized, lockable containers, loaded on purpose-built container ships at dedicated terminals. Before the rise of containerization in the 1960s, these goods were loaded, transported and unloaded piecemeal as break-bulk cargo. Containerization greatly increased the efficiency and decreased the cost of moving goods by sea, and was a major factor leading to the rise of globalization and exponential increases in international trade in the mid-to-late 20th century. Food Fish and other fishery products are among the most widely consumed sources of protein and other essential nutrients. In 2009, 16.6% of the world's intake of animal protein and 6.5% of all protein consumed came from fish. In order to fulfill this need, coastal countries have exploited marine resources in their exclusive economic zones, although fishing vessels are increasingly venturing further afield to exploit stocks in international waters. In 2011, the total world production of fish, including aquaculture, was estimated to be 154 million tonnes, of which most was for human consumption. The harvesting of wild fish accounted for 90.4 million tonnes, while annually increasing aquaculture contributes the rest. The north west Pacific is by far the most productive area with 20.9 million tonnes (27 percent of the global marine catch) in 2010. In addition, the number of fishing vessels in 2010 reached 4.36 million, whereas the number of people employed in the primary sector of fish production in the same year amounted to 54.8 million.
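The aquaculture contribution left implicit in the 2011 figures above follows by simple subtraction:

```python
# Rough arithmetic on the 2011 production figures quoted above
# (all quantities in million tonnes).
total_2011 = 154.0   # total world fish production, including aquaculture
wild_catch = 90.4    # harvest of wild fish

aquaculture = total_2011 - wild_catch
aquaculture_share = aquaculture / total_2011

print(f"aquaculture: {aquaculture:.1f} Mt ({aquaculture_share:.0%})")
```

So by 2011, farmed production already accounted for roughly two-fifths of the world total.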
Modern fishing vessels include fishing trawlers with a small crew, stern trawlers, purse seiners, long-line factory vessels and large factory ships which are designed to stay at sea for weeks, processing and freezing great quantities of fish. The equipment used to capture the fish may be purse seines, other seines, trawls, dredges, gillnets and long-lines, and the fish species most frequently targeted are herring, cod, anchovy, tuna, flounder, mullet, squid and salmon. Overexploitation has become a serious concern; it not only causes the depletion of fish stocks but also substantially reduces the size of predatory fish populations. It has been estimated that "industrialized fisheries typically reduced community biomass by 80% within 15 years of exploitation." In order to avoid overexploitation, many countries have introduced quotas in their own waters. However, recovery efforts often entail substantial costs to local economies or food provision. Artisan fishing methods include rod and line, harpoons, skin diving, traps, throw nets and drag nets. Traditional fishing boats are powered by paddle, wind or outboard motors and operate in near-shore waters. The Food and Agriculture Organization is encouraging the development of local fisheries to provide food security to coastal communities and help alleviate poverty. Aquaculture About 79 million tonnes (78M long tons; 87M short tons) of food and non-food products were produced by aquaculture in 2010, an all-time high. About six hundred species of plants and animals were cultured, some for use in seeding wild populations. The animals raised included finfish, aquatic reptiles, crustaceans, molluscs, sea cucumbers, sea urchins, sea squirts and jellyfish. Integrated mariculture has the advantage that there is a readily available supply of planktonic food in the ocean, and waste is removed naturally. Various methods are employed.
Mesh enclosures for finfish can be suspended in the open seas, cages can be used in more sheltered waters or ponds can be refreshed with water at each high tide. Shrimps can be reared in shallow ponds connected to the open sea. Ropes can be hung in water to grow algae, oysters and mussels. Oysters can be reared on trays or in mesh tubes. Sea cucumbers can be ranched on the seabed. Captive breeding programmes have raised lobster larvae for release of juveniles into the wild resulting in an increased lobster harvest in Maine. At least 145 species of seaweed – red, green, and brown algae – are eaten worldwide, and some have long been farmed in Japan and other Asian countries; there is great potential for additional algaculture. Few maritime flowering plants are widely used for food but one example is marsh samphire which is eaten both raw and cooked. A major difficulty for aquaculture is the tendency towards monoculture and the associated risk of widespread disease. Aquaculture is also associated with environmental risks; for instance, shrimp farming has caused the destruction of important mangrove forests throughout southeast Asia. Leisure Use of the sea for leisure developed in the nineteenth century, and became a significant industry in the twentieth century. Maritime leisure activities are varied, and include beachgoing, cruising, yachting, powerboat racing and fishing; commercially organized voyages on cruise ships; and trips on smaller vessels for ecotourism such as whale watching and coastal birdwatching. Sea bathing became the vogue in Europe in the 18th century after William Buchan advocated the practice for health reasons. Surfing is a sport in which a wave is ridden by a surfer, with or without a surfboard. Other marine water sports include kite surfing, where a power kite propels a rider on a board across the water, windsurfing, where the power is provided by a fixed, manoeuvrable sail and water skiing, where a powerboat is used to pull a skier. 
Beneath the surface, freediving is necessarily restricted to shallow descents. Pearl divers can dive to with baskets to collect oysters. Human eyes are not adapted for use underwater but vision can be improved by wearing a diving mask. Other useful equipment includes fins and snorkels, and scuba equipment allows underwater breathing and hence a longer time can be spent beneath the surface. The depths that can be reached by divers and the length of time they can stay underwater are limited by the increase of pressure they experience as they descend and the need to prevent decompression sickness as they return to the surface. Recreational divers restrict themselves to depths of beyond which the danger of nitrogen narcosis increases. Deeper dives can be made with specialised equipment and training. Industry Power generation The sea offers a very large supply of energy carried by ocean waves, tides, salinity differences, and ocean temperature differences which can be harnessed to generate electricity. Forms of sustainable marine energy include tidal power, ocean thermal energy and wave power. Electricity power stations are often located on the coast or beside an estuary so that the sea can be used as a heat sink. A colder heat sink enables more efficient power generation, which is important for expensive nuclear power plants in particular. Tidal power uses generators to produce electricity from tidal flows, sometimes by using a dam to store and then release seawater. The Rance barrage, long, near St Malo in Brittany opened in 1967; it generates about 0.5 GW, but it has been followed by few similar schemes. The large and highly variable energy of waves gives them enormous destructive capability, making affordable and reliable wave machines problematic to develop. A small 2 MW commercial wave power plant, "Osprey", was built in Northern Scotland in 1995 about offshore. It was soon damaged by waves, then destroyed by a storm.
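The energy carried by waves can be estimated with the standard deep-water approximation P = ρg²H²T/(64π) watts per metre of wave crest. This formula is a common engineering estimate rather than something taken from the source, and the wave height and period used below are illustrative:

```python
import math

RHO = 1025.0   # kg/m^3, typical seawater density
G = 9.81       # m/s^2, gravitational acceleration

def wave_power_kw_per_m(height_m, period_s):
    """Approximate energy flux of deep-water waves, in kW per metre of
    wave crest, where height_m is the significant wave height and
    period_s the wave energy period. Both vary constantly at sea, which
    is part of why wave machines are hard to engineer."""
    watts = RHO * G**2 * height_m**2 * period_s / (64.0 * math.pi)
    return watts / 1000.0

# A moderate swell of 2 m waves with an 8 s period carries roughly
# 16 kW for every metre of crest.
print(round(wave_power_kw_per_m(2.0, 8.0), 1))
```

The quadratic dependence on wave height explains the destructive capability mentioned above: a storm that doubles the wave height quadruples the power a device must survive.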
Offshore wind power is captured by wind turbines placed out at sea; it has the advantage that wind speeds are higher than on land, though wind farms are more costly to construct offshore. The first offshore wind farm was installed in Denmark in 1991, and the installed capacity of worldwide offshore wind farms reached 34 GW in 2020, mainly situated in Europe. Extractive industries The seabed contains large reserves of minerals which can be exploited by dredging. This has advantages over land-based mining in that equipment can be built at specialised shipyards and infrastructure costs are lower. Disadvantages include problems caused by waves and tides, the tendency for excavations to silt up and the washing away of spoil heaps. There is a risk of coastal erosion and environmental damage. Seafloor massive sulphide deposits, first discovered in the 1960s, are potential sources of silver, gold, copper, lead, zinc and trace metals. They form when geothermally heated water is emitted from deep sea hydrothermal vents known as "black smokers". The ores are of high quality but prohibitively costly to extract. There are large deposits of petroleum and natural gas in rocks beneath the seabed. Offshore platforms and drilling rigs extract the oil or gas and store it for transport to land. Offshore oil and gas production can be difficult due to the remote, harsh environment. Drilling for oil in the sea has environmental impacts. Animals may be disorientated by seismic waves used to locate deposits, and there is debate as to whether this causes the beaching of whales. Toxic substances such as mercury, lead and arsenic may be released. The infrastructure may cause damage, and oil may be spilt. Large quantities of methane clathrate exist on the seabed and in ocean sediment, of interest as a potential energy source. Also on the seabed are manganese nodules formed of layers of iron, manganese and other hydroxides around a core.
In the Pacific, these may cover up to 30 percent of the deep ocean floor. The minerals precipitate from seawater and grow very slowly. Their commercial extraction for nickel was investigated in the 1970s but abandoned in favour of more convenient sources. In suitable locations, diamonds are gathered from the seafloor using suction hoses to bring gravel ashore. In deeper waters, mobile seafloor crawlers are used and the deposits are pumped to a vessel above. In Namibia, more diamonds are now collected from marine sources than by conventional methods on land. The sea holds large quantities of valuable dissolved minerals. The most important, salt for table and industrial use, has been harvested by solar evaporation from shallow ponds since prehistoric times. Bromine, accumulated after being leached from the land, is economically recovered from the Dead Sea, where it occurs at 55,000 parts per million (ppm). Fresh water production Desalination is the technique of removing salts from seawater to leave fresh water suitable for drinking or irrigation. The two main processing methods, vacuum distillation and reverse osmosis, use large quantities of energy. Desalination is normally only undertaken where fresh water from other sources is in short supply or energy is plentiful, as in the excess heat generated by power stations. The brine produced as a by-product contains some toxic materials and is returned to the sea. Indigenous sea peoples Several nomadic indigenous groups in Maritime Southeast Asia live in boats and derive nearly all they need from the sea. The Moken people live on the coasts of Thailand and Burma and islands in the Andaman Sea. Some Sea Gypsies are accomplished free-divers, able to descend to depths of , though many are adopting a more settled, land-based way of life.
The indigenous peoples of the Arctic such as the Chukchi, Inuit, Inuvialuit and Yup'iit hunt marine mammals including seals and whales, and the Torres Strait Islanders of Australia include the Great Barrier Reef among their possessions. They live a traditional life on the islands involving hunting, fishing, gardening and trading with neighbouring peoples in Papua and mainland Aboriginal Australians. In culture The sea appears in human culture in contradictory ways, as both powerful but serene and as beautiful but dangerous. It has its place in literature, art, poetry, film, theatre, classical music, mythology and dream interpretation. The Ancients personified it, believing it to be under the control of a being who needed to be appeased, and symbolically, it has been perceived as a hostile environment populated by fantastic creatures; the Leviathan of the Bible, Scylla in Greek mythology, Isonade in Japanese mythology, and the kraken of late Norse mythology. The sea and ships have been depicted in art ranging from simple drawings on the walls of huts in Lamu to seascapes by Joseph Turner. In Dutch Golden Age painting, artists such as Jan Porcellis, Hendrick Dubbels, Willem van de Velde the Elder and his son, and Ludolf Bakhuizen celebrated the sea and the Dutch navy at the peak of its military prowess. The Japanese artist Katsushika Hokusai created colour prints of the moods of the sea, including The Great Wave off Kanagawa. Music too has been inspired by the ocean, sometimes by composers who lived or worked near the shore and saw its many different aspects. Sea shanties, songs that were chanted by mariners to help them perform arduous tasks, have been woven into compositions and impressions in music have been created of calm waters, crashing waves and storms at sea. As a symbol, the sea has for centuries played a role in literature, poetry and dreams. 
Sometimes it is there just as a gentle background but often it introduces such themes as storm, shipwreck, battle, hardship, disaster, the dashing of hopes and death. In his epic poem the Odyssey, written in the eighth century BC, Homer describes the ten-year voyage of the Greek hero Odysseus who struggles to return home across the sea's many hazards after the war described in the Iliad. The sea is a recurring theme in the Haiku poems of the Japanese Edo period poet Matsuo Bashō (松尾 芭蕉) (1644–1694). In the works of psychiatrist Carl Jung, the sea symbolizes the personal and the collective unconscious in dream interpretation, the depths of the sea symbolizing the depths of the unconscious mind. Environmental issues The environmental issues that affect the sea can loosely be grouped into those that stem from marine pollution, from overexploitation and from climate change. They all impact marine ecosystems and food webs and may result in consequences as yet unrecognised for the biodiversity and continuation of marine life forms. An overview of environmental issues is shown below: Marine pollution: Pathways of pollution include direct discharge, land runoff, ship pollution, atmospheric pollution and, potentially, deep sea mining. The types of marine pollution can be grouped as pollution from marine debris, plastic pollution, including microplastics, nutrient pollution, toxins and underwater noise.
Overexploitation and biodiversity loss: overfishing, habitat loss, introduction of invasive species Effects of climate change on the sea: an increase in sea surface temperature as well as ocean temperatures at greater depths, more frequent marine heatwaves, a reduction in pH value, a rise in sea level from ocean warming and ice sheet melting, sea ice decline in the Arctic, increased upper ocean stratification, reductions in oxygen levels, increased contrasts in salinity (salty areas becoming saltier and fresher areas becoming less salty), changes to ocean currents including a weakening of the Atlantic meridional overturning circulation, and stronger tropical cyclones and monsoons. Marine pollution Many substances enter the sea as a result of human activities. Combustion products are transported in the air and deposited into the sea by precipitation. Industrial outflows and sewage contribute heavy metals, pesticides, PCBs, disinfectants, household cleaning products and other synthetic chemicals. These become concentrated in the surface film and in marine sediment, especially estuarine mud. The result of all this contamination is largely unknown because of the large number of substances involved and the lack of information on their biological effects. The heavy metals of greatest concern are copper, lead, mercury, cadmium and zinc which may be bio-accumulated by marine organisms and are passed up the food chain. Much floating plastic rubbish does not biodegrade, instead disintegrating over time and eventually breaking down to the molecular level. Rigid plastics may float for years. In the centre of the Pacific gyre there is the permanent Great Pacific Garbage Patch, a floating accumulation of mostly plastic waste. There is a similar garbage patch in the Atlantic. Foraging sea birds such as the albatross and petrel may mistake debris for food, and accumulate indigestible plastic in their digestive systems.
Turtles and whales have been found with plastic bags and fishing line in their stomachs. Microplastics may sink, threatening filter feeders on the seabed. Most oil pollution in the sea comes from cities and industry. Oil is dangerous for marine animals. It can clog the feathers of sea birds, reducing their insulating effect and the birds' buoyancy, and be ingested when they preen themselves in an attempt to remove the contaminant. Marine mammals are less seriously affected but may be chilled through the removal of their insulation, blinded, dehydrated or poisoned. Benthic invertebrates are swamped when the oil sinks, fish are poisoned and the food chain is disrupted. In the short term, oil spills result in wildlife populations being decreased and unbalanced, leisure activities being affected and the livelihoods of people dependent on the sea being devastated. The marine environment has self-cleansing properties and naturally occurring bacteria will act over time to remove oil from the sea. In the Gulf of Mexico, where oil-eating bacteria are already present, they take only a few days to consume spilt oil. Run-off of fertilisers from agricultural land is a major source of pollution in some areas and the discharge of raw sewage has a similar effect. The extra nutrients provided by these sources can cause excessive plant growth. Nitrogen is often the limiting factor in marine systems, and with added nitrogen, algal blooms and red tides can lower the oxygen level of the water and kill marine animals. Such events have created dead zones in the Baltic Sea and the Gulf of Mexico. Some algal blooms are caused by cyanobacteria that make shellfish that filter feed on them toxic, harming animals like sea otters. Nuclear facilities too can pollute. 
The Irish Sea was contaminated by radioactive caesium-137 from the former Sellafield nuclear fuel processing plant and nuclear accidents may also cause radioactive material to seep into the sea, as did the disaster at the Fukushima Daiichi Nuclear Power Plant in 2011. The dumping of waste (including oil, noxious liquids, sewage and garbage) at sea is governed by international law. The London Convention (1972) is a United Nations agreement to control ocean dumping which had been ratified by 89 countries by 8 June 2012. MARPOL 73/78 is a convention to minimize pollution of the seas by ships. By May 2013, 152 maritime nations had ratified MARPOL.
Ocean
The ocean is the body of salt water that covers approximately 70.8% of Earth. In English, the term ocean also refers to any of the large bodies of water into which the world ocean is conventionally divided. The following names describe five different areas of the ocean: Pacific, Atlantic, Indian, Antarctic/Southern, and Arctic. The ocean contains 97% of Earth's water and is the primary component of Earth's hydrosphere, making it essential to life on Earth. The ocean influences climate and weather patterns, the carbon cycle, and the water cycle by acting as a huge heat reservoir. Ocean scientists split the ocean into vertical and horizontal zones based on physical and biological conditions. The pelagic zone is the open ocean's water column from the surface to the ocean floor. The water column is further divided into zones based on depth and the amount of light present. The photic zone starts at the surface and is defined to be "the depth at which light intensity is only 1% of the surface value" (approximately 200 m in the open ocean). This is the zone where photosynthesis can occur. In this process plants and microscopic algae (free-floating phytoplankton) use light, water, carbon dioxide, and nutrients to produce organic matter. As a result, the photic zone is the most biodiverse and the source of the food supply which sustains most of the ocean ecosystem. Ocean photosynthesis also produces half of the oxygen in the Earth's atmosphere. Light can only penetrate a few hundred more meters; the rest of the deeper ocean is cold and dark (these zones are called the mesopelagic and aphotic zones). The continental shelf is where the ocean meets dry land. It is shallower, with a depth of a few hundred meters or less. Human activity often has negative impacts on marine life within the continental shelf. Ocean temperatures depend on the amount of solar radiation reaching the ocean surface. In the tropics, surface temperatures can rise to over .
Near the poles where sea ice forms, the temperature in equilibrium is about . In all parts of the ocean, deep ocean temperatures range between and . Constant circulation of water in the ocean creates ocean currents. Those currents are caused by forces operating on the water, such as temperature and salinity differences, atmospheric circulation (wind), and the Coriolis effect. Tides create tidal currents, while wind and waves cause surface currents. The Gulf Stream, Kuroshio Current, Agulhas Current and Antarctic Circumpolar Current are all major ocean currents. Such currents transport massive amounts of water, gases, pollutants and heat to different parts of the world, and from the surface into the deep ocean. All this has impacts on the global climate system. Ocean water contains dissolved gases, including oxygen, carbon dioxide and nitrogen. An exchange of these gases occurs at the ocean's surface. The solubility of these gases depends on the temperature and salinity of the water. The carbon dioxide concentration in the atmosphere is rising due to CO2 emissions, mainly from fossil fuel combustion. As the oceans absorb CO2 from the atmosphere, a higher concentration leads to ocean acidification (a drop in pH value). The ocean provides many benefits to humans such as ecosystem services, access to seafood and other marine resources, and a means of transport. The ocean is known to be the habitat of over 230,000 species, but may hold considerably more – perhaps over two million species. Yet, the ocean faces many environmental threats, such as marine pollution, overfishing, and the effects of climate change. Those effects include ocean warming, ocean acidification and sea level rise. The continental shelf and coastal waters are most affected by human activity. Terminology Ocean and sea The terms "the ocean" or "the sea" used without specification refer to the interconnected body of salt water covering the majority of Earth's surface. 
It includes the Pacific, Atlantic, Indian, Southern/Antarctic, and Arctic oceans. As a general term, "the ocean" and "the sea" are often interchangeable. Strictly speaking, a "sea" is a body of water (generally a division of the world ocean) partly or fully enclosed by land. The word "sea" can also be used for many specific, much smaller bodies of seawater, such as the North Sea or the Red Sea. There is no sharp distinction between seas and oceans, though generally seas are smaller, and are often partly (as marginal seas) or wholly (as inland seas) bordered by land. World Ocean The contemporary concept of the World Ocean was coined in the early 20th century by the Russian oceanographer Yuly Shokalsky to refer to the continuous ocean that covers and encircles most of Earth. The global, interconnected body of salt water is sometimes referred to as the World Ocean, global ocean or the great ocean. The concept of a continuous body of water with relatively unrestricted exchange between its components is critical in oceanography. Etymology The word ocean comes from the figure in classical antiquity, Oceanus (Ōkeanós), the elder of the Titans in classical Greek mythology. Oceanus was believed by the ancient Greeks and Romans to be the divine personification of an enormous river encircling the world. The concept of Ōkeanós has an Indo-European connection. Greek Ōkeanós has been compared to the Vedic epithet ā-śáyāna-, predicated of the dragon Vṛtra-, who captured the cows/rivers. Related to this notion, the Okeanos is represented with a dragon-tail on some early Greek vases. Natural history Origin of water Scientists believe that a sizable quantity of water would have been in the material that formed Earth. Water molecules would have escaped Earth's gravity more easily when it was less massive during its formation. This is called atmospheric escape. During planetary formation, Earth possibly had magma oceans.
Subsequently, outgassing, volcanic activity and meteorite impacts produced an early atmosphere of carbon dioxide, nitrogen and water vapor, according to current theories. The gases and the atmosphere are thought to have accumulated over millions of years. After Earth's surface had significantly cooled, the water vapor over time would have condensed, forming Earth's first oceans. The early oceans might have been significantly hotter than today and appeared green due to high iron content. Geological evidence helps constrain the time frame for liquid water existing on Earth. A sample of pillow basalt (a type of rock formed during an underwater eruption) was recovered from the Isua Greenstone Belt and provides evidence that water existed on Earth 3.8 billion years ago. In the Nuvvuagittuq Greenstone Belt, Quebec, Canada, rocks dated at 3.8 billion years old by one study and 4.28 billion years old by another show evidence of the presence of water at these ages. If oceans existed earlier than this, any geological evidence either has yet to be discovered, or has since been destroyed by geological processes like crustal recycling. However, in August 2020, researchers reported that sufficient water to fill the oceans may have always been on the Earth since the beginning of the planet's formation. In this model, atmospheric greenhouse gases kept the oceans from freezing when the newly forming Sun had only 70% of its current luminosity. Ocean formation The origin of Earth's oceans is unknown. Oceans are thought to have formed in the Hadean eon and may have been the cause for the emergence of life. Plate tectonics, post-glacial rebound, and sea level rise continually change the coastline and structure of the world ocean. A global ocean has existed in one form or another on Earth for eons. Since its formation the ocean has taken many conditions and shapes with many past ocean divisions and potentially at times covering the whole globe.
During colder climatic periods, more ice caps and glaciers form, and enough of the global water supply accumulates as ice to lessen the amounts in other parts of the water cycle. The reverse is true during warm periods. During the last ice age, glaciers covered almost one-third of Earth's land mass with the result being that the oceans were about 122 m (400 ft) lower than today. During the last global "warm spell," about 125,000 years ago, the seas were about 5.5 m (18 ft) higher than they are now. About three million years ago the oceans could have been up to 50 m (165 ft) higher. Geography The entire ocean, containing 97% of Earth's water, spans 70.8% of Earth's surface, making it Earth's global ocean or world ocean. This makes Earth, along with its vibrant hydrosphere, a "water world" or "ocean world", particularly in Earth's early history when the ocean is thought to have possibly covered Earth completely. The ocean's shape is irregular, unevenly dominating the Earth's surface. This leads to the distinction of the Earth's surface into a water and land hemisphere, as well as the division of the ocean into different oceans. Seawater covers about , and the ocean's furthest pole of inaccessibility, known as "Point Nemo", lies in a region known as the spacecraft cemetery of the South Pacific Ocean, at . This point is roughly from the nearest land. Oceanic divisions There are different customs to subdivide the ocean, and it is adjoined by smaller bodies of water such as seas, gulfs, bays, bights, and straits. The ocean is customarily divided into five principal oceans – in descending order of area and volume: the Pacific, Atlantic, Indian, Southern, and Arctic. Ocean basins The ocean fills Earth's oceanic basins. Earth's oceanic basins cover different geologic provinces of Earth's oceanic crust as well as continental crust. As such it covers mainly Earth's structural basins, but also continental shelves.
In mid-ocean, magma is constantly being thrust through the seabed between adjoining plates to form mid-oceanic ridges and here convection currents within the mantle tend to drive the two plates apart. Parallel to these ridges and nearer the coasts, one oceanic plate may slide beneath another oceanic plate in a process known as subduction. Deep trenches are formed here and the process is accompanied by friction as the plates grind together. The movement proceeds in jerks which cause earthquakes, heat is produced and magma is forced up creating underwater mountains, some of which may form chains of volcanic islands near to deep trenches. Near some of the boundaries between the land and sea, the slightly denser oceanic plates slide beneath the continental plates and more subduction trenches are formed. As they grate together, the continental plates are deformed and buckle causing mountain building and seismic activity. Every ocean basin has a mid-ocean ridge, which creates a long mountain range beneath the ocean. Together they form the global mid-oceanic ridge system that features the longest mountain range in the world. The longest continuous mountain range is . This underwater mountain range is several times longer than the longest continental mountain range, the Andes. Oceanographers state that less than 20% of the oceans have been mapped. Interaction with the coast The zone where land meets sea is known as the coast, and the part between the lowest spring tides and the upper limit reached by splashing waves is the shore. A beach is the accumulation of sand or shingle on the shore. A headland is a point of land jutting out into the sea and a larger promontory is known as a cape. The indentation of a coastline, especially between two headlands, is a bay, a small bay with a narrow inlet is a cove and a large bay may be referred to as a gulf.
Coastlines are influenced by several factors including the strength of the waves arriving on the shore, the gradient of the land margin, the composition and hardness of the coastal rock, the inclination of the off-shore slope and the changes of the level of the land due to local uplift or submergence. Normally, waves roll towards the shore at the rate of six to eight per minute and these are known as constructive waves as they tend to move material up the beach and have little erosive effect. Storm waves arrive on shore in rapid succession and are known as destructive waves as the swash moves beach material seawards. Under their influence, the sand and shingle on the beach are ground together and abraded. Around high tide, the power of a storm wave impacting on the foot of a cliff has a shattering effect as air in cracks and crevices is compressed and then expands rapidly with release of pressure. At the same time, sand and pebbles have an erosive effect as they are thrown against the rocks. This tends to undercut the cliff, and normal weathering processes such as the action of frost follow, causing further destruction. Gradually, a wave-cut platform develops at the foot of the cliff and this has a protective effect, reducing further wave-erosion. Material worn from the margins of the land eventually ends up in the sea. Here it is subject to attrition as currents flowing parallel to the coast scour out channels and transport sand and pebbles away from their place of origin. Sediment carried to the sea by rivers settles on the seabed causing deltas to form in estuaries. All these materials move back and forth under the influence of waves, tides and currents. Dredging removes material and deepens channels but may have unexpected effects elsewhere on the coastline. Governments make efforts to prevent flooding of the land by the building of breakwaters, seawalls, dykes, levees and other sea defences.
For instance, the Thames Barrier is designed to protect London from a storm surge, while the failure of the dykes and levees around New Orleans during Hurricane Katrina created a humanitarian crisis in the United States. Physical properties Color Water cycle, weather, and rainfall Ocean water represents the largest body of water within the global water cycle (oceans contain 97% of Earth's water). Evaporation from the ocean moves water into the atmosphere to later rain back down onto land and the ocean. Oceans have a significant effect on the biosphere. The ocean as a whole is thought to cover approximately 90% of the Earth's biosphere. Oceanic evaporation, as a phase of the water cycle, is the source of most rainfall (about 90%), causing a global cloud cover of 67% and a consistent oceanic cloud cover of 72%. Ocean temperatures affect climate and wind patterns that affect life on land. One of the most dramatic forms of weather occurs over the oceans: tropical cyclones (also called "typhoons" and "hurricanes" depending upon where the system forms). As the world's ocean is the principal component of Earth's hydrosphere, it is integral to life on Earth, forms part of the carbon cycle and water cycle, and – as a huge heat reservoir – influences climate and weather patterns. Waves and swell The motions of the ocean surface, known as undulations or wind waves, are the partial and alternate rising and falling of the ocean surface. The series of mechanical waves that propagate along the interface between water and air is called swell – a term used in sailing, surfing and navigation. These motions profoundly affect ships on the surface of the ocean and the well-being of people on those ships who might suffer from sea sickness. Wind blowing over the surface of a body of water forms waves that are perpendicular to the direction of the wind. The friction between air and water caused by a gentle breeze on a pond causes ripples to form. 
A stronger gust blowing over the ocean causes larger waves as the moving air pushes against the raised ridges of water. The waves reach their maximum height when the rate at which they are travelling nearly matches the speed of the wind. In open water, when the wind blows continuously as happens in the Southern Hemisphere in the Roaring Forties, long, organized masses of water called swell roll across the ocean. If the wind dies down, the wave formation is reduced, but already-formed waves continue to travel in their original direction until they meet land. The size of the waves depends on the fetch, the distance that the wind has blown over the water and the strength and duration of that wind. When waves meet others coming from different directions, interference between the two can produce broken, irregular seas. Constructive interference can lead to the formation of unusually high rogue waves. Most waves are less than high and it is not unusual for strong storms to double or triple that height. Rogue waves, however, have been documented at heights above . The top of a wave is known as the crest, the lowest point between waves is the trough and the distance between the crests is the wavelength. The wave is pushed across the surface of the ocean by the wind, but this represents a transfer of energy and not horizontal movement of water. As waves approach land and move into shallow water, they change their behavior. If approaching at an angle, waves may bend (refraction) or wrap around rocks and headlands (diffraction). When the wave reaches a point where its deepest oscillations of the water contact the ocean floor, they begin to slow down. This pulls the crests closer together and increases the waves' height, which is called wave shoaling. When the ratio of the wave's height to the water depth increases above a certain limit, it "breaks", toppling over in a mass of foaming water. 
This rushes in a sheet up the beach before retreating into the ocean under the influence of gravity. Earthquakes, volcanic eruptions or other major geological disturbances can set off waves that can lead to tsunamis in coastal areas which can be very dangerous. Sea level and surface The ocean's surface is an important reference point for oceanography and geography, particularly as mean sea level. The ocean surface has little, but measurable, global topography, depending on the ocean's volumes. The ocean surface is a crucial interface for oceanic and atmospheric processes, allowing an interchange of particles that enriches the air and water, with some particles settling out as sediments. This interchange has fertilized life in the ocean, on land and in the air. All these processes and components together make up ocean surface ecosystems. Tides Tides are the regular rise and fall in water level experienced by oceans, primarily driven by the Moon's gravitational tidal forces upon the Earth. Tidal forces affect all matter on Earth, but only fluids like the ocean demonstrate the effects on human timescales. (For example, tidal forces acting on rock may produce tidal locking between two planetary bodies.) Though primarily driven by the Moon's gravity, oceanic tides are also substantially modulated by the Sun's tidal forces, by the rotation of the Earth, and by the shape of the rocky continents blocking oceanic water flow. (Tidal forces vary more with distance than the "base" force of gravity: the Moon's tidal forces on Earth are more than double the Sun's, despite the latter's much stronger gravitational force on Earth. Earth's tidal forces upon the Moon are about 20 times stronger than the Moon's tidal forces on the Earth.) The primary effect of lunar tidal forces is to bulge Earth matter towards the near and far sides of the Earth, relative to the Moon. The "perpendicular" sides, from which the Moon appears in line with the local horizon, experience "tidal troughs".
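The parenthetical ratios above follow directly from Newtonian gravity: direct attraction scales as M/d², while the differential (tidal) pull scales as M/d³, so the nearer Moon dominates the tides even though the Sun's overall pull is far stronger. A quick check, using standard reference masses and mean distances (these numeric values are assumptions, not taken from the article):

```python
# Gravitational pull scales as M/d^2; the tidal (differential) pull scales
# as M/d^3, so the nearby Moon wins on tides while the Sun wins on raw pull.
# Masses (kg) and mean distances from Earth (m) are standard reference values.
M_MOON, D_MOON = 7.342e22, 3.844e8
M_SUN,  D_SUN  = 1.989e30, 1.496e11

def gravity(m, d):   # relative gravitational acceleration (G cancels in ratios)
    return m / d**2

def tidal(m, d):     # relative tidal acceleration
    return m / d**3

print(f"Sun/Moon gravity ratio: {gravity(M_SUN, D_SUN) / gravity(M_MOON, D_MOON):.0f}")  # ~179
print(f"Moon/Sun tidal ratio:   {tidal(M_MOON, D_MOON) / tidal(M_SUN, D_SUN):.2f}")      # ~2.18
```

Consistent with the text: the Moon's tidal effect is a bit more than double the Sun's, even though the Sun's direct gravitational pull on Earth is roughly 180 times stronger.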
Since it takes nearly 25 hours for the Earth to rotate under the Moon (accounting for the Moon's 28-day orbit around Earth), tides thus cycle over a course of 12.5 hours. However, the rocky continents pose obstacles for the tidal bulges, so the timing of tidal maxima may not actually align with the Moon in most localities on Earth, as the oceans are forced to "dodge" the continents. Timing and magnitude of tides vary widely across the Earth as a result of the continents. Thus, knowing the Moon's position does not allow a local observer to predict tide timings; precomputed tide tables, which account for the continents and the Sun among other factors, are required instead. During each tidal cycle, at any given place the tidal waters rise to maximum height, high tide, before ebbing away again to the minimum level, low tide. As the water recedes, it gradually reveals the foreshore, also known as the intertidal zone. The difference in height between the high tide and low tide is known as the tidal range or tidal amplitude. When the sun and moon are aligned (full moon or new moon), the combined effect results in the higher "spring tides", while misalignment of the sun and moon (half moons) results in lesser tidal ranges. In the open ocean tidal ranges are less than 1 meter, but in coastal areas these tidal ranges increase to more than 10 meters in some areas. Some of the largest tidal ranges in the world occur in the Bay of Fundy and Ungava Bay in Canada, reaching up to 16 meters. Other locations with record high tidal ranges include the Bristol Channel between England and Wales, Cook Inlet in Alaska, and the Río Gallegos in Argentina. Tides are not to be confused with storm surges, which can occur when high winds pile water up against the coast in a shallow area and this, coupled with a low pressure system, can raise the surface of the ocean dramatically above a typical high tide. Depth The average depth of the oceans is about 4 km. More precisely, the average depth is .
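The "about 4 km" figure can be cross-checked against numbers quoted elsewhere in the article (70.8% ocean coverage; a combined volume of roughly 1.335 billion km³). Earth's total surface area of about 5.101×10⁸ km² is an assumed reference value, not stated in the article:

```python
# Cross-check of mean ocean depth: volume divided by area.
EARTH_SURFACE_KM2 = 5.101e8   # assumed standard value for Earth's surface area
OCEAN_FRACTION    = 0.708     # coverage quoted in the article
OCEAN_VOLUME_KM3  = 1.335e9   # combined ocean volume quoted in the article

ocean_area_km2 = EARTH_SURFACE_KM2 * OCEAN_FRACTION   # ~3.61e8 km^2
mean_depth_km = OCEAN_VOLUME_KM3 / ocean_area_km2
print(f"mean depth ≈ {mean_depth_km:.2f} km")         # ≈ 3.70 km
```

The result, about 3.7 km, agrees with the article's rounded "about 4 km".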
Nearly half of the world's marine waters are over deep. "Deep ocean," which is anything below 200 meters (660 ft), covers about 66% of Earth's surface. This figure does not include seas not connected to the World Ocean, such as the Caspian Sea. The deepest region of the ocean is at the Mariana Trench, located in the Pacific Ocean near the Northern Mariana Islands. The maximum depth has been estimated to be . The British naval vessel Challenger II surveyed the trench in 1951 and named the deepest part of the trench the "Challenger Deep". In 1960, the Trieste successfully reached the bottom of the trench, manned by a crew of two men. Oceanic zones Oceanographers classify the ocean into vertical and horizontal zones based on physical and biological conditions. The pelagic zone consists of the water column of the open ocean, and can be divided into further regions categorized by light abundance and by depth. Grouped by light penetration The ocean zones can be grouped by light penetration into (from top to bottom): the photic zone, the mesopelagic zone and the aphotic deep ocean zone: The photic zone is defined to be "the depth at which light intensity is only 1% of the surface value". This is usually up to a depth of approximately 200 m in the open ocean. It is the region where photosynthesis can occur and is, therefore, the most biodiverse. Photosynthesis by plants and microscopic algae (free floating phytoplankton) allows the creation of organic matter from chemical precursors including water and carbon dioxide. This organic matter can then be consumed by other creatures. Much of the organic matter created in the photic zone is consumed there but some sinks into deeper waters. The pelagic part of the photic zone is known as the epipelagic. The actual optics of light reflecting and penetrating at the ocean surface are complex. Below the photic zone is the mesopelagic or twilight zone where there is a very small amount of light. 
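The "1% of surface light at roughly 200 m" definition above implies near-exponential (Beer–Lambert) attenuation. A minimal sketch, assuming a single effective attenuation coefficient for clear open-ocean water (real coefficients vary with wavelength and turbidity):

```python
import math

# Beer-Lambert attenuation: I(z) = I0 * exp(-k * z).
# Choose k so that 1% of surface light remains at 200 m, matching the
# photic-zone definition used in the article.
PHOTIC_DEPTH_M = 200.0
k = math.log(100.0) / PHOTIC_DEPTH_M          # ≈ 0.023 per metre

def light_fraction(depth_m: float) -> float:
    """Fraction of surface light remaining at a given depth."""
    return math.exp(-k * depth_m)

for z in (50, 100, 200, 400):
    print(f"{z:>4} m: {light_fraction(z) * 100:.2f}% of surface light")
```

With this coefficient every 100 m of depth costs a factor of ten in light, which is why the mesopelagic below 200 m is a "twilight zone" and the water below a few hundred metres is effectively dark.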
The basic concept is that with so little light, photosynthesis is unlikely to achieve any net growth over respiration. Below that is the aphotic deep ocean to which no surface sunlight at all penetrates. Life that exists deeper than the photic zone must either rely on material sinking from above (see marine snow) or find another energy source. Hydrothermal vents are a source of energy in what is known as the aphotic zone (depths exceeding 200 m). Grouped by depth and temperature The pelagic part of the aphotic zone can be further divided into vertical regions according to depth and temperature: The mesopelagic is the uppermost region. Its lowermost boundary is at a thermocline, which generally lies at in the tropics. Next is the bathypelagic lying between , typically between and . Lying along the top of the abyssal plain is the abyssopelagic, whose lower boundary lies at about . The last and deepest zone is the hadalpelagic, which includes the oceanic trenches and lies between . The benthic zones are aphotic and correspond to the three deepest zones of the deep-sea. The bathyal zone covers the continental slope down to about . The abyssal zone covers the abyssal plains between 4,000 and 6,000 m. Lastly, the hadal zone corresponds to the hadalpelagic zone, which is found in oceanic trenches. Distinct boundaries between ocean surface waters and deep waters can be drawn based on the properties of the water. These boundaries are called thermoclines (temperature), haloclines (salinity), chemoclines (chemistry), and pycnoclines (density). If a zone undergoes dramatic changes in temperature with depth, it contains a thermocline, a distinct boundary between warmer surface water and colder deep water. In tropical regions, the thermocline is typically deeper than at higher latitudes. In polar waters, where solar energy input is limited, temperature stratification is less pronounced, and a distinct thermocline is often absent.
This is because surface waters in polar latitudes are nearly as cold as deeper waters. Below the thermocline, water everywhere in the ocean is very cold, ranging from −1 °C to 3 °C. Because this deep and cold layer contains the bulk of ocean water, the average temperature of the world ocean is 3.9 °C. If a zone undergoes dramatic changes in salinity with depth, it contains a halocline. If a zone undergoes a strong vertical chemistry gradient with depth, it contains a chemocline. Temperature and salinity control ocean water density. Colder and saltier water is denser, and this density plays a crucial role in regulating the global water circulation within the ocean. The halocline often coincides with the thermocline, and the combination produces a pronounced pycnocline, a boundary between less dense surface water and dense deep water. Grouped by distance from land The pelagic zone can be further subdivided into two sub regions based on distance from land: the neritic zone and the oceanic zone. The neritic zone covers the water directly above the continental shelves, including coastal waters. On the other hand, the oceanic zone includes all the completely open water. The littoral zone covers the region between low and high tide and represents the transitional area between marine and terrestrial conditions. It is also known as the intertidal zone because it is the area where tide level affects the conditions of the region. Volumes The combined volume of water in all the oceans is roughly 1.335 billion cubic kilometers (1.335 sextillion liters, 320.3 million cubic miles). Temperature Ocean temperatures depend on the amount of solar radiation falling on the surface. In the tropics, with the Sun nearly overhead, the temperature of the surface layers can rise to over 30 °C, while near the poles the temperature in equilibrium with the sea ice is about −2 °C. There is a continuous circulation of water in the oceans.
Warm surface currents cool as they move away from the tropics, and the water becomes denser and sinks. The cold water moves back towards the equator as a deep sea current, driven by changes in the temperature and density of the water, before eventually welling up again towards the surface. Deep ocean water is cold, between about −1 °C and 3 °C, in all parts of the globe. The temperature gradient over the water depth is related to the way the surface water mixes, or fails to mix, with deeper water (a lack of mixing is called ocean stratification). This depends on the temperature: in the tropics the warm surface layer of about 100 m is quite stable and does not mix much with deeper water, while near the poles winter cooling and storms make the surface layer denser, so that it mixes to great depth and then stratifies again in summer. The photic depth is typically about 100 m (but varies) and is related to this heated surface layer. Temperature and salinity by region The temperature and salinity of ocean waters vary significantly across different regions. This is due to differences in the local water balance (precipitation vs. evaporation) and the "sea to air" temperature gradients. These characteristics can vary widely from one ocean region to another. The table below provides an illustration of the sort of values usually encountered. Sea ice Seawater with a typical salinity of 35‰ has a freezing point of about −1.8 °C (28.8 °F). Because sea ice is less dense than water, it floats on the ocean's surface (as does fresh water ice, which has an even lower density). Sea ice covers about 7% of the Earth's surface and about 12% of the world's oceans. Sea ice usually starts to freeze at the very surface, initially as a very thin ice film. As further freezing takes place, this ice film thickens and can form ice sheets. The ice formed incorporates some sea salt, but much less than the seawater it forms from. Because the ice forms with low salinity, the residual seawater becomes saltier.
This in turn increases density and promotes vertical sinking of the water. Ocean currents and global climate Types of ocean currents An ocean current is a continuous, directed flow of seawater caused by several forces acting upon the water. These include wind, the Coriolis effect, and temperature and salinity differences. Ocean currents are primarily horizontal water movements that have different origins, such as tides for tidal currents, or wind and waves for surface currents. Tidal currents are in phase with the tide and hence quasiperiodic; they arise from the gravitational pull of the Moon and Sun on the ocean water. Tidal currents may form various complex patterns in certain places, most notably around headlands. Non-periodic or non-tidal currents are created by the action of winds and changes in the density of water. In littoral zones, breaking waves are so intense and the water so shallow that currents often reach 1 to 2 knots. The wind and waves create surface currents (designated as "drift currents"). These currents can be decomposed into a quasi-permanent current (which varies on hourly timescales) and a Stokes drift driven by the rapid motion of the waves (which varies on timescales of a few seconds). The quasi-permanent current is accelerated by the breaking of waves and, to a lesser extent, by the friction of the wind on the surface. This acceleration of the current takes place in the direction of the waves and dominant wind. With increasing depth, the rotation of the Earth progressively turns the direction of the current while friction lowers its speed. At a certain depth the current flows opposite to the surface current, with its speed approaching zero; this structure is known as the Ekman spiral. The influence of these currents is mainly felt in the mixed layer of the ocean surface, down to a maximum depth of 400 to 800 meters.
These currents vary considerably with the seasons. If the mixed layer is thin (10 to 20 meters), the quasi-permanent current at the surface can adopt quite a different direction relative to the direction of the wind. In this case, the water column becomes virtually homogeneous above the thermocline. The wind blowing on the ocean surface will set the water in motion. The global pattern of winds (also called atmospheric circulation) creates a global pattern of ocean currents. These are driven not only by the wind but also by the rotation of the Earth (the Coriolis force). These major ocean currents include the Gulf Stream, Kuroshio Current, Agulhas Current and Antarctic Circumpolar Current. The Antarctic Circumpolar Current encircles Antarctica and influences the area's climate, connecting currents in several oceans. Relationship of currents and climate Collectively, currents move enormous amounts of water and heat around the globe, influencing climate. These wind-driven currents are largely confined to the top hundreds of meters of the ocean. At greater depth, the thermohaline circulation drives water motion. For example, the Atlantic meridional overturning circulation (AMOC) is driven by the cooling of surface waters in the polar latitudes in the north and south, creating dense water which sinks to the bottom of the ocean. This cold and dense water moves slowly away from the poles, which is why the waters in the deepest layers of the world ocean are so cold. This deep ocean water circulation is relatively slow, and water at the bottom of the ocean can be isolated from the ocean surface and atmosphere for hundreds or even a few thousand years. This circulation has important impacts on the global climate system and on the uptake and redistribution of pollutants and gases such as carbon dioxide, for example by moving contaminants from the surface into the deep ocean.
Ocean currents greatly affect Earth's climate by transferring heat from the tropics to the polar regions. This affects air temperature and precipitation in coastal regions and further inland. Surface heat and freshwater fluxes create global density gradients, which drive the thermohaline circulation that is a part of large-scale ocean circulation. It plays an important role in supplying heat to the polar regions, and thus in sea ice regulation. Oceans moderate the climate of locations where prevailing winds blow in from the ocean. At similar latitudes, a place on Earth with more influence from the ocean will have a more moderate climate than a place with more influence from land. For example, the cities San Francisco (37.8° N) and New York (40.7° N) have different climates because San Francisco has more influence from the ocean. San Francisco, on the west coast of North America, gets winds from the west over the Pacific Ocean. New York, on the east coast of North America, gets winds from the west over land, so New York has colder winters and hotter, earlier summers than San Francisco. Warmer ocean currents yield warmer climates in the long term, even at high latitudes. At similar latitudes, a place influenced by warm ocean currents will have a warmer climate overall than a place influenced by cold ocean currents. Changes in the thermohaline circulation are thought to have significant impacts on Earth's energy budget. Because the thermohaline circulation determines the rate at which deep waters reach the surface, it may also significantly influence atmospheric carbon dioxide concentrations. Modern observations, climate simulations and paleoclimate reconstructions suggest that the Atlantic meridional overturning circulation (AMOC) has weakened since the preindustrial era. The latest climate change projections in 2021 suggest that the AMOC is likely to weaken further over the 21st century.
Such a weakening could cause large changes to global climate, with the North Atlantic particularly vulnerable. Chemical properties Salinity Salinity is a measure of the total amount of dissolved salts in seawater. It was originally determined by measuring the amount of chloride in seawater and hence termed chlorinity. It is now standard practice to gauge it by measuring the electrical conductivity of the water sample. Salinity can be calculated from the chlorinity, which is a measure of the total mass of halogen ions (including fluorine, chlorine, bromine, and iodine) in seawater. By international agreement, the following formula is used to determine salinity: Salinity (in ‰) = 1.80655 × Chlorinity (in ‰) The average ocean water chlorinity is about 19.2‰, and, thus, the average salinity is around 34.7‰. Salinity has a major influence on the density of seawater. A zone of rapid salinity increase with depth is called a halocline. As seawater's salt content increases, the temperature at which its maximum density occurs decreases. Salinity affects both the freezing and boiling points of water, with the freezing point decreasing and the boiling point increasing as salinity rises. At atmospheric pressure, normal seawater freezes at a temperature of about −2 °C. Salinity is higher in Earth's oceans where there is more evaporation and lower where there is more precipitation. If precipitation exceeds evaporation, as is the case in polar and some temperate regions, salinity will be lower. Salinity will be higher if evaporation exceeds precipitation, as is sometimes the case in tropical regions. For example, evaporation is greater than precipitation in the Mediterranean Sea, which has an average salinity of 38‰, more saline than the global average of 34.7‰. Thus, oceanic waters in polar regions have lower salinity content than oceanic waters in tropical regions.
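The chlorinity relation given above can be applied directly. A minimal sketch in Python (the function name is illustrative, not part of any standard library):

```python
def salinity_from_chlorinity(chlorinity_permille: float) -> float:
    """Convert chlorinity to salinity (both in per mille, per the
    international formula S = 1.80655 x Cl)."""
    return 1.80655 * chlorinity_permille

# The average ocean chlorinity of about 19.2 per mille gives the
# average salinity of roughly 34.7 per mille quoted in the text.
print(round(salinity_from_chlorinity(19.2), 1))
```

This reproduces the round-trip between the two averages stated above: 1.80655 × 19.2 ≈ 34.7.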
However, when sea ice forms at high latitudes, salt is excluded from the ice as it forms, which can increase the salinity of the residual seawater in polar regions such as the Arctic Ocean. Due to the effects of climate change on oceans, observations of sea surface salinity between 1950 and 2019 indicate that regions of high salinity and evaporation have become more saline, while regions of low salinity and more precipitation have become fresher. It is very likely that the Pacific and Antarctic/Southern Oceans have freshened while the Atlantic has become more saline. Dissolved gases Ocean water contains large quantities of dissolved gases, including oxygen, carbon dioxide and nitrogen. These dissolve into ocean water via gas exchange at the ocean surface, with the solubility of these gases depending on the temperature and salinity of the water. The four most abundant gases in Earth's atmosphere and oceans are nitrogen, oxygen, argon, and carbon dioxide. In the ocean by volume, the most abundant gases dissolved in seawater are carbon dioxide (including bicarbonate and carbonate ions, 14 mL/L on average), nitrogen (9 mL/L), and oxygen (5 mL/L) at equilibrium. All gases are more soluble – more easily dissolved – in colder water than in warmer water. For example, when salinity and pressure are held constant, oxygen concentration in water almost doubles when the temperature drops from that of a warm summer day to freezing. Similarly, carbon dioxide and nitrogen gases are more soluble at colder temperatures, and their solubility changes with temperature at different rates. Oxygen, photosynthesis and carbon cycling Photosynthesis in the surface ocean releases oxygen and consumes carbon dioxide. Phytoplankton, a type of microscopic free-floating algae, controls this process. After the plants have grown, oxygen is consumed and carbon dioxide released as a result of bacterial decomposition of the organic matter created by photosynthesis in the ocean.
The sinking and bacterial decomposition of some organic matter in deep ocean water, at depths where the waters are out of contact with the atmosphere, leads to a reduction in oxygen concentrations and an increase in carbon dioxide, carbonate and bicarbonate. This cycling of carbon dioxide in oceans is an important part of the global carbon cycle. The oceans represent a major carbon sink for carbon dioxide taken up from the atmosphere by photosynthesis and by dissolution (see also carbon sequestration). There is also increased attention on carbon dioxide uptake in coastal marine habitats such as mangroves and saltmarshes. This process is often referred to as "blue carbon". The focus is on these ecosystems because they are strong carbon sinks as well as ecologically important habitats under threat from human activities and environmental degradation. As deep ocean water circulates throughout the globe, it contains gradually less oxygen and gradually more carbon dioxide the longer it spends away from the air at the surface. This gradual decrease in oxygen concentration happens as sinking organic matter continuously gets decomposed during the time the water is out of contact with the atmosphere. Most of the deep waters of the ocean still contain relatively high concentrations of oxygen, sufficient for most animals to survive. However, some ocean areas have very low oxygen due to long periods of isolation of the water from the atmosphere. These oxygen-deficient areas, called oxygen minimum zones or hypoxic waters, will generally be made worse by the effects of climate change on oceans. pH The pH value at the surface of oceans (global mean surface pH) is currently approximately in the range of 8.05 to 8.08. This makes it slightly alkaline. For the past 300 million years, the pH value at the surface remained about 8.2. However, between 1950 and 2020, the average pH of the ocean surface fell from approximately 8.15 to 8.05.
Carbon dioxide emissions from human activities are the primary cause of this process, called ocean acidification, with atmospheric carbon dioxide (CO2) levels exceeding 410 ppm (in 2020). CO2 from the atmosphere is absorbed by the oceans. This produces carbonic acid (H2CO3), which dissociates into a bicarbonate ion (HCO3−) and a hydrogen ion (H+). The presence of free hydrogen ions (H+) lowers the pH of the ocean. There is a natural gradient of pH in the ocean, related to the breakdown of organic matter in deep water, which slowly lowers the pH with depth: the pH value of seawater is naturally as low as 7.8 in deep ocean waters as a result of degradation of organic matter there, and can be as high as 8.4 in surface waters in areas of high biological productivity. The definition of global mean surface pH refers to the top layer of the water in the ocean, up to around 20 to 100 m depth. In comparison, the average depth of the ocean is about 4 km. The pH value at greater depths (more than 100 m) has not yet been affected by ocean acidification in the same way. There is a large body of deeper water where the natural gradient of pH from 8.2 to about 7.8 still exists; it will take a very long time to acidify these waters, and equally long to recover from that acidification. But as the top layer of the ocean (the photic zone) is crucial for its marine productivity, any changes to the pH value and temperature of the top layer can have many knock-on effects, for example on marine life and ocean currents (see also effects of climate change on oceans). The key issue in terms of the penetration of ocean acidification is the way the surface water mixes, or fails to mix, with deeper water (a lack of mixing is called ocean stratification). This in turn depends on the water temperature and hence differs between the tropics and the polar regions (see the section on temperature above).
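Because pH is a logarithmic scale (pH = −log10[H+]), seemingly small pH shifts correspond to large changes in hydrogen-ion concentration. A sketch of the arithmetic, using the natural surface-to-deep gradient of about 8.2 to 7.8 mentioned above:

```python
def hydrogen_ion_conc(ph: float) -> float:
    """Hydrogen-ion concentration (mol/L) from pH, via pH = -log10[H+]."""
    return 10.0 ** (-ph)

surface_ph, deep_ph = 8.2, 7.8
ratio = hydrogen_ion_conc(deep_ph) / hydrogen_ion_conc(surface_ph)
# A drop of 0.4 pH units corresponds to roughly 2.5 times more H+ ions.
print(round(ratio, 2))
```

The same arithmetic explains why the measured surface decline from 8.15 to 8.05 since 1950, though numerically small, represents a substantial (roughly 26%) increase in acidity.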
The chemical properties of seawater complicate pH measurement, and several distinct pH scales exist in chemical oceanography. There is no universally accepted reference pH-scale for seawater and the difference between measurements based on multiple reference scales may be up to 0.14 units. Alkalinity Alkalinity is the balance of base (proton acceptors) and acids (proton donors) in seawater, or indeed any natural waters. The alkalinity acts as a chemical buffer, regulating the pH of seawater. While there are many ions in seawater that can contribute to the alkalinity, many of these are at very low concentrations. This means that the carbonate, bicarbonate and borate ions are the only significant contributors to seawater alkalinity in the open ocean with well oxygenated waters. The first two of these ions contribute more than 95% of this alkalinity. The chemical equation for alkalinity in seawater is: AT = [HCO3−] + 2[CO32-] + [B(OH)4−] The growth of phytoplankton in surface ocean waters leads to the conversion of some bicarbonate and carbonate ions into organic matter. Some of this organic matter sinks into the deep ocean where it is broken down back into carbonate and bicarbonate. This process is related to ocean productivity or marine primary production. Thus alkalinity tends to increase with depth and also along the global thermohaline circulation from the Atlantic to the Pacific and Indian Ocean, although these increases are small. The concentrations vary overall by only a few percent. The absorption of CO2 from the atmosphere does not affect the ocean's alkalinity. It does lead to a reduction in pH value though (termed ocean acidification). Residence times of chemical elements and ions The ocean waters contain many chemical elements as dissolved ions. Elements dissolved in ocean waters have a wide range of concentrations. 
Some elements have very high concentrations of several grams per liter, such as sodium and chloride, together making up the majority of ocean salts. Other elements, such as iron, are present at tiny concentrations of just a few nanograms (10−9 grams) per liter. The concentration of any element depends on its rate of supply to the ocean and its rate of removal. Elements enter the ocean from rivers, the atmosphere and hydrothermal vents. Elements are removed from ocean water by sinking and becoming buried in sediments, or by evaporating to the atmosphere in the case of water and some gases. By estimating the residence time of an element, oceanographers examine the balance of input and removal. Residence time is the average time the element would spend dissolved in the ocean before it is removed. Highly abundant elements in ocean water, such as sodium, have high input rates. This reflects high abundance in rocks and rapid rock weathering, paired with very slow removal from the ocean due to sodium ions being comparatively unreactive and highly soluble. In contrast, other elements such as iron and aluminium are abundant in rocks but very insoluble, meaning that inputs to the ocean are low and removal is rapid. These cycles represent part of the major global cycle of elements that has gone on since the Earth first formed. The residence times of the very abundant elements in the ocean are estimated to be millions of years, while for highly reactive and insoluble elements, residence times are only hundreds of years. Nutrients A few elements that are essential for life, such as nitrogen, phosphorus, iron, and potassium, are major components of biological material and are commonly known as "nutrients". Nitrate and phosphate have ocean residence times of 10,000 and 69,000 years, respectively, while potassium is a much more abundant ion in the ocean, with a residence time of 12 million years.
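Residence time is estimated as the total dissolved inventory of an element divided by its rate of supply (or removal, since the two balance at steady state). A sketch of the calculation, using made-up placeholder numbers rather than measured values for any real element:

```python
def residence_time_years(ocean_inventory_kg: float,
                         input_rate_kg_per_year: float) -> float:
    """Residence time = amount dissolved in the ocean / rate of supply.

    At steady state the input rate equals the removal rate, so either
    may be used in the denominator.
    """
    return ocean_inventory_kg / input_rate_kg_per_year

# Hypothetical element: 1e18 kg dissolved in the ocean, with rivers
# supplying 1e11 kg per year -> residence time of 10 million years.
print(residence_time_years(1e18, 1e11))
```

The same ratio explains the contrast drawn above: soluble, unreactive ions such as sodium accumulate a huge inventory relative to their removal rate (long residence time), while insoluble, reactive elements such as iron are removed almost as fast as they arrive (short residence time).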
The biological cycling of these elements means that this represents a continuous removal process from the ocean's water column, as degrading organic material sinks to the ocean floor as sediment. Phosphate from intensive agriculture and untreated sewage is transported via runoff to rivers and coastal zones to the ocean, where it is metabolized. Eventually, it sinks to the ocean floor and is no longer available to humans as a commercial resource. Production of rock phosphate, an essential ingredient in inorganic fertilizer, is a slow geological process that occurs in some of the world's ocean sediments, rendering mineable sedimentary apatite (phosphate) a non-renewable resource (see peak phosphorus). This continual net deposition loss of non-renewable phosphate from human activities may become a resource issue for fertilizer production and food security in the future. Marine life Life within the ocean evolved 3 billion years prior to life on land. Both the depth and the distance from shore strongly influence the biodiversity of the plants and animals present in each region. The diversity of life in the ocean is immense, including: Animals: most animal phyla have species that inhabit the ocean, including many that are found only in marine environments, such as sponges, Cnidaria (such as corals and jellyfish), comb jellies, Brachiopods, and Echinoderms (such as sea urchins and sea stars). Many other familiar animal groups primarily live in the ocean, including cephalopods (including octopus and squid), crustaceans (including lobsters, crabs, and shrimp), fish, sharks, and cetaceans (including whales, dolphins, and porpoises). In addition, many land animals have adapted to living a major part of their life on the oceans. For instance, seabirds are a diverse group of birds that have adapted to a life mainly on the oceans. They feed on marine animals and spend most of their lifetime on water, many going on land only for breeding.
Other birds that have adapted to oceans as their living space are penguins, seagulls and pelicans. Seven species of turtles, the sea turtles, also spend most of their time in the oceans. Plants: including seagrasses and mangroves Algae: algae is a "catch-all" term for many photosynthetic, single-celled eukaryotes, such as green algae, diatoms, and dinoflagellates, but also multicellular algae, such as some red algae (including organisms like Pyropia, the source of the edible nori seaweed), and brown algae (including organisms like kelp). Bacteria: ubiquitous single-celled prokaryotes found throughout the world Archaea: prokaryotes distinct from bacteria, that inhabit many environments of the ocean, as well as many extreme environments Fungi: many marine fungi with diverse roles are found in oceanic environments Human uses of the oceans The ocean has been linked to human activity throughout history. These activities serve a wide variety of purposes, including navigation and exploration, naval warfare, travel, shipping and trade, food production (e.g. fishing, whaling, seaweed farming, aquaculture), leisure (cruising, sailing, recreational boat fishing, scuba diving), power generation (see marine energy and offshore wind power), extractive industries (offshore drilling and deep sea mining), and freshwater production via desalination. Many of the world's goods are moved by ship between the world's seaports. Large quantities of goods are transported across the ocean, especially across the Atlantic and around the Pacific Rim. Many types of cargo, including manufactured goods, are typically transported in standard-sized, lockable containers that are loaded on purpose-built container ships at dedicated terminals. Containerization greatly boosted the efficiency and reduced the cost of shipping products by sea. This was a major factor in the rise of globalization and exponential increases in international trade in the mid-to-late 20th century.
Oceans are also the major supply source for the fishing industry. Some of the major harvests are shrimp, fish, crabs, and lobster. The biggest global commercial fisheries are for anchovies, Alaska pollock and tuna. A report by the FAO in 2020 stated that "in 2017, 34 percent of the fish stocks of the world's marine fisheries were classified as overfished". Fish and other fishery products from both wild fisheries and aquaculture are among the most widely consumed sources of protein and other essential nutrients. Data in 2017 showed that "fish consumption accounted for 17 percent of the global population's intake of animal proteins". To fulfill this need, coastal countries have exploited marine resources in their exclusive economic zones. Fishing vessels are increasingly venturing out to exploit stocks in international waters. The ocean has a vast amount of energy carried by ocean waves, tides, salinity differences, and ocean temperature differences, which can be harnessed to generate electricity. Forms of sustainable marine energy include tidal power, ocean thermal energy and wave power. Offshore wind power is captured by wind turbines placed out on the ocean; it has the advantage that wind speeds are higher than on land, though wind farms are more costly to construct offshore. There are large deposits of petroleum, as oil and natural gas, in rocks beneath the ocean floor. Offshore platforms and drilling rigs extract the oil or gas and store it for transport to land. "Freedom of the seas" is a principle in international law dating from the seventeenth century. It stresses freedom to navigate the oceans and disapproves of war fought in international waters. Today, this concept is enshrined in the United Nations Convention on the Law of the Sea (UNCLOS). The International Maritime Organization (IMO), whose founding convention entered into force in 1958, is mainly responsible for maritime safety, liability and compensation, and has held some conventions on marine pollution related to shipping incidents.
Ocean governance is the conduct of the policy, actions and affairs regarding the world's oceans. Threats from human activities Human activities affect marine life and marine habitats through many negative influences, such as marine pollution (including marine debris and microplastics), overfishing, ocean acidification and other effects of climate change on oceans. Climate change Marine pollution Overfishing Protection Ocean protection serves to safeguard the ecosystems in the oceans upon which humans depend. Protecting these ecosystems from threats is a major component of environmental protection. One protective measure is the creation and enforcement of marine protected areas (MPAs). Marine protection may need to be considered within a national, regional and international context. Other measures include supply chain transparency requirement policies, policies to prevent marine pollution, ecosystem assistance (e.g. for coral reefs) and support for sustainable seafood (e.g. sustainable fishing practices and types of aquaculture). There is also the protection of marine resources and components whose extraction or disturbance would cause substantial harm, the engagement of broader publics and impacted communities, and the development of ocean clean-up projects (removal of marine plastic pollution). Examples of the latter include Clean Oceans International and The Ocean Cleanup. In 2021, 43 expert scientists published the first version of a scientific framework that – via integration, review, clarification and standardization – enables the evaluation of levels of protection of marine protected areas and can serve as a guide for subsequent efforts to improve, plan and monitor the quality and extent of marine protection. Examples are the efforts towards the 30%-protection goal of the "Global Deal For Nature" and the UN's Sustainable Development Goal 14 ("life below water"). In March 2023, a High Seas Treaty was signed. It is legally binding.
Its main achievement is the new possibility of creating marine protected areas in international waters. The agreement thus makes it possible to protect 30% of the oceans by 2030 (part of the 30 by 30 target). The treaty has articles regarding the polluter-pays principle and the various impacts of human activities, including in areas beyond the national jurisdiction of the countries carrying out those activities. The agreement was adopted by the 193 United Nations Member States.
River
A river is a natural freshwater stream that flows on land or inside caves towards another body of water at a lower elevation, such as an ocean, lake, or another river. A river may run dry before reaching the end of its course if it runs out of water, or may only flow during certain seasons. Rivers are regulated by the water cycle, the processes by which water moves around the Earth. Water first enters rivers through precipitation, whether from rainfall, the runoff of water down a slope, the melting of glaciers or snow, or seepage from aquifers beneath the surface of the Earth. Rivers flow in channeled watercourses and merge in confluences to form drainage basins, areas where surface water eventually flows to a common outlet. Rivers have a great effect on the landscape around them. They may regularly overflow their banks and flood the surrounding area, spreading nutrients across it. Sediment or alluvium carried by rivers shapes the landscape around them, forming deltas and islands where the flow slows down. Rivers rarely run in a straight line; instead, they bend or meander, and the locations of a river's banks can change frequently. Rivers get their alluvium from erosion, which carves rock into canyons and valleys. Rivers have sustained human and animal life for millennia, including the first human civilizations. The organisms that live around or in a river, such as fish, aquatic plants, and insects, have different roles, including processing organic matter and predation. Rivers have provided abundant resources for humans, including food, transportation, drinking water, and recreation. Humans have engineered rivers to prevent flooding, irrigate crops, perform work with water wheels, and produce hydroelectricity from dams. People associate rivers with life and fertility and have strong religious, political, social, and mythological attachments to them. Rivers and river ecosystems are threatened by water pollution, climate change, and human activity.
The construction of dams, canals, levees, and other engineered structures has eliminated habitats, caused the extinction of some species, and lowered the amount of alluvium flowing through rivers. Decreased snowfall from climate change has resulted in less water being available for rivers during the summer. Regulation of pollution, dam removal, and sewage treatment have helped to improve water quality and restore river habitats. Topography Definition A river is a natural flow of freshwater that flows on or through land downhill towards another body of water. This flow can be into a lake, an ocean, or another river. A stream refers to water that flows in a natural channel, a geographic feature that can contain flowing water. A stream may also be referred to as a watercourse. The study of the movement of water as it occurs on Earth is called hydrology, and the effect of water on the landscape is covered by geomorphology. Source and drainage basin Rivers are part of the water cycle, the continuous processes by which water moves about Earth. This means that all water that flows in rivers must ultimately come from precipitation. The sides of rivers have land that is at a higher elevation than the river itself, and in these areas, water flows downhill into the river. The headwaters of a river are the smaller streams that feed a river and make up the river's source. These streams may be small and flow rapidly down the sides of mountains. All of the land uphill of a river that feeds it with water in this way is in that river's drainage basin or watershed. A ridge of higher elevation land is what typically separates drainage basins; water on one side of a ridge will flow into one set of rivers, and water on the other side will flow into another. One example of this is the Continental Divide of the Americas in the Rocky Mountains. Water on the western side of the divide flows into the Pacific Ocean, whereas water on the other side flows into the Atlantic Ocean.
Not all precipitation flows directly into rivers; some water seeps into underground aquifers. These, in turn, can still feed rivers via the water table, the upper surface of the groundwater stored in the soil. Water flows into rivers in places where the river's elevation is lower than that of the water table. This phenomenon is why rivers can still flow even during times of drought. Rivers are also fed by the melting of snow and glaciers present in higher elevation regions. In summer months, higher temperatures melt snow and ice, causing additional water to flow into rivers. Glacier melt can supplement snow melt in times like the late summer, when there may be less snow left to melt, helping to ensure that the rivers downstream of the glaciers have a continuous supply of water. The flow of rivers Rivers flow downhill, with their direction determined by gravity. A common misconception holds that all or most rivers flow from north to south, but this is not true. As rivers flow downstream, they eventually merge to form larger rivers. A river that feeds into another is a tributary, and the place they meet is a confluence. Rivers must flow to lower altitudes due to gravity. The bed of a river is typically within a river valley between hills or mountains. Rivers flowing through an impermeable section of land, such as rock, will erode the slopes on the sides of the river. When a river carves a plateau or a similar high-elevation area, a canyon can form, with cliffs on either side of the river. Areas of a river with softer rock weather faster than areas with harder rock, causing a difference in elevation between two points of a river. This can cause the formation of a waterfall as the river's flow falls down a vertical drop. A river in a permeable area does not exhibit this behavior and may even have raised banks due to sediment. Rivers also change their landscape through their transportation of sediment, often known as alluvium when applied specifically to rivers. 
This debris comes from erosion performed by the rivers themselves, debris swept into rivers by rainfall, as well as erosion caused by the slow movement of glaciers. The sand in deserts and the sediment that forms bar islands is from rivers. The particle size of the debris is gradually sorted by the river, with heavier particles like rocks sinking to the bottom, and finer particles like sand or silt carried further downriver. This sediment may be deposited in river valleys or carried to the sea. The sediment yield of a river is the quantity of sediment per unit area within a watershed that is removed over a period of time. The monitoring of the sediment yield of a river is important for ecologists to understand the health of its ecosystems, the rate of erosion of the river's environment, and the effects of human activity. Rivers rarely run in a straight direction, instead preferring to bend or meander. This is because any natural impediment to the flow of the river may cause the current to deflect in a different direction. When this happens, the alluvium carried by the river can build up against this impediment, redirecting the course of the river. The flow is then directed against the opposite bank of the river, which will erode into a more concave shape to accommodate the flow. The bank will still block the flow, causing it to deflect in the other direction. Thus, a bend in the river is created. Rivers may run through low, flat regions on their way to the sea. These places may have floodplains that are periodically flooded when there is a high level of water running through the river. These events may be referred to as "wet seasons" and "dry seasons" when the flooding is predictable due to the climate. The alluvium carried by rivers, laden with minerals, is deposited into the floodplain when the banks spill over, providing new nutrients to the soil and allowing it to support human activity like farming as well as a host of plant and animal life. 
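Sediment yield, as defined above, amounts to a simple rate: the mass of sediment removed, divided by the drainage basin's area and the observation period. A minimal sketch with invented numbers (not measurements of any real river):

```python
# Hypothetical figures for illustration only.
sediment_mass_tonnes = 1_500_000   # sediment leaving the basin over the period
basin_area_km2 = 5_000             # drainage basin area
period_years = 10                  # observation period

# Sediment yield: mass per unit basin area per unit time.
sediment_yield = sediment_mass_tonnes / (basin_area_km2 * period_years)
print(sediment_yield)              # 30.0 tonnes per km² per year
```

Comparing this rate across years is one way ecologists can track changes in erosion within a watershed.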
Deposited sediment from rivers can form temporary or long-lasting fluvial islands. These islands exist in almost every river. Non-perennial rivers About half of all waterways on Earth are intermittent rivers, which do not always have a continuous flow of water throughout the year. This may be because an arid climate is, in some seasons, too dry to support a stream, or because a river is seasonally frozen in the winter (such as in an area with substantial permafrost), or in the headwaters of rivers in mountains, where snowmelt is required to fuel the river. These rivers can appear in a variety of climates, and still provide a habitat for aquatic life and perform other ecological functions. Subterranean rivers Subterranean rivers may flow underground through flooded caves. This can happen in karst systems, where rock dissolves to form caves. These rivers provide a habitat for diverse microorganisms and have become an important target of study by microbiologists. Other rivers and streams have been covered over or converted to run in tunnels due to human development. These rivers do not typically host any life, and are often used only for stormwater or flood control. One such example is Sunswick Creek in New York City, which was covered in the 1800s and now exists only as a sewer-like pipe. The terminus While rivers may flow into lakes or man-made features such as reservoirs, the water they contain will always tend to flow down toward the ocean. However, if human activity siphons too much water away from a river for other uses, the riverbed may run dry before reaching the sea. The outlet, or mouth, of a river can take several forms. Tidal rivers (often part of an estuary) have their levels rise and fall with the tide. Since the levels of these rivers are often already at or near sea level, the flow of alluvium and the brackish water that flows in these rivers may be either upriver or downriver depending on the time of day. 
Rivers that are not tidal may form deltas that continuously deposit alluvium into the sea from their mouths. Depending on the activity of waves, the strength of the river, and the strength of the tidal current, the sediment can accumulate to form new land. When viewed from above, a delta can appear to take the form of several triangular shapes as the river mouth appears to fan out from the original coastline. Classification In hydrology, a stream order is a positive integer used to describe the level of river branching in a drainage basin. Several systems of stream order exist, one of which is the Strahler number. In this system, the first tributaries of a river are 1st order rivers. When two rivers of the same order merge, the resulting river is one order higher; for example, two 1st order rivers merge to form a 2nd order river. When rivers of different orders merge, the resulting river takes the higher of the two orders. Stream order is correlated with, and thus can be used to predict, certain properties of rivers, such as the size of the drainage basin (drainage area) and the length of the channel. Ecology Models River Continuum Concept The ecosystem of a river includes the life that lives in its water, on its banks, and in the surrounding land. Important characteristics of a river ecosystem include the width of the channel, the velocity of the water, and how shaded the river is by nearby trees. Creatures in a river ecosystem may be divided into many roles based on the River Continuum Concept. "Shredders" are organisms that consume coarse organic material, such as leaves that fall into the water. The role of a "grazer" or "scraper" organism is to feed on the algae that collects on rocks and plants. "Collectors" consume the detritus of dead organisms. Lastly, predators feed on living things to survive. The river can then be modeled by the availability of resources for each creature's role. A shady area with deciduous trees might experience frequent deposits of organic matter in the form of leaves. In this type of ecosystem, collectors and shredders will be most active. 
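The Strahler ordering rule is simple enough to sketch in code. The following Python sketch is illustrative only; the function and its input representation are invented for this example, not part of any standard hydrology library:

```python
def strahler(tributary_orders):
    """Return the Strahler order of a river segment, given the Strahler
    orders of the segments that merge to form it."""
    if not tributary_orders:
        # A headwater stream with no tributaries is 1st order.
        return 1
    orders = sorted(tributary_orders, reverse=True)
    # Two tributaries sharing the highest order: order increases by one.
    if len(orders) > 1 and orders[0] == orders[1]:
        return orders[0] + 1
    # Otherwise the segment keeps the highest tributary's order.
    return orders[0]

# Two 1st order headwaters merge into a 2nd order stream:
assert strahler([1, 1]) == 2
# A 2nd order stream joined by a 1st order tributary stays 2nd order:
assert strahler([2, 1]) == 2
```

Applying this rule from the headwaters downstream assigns every segment of a drainage basin an order, with the main stem receiving the highest value.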
As the river becomes deeper and wider, it may move slower and receive more sunlight. This supports invertebrates and a variety of fish, as well as scrapers feeding on algae. Further downstream, the river may get most of its energy from organic matter that was already processed upstream by collectors and shredders. Predators may be more active here, including fish that feed on plants, plankton, and other fish. Flood pulse concept The flood pulse concept focuses on habitats that flood seasonally, including lakes and marshes. The land that interfaces with a water body is that body's riparian zone. Plants in the riparian zone of a river help stabilize its banks to prevent erosion and filter alluvium deposited by the river on the shore, including processing the nitrogen and other nutrients it contains. Forests in a riparian zone also provide important animal habitats. Fish zonation concept River ecosystems have also been categorized based on the variety of aquatic life they can sustain, also known as the fish zonation concept. Smaller rivers can only sustain smaller fish that can comfortably fit in their waters, whereas larger rivers can contain both small fish and large fish. This means that larger rivers can host a larger variety of species. This is analogous to the species-area relationship, the concept of larger habitats being host to more species. In this case, it is known as the species-discharge relationship, referring specifically to the discharge of a river, the amount of water passing through it at a particular time. Movement of organisms The flow of a river can act as a means of transportation for plant and animal species, as well as a barrier. For example, the Amazon River is so wide in parts that the varieties of species on either side of its basin are distinct. Some fish may swim upstream to spawn as part of a seasonal migration. Species that travel from the sea to breed in freshwater rivers are anadromous. 
Salmon are an anadromous fish that may die in the river after spawning, contributing nutrients back to the river ecosystem. Human uses Infrastructure Modern river engineering involves large-scale collections of independent structures built with the goals of flood control, improved navigation, recreation, and ecosystem management. Many of these projects have the effect of normalizing the effects of rivers; the greatest floods are smaller and more predictable, and larger sections are open for navigation by boats and other watercraft. A major effect of river engineering has been a reduced sediment output of large rivers. For example, the Mississippi River historically produced 400 million tons of sediment per year. Due to the construction of reservoirs, sediment buildup in man-made levees, and the removal of natural banks replaced with revetments, this sediment output has been reduced by 60%. The most basic river projects involve the clearing of obstructions like fallen trees. This can scale up to dredging, the excavation of sediment buildup in a channel, to provide a deeper area for navigation. These activities require regular maintenance as the location of the river banks changes over time, floods bring foreign objects into the river, and natural sediment buildup continues. Artificial channels are often constructed to "cut off" winding sections of a river with a shorter path, or to direct the flow of a river in a straighter direction. This effect, known as channelization, has shortened the distance required to traverse the Missouri River. Dikes are structures built perpendicular to the flow of the river beneath its surface. These help rivers flow straighter by increasing the speed of the water at the middle of the channel, helping to control floods. Levees are also used for this purpose. They can be thought of as dams constructed on the sides of rivers, meant to hold back water from flooding the surrounding area during periods of high rainfall. 
They are often constructed by building up the natural terrain with soil or clay. Some levees are supplemented with floodways, channels used to redirect floodwater away from farms and populated areas. Dams restrict the flow of water through a river. They can be built for navigational purposes, providing a higher level of water upstream for boats to travel in. They may also be used for hydroelectricity, or power generation from rivers. Dams typically transform a section of the river behind them into a lake or reservoir. This can provide nearby cities with a predictable supply of drinking water. Hydroelectricity is desirable as a form of renewable energy that does not require any inputs beyond the river itself. Dams are very common worldwide, with at least 75,000 higher than in the U.S. Globally, reservoirs created by dams cover . Dam-building reached a peak in the 1970s, when two or three dams were completed every day, and has since begun to decline. New dam projects are primarily focused in China, India, and other areas in Asia. History Pre-industrial era The first civilizations of Earth were born on floodplains between 5,500 and 3,500 years ago. The freshwater, fertile soil, and transportation provided by rivers helped create the conditions for complex societies to emerge. Three such civilizations were the Sumerians in the Tigris–Euphrates river system, the Ancient Egyptian civilization on the Nile, and the Indus Valley Civilization on the Indus River. The desert climates of the surrounding areas made these societies especially reliant on rivers for survival, leading to people clustering in these areas to form the first cities. It is also thought that these civilizations were the first to organize the irrigation of desert environments for growing food. Growing food at scale allowed people to specialize in other roles, form hierarchies, and organize themselves in new ways, leading to the birth of civilization. 
In pre-industrial society, rivers were a source of transportation and abundant resources. Many civilizations depended on what resources were local to them to survive. Shipping of commodities, especially the floating of wood on rivers to transport it, was especially important. Rivers also were an important source of drinking water. For civilizations built around rivers, fish were an important part of the diet of humans. Some rivers supported fishing activities, but were ill-suited to farming, such as those in the Pacific Northwest. Other animals that live in or near rivers like frogs, mussels, and beavers could provide food and valuable goods such as fur. Humans have been building infrastructure to use rivers for thousands of years. The Sadd el-Kafara dam near Cairo, Egypt, is an ancient dam built on the Nile 4,500 years ago. The Ancient Roman civilization used aqueducts to transport water to urban areas. Spanish Muslims used mills and water wheels beginning in the seventh century. Between 130 and 1492, larger dams were built in Japan, Afghanistan, and India, including 20 dams higher than . Canals began to be cut in Egypt as early as 3000 BC, and the mechanical shadoof began to be used to raise the elevation of water. Drought years harmed crop yields, and leaders of society were incentivized to ensure regular water and food availability to remain in power. Engineering projects like the shadoof and canals could help prevent these crises. Despite this, there is evidence that floodplain-based civilizations may have been abandoned occasionally at a large scale. This has been attributed to unusually large floods destroying infrastructure; however, there is evidence that permanent changes to climate causing higher aridity and lower river flow may have been the determining factor in what river civilizations succeeded or dissolved. Water wheels began to be used at least 2,000 years ago to harness the energy of rivers. 
Water wheels turn an axle that can supply rotational energy to move water into aqueducts, work metal using a trip hammer, and grind grains with a millstone. In the Middle Ages, water mills began to automate many aspects of manual labor, and spread rapidly. By 1300, there were at least 10,000 mills in England alone. A medieval watermill could do the work of 30–60 human workers. Water mills were often used in conjunction with dams to focus and increase the speed of the water. Water wheels continued to be used up to and through the Industrial Revolution as a source of power for textile mills and other factories, but were eventually supplanted by steam power. Industrial era Rivers became more industrialized with the growth of technology and the human population. As fish and water could be brought from elsewhere, and goods and people could be transported via railways, pre-industrial river uses diminished in favor of more complex uses. This meant that the local ecosystems of rivers needed less protection as humans became less reliant on them for their continued flourishing. River engineering began to develop projects that enabled industrial hydropower, canals for the more efficient movement of goods, as well as projects for flood prevention. River transportation has historically been significantly cheaper and faster than transportation by land. Rivers helped fuel urbanization as goods such as grain and fuel could be floated downriver to supply cities with resources. River transportation is also important for the lumber industry, as logs can be shipped via river. Countries with dense forests and networks of rivers like Sweden have historically benefited the most from this method of trade. The rise of highways and the automobile has made this practice less common. One of the first large canals was the Canal du Midi, connecting rivers within France to create a path from the Atlantic Ocean to the Mediterranean Sea. 
The nineteenth century saw canal-building become more common, with the U.S. building of canals by 1830. Rivers began to be used by cargo ships at a larger scale, and these canals were used in conjunction with river engineering projects like dredging and straightening to ensure the efficient flow of goods. One of the largest such projects is that of the Mississippi River, whose drainage basin covers 40% of the contiguous United States. The river was then used for shipping crops from the American Midwest and cotton from the American South to other states as well as the Atlantic Ocean. The role of urban rivers has evolved from when they were a center of trade, food, and transportation to modern times when these uses are less necessary. Rivers remain central to the cultural identity of cities and nations. Famous examples include the River Thames's relationship to London, the Seine to Paris, and the Hudson River to New York City. The restoration of water quality and recreation to urban rivers has been a goal of modern administrations. For example, swimming was banned in the Seine for over 100 years due to concerns about pollution and the spread of E. coli, until cleanup efforts allowed its use in the 2024 Summer Olympics. Another example is the restoration of the Isar in Munich from being a fully canalized channel with hard embankments to being wider with naturally sloped banks and vegetation. This has improved wildlife habitat in the Isar, and provided more opportunities for recreation in the river. Politics of rivers As a natural barrier, rivers are often used as a border between countries, cities, and other territories. For example, the Lamari River in New Guinea separates the Angu and the Fore people. The two cultures speak different languages and rarely mix. 23% of international borders are large rivers (defined as those over 30 meters wide). 
The traditional northern border of the Roman Empire was the Danube, a river that today forms the border of Hungary and Slovakia. Since the flow of a river is rarely static, the exact location of a river border may be called into question by countries. The Rio Grande between the United States and Mexico is regulated by the International Boundary and Water Commission to manage the right to fresh water from the river, as well as mark the exact location of the border. Up to 60% of fresh water used by countries comes from rivers that cross international borders. This can cause disputes between countries that lie upstream and downstream of the river. A country that is downstream of another may object to the upstream country diverting too much water for agricultural uses, pollution, as well as the creation of dams that change the river's flow characteristics. For example, Egypt has an agreement with Sudan requiring a specific minimum volume of water to pass over the Aswan Dam into the Nile yearly, to maintain both countries' access to water. Religion and mythology The importance of rivers throughout human history has given them an association with life and fertility. They have also become associated with the reverse, death and destruction, especially through floods. This power has caused rivers to have a central role in religion, ritual, and mythology. In Greek mythology, the underworld is bordered by several rivers. Ancient Greeks believed that the souls of those who perished had to be borne across the River Styx on a boat by Charon in exchange for money. Souls that were judged to be good were admitted to Elysium and permitted to drink water from the River Lethe to forget their previous life. Rivers also appear in descriptions of paradise in Abrahamic religions, beginning with the story of Genesis. A river beginning in the Garden of Eden waters the garden and then splits into four rivers that flow to provide water to the world. 
These rivers include the Tigris and Euphrates, and two rivers that are possibly apocryphal but may refer to the Nile and the Ganges. The Quran describes these four rivers as flowing with water, milk, wine, and honey, respectively. The book of Genesis also contains a story of a great flood. Similar myths are present in the Epic of Gilgamesh, Sumerian mythology, and in other cultures. In Genesis, the flood's role was to cleanse Earth of the wrongdoing of humanity. The act of water working to cleanse humans in a ritualistic sense has been compared to the Christian ritual of baptism, famously the Baptism of Jesus in the Jordan River. Floods also appear in Norse mythology, where the world is said to emerge from a void that eleven rivers flowed into. Aboriginal Australian religion and Mesoamerican mythology also have stories of floods, some of which contain no survivors, unlike the Abrahamic flood. Along with mythological rivers, religions have also revered specific rivers as sacred. The Ancient Celtic religion saw rivers as goddesses. The Nile had many gods attached to it. The tears of the goddess Isis were said to be the cause of the river's yearly flooding, itself personified by the god Hapi. Many African religions regard certain rivers as the originator of life. In Yoruba religion, Yemọja rules over the Ogun River in modern-day Nigeria and is responsible for creating all children and fish. Some sacred rivers have religious prohibitions attached to them, such as not being allowed to drink from them or ride in a boat along certain stretches. In these religions, such as that of the Altai in Russia, the river is considered a living being that must be afforded respect. Rivers are some of the most sacred places in Hinduism. There is archaeological evidence of mass ritual bathing in rivers in the Indus river valley at least 5,000 years ago. While most rivers in India are revered, the Ganges is most sacred. 
The river has a central role in various Hindu myths, and its water is said to have properties of healing as well as absolution from sins. Hindus believe that when the cremated remains of a person are released into the Ganges, their soul is released from the mortal world. Threats Freshwater fish make up 40% of the world's fish species, but 20% of these species are known to have gone extinct in recent years. Human uses of rivers make these species especially vulnerable. Dams and other engineered changes to rivers can block the migration routes of fish and destroy habitats. Rivers that flow freely from headwaters to the sea have better water quality, and also retain their ability to transport nutrient-rich alluvium and other organic material downstream, keeping the ecosystem healthy. The creation of a lake changes the habitat of that portion of water, and blocks the transportation of sediment, as well as preventing the natural meandering of the river. Dams block the migration of fish such as salmon; fish ladders and other bypass systems have been attempted, but these are not always effective. Pollution from factories and urban areas can also damage water quality. Per- and polyfluoroalkyl substances (PFAS) are a widely used class of chemicals that break down at a slow rate. They have been found in the bodies of humans and animals worldwide, as well as in the soil, with potentially negative health effects. Research into how to remove them from the environment, and how harmful exposure is, is ongoing. Fertilizer from farms can lead to a proliferation of algae on the surface of rivers and oceans, which prevents light from penetrating and oxygen from dissolving into the water, making it impossible for underwater life to survive in these so-called dead zones. Urban rivers are typically surrounded by impermeable surfaces like stone, asphalt, and concrete. Cities often have storm drains that direct runoff from these surfaces to rivers. This can cause flooding risk as large amounts of water are directed into the rivers. 
Due to these impermeable surfaces, these rivers often have very little alluvium carried in them, causing more erosion once the river exits the impermeable area. It has historically been common for sewage to be directed directly to rivers via sewer systems without being treated, along with pollution from industry. This has resulted in a loss of animal and plant life in urban rivers, as well as the spread of waterborne diseases such as cholera. In modern times, sewage treatment and controls on pollution from factories have improved the water quality of urban rivers. Climate change can change the flooding cycles and water supply available to rivers. Floods can be larger and more destructive than expected, causing damage to the surrounding areas. Floods can also wash unhealthy chemicals and sediment into rivers. Droughts can be deeper and longer, causing rivers to run dangerously low. This is in part because of a projected loss of snowpack in mountains, meaning that melting snow cannot replenish rivers during warm summer months, leading to lower water levels. Rivers with lower water levels also have warmer temperatures, threatening species like salmon that prefer colder upstream temperatures. Attempts have been made to regulate the exploitation of rivers to preserve their ecological functions. Many wetland areas have become protected from development. Water restrictions can prevent the complete draining of rivers. Limits on the construction of dams, as well as dam removal, can restore the natural habitats of river species. Regulators can also ensure regular releases of water from dams to keep animal habitats supplied with water. Limits on pollutants like pesticides can help improve water quality. Extraterrestrial rivers Today, the surface of Mars does not have liquid water. All water on Mars is part of permafrost or ice caps, or exists as trace amounts of water vapor in the atmosphere. However, there is evidence that rivers flowed on Mars for at least 100,000 years. 
The Hellas Planitia is a crater left behind by an impact from an asteroid. It has sedimentary rock that was formed 3.7 billion years ago, and lava fields that are 3.3 billion years old. High resolution images of the surface of the plain show evidence of a river network, and even river deltas. These images reveal channels formed in the rock, recognized by geologists who study rivers on Earth as being formed by rivers, as well as "bench and slope" landforms, outcroppings of rock that show evidence of river erosion. Not only do these formations suggest that rivers once existed, but also that they flowed for extensive time periods and were part of a water cycle that involved precipitation. The term flumen, in planetary geology, refers to channels on Saturn's moon Titan that may carry liquid. Titan's rivers flow with liquid methane and ethane. There are river valleys that exhibit wave erosion, seas, and oceans. Scientists hope to study these systems to see how coasts erode without the influence of human activity, something that is not possible when studying terrestrial rivers. Rivers by amount of discharge
https://en.wikipedia.org/wiki/Lake
Lake
A lake is an often naturally occurring, relatively large and fixed body of water on or near the Earth's surface. It is localized in a basin or interconnected basins surrounded by dry land. Lakes lie completely on land and are separate from the ocean, although they may be connected with the ocean by rivers. Lakes, as with other bodies of water, are part of the water cycle, the processes by which water moves around the Earth. Most lakes are fresh water and account for almost all the world's surface freshwater, but some are salt lakes with salinities even higher than that of seawater. Lakes vary significantly in surface area and volume of water. Lakes are typically larger and deeper than ponds, which are also water-filled basins on land, although there are no official definitions or scientific criteria distinguishing the two. Lakes are also distinct from lagoons, which are generally shallow tidal pools dammed by sandbars or other material at coastal regions of oceans or large lakes. Most lakes are fed by springs, and both fed and drained by creeks and rivers, but some lakes are endorheic without any outflow, while volcanic lakes are filled directly by precipitation runoffs and do not have any inflow streams. Natural lakes are generally found in mountainous areas (i.e. alpine lakes), dormant volcanic craters, rift zones and areas with ongoing glaciation. Other lakes are found in depressed landforms or along the courses of mature rivers, where a river channel has widened over a basin formed by eroded floodplains and wetlands. Some lakes are found in caverns underground. Some parts of the world have many lakes formed by the chaotic drainage patterns left over from the last ice age. All lakes are temporary over long periods of time, as they will slowly fill in with sediments or spill out of the basin containing them. 
Artificially controlled lakes are known as reservoirs, and are usually constructed for industrial or agricultural use, for hydroelectric power generation, for supplying domestic drinking water, for ecological or recreational purposes, or for other human activities. Etymology, meaning, and usage of "lake" The word lake comes from Middle English ('lake, pond, waterway'), from Old English ('pond, pool, stream'), from Proto-Germanic ('pond, ditch, slow moving stream'), from the Proto-Indo-European root ('to leak, drain'). Cognates include Dutch ('lake, pond, ditch'), Middle Low German ('water pooled in a riverbed, puddle') as in: :de:Wolfslake, :de:Butterlake, German ('pool, puddle'), and Icelandic ('slow flowing stream'). Also related are the English words leak and leach. There is considerable uncertainty about defining the difference between lakes and ponds, and neither term has an internationally accepted definition across scientific disciplines or political boundaries. For example, limnologists have defined lakes as water bodies that are simply a larger version of a pond, which can have wave action on the shoreline or where wind-induced turbulence plays a major role in mixing the water column. None of these definitions completely excludes ponds and all are difficult to measure. For this reason, simple size-based definitions are increasingly used to separate ponds and lakes. Definitions for lake range in minimum sizes for a body of water from to . Pioneering animal ecologist Charles Elton regarded lakes as waterbodies of or more. The term lake is also used to describe a feature such as Lake Eyre, which is a dry basin most of the time but may become filled under seasonal conditions of heavy rainfall. In common usage, many lakes bear names ending with the word pond, and a lesser number of names ending with lake are, in quasi-technical fact, ponds. 
One textbook illustrates this point with the following: "In Newfoundland, for example, almost every lake is called a pond, whereas in Wisconsin, almost every pond is called a lake." One hydrology book proposes to define the term "lake" as a body of water with the following five characteristics:
1. It partially or totally fills one or several basins connected by straits;
2. It has essentially the same water level in all parts (except for relatively short-lived variations caused by wind, varying ice cover, large inflows, etc.);
3. It does not have regular intrusion of seawater;
4. A considerable portion of the sediment suspended in the water is captured by the basins (for this to happen they need to have a sufficiently small inflow-to-volume ratio);
5. The area measured at the mean water level exceeds an arbitrarily chosen threshold (for instance, one hectare).
With the exception of criterion 3, the others have been accepted or elaborated upon by other hydrology publications.

Distribution

The majority of lakes on Earth are freshwater, and most lie in the Northern Hemisphere at higher latitudes. Canada, with a deranged drainage system, has an estimated 31,752 lakes larger than in surface area. The total number of lakes in Canada is unknown but is estimated to be at least 2 million. Finland has 168,000 lakes of in area, or larger, of which 57,000 are large ( or larger). Most lakes have at least one natural outflow in the form of a river or stream, which maintains a lake's average level by allowing the drainage of excess water. Some lakes do not have a natural outflow and lose water solely by evaporation, underground seepage, or both. These are termed endorheic lakes. Many lakes are artificial and are constructed for hydroelectric power generation, aesthetic purposes, recreational purposes, industrial use, agricultural use, or domestic water supply. The number of lakes on Earth is undetermined because most lakes and ponds are very small and do not appear on maps or satellite imagery.
Despite this uncertainty, a large number of studies agree that small ponds are much more abundant than large lakes. For example, one widely cited study estimated that Earth has 304 million lakes and ponds, and that 91% of these are or less in area. Despite the overwhelming abundance of ponds, almost all of Earth's lake water is found in fewer than 100 large lakes; this is because lake volume scales superlinearly with lake area. Extraterrestrial lakes exist on the moon Titan, which orbits the planet Saturn. The shape of lakes on Titan is very similar to that of lakes on Earth. Lakes were formerly present on the surface of Mars, but these are now dry lake beds.

Types

In 1957, G. Evelyn Hutchinson published a monograph titled A Treatise on Limnology, which is regarded as a landmark discussion and classification of all major lake types, their origin, morphometric characteristics, and distribution. Hutchinson presented in this publication a comprehensive analysis of the origin of lakes and proposed what is now a widely accepted classification of lakes according to their origin. This classification recognizes 11 major lake types, divided into 76 subtypes. The 11 major lake types are:
tectonic lakes
volcanic lakes
glacial lakes
fluvial lakes
solution lakes
landslide lakes
aeolian lakes
shoreline lakes
organic lakes
anthropogenic lakes
meteorite (extraterrestrial impact) lakes

Tectonic lakes

Tectonic lakes are lakes formed by the deformation, and the resulting lateral and vertical movements, of the Earth's crust. These movements include faulting, tilting, folding, and warping. Some of the largest lakes on Earth are rift lakes occupying rift valleys, e.g. the Central African Rift lakes and Lake Baikal. Other well-known tectonic lakes, such as the Caspian Sea, the Aral Sea, and other lakes of the Pontocaspian region, occupy basins that have been separated from the sea by tectonic uplift of the sea floor above ocean level.
Often, the tectonic action of crustal extension has created an alternating series of parallel grabens and horsts, forming elongate basins alternating with mountain ranges. Not only does this promote the creation of lakes by the disruption of preexisting drainage networks, it also creates, within arid regions, endorheic basins that contain salt lakes (also called saline lakes). These form where there is no natural outlet, where the evaporation rate is high, and where the drainage surface of the water table has a higher-than-normal salt content. Examples of such salt lakes include the Great Salt Lake and the Dead Sea. Another type of tectonic lake caused by faulting is the sag pond.

Volcanic lakes

Volcanic lakes are lakes that occupy either local depressions, e.g. craters and maars, or larger basins, e.g. calderas, created by volcanism. Crater lakes are formed in volcanic craters and calderas, which fill up with precipitation more rapidly than they empty via evaporation, groundwater discharge, or a combination of both. Sometimes the latter are called caldera lakes, although often no distinction is made. An example is Crater Lake in Oregon, in the caldera of Mount Mazama. The caldera was created in a massive volcanic eruption that led to the subsidence of Mount Mazama around 4860 BCE. Other volcanic lakes are created when rivers or streams are dammed by lava flows or volcanic lahars. The basin which is now Malheur Lake, Oregon, was created when a lava flow dammed the Malheur River. Among all lake types, volcanic crater lakes most closely approximate a circular shape.

Glacial lakes

Glacial lakes are lakes created by the direct action of glaciers and continental ice sheets. A wide variety of glacial processes create enclosed basins. As a result, there are many different types of glacial lakes, and it is often difficult to draw clear-cut distinctions between the various types of glacial lakes and lakes influenced by other activities.
The general types of glacial lakes that have been recognized are lakes in direct contact with ice; glacially carved rock basins and depressions; morainic and outwash lakes; and glacial drift basins. Glacial lakes are the most numerous lakes in the world. Most lakes in northern Europe and North America have been either influenced or created by the most recent glaciation to have covered the region. Glacial lakes include proglacial lakes, subglacial lakes, finger lakes, and epishelf lakes. Epishelf lakes are highly stratified lakes in which a layer of freshwater, derived from ice and snow melt, is dammed behind an ice shelf attached to the coastline. They are mostly found in Antarctica.

Fluvial lakes

Fluvial (or riverine) lakes are lakes produced by running water. These lakes include plunge pool lakes, fluviatile dams, and meander lakes.

Oxbow lakes

The most common type of fluvial lake is a crescent-shaped lake called an oxbow lake, after its distinctive curved shape. Oxbow lakes can form in river valleys as a result of meandering. The slow-moving river forms a sinuous shape as the outer side of each bend is eroded away more rapidly than the inner side. Eventually a horseshoe bend is formed and the river cuts through the narrow neck. This new passage then forms the main passage for the river, and the ends of the bend become silted up, forming a bow-shaped lake. Their crescent shape gives oxbow lakes a higher perimeter-to-area ratio than other lake types.

Fluviatile dams

These form where sediment from a tributary blocks the main river.

Lateral lakes

These form where sediment from the main river blocks a tributary, usually in the form of a levee.

Floodplain lakes

Lakes formed by other processes responsible for floodplain basin creation. During high floods they are flushed with river water. There are four types: 1. confluent floodplain lakes, 2. contrafluent-confluent floodplain lakes, 3. contrafluent floodplain lakes, and 4. profundal floodplain lakes.
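The perimeter-to-area comparison made for oxbow lakes is usually quantified with the shoreline development index, D_L = P / (2·sqrt(π·A)), which equals 1 for a perfectly circular lake and increases as the shoreline becomes more convoluted. A minimal sketch (the crescent figures below are made-up illustration values, not measurements of any real lake):

```python
import math

def shoreline_development(perimeter, area):
    """Shoreline development index D_L = P / (2 * sqrt(pi * A)).
    D_L = 1 for a perfect circle; higher values indicate a more
    convoluted (e.g. crescent-shaped) shoreline."""
    return perimeter / (2.0 * math.sqrt(math.pi * area))

# A circular lake of radius 1 km: perimeter 2*pi km, area pi km^2.
circle = shoreline_development(2 * math.pi, math.pi)   # -> 1.0
# A hypothetical narrow crescent with the same area but a 12 km shoreline:
crescent = shoreline_development(12.0, math.pi)        # ~1.91
print(round(circle, 3), crescent > circle)
```

The index is dimensionless, so it can compare lakes of very different sizes; long fjord-like or oxbow lakes score well above circular crater lakes.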
Solution lakes

A solution lake is a lake occupying a basin formed by surface dissolution of bedrock. In areas underlain by soluble bedrock, its dissolution by precipitation and percolating water commonly produces cavities. These cavities frequently collapse to form sinkholes that form part of the local karst topography. Where groundwater lies near the ground surface, a sinkhole will be filled with water as a solution lake. If such a lake consists of a large area of standing water occupying an extensive closed depression in limestone, it is also called a karst lake. Smaller solution lakes that consist of a body of standing water in a closed depression within a karst region are known as karst ponds. Limestone caves often contain pools of standing water, which are known as underground lakes. Classic examples of solution lakes are abundant in the karst regions of the Dalmatian coast of Croatia and within large parts of Florida.

Landslide lakes

A landslide lake is created by the blockage of a river valley by mudflows, rockslides, or screes. Such lakes are most common in mountainous regions. Although landslide lakes may be large and quite deep, they are typically short-lived. An example of a landslide lake is Quake Lake, which formed as a result of the 1959 Hebgen Lake earthquake. Most landslide lakes disappear within the first few months after formation, but a landslide dam can burst suddenly at a later stage and threaten the population downstream when the lake water drains out. In 1911, an earthquake triggered a landslide that blocked a deep valley in the Pamir Mountains region of Tajikistan, forming Sarez Lake. The Usoi Dam at the base of the valley has remained in place for more than 100 years, but the terrain below the lake is in danger of a catastrophic flood if the dam were to fail during a future earthquake. Tal-y-llyn Lake in north Wales is a landslide lake dating back to the last glaciation in Wales, some 20,000 years ago.
Aeolian lakes

Aeolian lakes are produced by wind action. These lakes are found mainly in arid environments, although some aeolian lakes are relict landforms indicative of arid paleoclimates. Aeolian lakes consist of lake basins dammed by wind-blown sand; interdunal lakes that lie between well-oriented sand dunes; and deflation basins formed by wind action under previously arid paleoenvironments. Moses Lake in Washington, United States, was originally a shallow natural lake and is an example of a lake basin dammed by wind-blown sand. China's Badain Jaran Desert is a unique landscape of megadunes and elongated interdunal aeolian lakes, particularly concentrated in the southeastern margin of the desert.

Shoreline lakes

Shoreline lakes are generally created by the blockage of estuaries or by the uneven accretion of beach ridges by longshore and other currents. They include maritime coastal lakes, ordinarily in drowned estuaries; lakes enclosed by two tombolos or spits connecting an island to the mainland; lakes cut off from larger lakes by a bar; and lakes divided by the meeting of two spits.

Organic lakes

Organic lakes are created by the actions of plants and animals. On the whole, they are relatively rare and quite small. In addition, they typically have ephemeral features relative to the other lake types. The basins in which organic lakes occur are associated with beaver dams, coral lakes, or dams formed by vegetation.

Peat lakes

Peat lakes are a form of organic lake. They form where a buildup of partly decomposed plant material in a wet environment leaves the vegetated surface below the water table for a sustained period of time. They are often low in nutrients and mildly acidic, with bottom waters low in dissolved oxygen.

Artificial lakes

Artificial lakes, or anthropogenic lakes, are large waterbodies created by human activity.
They can be formed by the intentional damming of rivers and streams, the rerouting of water to inundate a previously dry basin, or the deliberate filling of abandoned excavation pits by precipitation runoff, ground water, or a combination of both. Artificial lakes may be used as storage reservoirs that provide drinking water for nearby settlements, to generate hydroelectricity, for flood management, for supplying agriculture or aquaculture, or to provide an aquatic sanctuary for parks and nature reserves. The Upper Silesian region of southern Poland contains an anthropogenic lake district consisting of more than 4,000 water bodies created by human activity. The diverse origins of these lakes include reservoirs retained by dams, flooded mines, water bodies formed in subsidence basins and hollows, levee ponds, and residual water bodies left after river regulation. The same is true of the Lusatian Lake District in Germany. In India, Sudarshana Lake is a historical artificial lake located in the semi-arid region of Girnar, Gujarat, originally constructed during the reign of Chandragupta Maurya.

Meteorite (extraterrestrial impact) lakes

Meteorite lakes, also known as crater lakes (not to be confused with volcanic crater lakes), are created by catastrophic impacts on the Earth by extraterrestrial objects (either meteorites or asteroids). Examples of meteorite lakes are Lonar Lake in India, Lake El'gygytgyn in northeast Siberia, and the Pingualuit crater lake in Quebec, Canada. As in the cases of El'gygytgyn and Pingualuit, meteorite lakes can contain unique and scientifically valuable sedimentary deposits associated with long records of paleoclimatic changes.
Other classification methods

In addition to the mode of origin, lakes have been named and classified according to various other important factors, such as thermal stratification, oxygen saturation, seasonal variations in lake volume and water level, salinity of the water mass, relative seasonal permanence, and degree of outflow. The names used by the lay public and in the scientific community for different types of lakes are often informally derived from the morphology of the lakes' physical characteristics or other factors. Also, different cultures and regions of the world have their own popular nomenclature.

By thermal stratification

One important method of lake classification is on the basis of thermal stratification, which has a major influence on the animal and plant life inhabiting a lake and on the fate and distribution of dissolved and suspended material in the lake. For example, thermal stratification, along with the degree and frequency of mixing, exerts a strong control on the distribution of oxygen within the lake. Professor F.-A. Forel, also referred to as the "Father of limnology", was the first scientist to classify lakes according to their thermal stratification. His system of classification was later modified and improved upon by Hutchinson and Löffler. Because the density of water varies with temperature, with a maximum at about +4 degrees Celsius, thermal stratification is an important physical characteristic of a lake that controls the fauna and flora, sedimentation, chemistry, and other aspects of individual lakes. First, the colder, denser water typically forms a layer near the bottom, which is called the hypolimnion. Second, normally overlying the hypolimnion is a transition zone known as the metalimnion. Finally, overlying the metalimnion is a surface layer of warmer water with a lower density, called the epilimnion.
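The density-temperature relationship that drives this layering can be sketched numerically. The code below uses a commonly quoted empirical fit for pure-water density (the McCutcheon et al. formula); the coefficients are reproduced from memory, so treat this as an illustrative approximation rather than a reference implementation:

```python
def freshwater_density(t_celsius):
    """Approximate density of pure water (kg/m^3) as a function of
    temperature, using an empirical fit with a maximum near 3.98 C.
    Coefficients follow the widely quoted McCutcheon et al. form."""
    t = t_celsius
    return 1000.0 * (1.0 - (t + 288.9414) / (508929.2 * (t + 68.12963)) * (t - 3.9863) ** 2)

# Sample the curve: density peaks near 4 C, so 4 C water sinks below
# both colder and warmer water, producing the layered structure above.
temps = [0, 2, 4, 6, 10, 20]
dens = {t: freshwater_density(t) for t in temps}
print(max(dens, key=dens.get))  # -> 4
```

Because the maximum lies at about 4 °C rather than at the freezing point, winter surface water at 0-3 °C floats on slightly warmer bottom water, which is why deep lakes rarely freeze solid.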
This typical stratification sequence can vary widely, depending on the specific lake, the time of year, or both. The classification of lakes by thermal stratification presupposes lakes of sufficient depth to form a hypolimnion; accordingly, very shallow lakes are excluded from this classification system. Based upon their thermal stratification, lakes are classified as either holomictic, with a uniform temperature and density from top to bottom at a given time of year, or meromictic, with layers of water of different temperature and density that do not intermix. The deepest layer of water in a meromictic lake does not contain any dissolved oxygen, so no aerobic organisms live there. Consequently, the layers of sediment at the bottom of a meromictic lake remain relatively undisturbed, which allows for the development of lacustrine deposits. In a holomictic lake, the uniformity of temperature and density allows the lake waters to mix completely. Based upon thermal stratification and frequency of turnover, holomictic lakes are divided into amictic lakes, cold monomictic lakes, dimictic lakes, warm monomictic lakes, polymictic lakes, and oligomictic lakes. Lake stratification does not always result from a density variation caused by thermal gradients; it can also result from a density variation caused by gradients in salinity. In this case, the hypolimnion and epilimnion are separated not by a thermocline but by a halocline, which is sometimes referred to as a chemocline.

By seasonal variations in water level and volume

Lakes are informally classified and named according to the seasonal variation in their level and volume. Some of the names include:
An ephemeral lake is a short-lived lake or pond. If it fills with water and dries up (disappears) seasonally, it is known as an intermittent lake. Such lakes often fill poljes.
Dry lake is a popular name for an ephemeral lake that contains water only intermittently, at irregular and infrequent intervals.
A perennial lake is a lake that has water in its basin throughout the year and is not subject to extreme fluctuations in level.
A playa lake is a typically shallow, intermittent lake that covers or occupies a playa in wet seasons or especially wet years but subsequently dries up, in an arid or semiarid region.
Vlei is a name used in South Africa for a shallow lake which varies considerably in level with the seasons.

By water chemistry

Lakes may be informally classified and named according to the general chemistry of their water mass. Using this classification method, the lake types include:
An acid lake contains water with a below-neutral pH of less than 6.5. A lake is considered highly acidic if its pH drops below 5.5, which has biological consequences. Such lakes include acidic pit lakes occupying abandoned mines and excavations; naturally acidic lakes of igneous and metamorphic landscapes; peat bogs in northern regions; crater lakes of active and dormant volcanoes; and lakes acidified by acid rain.
A salt lake, also known as a saline lake or brine lake, is an inland body of water situated in an arid or semiarid region, with no outlet to the sea, containing a high concentration of dissolved neutral salts (principally sodium chloride). Examples include the Great Salt Lake in Utah and the Dead Sea in southwestern Asia.
An alkali sink, also known as an alkali flat or salt flat, is a shallow saline feature found in low-lying areas of arid regions and in groundwater discharge zones. These features are typically classified as dry lakes, or playas, because they are periodically flooded by rain or flood events and then dry up during drier intervals, leaving accumulations of brines and evaporitic minerals.
A salt pan is a small shallow natural depression in which water accumulates and evaporates, leaving a salt deposit, or the shallow lake of brackish water that occupies such a pan. (The term "salt pan" comes from open-pan salt making, a method of extracting salt from brine using large open pans.)
A saline pan is another name for an ephemeral acid saline lake which precipitates a bottom crust that is subsequently modified during subaerial exposure.

Composed of other liquids

A lava lake is a large volume of molten lava, usually basaltic, contained in a volcanic vent, crater, or broad depression.
Hydrocarbon lakes are bodies of liquid ethane and methane that occupy depressions on the surface of Titan. They were detected by the Cassini–Huygens space probe.

Paleolakes

A paleolake (also palaeolake) is a lake that existed in the past, when hydrological conditions were different. Quaternary paleolakes can often be identified on the basis of relict lacustrine landforms, such as relict lake plains and coastal landforms that form recognizable relict shorelines, called paleoshorelines. Paleolakes can also be recognized by the characteristic sedimentary deposits that accumulated in them and by any fossils these sediments contain. The paleoshorelines and sedimentary deposits of paleolakes provide evidence for prehistoric hydrological changes during the times they existed. There are two types of paleolake:
A former lake is a paleolake that no longer exists. Such lakes include prehistoric lakes and those that have permanently dried up, often as the result of evaporation or human intervention. An example of a former lake is Owens Lake in California, United States. Former lakes are a common feature of the Basin and Range area of southwestern North America.
A shrunken lake is a paleolake that still exists but has considerably decreased in size over geological time. An example of a shrunken lake is Lake Agassiz, which once covered much of central North America.
Two notable remnants of Lake Agassiz are Lake Winnipeg and Lake Winnipegosis. Paleolakes are of scientific and economic importance. For example, Quaternary paleolakes in semidesert basins are important for two reasons: they played an extremely significant, if transient, role in shaping the floors and piedmonts of many basins, and their sediments contain enormous quantities of geologic and paleontologic information concerning past environments. In addition, the organic-rich deposits of pre-Quaternary paleolakes are important either for the thick deposits of oil shale and shale gas contained in them or as source rocks of petroleum and natural gas. Although of significantly less economic importance, strata deposited along the shores of paleolakes sometimes contain coal seams.

Characteristics

Lakes have numerous features in addition to lake type, such as drainage basin (also known as catchment area), inflow and outflow, nutrient content, dissolved oxygen, pollutants, pH, and sedimentation. Changes in the level of a lake are controlled by the difference between the input and output compared to the total volume of the lake. Significant input sources are precipitation onto the lake, runoff carried by streams and channels from the lake's catchment area, groundwater channels and aquifers, and artificial sources from outside the catchment area. Output sources are evaporation from the lake, surface and groundwater flows, and any extraction of lake water by humans. As climate conditions and human water requirements vary, these create fluctuations in the lake level. Lakes can also be categorized on the basis of their richness in nutrients, which typically affects plant growth. Nutrient-poor lakes are said to be oligotrophic and are generally clear, having a low concentration of plant life. Mesotrophic lakes have good clarity and an average level of nutrients. Eutrophic lakes are enriched with nutrients, resulting in good plant growth and possible algal blooms.
Hypertrophic lakes are bodies of water that have been excessively enriched with nutrients. These lakes typically have poor clarity and are subject to devastating algal blooms. Lakes typically reach this condition due to human activities, such as heavy use of fertilizers in the lake catchment area. Such lakes are of little use to humans and have a poor ecosystem due to decreased dissolved oxygen. Due to the unusual relationship between water's temperature and its density, lakes form layers called thermoclines, layers of drastically varying temperature relative to depth. Fresh water is most dense at about 4 degrees Celsius (39.2 °F) at sea level. When the temperature of the water at the surface of a lake reaches the same temperature as the deeper water, as it does during the cooler months in temperate climates, the water in the lake can mix, bringing oxygen-starved water up from the depths and bringing oxygen down to decomposing sediments. Deep temperate lakes can maintain a reservoir of cold water year-round, which allows some cities to tap that reservoir for deep lake water cooling. Since the surface water of deep tropical lakes never reaches the temperature of maximum density, there is no process that makes the water mix. The deeper layer becomes oxygen-starved and can become saturated with carbon dioxide, or other gases such as sulfur dioxide if there is even a trace of volcanic activity. Exceptional events, such as earthquakes or landslides, can cause mixing which rapidly brings the deep layers up to the surface, releasing a vast cloud of gas that lay trapped in solution in the colder water at the bottom of the lake. This is called a limnic eruption. An example is the disaster at Lake Nyos in Cameroon. The amount of gas that can be dissolved in water is directly related to pressure. As deep water surfaces, the pressure drops and a vast amount of gas comes out of solution.
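The pressure dependence of dissolved gas described here follows Henry's law, C = k_H · p. A minimal sketch, using the commonly quoted order-of-magnitude Henry's constant for CO2 in water at 25 °C (about 0.034 mol/(L·atm)) and treating 100 m of water as roughly 10 atm of added pressure (both approximate assumptions):

```python
def dissolved_gas(k_h, partial_pressure):
    """Henry's law sketch: equilibrium concentration C = k_H * p,
    with k_H in mol/(L*atm) and p in atm."""
    return k_h * partial_pressure

k_h_co2 = 0.034  # mol/(L*atm) for CO2 near 25 C; approximate value
surface = dissolved_gas(k_h_co2, 1.0)   # ~1 atm at the lake surface
deep    = dissolved_gas(k_h_co2, 11.0)  # ~100 m depth: ~10 atm of water + 1 atm of air
print(deep / surface)  # ratio ~11: deep water holds about eleven times more gas
```

This is why water at the bottom of a deep lake can remain heavily charged with CO2 for years, and why lifting that water to the surface, where the pressure term collapses to ~1 atm, releases the excess gas all at once.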
Under these circumstances, carbon dioxide is hazardous because it is heavier than air and displaces it, so it may flow down a river valley to human settlements and cause mass asphyxiation. The material at the bottom of a lake, or lake bed, may be composed of a wide variety of inorganic material, such as silt or sand, and organic material, such as decaying plant or animal matter. The composition of the lake bed has a significant impact on the flora and fauna found within the lake's environs by contributing to the amounts and types of nutrients available. A paired (black and white) layer of varved lake sediments corresponds to a year. During winter, when organisms die, carbon is deposited, resulting in a black layer. During the summer of the same year, only a little organic material is deposited, resulting in a white layer at the lake bed. These layers are commonly used to track past paleontological events. Natural lakes provide a microcosm of living and nonliving elements that are relatively independent of their surrounding environments. Therefore, lake organisms can often be studied in isolation from the lake's surroundings.

Limnology

Limnology is the study of inland bodies of water and related ecosystems. Limnology divides lakes into three zones: the littoral zone, a sloped area close to land; the photic or open-water zone, where sunlight is abundant; and the deep-water profundal or benthic zone, where little sunlight can reach. The depth to which light can penetrate depends on the turbidity of the water, which is determined by the density and size of suspended particles. A particle is in suspension if its weight is less than the random turbidity forces acting upon it. These particles can be sedimentary or biological in origin (including algae and detritus) and are responsible for the color of the water. Decaying plant matter, for instance, may account for a yellow or brown color, while algae may cause a greenish coloration.
In very shallow water bodies, iron oxides make the water reddish brown. Bottom-dwelling detritivorous fish stir the mud in search of food and can be the cause of turbid waters. Piscivorous fish contribute to turbidity by eating plant-eating (planktonivorous) fish, thus increasing the amount of algae (see aquatic trophic cascade). The light depth or transparency is measured using a Secchi disk, a 20-cm (8 in) disk with alternating white and black quadrants. The depth at which the disk is no longer visible is the Secchi depth, a measure of transparency. The Secchi disk is commonly used to test for eutrophication. For a detailed look at these processes, see lentic ecosystems. A lake moderates the surrounding region's temperature and climate because water has a very high specific heat capacity (4,186 J·kg−1·K−1). In the daytime a lake can cool the land beside it with local winds, resulting in a sea breeze; at night it can warm the land with a land breeze.

Biological properties

Lake zones:
Epilittoral: the zone that is entirely above the lake's normal water level and never submerged by lake water
Littoral: the zone that encompasses the small area above the normal water level (which is sometimes submerged when the lake's water level increases), reaching to the deepest part of the lake that still allows for submerged macrophytic growth
Littoriprofundal: a transition zone commonly aligned with stratified lakes' metalimnions; too deep for macrophytes but including photosynthetic algae and bacteria
Profundal: the sedimentary zone containing no vegetation

Algal community types:
Epipelic: algae that grow on sediments
Epilithic: algae that grow on rocks
Epipsammic: algae that grow on (or within) sand
Epiphytic: algae that grow on macrophytes
Epizooic: algae that grow on living animals
Metaphyton: algae present in the littoral zone, neither in suspension nor attached to a substratum (such as a macrophyte)

Circulation

Flora and fauna

Disappearance

The lake may be infilled with
deposited sediment and gradually become a wetland such as a swamp or marsh. Large water plants, typically reeds, accelerate this closing process significantly because they partially decompose to form peat soils that fill the shallows. Conversely, peat soils in a marsh can naturally burn and reverse this process, recreating a shallow lake and resulting in a dynamic equilibrium between marsh and lake. This is significant since wildfire has been largely suppressed in the developed world over the past century, which has artificially converted many shallow lakes into emergent marshes. Turbid lakes and lakes with many plant-eating fish tend to disappear more slowly. A "disappearing" lake (barely noticeable on a human timescale) typically has extensive plant mats at the water's edge. These become a new habitat for other plants, like peat moss when conditions are right, and animals, many of which are very rare. Gradually the lake closes, and young peat may form, forming a fen. In lowland river valleys where a river can meander, the presence of peat is explained by the infilling of historical oxbow lakes. In the final stages of succession, trees can grow in, eventually turning the wetland into a forest. Some lakes can disappear seasonally. These are called intermittent lakes, ephemeral lakes, or seasonal lakes, and can be found in karstic terrain. Prime examples of intermittent lakes are Lake Cerknica in Slovenia and Lag Prau Pulte in Graubünden. Other intermittent lakes are only the result of above-average precipitation in a closed, or endorheic, basin, usually filling dry lake beds. This can occur in some of the driest places on Earth, like Death Valley, where it happened in the spring of 2005 after unusually heavy rains; the lake did not last into the summer and quickly evaporated. A more commonly filled lake of this type is Sevier Lake of west-central Utah. Sometimes a lake will disappear quickly.
On 3 June 2005, in Nizhny Novgorod Oblast, Russia, a lake called Lake Beloye vanished in a matter of minutes. News sources reported that government officials theorized that this strange phenomenon may have been caused by a shift in the soil underneath the lake that allowed its water to drain through channels leading to the Oka River. The presence of ground permafrost is important to the persistence of some lakes. Thawing permafrost may explain the shrinking or disappearance of hundreds of large Arctic lakes across western Siberia. The idea here is that rising air and soil temperatures thaw the permafrost, allowing the lakes to drain away into the ground. Some lakes disappear because of human development factors. The shrinking Aral Sea is described as being "murdered" by the diversion for irrigation of the rivers feeding it. Between 1990 and 2020, more than half of the world's large lakes decreased in size, in part due to climate change.

Extraterrestrial lakes

Only one astronomical body other than Earth is known to harbor large lakes: Saturn's largest moon, Titan. Photographs and spectroscopic analysis by the Cassini–Huygens spacecraft show liquid ethane on the surface, which is thought to be mixed with liquid methane. The largest lake on Titan is Kraken Mare which, at an estimated 400,000 km2, is roughly five times the size of Lake Superior (~80,000 km2) and nearly the size of all five Great Lakes of North America combined. The second-largest Titanean lake, Ligeia Mare, is almost twice the size of Lake Superior, at an estimated 150,000 km2. Jupiter's large moon Io is volcanically active, leading to the accumulation of sulfur deposits on the surface. Some photographs taken during the Galileo mission appear to show lakes of liquid sulfur in volcanic calderas, though these are more analogous to lakes of lava than of water on Earth. The planet Mars has only one confirmed lake, which is underground and near the south pole.
Although the surface of Mars is too cold and has too little atmospheric pressure to permit permanent surface water, geologic evidence appears to confirm that ancient lakes once formed on the surface. There are dark basaltic plains on the Moon, similar to lunar maria but smaller, which are called lacus (singular lacus, Latin for "lake") because they were thought by early astronomers to be lakes of water. Notable lakes on Earth The largest lake by surface area is the Caspian Sea, which, despite its name, is considered a lake from a geographical point of view. Its surface area is 371,000 km2 (143,000 sq mi). The second largest lake by surface area, and the largest freshwater lake by surface area, is Lake Michigan-Huron, which is hydrologically a single lake. Its surface area is 117,400 km2 (45,300 sq mi). For those who consider Lake Michigan-Huron to be separate lakes, and the Caspian Sea to be a sea, Lake Superior would be the largest lake at 82,100 km2 (31,700 sq mi). Lake Baikal is the deepest lake in the world, located in Siberia, with a bottom at . Its mean depth is also the greatest in the world (). It is also the world's largest freshwater lake by volume (, but much smaller than the Caspian Sea at ), and the second longest (about from tip to tip). The world's oldest lake is Lake Baikal, followed by Lake Tanganyika in Tanzania. Lake Maracaibo is considered by some to be the second-oldest lake on Earth, but since it lies at sea level and nowadays is a contiguous body of water with the sea, others consider that it has turned into a small bay. The longest lake is Lake Tanganyika, with a length of about (measured along the lake's center line). It is also the third largest by volume, the second oldest, and the second deepest () in the world, after Lake Baikal. The world's highest lake, if size is not a criterion, may be the crater lake of Ojos del Salado, at . 
The highest large (greater than ) lake in the world is the Pumoyong Tso (Pumuoyong Tso), in the Tibet Autonomous Region of China, at , above sea level. The world's highest commercially navigable lake is Lake Titicaca in Peru and Bolivia at . It is also the largest lake in South America. The world's lowest lake is the Dead Sea, bordered by Jordan to the east and Israel and Palestine to the west, at below sea level. It is also one of the lakes with the highest salt concentrations. Lake Michigan–Huron has the longest lake coastline in the world: about , excluding the coastline of its many inner islands. Even if it is considered two lakes, Lake Huron alone would still have the longest coastline in the world at . The largest island in a lake is Manitoulin Island in Lake Michigan-Huron, with a surface area of . Lake Manitou, on Manitoulin Island, is the largest lake on an island in a lake. The largest lake on an island is Nettilling Lake on Baffin Island, with an area of and a maximum length of . The largest lake in the world that drains naturally in two directions is Wollaston Lake. Lake Toba on the island of Sumatra is in what is probably the largest resurgent caldera on Earth. The largest lake completely within the boundaries of a single city is Lake Wanapitei in the city of Sudbury, Ontario, Canada. Before the current city boundaries came into effect in 2001, this status was held by Lake Ramsey, also in Sudbury. Lake Enriquillo in the Dominican Republic is the only saltwater lake in the world inhabited by crocodiles. Lake Bernard, Ontario, Canada, claims to be the largest lake in the world with no islands. Lake Saimaa in both South Savonia and South Karelia, Finland, forms the much larger Saimaa basin, which has more shoreline per unit of area than anywhere else in the world, with the total length being nearly . The largest lake in one country is Lake Michigan, in the United States. 
However, it is sometimes considered part of Lake Michigan-Huron, in which case the record goes to Great Bear Lake, Northwest Territories, in Canada, the largest lake within one jurisdiction. The largest lake on an island in a lake on an island is Crater Lake on Volcano Island in Lake Taal on the island of Luzon, the Philippines. The northernmost named lake on Earth is Upper Dumbell Lake in the Qikiqtaaluk Region of Nunavut, Canada, at a latitude of 82°28'N. It is southwest of Alert, the northernmost settlement in the world. There are also several small lakes north of Upper Dumbell Lake, but they are all unnamed and only appear on very detailed maps. There are only 20 ancient lakes, defined as those over a million years old. Largest by continent The largest lakes (surface area) by continent are: Australia – Lake Eyre (salt lake) Africa – Lake Victoria, also the third-largest freshwater lake on Earth. It is one of the Great Lakes of Africa. Antarctica – Lake Vostok (subglacial) Asia – Lake Baikal (if the Caspian Sea is considered a lake, it is the largest in Eurasia, but is divided between the two geographic continents) Oceania – Lake Eyre when filled; the largest permanent (and freshwater) lake in Oceania is Lake Taupō. Europe – Lake Ladoga, followed by Lake Onega, both in northwestern Russia. North America – Lake Michigan–Huron, which is hydrologically a single lake. However, lakes Huron and Michigan are usually considered separate lakes, in which case Lake Superior would be the largest. South America – Lake Titicaca, which is also the highest navigable body of water on Earth at above sea level. (The much larger – and older – Lake Maracaibo is perceived by some to no longer be genuinely a lake, but a lagoon.)
Physical sciences
Hydrological features
null
864017
https://en.wikipedia.org/wiki/Planetary%20core
Planetary core
A planetary core consists of the innermost layers of a planet. Cores may be entirely liquid, or a mixture of solid and liquid layers as is the case for the Earth. In the Solar System, core sizes range from about 20% of a planet's radius (the Moon) to 85% (Mercury). Gas giants also have cores, though the composition of these is still a matter of debate, with possibilities ranging from traditional rock and iron to ice or fluid metallic hydrogen. Gas giant cores are proportionally much smaller than those of terrestrial planets, though they can nevertheless be considerably larger than the Earth's; Jupiter's core is 10–30 times more massive than the entire Earth, and the exoplanet HD149026 b may have a core 100 times the mass of the Earth. Planetary cores are challenging to study because they are impossible to reach by drill and there are almost no samples that are definitively from the core. Thus, they are studied via indirect techniques such as seismology, mineral physics, and planetary dynamics. Discovery Earth's core In 1797, Henry Cavendish calculated the average density of the Earth to be 5.48 times the density of water (later refined to 5.53), which led to the accepted belief that the Earth was much denser in its interior. Following the discovery of iron meteorites, Emil Wiechert in 1898 postulated that the Earth had a similar bulk composition to iron meteorites, but that the iron had settled to the interior of the Earth; he later represented this by integrating the bulk density of the Earth with the missing iron and nickel as a core. The first detection of Earth's core occurred in 1906, when Richard Dixon Oldham discovered the P-wave shadow zone created by the liquid outer core. By 1936 seismologists had determined the size of the overall core as well as the boundary between the fluid outer core and the solid inner core. Moon's core The internal structure of the Moon was characterized in 1974 using seismic data on moonquakes collected by the Apollo missions. 
The Moon's core has a radius of 300 km. The Moon's iron core has a liquid outer layer that makes up 60% of the volume of the core, with a solid inner core. Cores of the rocky planets The cores of the rocky planets were initially characterized by analyzing data from spacecraft, such as NASA's Mariner 10, which flew by Mercury and Venus to observe their surface characteristics. The cores of other planets cannot be measured using seismometers on their surfaces, so instead they have to be inferred from calculations based on these fly-by observations. Mass and size can provide a first-order calculation of the components that make up the interior of a planetary body. The structure of rocky planets is constrained by the average density of a planet and its moment of inertia. The moment of inertia factor for a differentiated planet is less than 0.4 (the value for a uniform sphere), because the density of the planet is concentrated in the center. Mercury has a moment of inertia factor of 0.346, which is evidence for a core. Conservation of energy calculations as well as magnetic field measurements can also constrain composition, and the surface geology of the planets can characterize differentiation of the body since its accretion. The cores of Mercury, Venus, and Mars are about 75%, 50%, and 40% of their planets' radii, respectively. Formation Accretion Planetary systems form from flattened disks of dust and gas that accrete rapidly (within thousands of years) into planetesimals around 10 km in diameter. From here gravity takes over to produce Moon- to Mars-sized planetary embryos (in 10⁵–10⁶ years), and these develop into planetary bodies over an additional 10–100 million years. Jupiter and Saturn most likely formed around previously existing rocky and/or icy bodies, rendering these previous primordial planets into gas-giant cores. This is the planetary core accretion model of planet formation. 
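The moment-of-inertia constraint described above can be illustrated with a simple two-layer sphere. The sketch below uses illustrative densities and a Mercury-like core radius fraction (not measured values) to show why concentrating mass in the core pulls the factor below the uniform-sphere value of 0.4:

```python
def moi_factor(core_frac, rho_core, rho_mantle):
    """Moment-of-inertia factor C = I / (M * R^2) for a two-layer sphere.

    core_frac: core radius as a fraction of the planet's radius.
    A uniform sphere gives exactly 0.4; a dense core lowers the value.
    """
    r5 = core_frac ** 5  # core contribution scales as r^5 in the inertia integral
    r3 = core_frac ** 3  # ...and as r^3 in the mass integral
    inertia_term = rho_core * r5 + rho_mantle * (1 - r5)
    mass_term = rho_core * r3 + rho_mantle * (1 - r3)
    return 0.4 * inertia_term / mass_term

# Uniform body: the densities cancel and the factor is exactly 0.4.
print(moi_factor(0.85, 3300, 3300))  # 0.4
# Mercury-like body (iron core at ~85% of the radius, illustrative densities
# of 7000 and 3300 kg/m^3): factor drops to about 0.355, near the observed 0.346.
print(moi_factor(0.85, 7000, 3300))
```

The asymmetry between the r⁵ weighting of inertia and the r³ weighting of mass is what makes the factor a useful probe of how density is distributed with depth.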
Differentiation Planetary differentiation is broadly defined as the development of a single homogeneous body into several heterogeneous components. The hafnium-182/tungsten-182 isotopic system has a half-life of 9 million years and is approximated as an extinct system after 45 million years (five half-lives). Hafnium is a lithophile element and tungsten is a siderophile element. Thus, if metal segregation (between the Earth's core and mantle) occurred in under 45 million years, silicate reservoirs develop positive Hf/W anomalies and metal reservoirs acquire negative anomalies relative to undifferentiated chondrite material. The observed Hf/W ratios in iron meteorites constrain metal segregation to under 5 million years, while the Earth's mantle Hf/W ratio indicates that Earth's core segregated within 25 million years. Several factors control segregation of a metal core, including the crystallization of perovskite. Crystallization of perovskite in an early magma ocean is an oxidation process and may drive the production and extraction of iron metal from an original silicate melt. Core merging and impacts Impacts between planet-sized bodies in the early Solar System are important aspects in the formation and growth of planets and planetary cores. Earth–Moon system The giant impact hypothesis states that an impact between a theoretical Mars-sized planet, Theia, and the early Earth formed the modern Earth and Moon. During this impact the majority of the iron from Theia and the Earth became incorporated into the Earth's core. Mars Core merging between the proto-Mars and another differentiated planetoid could have been as fast as 1,000 years or as slow as 300,000 years (depending on the viscosity of both cores). 
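The "extinct after 45 million years" approximation above follows directly from the 9-million-year half-life: 45 million years is five half-lives, leaving (1/2)⁵ ≈ 3% of the original ¹⁸²Hf. A minimal check:

```python
# Fraction of a radioactive nuclide remaining after time t,
# applied here to 182Hf with its 9 My half-life.
HALF_LIFE_MY = 9.0

def hf182_remaining(t_my):
    """Fraction of the initial 182Hf left after t_my million years."""
    return 0.5 ** (t_my / HALF_LIFE_MY)

print(hf182_remaining(45))  # 0.03125 -> ~3% left, effectively extinct
```

This is why Hf/W anomalies record only events that happened within the first few tens of millions of years of Solar System history: after that, there is essentially no live ¹⁸²Hf left to fractionate.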
Chemistry Determining primary composition – Earth Using the chondritic reference model and combining known compositions of the crust and mantle, the unknown component, the composition of the inner and outer core, can be determined: 85% Fe, 5% Ni, 0.9% Cr, 0.25% Co, and all other refractory metals at very low concentration. This leaves Earth's core with a 5–10% weight deficit for the outer core and a 4–5% weight deficit for the inner core, which is attributed to lighter elements that should be cosmically abundant and are iron-soluble: H, O, C, S, P, and Si. Earth's core contains half the Earth's vanadium and chromium, and may contain considerable niobium and tantalum. Earth's core is depleted in germanium and gallium. Weight deficit components – Earth Sulfur is strongly siderophilic and only moderately volatile, and it is depleted in the silicate Earth; it may thus account for 1.9 weight % of Earth's core. By similar arguments, phosphorus may be present up to 0.2 weight %. Hydrogen and carbon, however, are highly volatile and thus would have been lost during early accretion, and therefore can only account for 0.1 to 0.2 weight % respectively. Silicon and oxygen thus make up the remaining mass deficit of Earth's core, though the abundances of each are still a matter of controversy revolving largely around the pressure and oxidation state of Earth's core during its formation. No geochemical evidence exists to include any radioactive elements in Earth's core. Despite this, experimental evidence has found potassium to be strongly siderophilic at the temperatures associated with core formation, thus there is potential for potassium, and therefore potassium-40, to be present in planetary cores. Isotopic composition – Earth Hafnium/tungsten (Hf/W) isotopic ratios, when compared with a chondritic reference frame, show a marked enrichment in the silicate Earth, indicating depletion in Earth's core. 
Iron meteorites, believed to result from very early core fractionation processes, are also depleted. Niobium/tantalum (Nb/Ta) isotopic ratios, when compared with a chondritic reference frame, show mild depletion in the bulk silicate Earth and the Moon. Pallasite meteorites Pallasites are thought to form at the core–mantle boundary of an early planetesimal, although a recent hypothesis suggests that they are impact-generated mixtures of core and mantle materials. Dynamics Dynamo Dynamo theory is a proposed mechanism to explain how celestial bodies like the Earth generate magnetic fields. The presence or lack of a magnetic field can help constrain the dynamics of a planetary core. Refer to Earth's magnetic field for further details. A dynamo requires a source of thermal and/or compositional buoyancy as a driving force. Thermal buoyancy from a cooling core alone cannot drive the necessary convection, as indicated by modelling; thus compositional buoyancy (from changes of phase) is required. On Earth the buoyancy is derived from crystallization of the inner core (which occurs as the core cools). Examples of compositional buoyancy include precipitation of iron alloys onto the inner core and liquid immiscibility, both of which could influence convection positively or negatively depending on the ambient temperatures and pressures associated with the host body. Other celestial bodies that exhibit magnetic fields are Mercury, Jupiter, Ganymede, and Saturn. Core heat source A planetary core acts as a heat source for the outer layers of a planet. In the Earth, the heat flux across the core–mantle boundary is 12 terawatts. This value is calculated from a variety of factors: secular cooling, differentiation of light elements, Coriolis forces, radioactive decay, and latent heat of crystallization. All planetary bodies have a primordial heat value, or the amount of energy from accretion. 
Cooling from this initial temperature is called secular cooling, and in the Earth the secular cooling of the core transfers heat into an insulating silicate mantle. As the inner core grows, the latent heat of crystallization adds to the heat flux into the mantle. Stability and instability Small planetary cores may experience catastrophic energy release associated with phase changes within their cores. Ramsey (1950) found that the total energy released by such a phase change would be on the order of 10²⁹ joules, equivalent to the total energy released by earthquakes through geologic time. Such an event could explain the asteroid belt. Such phase changes would only occur at specific mass-to-volume ratios, and an example of such a phase change would be the rapid formation or dissolution of a solid core component. Trends in the Solar System Inner rocky planets All of the rocky inner planets, as well as the Moon, have an iron-dominant core. Venus and Mars have an additional major element in the core. Venus' core is believed to be iron-nickel, similar to Earth's. Mars, on the other hand, is believed to have an iron-sulfur core that is separated into an outer liquid layer around an inner solid core. As the orbital radius of a rocky planet increases, the size of the core relative to the total radius of the planet decreases. This is believed to be because differentiation of the core is directly related to a body's initial heat, so Mercury's core is relatively large and active. Venus and Mars, as well as the Moon, do not have magnetic fields. This could be due to a lack of a convecting liquid layer interacting with a solid inner core, as Venus' core is not layered. Although Mars does have liquid and solid layers, they do not appear to be interacting in the same way that Earth's liquid and solid components interact to produce a dynamo. 
Outer gas and ice giants Current models of the outer planets in the Solar System, the ice and gas giants, suggest small cores of rock surrounded by a layer of ice, and in Jupiter and Saturn a large region of liquid metallic hydrogen and helium. The properties of these metallic hydrogen layers are a major area of contention, because metallic hydrogen is difficult to produce in laboratory settings due to the high pressures needed. Jupiter and Saturn appear to release far more energy than they receive from the Sun, which is attributed to heat released by the hydrogen and helium layer. Uranus does not appear to have a significant heat source, but Neptune has a heat source that is attributed to a "hot" formation. Observed types The following summarizes known information about the planetary cores of given non-stellar bodies. Within the Solar System Mercury Mercury has an observed magnetic field, which is believed to be generated within its metallic core. Mercury's core occupies 85% of the planet's radius, making it the largest core relative to the size of the planet in the Solar System; this indicates that much of Mercury's surface may have been lost early in the Solar System's history. Mercury has a solid silicate crust and mantle overlying a solid metallic outer core layer, followed by a deeper liquid core layer, and then a possible solid inner core making a third layer. The composition of the iron-rich core remains uncertain, but it likely contains nickel, silicon and perhaps sulfur and carbon, plus trace amounts of other elements. Venus The composition of Venus' core varies significantly depending on the model used to calculate it, thus constraints are required. Moon The existence of a lunar core is still debated; however, if it does have a core, it would have formed synchronously with the Earth's own core at 45 million years post-start of the Solar System, based on hafnium–tungsten evidence and the giant impact hypothesis. 
Such a core may have hosted a geomagnetic dynamo early in its history. Earth The Earth has an observed magnetic field generated within its metallic core. The Earth has a 5–10% mass deficit for the entire core and a density deficit of 4–5% for the inner core. The Fe/Ni value of the core is well constrained by chondritic meteorites. Sulfur, carbon, and phosphorus only account for ~2.5% of the light element component/mass deficit. No geochemical evidence exists for including any radioactive elements in the core. However, experimental evidence has found that potassium is strongly siderophile at the temperatures associated with core accretion, and thus potassium-40 could have provided an important source of heat contributing to the early Earth's dynamo, though to a lesser extent than on sulfur-rich Mars. The core contains half the Earth's vanadium and chromium, and may contain considerable niobium and tantalum. The core is depleted in germanium and gallium. Core–mantle differentiation occurred within the first 30 million years of Earth's history. The timing of inner core crystallization is still largely unresolved. Mars Mars possibly hosted a core-generated magnetic field in the past. The dynamo ceased within 0.5 billion years of the planet's formation. Hf/W isotopes derived from the Martian meteorite Zagami indicate rapid accretion and core differentiation of Mars, i.e. under 10 million years. Potassium-40 could have been a major source of heat powering the early Martian dynamo. Core merging between proto-Mars and another differentiated planetoid could have been as fast as 1,000 years or as slow as 300,000 years (depending on the viscosity of both cores and mantles). Impact heating of the Martian core would have resulted in stratification of the core and killed the Martian dynamo for a duration of between 150 and 200 million years. Modelling by Williams et al. 
(2004) suggests that in order for Mars to have had a functional dynamo, the Martian core was initially hotter than the mantle by 150 K (agreeing with the differentiation history of the planet, as well as the impact hypothesis), and, with a liquid core, potassium-40 would have had the opportunity to partition into the core, providing an additional source of heat. The model further concludes that the core of Mars is entirely liquid, as the latent heat of crystallization would have driven a longer-lasting (greater than one billion years) dynamo. If the core of Mars is liquid, the lower bound for sulfur would be five weight %. Ganymede Ganymede has an observed magnetic field generated within its metallic core. Jupiter Jupiter has an observed magnetic field generated within its core, indicating some metallic substance is present. Its magnetic field is the strongest in the Solar System after the Sun's. Jupiter has a rock and/or ice core 10–30 times the mass of the Earth, and this core is likely soluble in the gas envelope above, and so primordial in composition. Since the core still exists, the outer envelope must have originally accreted onto a previously existing planetary core. Thermal contraction/evolution models support the presence of metallic hydrogen within the core in large abundances (greater than Saturn). Saturn Saturn has an observed magnetic field generated within its metallic core. Metallic hydrogen is present within the core (in lower abundances than in Jupiter). Saturn has a rock and/or ice core 10–30 times the mass of the Earth, and this core is likely soluble in the gas envelope above, and therefore primordial in composition. Since the core still exists, the envelope must have originally accreted onto a previously existing planetary core. Thermal contraction/evolution models support the presence of metallic hydrogen within the core in large abundances (but still less than Jupiter). 
Remnant planetary cores Missions to bodies in the asteroid belt will provide more insight into planetary core formation. It was previously understood that colliding bodies in the Solar System merged completely, but recent work on planetary bodies argues that the remnants of some collisions have their outer layers stripped, leaving behind a body that would eventually become a planetary core. The Psyche mission, titled "Journey to a Metal World", aims to study a body that could possibly be a remnant planetary core. Extrasolar As the field of exoplanets grows and new techniques allow for the discovery of diverse exoplanets, the cores of exoplanets are being modeled. These models depend on the initial compositions of the exoplanets, which are inferred using the absorption spectra of individual exoplanets in combination with the emission spectra of their stars. Chthonian planets A chthonian planet results when a gas giant has its outer atmosphere stripped away by its parent star, likely due to the planet's inward migration. All that remains from the encounter is the original core. Planets derived from stellar cores and diamond planets Carbon planets, previously stars, are formed alongside the formation of a millisecond pulsar. The first such planet discovered was 18 times the density of water and five times the size of Earth. Thus the planet cannot be gaseous, and must be composed of heavier elements that are also cosmically abundant, like carbon and oxygen, making it likely crystalline like a diamond. PSR J1719-1438 is a 5.7-millisecond pulsar found to have a companion with a mass similar to Jupiter but a density of 23 g/cm3, suggesting that the companion is an ultralow-mass carbon white dwarf, likely the core of an ancient star. Hot ice planets Exoplanets with moderate densities (denser than Jovian planets, but less dense than terrestrial planets) suggest that planets such as GJ1214b and GJ436 are composed primarily of water. 
Internal pressures of such water-worlds would result in exotic phases of water forming on the surface and within their cores.
Physical sciences
Planetary science
Astronomy
864424
https://en.wikipedia.org/wiki/Sombrero%20Galaxy
Sombrero Galaxy
The Sombrero Galaxy (also known as Messier Object 104, M104 or NGC 4594) is a peculiar galaxy of unclear classification in the constellation borders of Virgo and Corvus, being about from the Milky Way galaxy. It is a member of the Virgo II Groups, a series of galaxies and galaxy clusters strung out from the southern edge of the Virgo Supercluster. It has an isophotal diameter of approximately , making it slightly larger than the Milky Way. It has a bright nucleus, an unusually large central bulge, and a prominent dust lane in its outer disk, which from Earth is viewed almost edge-on. The dark dust lane and the bulge give it the appearance of a sombrero hat (thus the name). Astronomers initially thought the halo was small and light, indicative of a spiral galaxy; but the Spitzer Space Telescope found that the halo was significantly larger and more massive than previously thought, indicative of a giant elliptical galaxy. The galaxy has an apparent magnitude of +8.0, making it easily visible with amateur telescopes, and it is considered by some authors to be the galaxy with the highest absolute magnitude within a radius of 10 megaparsecs of the Milky Way. Its large bulge, central supermassive black hole, and dust lane all attract the attention of professional astronomers. Observation history Discovery The Sombrero Galaxy was discovered on May 11, 1781 by Pierre Méchain, who described the object in a May 1783 letter to J. Bernoulli that was later published in the Berliner Astronomisches Jahrbuch. Charles Messier added a handwritten note about this and five other objects (now collectively recognized as M104–M109) to his personal list of objects now known as the Messier Catalogue, but it was not "officially" included until 1921. William Herschel independently discovered the object in 1784 and additionally noted the presence of a "dark stratum" in the galaxy's disc, what is now called a dust lane. 
Later astronomers were able to connect Méchain's and Herschel's observations. Designation as a Messier object In 1921, Camille Flammarion found Messier's personal list of the Messier objects including the hand-written notes about the Sombrero Galaxy. This was identified with object 4594 in the New General Catalogue, and Flammarion declared that it should be included in the Messier Catalogue. Since this time, the Sombrero Galaxy has been known as M104. Dust ring As noted above, this galaxy's most striking feature is the dust lane that crosses in front of the bulge of the galaxy. This dust lane is actually a symmetrical ring that encloses the bulge of the galaxy. Most of the cold atomic hydrogen gas and the dust lie within this ring. The ring might also contain most of the Sombrero Galaxy's cold molecular gas, although this is an inference based on observations with low resolution and weak detections. Additional observations are needed to confirm that the Sombrero galaxy's molecular gas is constrained to the ring. Based on infrared spectroscopy, the dust ring is the primary site of star formation within this galaxy. Nucleus The nucleus of the Sombrero Galaxy is classified as a low-ionization nuclear emission-line region (LINER). These are nuclear regions where ionized gas is present, but the ions are only weakly ionized (i.e. the atoms are missing relatively few electrons). The source of energy for ionizing the gas in LINERs has been debated extensively. Some LINER nuclei may be powered by hot, young stars found in star formation regions, whereas other LINER nuclei may be powered by active galactic nuclei (highly energetic regions that contain supermassive black holes). Infrared spectroscopy observations have demonstrated that the nucleus of the Sombrero Galaxy is probably devoid of any significant star formation activity. 
However, a supermassive black hole has been identified in the nucleus (as discussed in the subsection below), so this active galactic nucleus is probably the energy source that weakly ionizes the gas in the Sombrero Galaxy. Central supermassive black hole In the 1990s, a research group led by John Kormendy demonstrated that a supermassive black hole is present within the Sombrero Galaxy. Using spectroscopic data from both the CFHT and the Hubble Space Telescope, the group showed that the speed of revolution of the stars within the center of the galaxy could not be maintained unless a mass 1 billion times that of the Sun is present in the center. This is among the most massive black holes measured in any nearby galaxy, and it is the nearest billion-solar-mass black hole to Earth. Synchrotron radiation At radio and X-ray wavelengths, the nucleus is a strong source of synchrotron radiation. Synchrotron radiation is produced when high-velocity electrons oscillate as they pass through regions with strong magnetic fields. This emission is quite common for active galactic nuclei. Although radio synchrotron radiation may vary over time for some active galactic nuclei, the luminosity of the radio emission from the Sombrero Galaxy varies by only 10–20%. Unidentified terahertz radiation In 2006, two groups published measurements of the terahertz radiation from the nucleus of the Sombrero Galaxy at a wavelength of . This terahertz radiation was found not to originate from the thermal emission from dust (which is commonly seen at infrared and submillimeter wavelengths), synchrotron radiation (which is commonly seen at radio wavelengths), bremsstrahlung emission from hot gas (which is uncommonly seen at millimeter wavelengths), or molecular gas (which commonly produces submillimeter spectral lines). The source of the terahertz radiation remains unidentified. 
Globular clusters The Sombrero Galaxy has a relatively large number of globular clusters, observational studies of which have produced population estimates in the range of 1,200 to 2,000. The ratio of globular clusters to the galaxy's total luminosity is high compared to the Milky Way and similar galaxies with small bulges, but comparable to other galaxies with large bulges. These results have often been used to demonstrate that the number of a galaxy's globular clusters is related to the size of its bulge. The surface density of the globular clusters generally follows the bulge's light profile, except near the galaxy's center. Distance, mass and brightness At least two methods have been used to measure the distance to the Sombrero Galaxy. The first method relies on comparing the measured fluxes from the galaxy's planetary nebulae to the known luminosity of planetary nebulae in the Milky Way. This method gave the distance to the Sombrero Galaxy as . The second method is the surface brightness fluctuations method, which uses the grainy appearance of the galaxy's bulge to estimate the distance to it. Nearby galaxy bulges appear very grainy, while more distant bulges appear smooth. Early measurements using this technique gave distances of . Later, after some refinement of the technique, a distance of was measured. This was even further refined in 2003 to . The average distance measured through these two techniques is . The mass of M104 is estimated to be 800 billion solar masses. The galaxy's absolute magnitude (in the blue) is estimated as −21.9 at (−21.8 at the average distance of above), which, as stated above, makes it the brightest galaxy in a radius of around the Milky Way. A 2016 report used the Hubble Space Telescope to measure the distance to M104 based on the tip of the red-giant branch method, yielding 9.55 ± 0.13 ± 0.31 Mpc. 
Nearby galaxies and galaxy group information The Sombrero Galaxy lies within a complex, filament-like cloud of galaxies that extends to the south of the Virgo Cluster. However, it is unclear whether it is part of a formal galaxy group. Hierarchical methods for identifying groups, which determine group membership by considering whether individual galaxies belong to a larger aggregate of galaxies, typically produce results showing that the Sombrero Galaxy is part of a group that includes NGC 4487, NGC 4504, NGC 4802, UGCA 289, and possibly a few other galaxies. However, results that rely on the percolation method (also known as the friends-of-friends method), which links individual galaxies together to determine group membership, indicate that either the Sombrero Galaxy is not in a group or that it may be only part of a galaxy pair with UGCA 287. Besides that, M104 is also accompanied by an ultra-compact dwarf galaxy, discovered in 2009, with an absolute magnitude of −12.3, an effective radius of just , and a mass of 3.3× Amateur astronomy The Sombrero Galaxy is 11.5° west of Spica and 5.5° north-east of Eta Corvi. Although it is visible with 7×35 binoculars or a amateur telescope, an telescope is needed to distinguish the bulge from the disk, and a 10- or 12-inch (250 or 300 mm) telescope to see the dark dust lane. In culture One artistic work referencing the Sombrero Galaxy is the song South of the Sombrero Galaxy by St. Louis folk metal band Ars Arcanum. The gritty sci-fi Western piece is told from the perspective of a man on the run, from both the law and his own troubled past. The Sombrero Galaxy serves as a backdrop for the song’s narrative, blending cosmic imagery with themes of pursuit and regret. Gallery
Physical sciences
Notable galaxies
Astronomy
864847
https://en.wikipedia.org/wiki/Azure%20%28color%29
Azure (color)
Azure ( , ) is the color between cyan and blue on the spectrum of visible light. It is often described as the color of the sky on a clear day. On the RGB color wheel, "azure" (hexadecimal #0080FF) is defined as the color at 210 degrees, i.e., the hue halfway between blue and cyan. In the RGB color model, used to create all the colors on a television or computer screen, azure is created by adding 50% green light to 100% blue light. In the X11 color system, which became a model for early web colors, azure is depicted as a pale cyan or white cyan. Etymology and history The color azure ultimately takes its name from the vivid-blue gemstone lapis lazuli, a metamorphic rock. is the Latin word for "stone" and is the genitive form of the Medieval Latin , which is taken from the Arabic lāzaward (), itself from the Persian lāžaward, which is the name of the stone in Persian and also of a place where lapis lazuli was mined. The name of the stone came to be associated with its color. The French , the Italian , the Polish , Romanian and , the Portuguese and Spanish , Hungarian , and the Catalan atzur, all come from the name and color of lapis lazuli. The dropping of the initial l in Romance languages may be a case of the linguistic phenomenon known as rebracketing, i.e. Romance speakers may have perceived the sound as the initial phoneme of the definite article in their respective language. The word was adopted into English from the French, and the first recorded use of it as a color name in English was in 1374 in Geoffrey Chaucer's work Troilus and Criseyde, where he refers to "a broche, gold and asure" (a brooch, gold and azure). Some languages, such as Italian, generally consider azure to be a basic color, separate and distinct from blue. Some sources even go to the point of defining blue as a darker shade of azure. Azure also describes the color of the mineral azurite, both in its natural form and as a pigment in various paint formulations. 
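The hue-to-RGB relationship described above can be checked with Python's standard colorsys module: a hue of 210° at full saturation and value works out to 50% green plus 100% blue, i.e. #0080FF.

```python
import colorsys

# "Azure" as the hue halfway between blue (240 deg) and cyan (180 deg):
# hue 210 degrees, full saturation and value.
r, g, b = colorsys.hsv_to_rgb(210 / 360, 1.0, 1.0)
rgb = tuple(round(c * 255) for c in (r, g, b))
hex_code = "#{:02X}{:02X}{:02X}".format(*rgb)
```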
In order to preserve its deep color, azurite was ground coarsely. Fine-ground azurite produces a lighter, washed-out color. Traditionally, the pigment was considered unstable in oil paints, and was sometimes isolated from other colors and not mixed. The use of the term spread through the practice of heraldry, where "azure" represents a blue color in the system of tinctures. In engravings, it is represented as a region of parallel horizontal lines, or by the abbreviation az. or b. In practice, azure has been represented by any number of shades of blue. In later heraldic practice a lighter blue, called bleu celeste ("sky blue"), is sometimes specified. Distinction among indigo, azure, and cyan According to the logic of the RGB color wheel, indigo colors are those colors with hue codes between 225 and 255 (degrees), azure colors are those colors with hue codes between 195 and 225, and cyan colors are those colors with hue codes between 165 and 195. Another way of describing it could be that cyan is a mixture of blue and green light, azure is a mixture of blue and cyan light, and indigo is a mixture of blue and violet light. All of the colors shown below in the section shades of azure are referenced as having a hue between 195 and 225 degrees, with the exception of the very pale X11 web color azure – RGB (240, 255, 255) – which, with a hue of 180 degrees, is a tone of cyan, but follows the artistic meaning of azure as sky blue. 
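The hue ranges above can be expressed as a small classifier. This is a sketch; the handling of the boundary values at exactly 195° and 225° is an arbitrary choice.

```python
def classify_hue(hue_degrees):
    """Coarse naming of blue-family hues by degree ranges:
    cyan 165-195, azure 195-225, indigo 225-255."""
    h = hue_degrees % 360
    if 165 <= h < 195:
        return "cyan"
    if 195 <= h < 225:
        return "azure"
    if 225 <= h <= 255:
        return "indigo"
    return "other"
```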
In nature Insects Azure bluet (Enallagma aspersum), damselfly found in North America Azure damselfly (Coenagrion puella), damselfly found in Europe Azure hawker (Aeshna caerulea), dragonfly in the family Aeshnidae Birds Azure gallinule (Porphyrio flavirostris), bird in the rail family, Rallidae Azure jay (Cyanocorax caeruleus), bird in the crow family, Corvidae Azure kingfisher (Alcedo azurea), bird in the river kingfisher family, Alcedinidae Azure tit (Cyanistes cyanus), bird in the tit family, Paridae Azure-crowned hummingbird (Amazilia cyanocephala), a hummingbird in the family Trochilidae Azure-hooded jay (Cyanolyca cucullata), bird in the crow family, Corvidae Azure-naped jay (Cyanocorax heilprini), bird in the crow family, Corvidae Azure-rumped tanager (Tangara cabanisi), bird in the family Thraupidae Azure-shouldered tanager (Thraupis cyanoptera), bird in the family Thraupidae Azure-winged magpie (Cyanopica cyana), bird in the crow family, Corvidae European roller (Coracias garrulus) Plants Azure bluet (Houstonia caerulea), flower found in the eastern United States In culture Côte d'Azur ("Azure Coast") is a name commonly used for the French Riviera, part of France's southeastern coast on the Mediterranean. In Chinese mythology, the Azure Dragon is one of the Four Symbols of the Chinese constellations. It is sometimes called the Azure Dragon of the East (). Known as Seiryū () in Japan and Cheongryong (/) in Korea, it represents the east and the spring season. Savoy azure (azzurro Savoia) is a traditional national color for Italy, taken from the traditional colors of the House of Savoy, the ruling house of the Kingdom of Piedmont-Sardinia that established the first modern united Italian state. The association between azure and Italian nationalism led to the Italy national football team donning azure jerseys, giving them the nickname gli Azzurri ("the Azures"). It is also the color of the Italian state police (Polizia di Stato). 
Ken Nordine's 1966 album Colors features the song "Azures". Portuguese and Spanish tilework known as azulejo has the same etymology due to the color blue (azul) being used in its design. In Hebrew the azure color is called Tekhelet ("תכלת"). Tekhelet and white are sacred colors for the Jewish people. In the Torah there is a mitzva to put a Tekhelet thread in the tzitzit. The colors Tekhelet and white are the National colors of Israel and appear in the flag of Israel (where the Tekhelet resembles blue). Astronomy The true color of the exoplanet HD 189733b determined by astronomers is azure blue.
Physical sciences
Colors
Physics
865481
https://en.wikipedia.org/wiki/Spermophilus
Spermophilus
Spermophilus is a genus of ground squirrels in the squirrel family. As traditionally defined the genus was very species-rich, ranging through Europe, Asia and North America, but this arrangement was found to be paraphyletic with respect to the certainly distinct prairie dogs, marmots, and antelope squirrels. As a consequence, all the former Spermophilus species of North America have been moved to other genera, leaving the European and Asian species as true Spermophilus (the only exceptions are two Asian Urocitellus). Some species are sometimes called susliks (or sousliks). This name comes from Russian суслик, suslik. In some languages, a derivative of the name is in common usage, for example suseł in Polish. The scientific name of this genus means "seed-lovers" (gr. σπέρμα sperma, genitive σπέρματος spermatos – seed; φίλος philos – friend, lover). Habitat and behavior As typical ground squirrels, Spermophilus live in open habitats like grasslands, meadows, steppe and semideserts, feed on low plants, and use burrows as nests and refuge. They are diurnal and mostly live in colonies, although some species can also occur singly. They are found from lowlands to highlands, hibernate during the colder months (up to 8 months each year in some species) and in arid regions they may also aestivate during the summer or fall. The distributions of the various species are mostly separated, often by large rivers, although there are regions inhabited by as many as three species and rarely two species may even form mixed colonies. A few species are known to hybridize where their ranges come into contact. Appearance Spermophilus are overall yellowish, light orangish, light brownish or greyish. Although many are inconspicuously mottled or spotted, or have orange markings on the head, overall they lack strong patterns, except in S. suslicus, which commonly has brown upperparts with clear white spotting. Size varies with species and they have a head-and-body length of . 
Before hibernation the largest S. fulvus may weigh up to and the largest S. major up to almost , but they always weigh much less earlier in the year and other species are considerably smaller, mostly less than even in peak condition before hibernation. All have a fairly short tail that—depending on exact species—is around 10–45% of the length of the head-and-body. Relationship with humans Ground squirrels may carry fleas that transmit diseases to humans (see Black Death), and have been destructive in tunneling underneath human habitation. Species A generic revision was undertaken in 2007 by means of phylogenetic analyses using the mitochondrial gene cytochrome b. This resulted in the splitting of Spermophilus into eight genera, which with the prairie dogs, marmots, and antelope squirrels are each given as numbered clades. The exact relations between the clades are slightly unclear. Among these, the exclusively Palearctic species are retained as the genus Spermophilus sensu stricto (in the strictest sense). According to a 2024 genetic study the genus can be divided into four major clades that diverged during the Late Miocene. 
Spermophilus sensu stricto, Old World ground squirrels East Asian clade Alashan ground squirrel, Spermophilus alashanicus Daurian ground squirrel, Spermophilus dauricus Asia Minor/European clade European ground squirrel, Spermophilus citellus Podolian souslik (Spermophilus odessanus) Speckled ground squirrel, Spermophilus suslicus Taurus ground squirrel, Spermophilus taurensis Asia Minor ground squirrel, Spermophilus xanthoprymnus Pygmaeus-clade Caucasian Mountain ground squirrel, Spermophilus musicus Little ground squirrel, Spermophilus pygmaeus Colobotis-clade Brandt's ground squirrel, Spermophilus brevicauda Red-cheeked ground squirrel, Spermophilus erythrogenys Yellow ground squirrel, Spermophilus fulvus Russet ground squirrel, Spermophilus major Pallid ground squirrel, Spermophilus pallidicauda Spermophilus ralli Relict ground squirrel, Spermophilus relictus Spermophilus selevinus Spermophilus vorontsovi Prehistoric species Discovery and examination of one of the best preserved Eurasian ground squirrel fossils yet recovered allowed the study of many previously unknown aspects of ground squirrel cranial anatomy, and prompted a critical reassessment of their phylogenetic position. As a result, three Pleistocene species previously considered members of the Urocitellus genus were moved to Spermophilus: †Spermophilus nogaici †Spermophilus polonicus †Spermophilus primigenius In addition to the recent species, three now-extinct species are known from the Pleistocene of Europe: Spermophilus citelloides is known from the Middle Pleistocene to early Holocene of central Europe. It appears to be most closely related to the living S. suslicus. Spermophilus severskensis is known from the late Pleistocene (Weichselian) of the Desna area, Ukraine. It appears to have been a highly specialised grazer and close relative of the living S. pygmaeus. 
Spermophilus superciliosus is known from the Middle Pleistocene to reportedly the early 20th century, with a vast range across much of Europe, from southern England to the Volga and the Ural Mountains. It was similar in size to the recent S. major, and a probable ancestor of S. fulvus.
Biology and health sciences
Rodents
Animals
866065
https://en.wikipedia.org/wiki/UEFI
UEFI
Unified Extensible Firmware Interface (UEFI, or as an acronym) is a specification for the firmware architecture of a computing platform. When a computer is powered on, the UEFI implementation is typically the first software that runs, before starting the operating system. Examples include AMI Aptio, Phoenix SecureCore, TianoCore EDK II, and InsydeH2O. UEFI replaces the BIOS that was present in the boot ROM of all personal computers that are IBM PC compatible, although it can provide backwards compatibility with the BIOS using CSM booting. Unlike its predecessor, BIOS, which is a de facto standard originally created by IBM as proprietary software, UEFI is an open standard maintained by an industry consortium. Like BIOS, most UEFI implementations are proprietary. Intel developed the original Extensible Firmware Interface (EFI) specification. The last Intel version of EFI was 1.10, released in 2005. Subsequent versions have been developed as UEFI by the UEFI Forum. UEFI is independent of platform and programming language, but C is used for the reference implementation TianoCore EDK II. History The original motivation for EFI came during early development of the first Intel–HP Itanium systems in the mid-1990s. BIOS limitations (such as 16-bit real mode, 1 MB addressable memory space, assembly language programming, and PC AT hardware) had become too restrictive for the larger server platforms Itanium was targeting. The effort to address these concerns began in 1998 and was initially called Intel Boot Initiative. It was later renamed to Extensible Firmware Interface (EFI). The first open source UEFI implementation, Tiano, was released by Intel in 2004. Tiano has since then been superseded by EDK and EDK II and is now maintained by the TianoCore community. In July 2005, Intel ceased its development of the EFI specification at version 1.10, and contributed it to the Unified EFI Forum, which has developed the specification as the Unified Extensible Firmware Interface (UEFI). 
The original EFI specification remains owned by Intel, which exclusively provides licenses for EFI-based products, but the UEFI specification is owned by the UEFI Forum. Version 2.0 of the UEFI specification was released on 31 January 2006. It added cryptography and security. Version 2.1 of the UEFI specification was released on 7 January 2007. It added network authentication and the user interface architecture ('Human Interface Infrastructure' in UEFI). In October 2018, Arm announced Arm ServerReady, a compliance certification program for landing the generic off-the-shelf operating systems and hypervisors on Arm-based servers. The program requires the system firmware to comply with Server Base Boot Requirements (SBBR). SBBR requires UEFI, ACPI and SMBIOS compliance. In October 2020, Arm announced the extension of the program to the edge and IoT market. The new program name is Arm SystemReady. Arm SystemReady defined the Base Boot Requirements (BBR) specification that currently provides three recipes, two of which are related to UEFI: 1) SBBR: which requires UEFI, ACPI and SMBIOS compliance suitable for enterprise level operating environments such as Windows, Red Hat Enterprise Linux, and VMware ESXi; and 2) EBBR: which requires compliance to a set of UEFI interfaces as defined in the Embedded Base Boot Requirements (EBBR) suitable for embedded environments such as Yocto. Many Linux and BSD distros can support both recipes. In December 2018, Microsoft announced Project Mu, a fork of TianoCore EDK II used in Microsoft Surface and Hyper-V products. The project promotes the idea of firmware as a service. The latest UEFI specification, version 2.11, was published in December 2024. Advantages The interface defined by the EFI specification includes data tables that contain platform information, and boot and runtime services that are available to the OS loader and OS. 
UEFI firmware provides several technical advantages over a BIOS: Ability to boot a disk containing large partitions (over 2 TB) with a GUID Partition Table (GPT) Flexible pre-OS environment, including network capability, GUI, multi-language support 32-bit (for example IA-32, ARM32) or 64-bit (for example x64, AArch64) pre-OS environment C language programming Python programming using Python interpreter for UEFI shell Modular design Backward and forward compatibility With UEFI, it is possible to store product keys for operating systems such as Windows on the UEFI firmware of the device. UEFI is required for Secure Boot on devices shipping with Windows 8 and above. It is also possible for operating systems to access UEFI configuration data. Compatibility Processor compatibility As of version 2.5, processor bindings exist for Itanium, x86, x86-64, ARM (AArch32) and ARM64 (AArch64). Only little-endian processors can be supported. Unofficial UEFI support is under development for POWERPC64 by implementing TianoCore on top of OPAL, the OpenPOWER abstraction layer, running in little-endian mode. Similar projects exist for MIPS and RISC-V. As of UEFI 2.7, RISC-V processor bindings have been officially established for 32-, 64- and 128-bit modes. Standard PC BIOS is limited to a 16-bit processor mode and 1 MB of addressable memory space, resulting from the design based on the IBM 5150 that used a 16-bit Intel 8088 processor. In comparison, the processor mode in a UEFI environment can be either 32-bit (IA-32, AArch32) or 64-bit (x86-64, Itanium, and AArch64). 64-bit UEFI firmware implementations support long mode, which allows applications in the preboot environment to use 64-bit addressing to get direct access to all of the machine's memory. 
UEFI requires the firmware and operating system loader (or kernel) to be size-matched; that is, a 64-bit UEFI firmware implementation can load only a 64-bit operating system (OS) boot loader or kernel (unless the CSM-based legacy boot is used) and the same applies to 32-bit. After the system transitions from boot services to runtime services, the operating system kernel takes over. At this point, the kernel can change processor modes if it desires, but this bars usage of the runtime services (unless the kernel switches back again). As of version 3.15, the Linux kernel supports booting 64-bit kernels on 32-bit UEFI firmware implementations running on x86-64 CPUs, with UEFI handover support from a UEFI boot loader as the requirement. UEFI handover protocol deduplicates the UEFI initialization code between the kernel and UEFI boot loaders, leaving the initialization to be performed only by the Linux kernel's UEFI boot stub. Disk device compatibility In addition to the standard PC disk partition scheme that uses a master boot record (MBR), UEFI also works with the GUID Partition Table (GPT) partitioning scheme, which is free from many of the limitations of MBR. In particular, the MBR limits on the number and size of disk partitions (up to four primary partitions per disk, and up to 2 TB per disk) are relaxed. More specifically, GPT allows for a maximum disk and partition size of 8 ZiB. Linux Support for GPT in Linux is enabled by turning on the option CONFIG_EFI_PARTITION (EFI GUID Partition Support) during kernel configuration. This option allows Linux to recognize and use GPT disks after the system firmware passes control over the system to Linux. For reverse compatibility, Linux can use GPT disks in BIOS-based systems for both data storage and booting, as both GRUB 2 and Linux are GPT-aware. Such a setup is usually referred to as BIOS-GPT. 
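The MBR and GPT size limits quoted above follow directly from the width of the sector-address fields, assuming the traditional 512-byte logical sector:

```python
SECTOR = 512  # bytes; the traditional logical sector size

# MBR partition entries store start and length as 32-bit sector counts:
mbr_max_bytes = (2**32) * SECTOR   # 2 TiB
# GPT uses 64-bit logical block addresses (LBAs):
gpt_max_bytes = (2**64) * SECTOR   # 8 ZiB
```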
As GPT incorporates the protective MBR, a BIOS-based computer can boot from a GPT disk using a GPT-aware boot loader stored in the protective MBR's bootstrap code area. In the case of GRUB, such a configuration requires a BIOS boot partition for GRUB to embed its second-stage code due to absence of the post-MBR gap in GPT partitioned disks (which is taken over by the GPT's Primary Header and Primary Partition Table). Commonly 1 MB in size, this partition's Globally Unique Identifier (GUID) in GPT scheme is and is used by GRUB only in BIOS-GPT setups. From GRUB's perspective, no such partition type exists in case of MBR partitioning. This partition is not required if the system is UEFI-based because no embedding of the second-stage code is needed in that case. UEFI systems can access GPT disks and boot directly from them, which allows Linux to use UEFI boot methods. Booting Linux from GPT disks on UEFI systems involves creation of an EFI system partition (ESP), which contains UEFI applications such as bootloaders, operating system kernels, and utility software. Such a setup is usually referred to as UEFI-GPT, while ESP is recommended to be at least 512 MB in size and formatted with a FAT32 filesystem for maximum compatibility. For backward compatibility, some UEFI implementations also support booting from MBR-partitioned disks through the Compatibility Support Module (CSM) that provides legacy BIOS compatibility. In that case, booting Linux on UEFI systems is the same as on legacy BIOS-based systems. Microsoft Windows Some of the EFI's practices and data formats mirror those of Microsoft Windows. The 64-bit versions of Windows Vista SP1 and later and 64-bit versions of Windows 8, 8.1, 10, and 11 can boot from a GPT disk that is larger than 2 TB. Features Services EFI defines two types of services: boot services and runtime services. 
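The protective MBR mentioned above has a simple on-disk layout: a single partition entry of type 0xEE spanning the whole disk (clamped to 32 bits), so MBR-only tools see the disk as fully occupied. This sketch builds one; real partitioning tools handle CHS geometry and corner cases more carefully.

```python
import struct

def protective_mbr(total_lbas: int) -> bytes:
    """Build the 512-byte protective MBR that GPT places in sector 0 (sketch)."""
    sector = bytearray(512)
    size = min(total_lbas - 1, 0xFFFFFFFF)  # sector count, clamped to 32 bits
    entry = struct.pack(
        "<B3sB3sII",
        0x00,              # status: not bootable
        b"\x00\x02\x00",   # CHS address of first sector
        0xEE,              # partition type: GPT protective
        b"\xff\xff\xff",   # CHS address of last sector (maxed out)
        1,                 # first LBA: right after the MBR itself
        size,              # number of sectors covered
    )
    sector[446:462] = entry                 # first of the four entry slots
    sector[510:512] = b"\x55\xaa"           # boot signature
    return bytes(sector)
```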
Boot services are available only while the firmware owns the platform (i.e., before the ExitBootServices() call), and they include text and graphical consoles on various devices, and bus, block and file services. Runtime services are still accessible while the operating system is running; they include services such as date, time and NVRAM access. Graphics Output Protocol (GOP) services The Graphics Output Protocol (GOP) provides runtime services; see also Graphics features section below. The operating system is permitted to directly write to the framebuffer provided by GOP during runtime mode. UEFI Memory map services SMM services ACPI services SMBIOS services Devicetree services (for RISC processors) Variable services UEFI variables provide a way to store data, in particular non-volatile data. Some UEFI variables are shared between platform firmware and operating systems. Variable namespaces are identified by GUIDs, and variables are key/value pairs. For example, UEFI variables can be used to keep crash messages in NVRAM after a crash for the operating system to retrieve after a reboot. Time services UEFI provides time services. Time services include support for time zone and daylight saving fields, which allow the hardware real-time clock to be set to local time or UTC. On machines using a PC-AT real-time clock, by default the hardware clock still has to be set to local time for compatibility with BIOS-based Windows, unless a recent version is used and an entry in the Windows registry is set to indicate the use of UTC. Applications Beyond loading an OS, UEFI can run UEFI applications, which reside as files on the EFI system partition. They can be executed from the UEFI Shell, by the firmware's boot manager, or by other UEFI applications. UEFI applications can be developed and installed independently of the original equipment manufacturers (OEMs). 
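On Linux, these variables are exposed through efivarfs (mounted under /sys/firmware/efi/efivars) as files consisting of a 4-byte little-endian attribute mask followed by the variable's payload. A minimal parser; the sample bytes in the check are synthetic, not read from a real system.

```python
import struct

# UEFI variable attribute bits (from the UEFI specification):
EFI_VARIABLE_NON_VOLATILE = 0x1
EFI_VARIABLE_BOOTSERVICE_ACCESS = 0x2
EFI_VARIABLE_RUNTIME_ACCESS = 0x4

def parse_efivarfs(blob: bytes):
    """Split an efivarfs file into its 4-byte little-endian attribute
    mask and the variable's raw payload."""
    attrs, = struct.unpack_from("<I", blob, 0)
    return attrs, blob[4:]
```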
A type of UEFI application is an OS boot loader such as GRUB, rEFInd, Gummiboot, and Windows Boot Manager, which loads some OS files into memory and executes them. Also, an OS boot loader can provide a user interface to allow the selection of another UEFI application to run. Utilities like the UEFI Shell are also UEFI applications. Protocols EFI defines protocols as a set of software interfaces used for communication between two binary modules. All EFI drivers must provide services to others via protocols. The EFI Protocols are similar to the BIOS interrupt calls. Device drivers In addition to standard instruction set architecture-specific device drivers, EFI provides for an ISA-independent device driver stored in non-volatile memory as EFI byte code or EBC. System firmware has an interpreter for EBC images. In that sense, EBC is analogous to Open Firmware, the ISA-independent firmware used in PowerPC-based Apple Macintosh and Sun Microsystems SPARC computers, among others. Some architecture-specific (non-EFI Byte Code) EFI drivers for some device types can have interfaces for use by the OS. This allows the OS to rely on EFI for drivers to perform basic graphics and network functions before, and if, operating-system-specific drivers are loaded. In other cases, EFI drivers can be filesystem drivers that allow booting from other types of disk volumes. Examples include efifs for 37 file systems (based on GRUB2 code), used by Rufus for chain-loading NTFS ESPs. Graphics features The EFI 1.0 specification defined a UGA (Universal Graphic Adapter) protocol as a way to support graphics features. UEFI did not include UGA and replaced it with GOP (Graphics Output Protocol). UEFI 2.1 defined a "Human Interface Infrastructure" (HII) to manage user input, localized strings, fonts, and forms (in the HTML sense). These enable original equipment manufacturers (OEMs) or independent BIOS vendors (IBVs) to design graphical interfaces for pre-boot configuration. 
UEFI uses UTF-16 to encode strings by default. Most early UEFI firmware implementations were console-based. Today many UEFI firmware implementations are GUI-based. EFI system partition An EFI system partition, often abbreviated to ESP, is a data storage device partition that is used in computers adhering to the UEFI specification. Accessed by the UEFI firmware when a computer is powered up, it stores UEFI applications and the files these applications need to run, including operating system boot loaders. Supported partition table schemes include MBR and GPT, as well as El Torito volumes on optical discs. For use on ESPs, UEFI defines a specific version of the FAT file system, which is maintained as part of the UEFI specification and independently from the original FAT specification, encompassing the FAT32, FAT16 and FAT12 file systems. The ESP also provides space for a boot sector as part of the backward BIOS compatibility. Booting UEFI booting Unlike the legacy PC BIOS, UEFI does not rely on boot sectors, defining instead a boot manager as part of the UEFI specification. When a computer is powered on, the boot manager checks the boot configuration and, based on its settings, then executes the specified OS boot loader or operating system kernel (usually boot loader). The boot configuration is defined by variables stored in NVRAM, including variables that indicate the file system paths to OS loaders or OS kernels. OS boot loaders can be automatically detected by UEFI, which enables easy booting from removable devices such as USB flash drives. This automated detection relies on standardized file paths to the OS boot loader, with the path varying depending on the computer architecture. The format of the file path is defined as ; for example, the file path to the OS loader on an x86-64 system is , and on ARM64 architecture. Booting UEFI systems from GPT-partitioned disks is commonly called UEFI-GPT booting. 
Despite the fact that the UEFI specification requires MBR partition tables to be fully supported, some UEFI firmware implementations immediately switch to the BIOS-based CSM booting depending on the type of boot disk's partition table, effectively preventing UEFI booting from being performed from the EFI system partition on MBR-partitioned disks. Such a boot scheme is commonly called UEFI-MBR. It is also common for a boot manager to have a textual user interface so the user can select the desired OS (or setup utility) from a list of available boot options. CSM booting To ensure backward compatibility, UEFI firmware implementations on PC-class machines can support booting in legacy BIOS mode from MBR-partitioned disks through the Compatibility Support Module (CSM) that provides legacy BIOS compatibility. In this scenario, booting is performed in the same way as on legacy BIOS-based systems, by ignoring the partition table and relying on the content of a boot sector. BIOS-style booting from MBR-partitioned disks is commonly called BIOS-MBR, regardless of it being performed on UEFI or legacy BIOS-based systems. Furthermore, booting legacy BIOS-based systems from GPT disks is also possible, and such a boot scheme is commonly called BIOS-GPT. The Compatibility Support Module allows legacy operating systems and some legacy option ROMs that do not support UEFI to still be used. It also provides required legacy System Management Mode (SMM) functionality, called CompatibilitySmm, as an addition to features provided by the UEFI SMM. An example of such a legacy SMM functionality is providing USB legacy support for keyboard and mouse, by emulating their classic PS/2 counterparts. In November 2017, Intel announced that it planned to phase out CSM support for client platforms by 2020. 
In July 2022, Kaspersky Lab published information regarding a rootkit designed to chain-load malicious code on machines using Intel's H81 chipset and the Compatibility Support Module of affected motherboards. In August 2023, Intel announced that it planned to phase out CSM support for server platforms by 2024. Today, new computers based on Intel platforms no longer have CSM support. Network booting The UEFI specification includes support for booting over a network via the Preboot eXecution Environment (PXE). PXE booting network protocols include Internet Protocol (IPv4 and IPv6), User Datagram Protocol (UDP), Dynamic Host Configuration Protocol (DHCP), Trivial File Transfer Protocol (TFTP) and iSCSI. OS images can be remotely stored on storage area networks (SANs), with Internet Small Computer System Interface (iSCSI) and Fibre Channel over Ethernet (FCoE) as supported protocols for accessing the SANs. Version 2.5 of the UEFI specification adds support for accessing boot images over HTTP. Secure Boot The UEFI specification defines a protocol known as Secure Boot, which can secure the boot process by preventing the loading of UEFI drivers or OS boot loaders that are not signed with an acceptable digital signature. The mechanical details of how precisely these drivers are to be signed are not specified. When Secure Boot is enabled, it is initially placed in "setup" mode, which allows a public key known as the "platform key" (PK) to be written to the firmware. Once the key is written, Secure Boot enters "User" mode, where only UEFI drivers and OS boot loaders signed with the platform key can be loaded by the firmware. Additional "key exchange keys" (KEK) can be added to a database stored in memory to allow other certificates to be used, but they must still have a connection to the private portion of the platform key. Secure Boot can also be placed in "Custom" mode, where additional public keys can be added to the system that do not match the private key. 
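The PK → KEK → db authorization chain described above can be sketched as a toy model. This is illustrative only: real firmware stores X.509 certificates and hashes in authenticated variables and verifies actual cryptographic signatures, not plain key names as here.

```python
# Toy model of the Secure Boot key hierarchy (illustrative, not the real
# firmware data structures): the platform key (PK) authorizes key exchange
# keys (KEK), which in turn authorize signature-database (db) entries that
# decide which signed loaders may run.
class SecureBootPolicy:
    def __init__(self, platform_key):
        self.pk = platform_key
        self.kek = set()   # key exchange keys, enrolled under the PK
        self.db = set()    # signers whose boot loaders are allowed

    def enroll_kek(self, signer, key):
        if signer != self.pk:
            raise PermissionError("KEK updates must be signed by the PK")
        self.kek.add(key)

    def enroll_db(self, signer, key):
        if signer not in self.kek and signer != self.pk:
            raise PermissionError("db updates must be signed by a KEK or PK")
        self.db.add(key)

    def may_load(self, image_signing_key):
        return image_signing_key in self.db
```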
Secure Boot is supported by Windows 8 and 8.1, Windows Server 2012 and 2012 R2, Windows 10, Windows Server 2016, 2019, and 2022, and Windows 11, VMware vSphere 6.5, and a number of Linux distributions including Fedora (since version 18), openSUSE (since version 12.3), RHEL (since version 7), CentOS (since version 7), Debian (since version 10), Ubuntu (since version 12.04.2), Linux Mint (since version 21.3), and AlmaLinux OS (since version 8.4). FreeBSD support is in a planning stage. UEFI shell UEFI provides a shell environment, which can be used to execute other UEFI applications, including UEFI boot loaders. Apart from that, commands available in the UEFI shell can be used for obtaining various other information about the system or the firmware, including getting the memory map (memmap), modifying boot manager variables (bcfg), running partitioning programs (diskpart), loading UEFI drivers, and editing text files (edit). Source code for a UEFI shell can be downloaded from Intel's TianoCore UDK/EDK2 project. A pre-built ShellBinPkg is also available. Shell v2 works best in UEFI 2.3+ systems and is recommended over Shell v1 in those systems. Shell v1 should work in all UEFI systems. Methods used for launching the UEFI shell depend on the manufacturer and model of the system motherboard. Some of them already provide a direct option in firmware setup for launching it; in this case, a compiled x86-64 version of the shell needs to be made available as <EFI_SYSTEM_PARTITION>/SHELLX64.EFI. Some other systems have an embedded UEFI shell that can be launched by an appropriate key-press combination. For other systems, the solution is either creating an appropriate USB flash drive or manually adding (bcfg) a boot option associated with the compiled version of the shell. Commands The following is a list of commands supported by the EFI shell. 
alias, attrib, bcfg, cd, cls, comp, cp, date, dblk, dh, dmpstore, echo, Edd30, EddDebug, edit, err, guid, help, load, ls, map, mem, memmap, mkdir, mm, mode, mount, pause, pci, reset, rm, set, stall, time, type, unload, ver, vol Extensions Extensions to UEFI can be loaded from virtually any non-volatile storage device attached to the computer. For example, an original equipment manufacturer (OEM) can distribute systems with an EFI system partition on the hard drive, which would add additional functions to the standard UEFI firmware stored on the motherboard's ROM. UEFI Capsule UEFI Capsule defines a firmware-to-OS firmware update interface, marketed as modern and secure. Windows 8, Windows 8.1, Windows 10, and fwupd for Linux each support the UEFI Capsule. Hardware Like BIOS, UEFI initializes and tests system hardware components (e.g. memory training, PCIe link training, and USB link training on typical x86 systems), and then loads the boot loader from a mass storage device or through a network connection. In x86 systems, the UEFI firmware is usually stored in the NOR flash chip of the motherboard. In some ARM-based Android and Windows Phone devices, the UEFI boot loader is stored in the eMMC or eUFS flash memory. Classes UEFI machines can have one of the following classes, which were used to help ease the transition to UEFI: Class 0: Legacy BIOS Class 1: UEFI with a CSM interface and no external UEFI interface. The only UEFI interfaces are internal to the firmware. Class 2: UEFI with CSM and external UEFI interfaces, e.g. UEFI boot. Class 3: UEFI without a CSM interface and with an external UEFI interface. Class 3+: UEFI class 3 that has Secure Boot enabled. Starting from the 10th Gen Intel Core, Intel no longer provides a legacy Video BIOS for the iGPU (Intel Graphics Technology). Legacy boot with those CPUs requires a legacy Video BIOS, which can still be provided by a video card. 
Boot stages SEC – Security Phase This is the first stage of the UEFI boot, though platform-specific binary code may precede it (e.g., Intel ME, AMD PSP, CPU microcode). It consists of minimal code written in assembly language for the specific architecture. It initializes temporary memory (often CPU cache-as-RAM (CAR), or SoC on-chip SRAM) and serves as the system's software root of trust, with the option of verifying PEI before hand-off. PEI – Pre-EFI Initialization The second stage of UEFI boot consists of a dependency-aware dispatcher that loads and runs PEI modules (PEIMs) to handle early hardware initialization tasks such as main memory initialization (initializing the memory controller and DRAM) and firmware recovery operations. Additionally, it is responsible for discovering the current boot mode and handling many ACPI S3 operations. In the case of ACPI S3 resume, it is responsible for restoring many hardware registers to a pre-sleep state. PEI also uses CAR. Initialization at this stage involves creating data structures in memory and establishing default values within these structures. DXE – Driver Execution Environment This stage consists of C modules and a dependency-aware dispatcher. With main memory now available, the CPU, chipset, mainboard and other I/O devices are initialized in DXE and BDS. Initialization at this stage involves assigning EFI device paths to the hardware connected to the motherboard, and transferring configuration data to the hardware. BDS – Boot Device Select BDS is a part of the DXE. In this stage, boot devices are initialized, UEFI drivers or Option ROMs of PCI devices are executed according to system configuration, and boot options are processed. TSL – Transient System Load This is the stage between boot device selection and hand-off to the OS. At this point one may enter the UEFI shell, or execute a UEFI application such as the OS boot loader. RT – Runtime The UEFI firmware hands off to the operating system (OS) after ExitBootServices() is executed. 
A UEFI-compatible OS is now responsible for exiting boot services, triggering the firmware to unload all no-longer-needed code and data, leaving only runtime-services code/data, e.g. SMM and ACPI. A typical modern OS will prefer to use its own programs (such as kernel drivers) to control hardware devices. When a legacy OS is used, the CSM will handle this call, ensuring the system is compatible with legacy BIOS expectations. Usage Implementations Intel's implementation of EFI is the Intel Platform Innovation Framework, codenamed Tiano. Tiano runs on Intel's XScale, Itanium, IA-32 and x86-64 processors, and is proprietary software, although a portion of the code has been released under the BSD license or Eclipse Public License (EPL) as TianoCore EDK II. TianoCore can be used as a payload for coreboot. Phoenix Technologies' implementation of UEFI is branded as SecureCore Technology (SCT). American Megatrends offers its own UEFI firmware implementation known as Aptio, while Insyde Software offers InsydeH2O, and Byosoft offers ByoCore. In December 2018, Microsoft released an open source version of its TianoCore EDK2-based UEFI implementation from the Surface line, Project Mu. An implementation of the UEFI API was introduced into the Universal Boot Loader (Das U-Boot) in 2017. On the ARMv8 architecture, Linux distributions use the U-Boot UEFI implementation in conjunction with GNU GRUB for booting (e.g. SUSE Linux), and the same holds true for OpenBSD. For booting from iSCSI, iPXE can be used as a UEFI application loaded by U-Boot. Platforms Intel's first Itanium workstations and servers, released in 2000, implemented EFI 1.02. Hewlett-Packard's first Itanium 2 systems, released in 2002, implemented EFI 1.10; they were able to boot Windows, Linux, FreeBSD and HP-UX; OpenVMS added UEFI capability in June 2003. In January 2006, Apple Inc. shipped its first Intel-based Macintosh computers. 
These systems used EFI instead of Open Firmware, which had been used on its previous PowerPC-based systems. On 5 April 2006, Apple first released Boot Camp, which produces a Windows drivers disk and a non-destructive partitioning tool to allow the installation of Windows XP or Vista without requiring a reinstallation of Mac OS X (now macOS). A firmware update was also released that added BIOS compatibility to its EFI implementation. Subsequent Macintosh models shipped with the newer firmware. During 2005, more than one million Intel systems shipped with Intel's implementation of UEFI. New mobile, desktop and server products, using Intel's implementation of UEFI, started shipping in 2006. For instance, boards that use the Intel 945 chipset series use Intel's UEFI firmware implementation. Since 2005, EFI has also been implemented on non-PC architectures, such as embedded systems based on XScale cores. The EDK (EFI Developer Kit) includes an NT32 target, which allows EFI firmware and EFI applications to run within a Windows application. But no direct hardware access is allowed by EDK NT32. This means only a subset of EFI application and drivers can be executed by the EDK NT32 target. In 2008, more x86-64 systems adopted UEFI. While many of these systems still allow booting only the BIOS-based OSes via the Compatibility Support Module (CSM) (thus not appearing to the user to be UEFI-based), other systems started to allow booting UEFI-based OSes. For example, IBM x3450 server, MSI motherboards with ClickBIOS, HP EliteBook Notebook PCs. In 2009, IBM shipped System x machines (x3550 M2, x3650 M2, iDataPlex dx360 M2) and BladeCenter HS22 with UEFI capability. Dell shipped PowerEdge T610, R610, R710, M610 and M710 servers with UEFI capability. More commercially available systems are mentioned in a UEFI whitepaper. 
In 2011, major vendors (such as ASRock, Asus, Gigabyte, and MSI) launched several consumer-oriented motherboards using the Intel 6-series LGA 1155 chipset and AMD 9 Series AM3+ chipsets with UEFI. With the release of Windows 8 in October 2012, Microsoft's certification requirements now require that computers include firmware that implements the UEFI specification. Furthermore, if the computer supports the "Connected Standby" feature of Windows 8 (which allows devices to have power management comparable to smartphones, with an almost instantaneous return from standby mode), then the firmware is not permitted to contain a Compatibility Support Module (CSM). As such, systems that support Connected Standby are incapable of booting Legacy BIOS operating systems. In October 2017, Intel announced that it would remove legacy PC BIOS support from all its products by 2020, in favor of UEFI Class 3. By 2019, all computers based on Intel platforms no longer have legacy PC BIOS support. Operating systems An operating system that can be booted from a (U)EFI is called a (U)EFI-aware operating system, defined by (U)EFI specification. Here the term booted from a (U)EFI means directly booting the system using a (U)EFI operating system loader stored on any storage device. The default location for the operating system loader is <EFI_SYSTEM_PARTITION>/BOOT/BOOT<MACHINE_TYPE_SHORT_NAME>.EFI, where short name of the machine type can be IA32, X64, IA64, ARM or AA64. Some operating systems vendors may have their own boot loaders. They may also change the default boot location. The Linux kernel has been able to use EFI at boot time since early 2000s, using the elilo EFI boot loader or, more recently, EFI versions of GRUB. Grub+Linux also supports booting from a GUID partition table without UEFI. The distribution Ubuntu added support for UEFI Secure Boot as of version 12.10. 
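The default loader-path convention described above can be sketched as a small helper. The function name and the ESP mount point are illustrative; the machine type short names (IA32, X64, IA64, ARM, AA64) come from the text.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Builds the default removable-media loader path from the template
 * <EFI_SYSTEM_PARTITION>/BOOT/BOOT<MACHINE_TYPE_SHORT_NAME>.EFI.
 * The function name and ESP mount point are illustrative. */
void default_loader_path(const char *esp, const char *machine_type,
                         char *out, size_t out_len)
{
    snprintf(out, out_len, "%s/BOOT/BOOT%s.EFI", esp, machine_type);
}
```

For x86-64 firmware this yields a path ending in BOOTX64.EFI; a 64-bit ARM system would look for BOOTAA64.EFI instead.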
Furthermore, the Linux kernel can be compiled with the option to run as an EFI bootloader on its own through the EFI boot stub feature. HP-UX has used (U)EFI as its boot mechanism on IA-64 systems since 2002. OpenVMS has used EFI on IA-64 since its initial evaluation release in December 2003, and for production releases since January 2005. OpenVMS on x86-64 also uses UEFI to boot the operating system. Apple uses EFI for its line of Intel-based Macs. Mac OS X v10.4 Tiger and Mac OS X v10.5 Leopard implement EFI v1.10 in 32-bit mode even on newer 64-bit CPUs, but full support arrived with OS X v10.8 Mountain Lion. The Itanium versions of Windows 2000 (Advanced Server Limited Edition and Datacenter Server Limited Edition; based on the pre-release Windows Server 2003 codebase) implemented EFI 1.10 in 2002. Windows XP 64-bit Edition, Windows 2000 Advanced Server Limited Edition (pre-release Windows Server 2003) and Windows Server 2003 for IA-64, all of which are for the Intel Itanium family of processors, implement EFI, a requirement of the platform through the DIG64 specification. Microsoft introduced UEFI for x64 Windows operating systems with Windows Vista SP1 and Windows Server 2008; however, only UGA (Universal Graphic Adapter) 1.1 or legacy BIOS INT 10h is supported, and the Graphics Output Protocol (GOP) is not. Therefore, PCs running 64-bit versions of Windows Vista SP1, Windows Vista SP2, Windows 7, Windows Server 2008 and Windows Server 2008 R2 are compatible with UEFI Class 2. 32-bit UEFI was originally not supported, since vendors did not have any interest in producing native 32-bit UEFI firmware because of the mainstream status of 64-bit computing. Windows 8 finally introduced further optimizations for UEFI systems, including Graphics Output Protocol (GOP) support, a faster startup, 32-bit UEFI support, and Secure Boot support. Since Windows 8, UEFI firmware with the ACPI protocol has been a mandatory requirement for ARM-based Microsoft Windows operating systems. 
Microsoft began requiring UEFI to run Windows with Windows 11, with IoT Enterprise editions of Windows 11 since version 24H2 exempt from the requirement. On 5 March 2013, the FreeBSD Foundation awarded a grant to a developer seeking to add UEFI support to the FreeBSD kernel and bootloader. The changes were initially stored in a discrete branch of the FreeBSD source code, but were merged into the mainline source on 4 April 2014 (revision 264095); the changes include support in the installer as well. UEFI boot support for amd64 first appeared in FreeBSD 10.1 and for arm64 in FreeBSD 11.0. Oracle Solaris 11.1 and later support UEFI boot for x86 systems with UEFI firmware version 2.1 or later. GRUB 2 is used as the boot loader on x86. OpenBSD 5.9 introduced UEFI boot support for 64-bit x86 systems using its own custom loader, OpenBSD 6.0 extended that support to include ARMv7. illumos added basic UEFI support in October 2017. ArcaOS supports UEFI booting since the 5.1 release. ArcaOS' UEFI support emulates specific BIOS functionality which the operating system depends on (particularly interrupts INT 10H and INT 13H). With virtualization HP Integrity Virtual Machines provides UEFI boot on HP Integrity Servers. It also provides a virtualized UEFI environment for the guest UEFI-aware OSes. Intel hosts an Open Virtual Machine Firmware project on SourceForge. VMware Fusion 3 software for Mac OS X can boot Mac OS X Server virtual machines using UEFI. VMware Workstation prior to version 11 unofficially supports UEFI, but is manually enabled by editing the .vmx file. VMware Workstation version 11 and above supports UEFI, independently of whether the physical host system is UEFI-based. VMware Workstation 14 (and accordingly, Fusion 10) adds support for the Secure Boot feature of UEFI. The VMware ESXi 5.0 hypervisor officially supports UEFI. Version 6.5 adds support for Secure Boot. 
VirtualBox has implemented UEFI since 3.1, but it is limited to Unix/Linux operating systems and Windows 8 and later (it does not work with Windows Vista x64 and Windows 7 x64). QEMU/KVM can be used with the Open Virtual Machine Firmware (OVMF) provided by TianoCore. The second generation of the Microsoft Hyper-V virtual machine supports virtualized UEFI. Google Cloud Platform Shielded VMs support virtualized UEFI to enable Secure Boot. Applications development The EDK2 Application Development Kit (EADK) makes it possible to use standard C library functions in UEFI applications. EADK can be freely downloaded from Intel's TianoCore UDK / EDK2 SourceForge project. As an example, a port of the Python interpreter is made available as a UEFI application by using the EADK. The development has moved to GitHub since UDK2015. A minimalistic "hello, world" C program written using EADK looks similar to its usual C counterpart:

#include <Uefi.h>
#include <Library/UefiLib.h>
#include <Library/ShellCEntryLib.h>

EFI_STATUS
EFIAPI
ShellAppMain(IN UINTN Argc, IN CHAR16 **Argv)
{
    Print(L"hello, world\n");
    return EFI_SUCCESS;
}

Criticism Numerous digital rights activists have protested against UEFI. Ronald G. Minnich, a co-author of coreboot, and Cory Doctorow, a digital rights activist, have criticized UEFI as an attempt to remove the ability of the user to truly control the computer. It does not solve the BIOS's long-standing problem of requiring two different drivers—one for the firmware and one for the operating system—for most hardware. The open-source project TianoCore also provides UEFIs. TianoCore lacks the specialized drivers that initialize chipset functions, which are instead provided by coreboot, of which TianoCore is one of many payload options. The development of coreboot requires cooperation from chipset manufacturers to provide the specifications needed to develop initialization drivers. 
Secure Boot In 2011, Microsoft announced that computers certified to run its Windows 8 operating system had to ship with Microsoft's public key enrolled and Secure Boot enabled, which implies that using UEFI is a requirement for these devices. Following the announcement, the company was accused by critics and free software/open source advocates (including the Free Software Foundation) of trying to use the Secure Boot functionality of UEFI to hinder or outright prevent the installation of alternative operating systems such as Linux. Microsoft denied that the Secure Boot requirement was intended to serve as a form of lock-in, and clarified its requirements by stating that x86-based systems certified for Windows 8 must allow Secure Boot to enter custom mode or be disabled, but not on systems using the ARM architecture. Windows 10 allows OEMs to decide whether or not Secure Boot can be managed by users of their x86 systems. Other developers raised concerns about the legal and practical issues of implementing support for Secure Boot on Linux systems in general. Former Red Hat developer Matthew Garrett noted that conditions in the GNU General Public License version 3 may prevent the use of the GNU GRand Unified Bootloader without a distribution's developer disclosing the private key (however, the Free Software Foundation has since clarified its position, assuring that the responsibility to make keys available was held by the hardware manufacturer), and that it would also be difficult for advanced users to build custom kernels that could function with Secure Boot enabled without self-signing them. Other developers suggested that signed builds of Linux with another key could be provided, but noted that it would be difficult to persuade OEMs to ship their computers with the required key alongside the Microsoft key. Several major Linux distributions have developed different implementations for Secure Boot. 
Garrett himself developed a minimal bootloader known as a shim, which is a precompiled, signed bootloader that allows the user to individually trust keys provided by Linux distributions. Ubuntu 12.10 uses an older version of shim pre-configured for use with Canonical's own key that verifies only the bootloader and allows unsigned kernels to be loaded; developers believed that the practice of signing only the bootloader is more feasible, since a trusted kernel is effective at securing only the user space, and not the pre-boot state for which Secure Boot is designed to add protection. That also allows users to build their own kernels and use custom kernel modules as well, without the need to reconfigure the system. Canonical also maintains its own private key to sign installations of Ubuntu pre-loaded on certified OEM computers that run the operating system, and also plans to enforce a Secure Boot requirement as well, requiring both a Canonical key and a Microsoft key (for compatibility reasons) to be included in their firmware. Fedora also uses shim, but requires that both the kernel and its modules be signed as well. shim has a Machine Owner Key (MOK) facility that can be used to sign locally compiled kernels and other software not signed by the distribution maintainer. It has been disputed whether the operating system kernel and its modules must be signed as well; while the UEFI specifications do not require it, Microsoft has asserted that their contractual requirements do, and that it reserves the right to revoke any certificates used to sign code that can be used to compromise the security of the system. In Windows, if Secure Boot is enabled, all kernel drivers must be digitally signed; non-WHQL drivers may be refused loading. In February 2013, another Red Hat developer attempted to submit a patch to the Linux kernel that would allow it to parse Microsoft's Authenticode signing using a master X.509 key embedded in PE files signed by Microsoft. 
However, the proposal was criticized by Linux creator Linus Torvalds, who attacked Red Hat for supporting Microsoft's control over the Secure Boot infrastructure. On 26 March 2013, the Spanish free software development group Hispalinux filed a formal complaint with the European Commission, contending that Microsoft's Secure Boot requirements on OEM systems were "obstructive" and anti-competitive. At the Black Hat conference in August 2013, a group of security researchers presented a series of exploits in specific vendor implementations of UEFI that could be used to exploit Secure Boot. In August 2016 it was reported that two security researchers had found the "golden key" security key Microsoft uses in signing operating systems. Technically, no key was exposed; however, an exploitable binary signed by the key was. This allows any software to run as though it was genuinely signed by Microsoft and exposes the possibility of rootkit and bootkit attacks. It also makes patching the fault impossible, since any patch can be replaced (downgraded) by the (signed) exploitable binary. Microsoft responded in a statement that the vulnerability only exists in the ARM architecture and Windows RT devices, and has released two patches; however, the patches do not (and cannot) remove the vulnerability, which would require key replacements in end-user firmware to fix. On March 1, 2023, researchers from the cybersecurity firm ESET reported "the first in-the-wild UEFI bootkit bypassing UEFI Secure Boot", named BlackLotus, in their published analysis, describing how its mechanics exploit the fact that the patches "do not (and cannot) remove the vulnerability". In August 2024, Windows 11 and Windows 10 security updates applied Secure Boot Advanced Targeting (SBAT) settings to devices' UEFI NVRAM, which caused some Linux distributions to fail to load. 
SBAT is a protocol supported in newer versions of the Windows Boot Manager and shim, which refuses to load buggy or vulnerable intermediate bootloaders (usually older versions of the Windows Boot Manager and GRUB) during the boot process. Many Linux distributions support UEFI Secure Boot, such as RHEL (RHEL 7 and later), CentOS (CentOS 7 and later), Ubuntu, Fedora, Debian (Debian 10 and later), openSUSE, and SUSE Linux Enterprise. Firmware problems The increased prominence of UEFI firmware in devices has also led to a number of technical problems blamed on their respective implementations. Following the release of Windows 8 in late 2012, it was discovered that certain Lenovo computer models with Secure Boot had firmware that was hardcoded to allow only executables named "Windows Boot Manager" or "Red Hat Enterprise Linux" to load, regardless of any other setting. Other problems were encountered by several Toshiba laptop models with Secure Boot that were missing certain certificates required for its proper operation. In January 2013, a bug surrounding the UEFI implementation on some Samsung laptops was publicized, which caused them to be bricked after installing a Linux distribution in UEFI mode. While potential conflicts with a kernel module designed to access system features on Samsung laptops were initially blamed (also prompting kernel maintainers to disable the module on UEFI systems as a safety measure), Matthew Garrett discovered that the bug was actually triggered by storing too many UEFI variables to memory, and that it could also be triggered under Windows under certain conditions. He determined that the offending kernel module had caused kernel message dumps to be written to the firmware, thus triggering the bug.
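The SBAT revocation idea described earlier in this section amounts to a per-component generation check. The sketch below is a simplification under stated assumptions: all names are illustrative, and in reality the SBAT metadata is CSV embedded in a .sbat section of the signed binary, with the revocation policy held in a UEFI variable.

```c
#include <assert.h>
#include <string.h>

/* A simplified sketch of the SBAT generation check: each signed
 * component carries a generation number, and the revocation policy
 * records the lowest generation still considered safe per component.
 * Names are illustrative. */
typedef struct {
    const char *component;  /* e.g. "grub" */
    int generation;
} sbat_entry;

/* Returns 1 if the image may load under the policy, 0 otherwise.
 * Components absent from the policy are allowed. */
int sbat_allows(const sbat_entry *image,
                const sbat_entry *policy, int policy_len)
{
    for (int i = 0; i < policy_len; i++) {
        if (strcmp(image->component, policy[i].component) == 0)
            return image->generation >= policy[i].generation;
    }
    return 1;
}
```

Bumping a component's minimum generation in the policy revokes every older signed build at once, without revoking the signing certificate itself.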
Bacillus subtilis
Bacillus subtilis, also known as the hay bacillus or grass bacillus, is a gram-positive, catalase-positive bacterium, found in soil and the gastrointestinal tract of ruminants, humans and marine sponges. As a member of the genus Bacillus, B. subtilis is rod-shaped, and can form a tough, protective endospore, allowing it to tolerate extreme environmental conditions. B. subtilis has historically been classified as an obligate aerobe, though evidence exists that it is a facultative anaerobe. B. subtilis is considered the best-studied Gram-positive bacterium and a model organism for studying bacterial chromosome replication and cell differentiation. It is one of the bacterial champions in secreted enzyme production and is used on an industrial scale by biotechnology companies. Description Bacillus subtilis is a Gram-positive bacterium, rod-shaped and catalase-positive. It was originally named Vibrio subtilis by Christian Gottfried Ehrenberg, and renamed Bacillus subtilis by Ferdinand Cohn in 1872 (subtilis being the Latin for "fine, thin, slender"). B. subtilis cells are typically rod-shaped, and are about 4–10 micrometers (μm) long and 0.25–1.0 μm in diameter, with a cell volume of about 4.6 fL at stationary phase. As with other members of the genus Bacillus, it can form an endospore to survive extreme environmental conditions of temperature and desiccation. B. subtilis is a facultative anaerobe and had been considered an obligate aerobe until 1998. B. subtilis is heavily flagellated, which gives it the ability to move quickly in liquids. B. subtilis has proven highly amenable to genetic manipulation, and has become widely adopted as a model organism for laboratory studies, especially of sporulation, which is a simplified example of cellular differentiation. In terms of popularity as a laboratory model organism, B. subtilis is often considered the Gram-positive equivalent of Escherichia coli, an extensively studied Gram-negative bacterium. 
Characteristics Colony, morphological, physiological, and biochemical characteristics of Bacillus subtilis are shown in the table below. Note: + = positive, – = negative Habitat This species is commonly found in the upper layers of the soil, and B. subtilis is thought to be a normal gut commensal in humans. A 2009 study compared the density of spores found in soil (about 10⁶ spores per gram) to that found in human feces (about 10⁴ spores per gram). The number of spores found in the human gut was too high to be attributed solely to consumption through food contamination. In some bee habitats, B. subtilis appears in the gut flora of honey bees. B. subtilis can also be found in marine environments. There is evidence that B. subtilis is saprophytic in nature. Studies have shown that the bacterium exhibits vegetative growth in soil rich in organic matter, and that spores were formed when nutrients were depleted. Additionally, B. subtilis has been shown to form biofilms on plant roots, which might explain why it is commonly found in gut microbiomes. Perhaps animals eating plants with B. subtilis biofilms can foster growth of the bacterium in their gastrointestinal tract. It has been shown that the entire lifecycle of B. subtilis can be completed in the gastrointestinal tract, which lends credence to the idea that the bacterium enters the gut via plant consumption and stays present as a result of its ability to grow in the gut. Reproduction Bacillus subtilis can divide symmetrically to make two daughter cells (binary fission), or asymmetrically, producing a single endospore that can remain viable for decades and is resistant to unfavourable environmental conditions such as drought, salinity, extreme pH, radiation, and solvents. The endospore is formed at times of nutritional stress and through the use of hydrolysis, allowing the organism to persist in the environment until conditions become favourable. 
Prior to the process of sporulation the cells might become motile by producing flagella, take up DNA from the environment, or produce antibiotics. These responses are viewed as attempts to seek out nutrients: seeking a more favourable environment, enabling the cell to make use of new beneficial genetic material, or simply killing off competition. Under stressful conditions, such as nutrient deprivation, B. subtilis undergoes the process of sporulation. This process has been very well studied, and B. subtilis has served as a model organism for studying sporulation. Sporulation Once B. subtilis commits to sporulation, the sigma factor sigma F is secreted. This factor promotes sporulation. A sporulation septum is formed and a chromosome is slowly moved into the forespore. When a third of one chromosome copy is in the forespore and the remaining two thirds is in the mother cell, the chromosome fragment in the forespore contains the locus for sigma F, which begins to be expressed in the forespore. In order to prevent sigma F expression in the mother cell, an anti-sigma factor, which is encoded by spoIIAB, is expressed. Any residual anti-sigma factor in the forespore (which would otherwise interfere with sporulation) is inhibited by an anti-anti-sigma factor, which is encoded by spoIIAA. SpoIIAA is located near the locus for the sigma factor, so it is consistently expressed in the forespore. Since the spoIIAB locus is not located near the sigma F and spoIIAA loci, it is expressed only in the mother cell and therefore represses sporulation in that cell, allowing sporulation to continue in the forespore. Residual spoIIAA in the mother cell represses spoIIAB, but spoIIAB is constantly replaced, so it continues to inhibit sporulation. When the full chromosome localizes to the forespore, spoIIAB can repress sigma F. Therefore, the genetic asymmetry of the B. subtilis chromosome and the expression of sigma F, spoIIAB and spoIIAA dictate spore formation in B. subtilis. 
Chromosomal replication Bacillus subtilis is a model organism used to study bacterial chromosome replication. Replication of the single circular chromosome initiates at a single locus, the origin (oriC). Replication proceeds bidirectionally and two replication forks progress in clockwise and counterclockwise directions along the chromosome. Chromosome replication is completed when the forks reach the terminus region, which is positioned opposite to the origin on the chromosome map. The terminus region contains several short DNA sequences (Ter sites) that promote replication arrest. Specific proteins mediate all the steps in DNA replication. Comparison between the proteins involved in chromosomal DNA replication in B. subtilis and in Escherichia coli reveals similarities and differences. Although the basic components promoting initiation, elongation, and termination of replication are well-conserved, some important differences can be found (such as one bacterium missing proteins essential in the other). These differences underline the diversity in the mechanisms and strategies that various bacterial species have adopted to carry out the duplication of their genomes. Genome Bacillus subtilis has about 4,100 genes. Of these, only 192 were shown to be indispensable; another 79 were predicted to be essential, as well. A vast majority of essential genes were categorized in relatively few domains of cell metabolism, with about half involved in information processing, one-fifth involved in the synthesis of cell envelope and the determination of cell shape and division, and one-tenth related to cell energetics. The complete genome sequence of B. subtilis sub-strain QB928 has 4,146,839 DNA base pairs and 4,292 genes. The QB928 strain is widely used in genetic studies due to the presence of various markers [aroI(aroK)906 purE1 dal(alrA)1 trpC2]. Several noncoding RNAs have been characterized in the B. subtilis genome in 2009, including Bsr RNAs. 
Microarray-based comparative genomic analyses have revealed that B. subtilis members show considerable genomic diversity. FsrA is a small RNA found in Bacillus subtilis. It is an effector of the iron-sparing response and acts to down-regulate iron-containing proteins in times of poor iron bioavailability. B. subtilis strain WS1A is a promising fish probiotic that possesses antimicrobial activity against Aeromonas veronii and suppressed motile Aeromonas septicemia in Labeo rohita. The de novo assembly resulted in an estimated chromosome size of 4,148,460 bp, with 4,288 open reading frames. The B. subtilis strain WS1A genome contains many potentially useful genes, such as those encoding proteins involved in the biosynthesis of riboflavin, vitamin B6, and amino acids (ilvD) and in carbon utilization (pta). Transformation Natural bacterial transformation involves the transfer of DNA from one bacterium to another through the surrounding medium. In B. subtilis the length of transferred DNA is greater than 1,271 kb (more than 1 million bases). The transferred DNA is likely double-stranded and is often more than a third of the total chromosome length of 4,215 kb. It appears that about 7–9% of the recipient cells take up an entire chromosome. In order for a recipient bacterium to bind, take up exogenous DNA from another bacterium of the same species, and recombine it into its chromosome, it must enter a special physiological state called competence. Competence in B. subtilis is induced toward the end of logarithmic growth, especially under conditions of amino-acid limitation. Under these stressful conditions of semistarvation, cells typically have just one copy of their chromosome and likely have increased DNA damage. To test whether transformation is an adaptive function for B. subtilis to repair its DNA damage, experiments were conducted using UV light as the damaging agent. 
These experiments led to the conclusion that competence, with uptake of DNA, is specifically induced by DNA-damaging conditions, and that transformation functions as a process for recombinational repair of DNA damage. While the natural competent state is common among laboratory B. subtilis strains and field isolates, some industrially relevant strains, e.g. B. subtilis (natto), are resistant to DNA uptake due to the presence of restriction-modification systems that degrade exogenous DNA. B. subtilis (natto) mutants, which are defective in a type I restriction-modification system endonuclease, are able to act as recipients of conjugative plasmids in mating experiments, paving the way for further genetic engineering of this particular B. subtilis strain. By adopting the green-chemistry principle of using less hazardous materials while saving cost, researchers have mimicked nature's methods of synthesizing chemicals useful to the food and drug industries by "piggybacking" molecules on short strands of DNA before the strands are zipped together through complementary base pairing. Each strand carries a particular molecule of interest; when the two complementary strands pair up like a zipper, the attached molecules are brought into proximity and react with one another in a controlled, isolated reaction. By applying this method with bacteria that naturally replicate in a multi-step fashion, researchers can direct the added molecules to interact with enzymes and other molecules used for a secondary reaction, treating the DNA duplex like a capsule, in a way similar to how the bacterium carries out its own DNA replication. Uses 20th century Cultures of B. 
subtilis were popular worldwide, before the introduction of antibiotics, as an immunostimulatory agent to aid treatment of gastrointestinal and urinary tract diseases. It was used throughout the 1950s as an alternative medicine, which upon digestion was found to significantly stimulate broad-spectrum immune activity, including activation of the secretion of the specific antibodies IgM, IgG, and IgA and release of CpG dinucleotides, inducing IFN-α/IFN-γ-producing activity of leukocytes and cytokines important in the development of cytotoxicity towards tumor cells. It was marketed throughout America and Europe from 1946 as an immunostimulatory aid in the treatment of gut and urinary tract diseases such as rotavirus infection and shigellosis. In 1966, the U.S. Army dumped Bacillus subtilis onto the grates of New York City subway stations for five days in order to observe how a biological agent dispensed around the subway trains would disperse and potentially affect unsuspecting passengers. Due to its ability to survive, it is thought to still be present there. The antibiotic bacitracin was first isolated in 1945 from a variety of Bacillus licheniformis named "Tracy I", then considered part of the B. subtilis species. It is still commercially manufactured by growing the variety in a container of liquid growth medium. Over time, the bacteria synthesize bacitracin and secrete the antibiotic into the medium. The bacitracin is then extracted from the medium using chemical processes. Since the 1960s, B. subtilis has had a history as a test species in spaceflight experimentation. Its endospores can survive up to 6 years in space if coated by dust particles protecting them from solar UV rays. It has been used as an extremophile survival indicator in outer space, in missions such as the Exobiology Radiation Assembly, EXOSTACK, and EXPOSE orbital missions. Wild-type natural isolates of B. 
subtilis are difficult to work with compared to laboratory strains that have undergone domestication processes of mutagenesis and selection. These strains often have improved capabilities of transformation (uptake and integration of environmental DNA) and growth, along with the loss of abilities needed "in the wild". While dozens of different strains fitting this description exist, the strain designated '168' is the most widely used. Strain 168 is a tryptophan auxotroph isolated after X-ray mutagenesis of the B. subtilis Marburg strain and is widely used in research due to its high transformation efficiency. Bacillus globigii, a closely related but phylogenetically distinct species now known as Bacillus atrophaeus, was used as a biowarfare simulant during Project SHAD (aka Project 112). Subsequent genomic analysis showed that the strains used in those studies were products of deliberate enrichment for strains that exhibited abnormally high rates of sporulation. A strain of B. subtilis formerly known as Bacillus natto is used in the commercial production of the Japanese food nattō, as well as the similar Korean food cheonggukjang. 21st century As a model organism, B. subtilis is commonly used in laboratory studies directed at discovering the fundamental properties and characteristics of Gram-positive spore-forming bacteria. In particular, the basic principles and mechanisms underlying formation of the durable endospore have been deduced from studies of spore formation in B. subtilis. Its surface-binding properties play a role in safe radionuclide waste [e.g. thorium (IV) and plutonium (IV)] disposal. Due to its excellent fermentation properties, with high product yields (20 to 25 grams per litre), it is used to produce various enzymes, such as amylases and proteases. B. subtilis is used as a soil inoculant in horticulture and agriculture. It may provide some benefit to saffron growers by speeding corm growth and increasing stigma biomass yield. 
It is used as an "indicator organism" during gas sterilization procedures, to ensure a sterilization cycle has completed successfully. Specifically, B. subtilis endospores are used to verify that a cycle has reached spore-destroying conditions. B. subtilis has been found to act as a useful bioproduct fungicide that prevents the growth of Monilinia vaccinii-corymbosi, a.k.a. the mummy berry fungus, without interfering with pollination or fruit qualities. Both metabolically active and non-metabolically active B. subtilis cells have been shown to reduce gold (III) to gold (I) and gold (0) when oxygen is present. This biotic reduction plays a role in gold cycling in geological systems and could potentially be used to recover solid gold from such systems. Novel and artificial substrains Novel strains of B. subtilis that could use 4-fluorotryptophan (4FTrp) but not canonical tryptophan (Trp) for propagation were isolated. As Trp is only coded by a single codon, there is evidence that Trp can be displaced by 4FTrp in the genetic code. The experiments showed that the canonical genetic code can be mutable. Recombinant strains pBE2C1 and pBE2C1AB were used in production of polyhydroxyalkanoates (PHA), and malt waste can be used as their carbon source for lower-cost PHA production. It is used to produce hyaluronic acid, which is used in the joint-care sector in healthcare and cosmetics. Monsanto has isolated a gene from B. subtilis that expresses cold shock protein B and spliced it into their drought-tolerant corn hybrid MON 87460, which was approved for sale in the US in November 2011. A new strain has been modified to convert nectar into honey by secreting enzymes. 
Safety In other animals Bacillus subtilis was reviewed by the US FDA Center for Veterinary Medicine and found to present no safety concerns when used in direct-fed microbial products, so the Association of American Feed Control Officials has listed it as approved for use as an animal feed ingredient under Section 36.14 "Direct-fed Microorganisms". The Canadian Food Inspection Agency Animal Health and Production Feed Section has classified dehydrated Bacillus culture as an approved feed ingredient for use as a silage additive under Schedule IV, Part 2, Class 8.6, and assigned it the International Feed Ingredient number IFN 8-19-119. On the other hand, several feed additives containing viable spores of B. subtilis have been positively evaluated by the European Food Safety Authority regarding their safe use for weight gain in animal production. In humans Bacillus subtilis spores can survive the extreme heat generated during cooking. Some B. subtilis strains are responsible for causing ropiness or rope spoilage – a sticky, stringy consistency caused by bacterial production of long-chain polysaccharides – in spoiled bread dough and baked goods. For a long time, bread ropiness was attributed uniquely to the B. subtilis species on the basis of biochemical tests. Molecular assays (randomly amplified polymorphic DNA PCR assay, denaturing gradient gel electrophoresis analysis, and sequencing of the V3 region of 16S ribosomal DNA) revealed greater Bacillus species variety in ropy breads, which all seem to have positive amylase activity and high heat resistance. B. subtilis CU1 (2 × 10⁹ spores per day) was evaluated in a 16-week study in healthy subjects (per month: 10 days of probiotic administration followed by an 18-day wash-out period, repeated for a total of 4 months). B. subtilis CU1 was found to be safe and well tolerated in the subjects, without any side effects. 
Bacillus subtilis and substances derived from it have been evaluated by different authoritative bodies for their safe and beneficial use in food. In the United States, an opinion letter issued in the early 1960s by the Food and Drug Administration (FDA) designated some substances derived from microorganisms as generally recognized as safe (GRAS), including carbohydrase and protease enzymes from B. subtilis. The opinions were predicated on the use of nonpathogenic and nontoxicogenic strains of the respective organisms and on the use of current good manufacturing practices. The FDA stated that the enzymes derived from the B. subtilis strain were in common use in food prior to January 1, 1958, and that nontoxigenic and nonpathogenic strains of B. subtilis are widely available and have been safely used in a variety of food applications. This includes consumption of Japanese fermented soybeans, in the form of nattō, which is commonly consumed in Japan and contains as many as 10⁸ viable cells per gram. The fermented beans are recognized for their contribution to a healthy gut flora and vitamin K2 intake; during this long history of widespread use, nattō has not been implicated in adverse events potentially attributable to the presence of B. subtilis. The nattō product and the B. subtilis natto as its principal component are FOSHU (Foods for Specified Health Use) approved by the Japanese Ministry of Health, Labour, and Welfare as effective for preservation of health. Bacillus subtilis has been granted "Qualified Presumption of Safety" status by the European Food Safety Authority.
Biology and health sciences
Gram-positive bacteria
Plants
866698
https://en.wikipedia.org/wiki/Teal
Teal
Teal is a greenish-blue color. Its name comes from that of a bird—the Eurasian teal (Anas crecca)—which has a similarly colored stripe on its head. The word is often used colloquially to refer to shades of cyan in general. It can be created by mixing cyan into a green base, or deepened as needed with black or gray. It is also one of the first group of 16 HTML/CSS web colors. In the RGB model used to create colors on computer screens and televisions, teal is created by reducing the brightness of cyan to about one half. In North America, teal was a fad color during the 1990s, with many sports teams, among others, adopting the color for their uniforms. Etymology The first recorded use of teal as a color name in English was in 1917. The term teal (referring to a species of duck) is derived from the Middle English tele, a word akin to the Dutch taling and the Middle Low German telink. Variations Teal blue Teal blue is a medium tone of teal with more blue. The first recorded use of teal blue as a color name in English was in 1927. The source of this color is the Plochere Color System, a color system formulated in 1948 that is widely used by interior designers. Teal was subsequently a heavily used color in the 1950s and 1960s. Teal blue is also the name of a Crayola crayon color (color #113) from 1990 to 2003. Teal green Teal green is a darker shade of teal with more green. It is a variable color averaging a dark bluish green that is darker and stronger than invisible green or pine tree. Teal green is most closely related to the Crayola crayon color Deep Space Sparkle. Deep sea green Deep sea green is one of the paint colors manufactured and marketed by the American paint company Benjamin Moore. In culture Aviation TEAL was the acronym of Tasman Empire Airways Limited, the forerunner of Air New Zealand, which used teal as its signature color; it appeared not just on plane livery but also on promotional material and airline bags. 
When New Zealanders refer to "teal green", they are more likely referring to the airline color than to the bird's color. Rapid transit Teal is the official color of Kochi Metro, the rapid transit system serving the city of Kochi in India. Flags The flag of Mozambique contains a greenish-teal horizontal stripe. The teal stripe in the flag of Sri Lanka represents Sri Lankan Muslims. Business A Teal organisation is an emerging organisational paradigm. Military Armies that used feldgrau, cadet gray, and similar shades of grayish green for field uniforms in the late 19th and early 20th centuries commonly used a more saturated color for officers, often tending toward teal. The armed forces of the Netherlands used teal field uniforms up to the Second World War. Some of the modern parade uniforms of the Russian Armed Forces are also teal, though the shade is named "wave-green" in the service. Sports Teal is the jersey color of the Belfast Giants; the color was chosen to be neutral in the often heated sporting environments of Belfast. It is also worn by the Charlotte Hornets of the NBA, and the Port Adelaide Football Club in the AFL also features teal in its team colors. The Jacksonville Jaguars of the NFL use teal as one of their primary colors. The Miami Dolphins use a variation called Aqua as their primary color. The Philadelphia Eagles also use a variation called Midnight Green. Two teams in Major League Baseball use the color teal: the Seattle Mariners use a variant known as "Northwest Green" as one of their primary colors, while the Arizona Diamondbacks use teal as an alternate color. In the National Hockey League, the San Jose Sharks use a variation called Deep Pacific Teal as their primary color. The Mighty Ducks of Anaheim used a variation of teal known as Jade as a primary color until 2006, when the team was rebranded as the Anaheim Ducks; the color is still used today on the team's alternate uniform. The Penrith Panthers of the NRL used teal as a secondary color in the early 2000s. 
The Griquas of the Currie Cup use teal as their primary color (although it is officially defined as peacock blue). Foods Gummy bears are commonly teal. Social and political Teal represents the intersectionality of those who reflect on equality and social justice for all marginalized and misunderstood groups, such as women, LGBTQ+ people, people of color, the homeless, persons with mental illness (e.g., PTSD, depression), the poor, and other groups that are underrepresented and/or devalued in the US. Computing Windows 95 featured a teal-colored default wallpaper. Heroes of Might and Magic III featured a teal-colored party. Film The "orange and teal look" is a trend in 21st-century filmmaking, in which scenes are color graded to emphasize these two complementary colors. TV series Perry the Platypus, one of the main characters in the TV series Phineas and Ferb, is teal. Ash Ketchum, the main character of Pokémon, wore a dark teal T-shirt during the earlier seasons. Martha Jones, a companion of the Tenth Doctor, wore a teal T-shirt in her debut episode, "Smith and Jones". The wives of Commanders in The Handmaid's Tale wear teal. In Our Flag Means Death, when asked their favourite color, the character Jim Jimenez replies "teal". Characters in the South Korean television series Squid Game wear teal tracksuits as their game uniform. Religion The Hermit Intercessors of the Lamb, a Christian contemplative group in the state of Nebraska, wears habits with a teal scapular to symbolize intercession between heaven (blue) and earth. Originally organised as a Roman Catholic association, it was suppressed in 2010 by the Archbishop of Omaha, who directed members to cease wearing the scapular in Church activities. Politics In Australia, the color teal, and the term "teal independents", have become associated with a group of independent candidates in the 2022 Australian federal election who campaigned on a platform highlighting the importance of climate change action, tackling corruption in politics, and gender equality. 
These candidates are largely supported by Climate 200 and are often referred to by the media as 'teals' because that color is a dominant feature in some of their campaigns. Nominally, their policy platform reflected those of both the Liberals and the Greens. Art History Green pigments for paints and fabric dyes were difficult to obtain from nature in the past, so they were rarely employed in clothes or heraldic emblems. While green could have been blended from blue and yellow paints, mixing dissimilar substances was frowned upon due to suspicion of alchemy. Only during the early Renaissance did this superstitious custom fade away, and in the late eighteenth century the German-Swedish scientist Carl Wilhelm Scheele discovered new copper greens. Issue awareness Teal is the color of ovarian cancer awareness. Ovarian cancer survivors and supporters may wear teal ribbons, bracelets, T-shirts, and hats to bring public attention to the disease. Academia Teal, along with bronze, is one of the school colors of Coastal Carolina University. Nature Insects Some dragonflies are cyan or teal.
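The "half-brightness cyan" relationship described in the RGB section above can be checked with a short, illustrative Python snippet. The hex triplets are the standard CSS definitions of cyan (#00FFFF) and teal (#008080); the channel-halving rule is a simplification of the article's "about one half":

```python
# CSS 'cyan'/'aqua' is #00FFFF; CSS 'teal' is #008080.
cyan = (0x00, 0xFF, 0xFF)
teal = (0x00, 0x80, 0x80)

# Halve each RGB channel of cyan, rounding the odd value 255 up to 128.
half_cyan = tuple((c + 1) // 2 for c in cyan)

assert half_cyan == teal
print("#%02X%02X%02X" % half_cyan)  # → #008080
```

Rounding convention aside, this is why teal is often described as cyan at half brightness.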
Physical sciences
Colors
Physics
1330507
https://en.wikipedia.org/wiki/Reedfish
Reedfish
The reedfish, ropefish (more commonly used in the United States), or snakefish, Erpetoichthys calabaricus, is a species of fish in the family Polypteridae, alongside the bichirs. It is the only member of the genus Erpetoichthys. It is native to fresh and brackish waters in West and Central Africa. The reedfish possesses a pair of lungs in addition to gills, allowing it to survive in very oxygen-poor water. It is threatened by habitat loss through palm oil plantations, other agriculture, deforestation, and urban development. Description The largest confirmed reedfish museum specimen was long, and three studies where more than 2,000 wild reedfish were caught (using basket traps, meaning that only individuals longer than were retained) found none that exceeded . Although sometimes claimed to reach up to long, this is incorrect. Body elongation in fishes, such as eels, usually happens through the addition of caudal (tail) vertebrae, but in bichirs it has happened through the addition of precaudal vertebrae. Reedfish have evolved a more snakelike body by having twice as many precaudal vertebrae as the members of their sister genus Polypterus, despite having the same number of tail vertebrae. Pelvic fins are absent, and the long dorsal fin consists of a series of well-separated spines, each supporting one or several articulated rays and a membrane. The reedfish possesses a pair of lungs, enabling it to breathe atmospheric air. This allows the species to survive in water with low dissolved oxygen content and to survive for an intermediate amount of time out of water. The sexes are very similar in both median and maximum length, but females average heavier than males of a similar length, and they can be reliably separated by the shape of their anal fin. Reedfish are dark above and on the sides, with lighter orangish or yellowish underparts. Males are generally more olive-green in colour, whereas females generally are more yellowish-brown. 
Larvae have conspicuous external gills, making them resemble salamander larvae. The genus name derives from the Greek words erpeton (creeping thing) and ichthys (fish). Distribution and habitat The reedfish inhabits slow-moving or standing, fresh or brackish, relatively warm tropical water, and usually in places with reeds or other dense plant growth. It occurs in Benin, Cameroon and Nigeria, spanning the area from the Ouémé River to the Sanaga River. There are old records from the Chiloango River in DR Congo and Cabinda in Angola, but these are unconfirmed and questionable. Ecology The reedfish is nocturnal, and feeds on annelid worms, small crustaceans (such as shrimp), insects (both adults and their larvae), snails and small fish. When moving through water slowly, it tends to use its pectoral fins, changing to an eel-like form of swimming (making more use of full-body movements and the caudal fin) when moving quickly. Unlike their sister genus Polypterus, which does not leave water voluntarily, reedfish are known to explore land both in the wild and in captivity if given the opportunity, slithering along like a snake and also taking food items on land. Prey captured on land is brought back to the water. Females repeatedly deposit small batches of eggs between the anal fins of the male, where they are fertilized. The male reedfish then scatters the eggs among aquatic vegetation, where they stick to plants and substrate. Larvae hatch rapidly (after 70 hours) but remain attached to vegetation; they become independent and start to feed after ~22 days, when the egg's yolk sac has been consumed. Conservation In coastal central Africa, the species is threatened by habitat loss, driven by the development of oil palm plantations. Populations in western Africa are impacted by degradation and loss of habitat from wetland drainage for agricultural and urban developments. The reedfish is currently classified as Near Threatened by the IUCN. 
It is regarded as a good food fish and commonly caught in the local subsistence fishery. It is also regularly caught for the international aquarium fish trade. Overall, catch levels do not appear to represent a major threat to the species at present, but do need monitoring. In the aquarium Reedfish are sometimes displayed in aquaria. All aquarium fish are wild-caught; they have not yet been successfully bred in captivity. Spawning and hatching in captivity has been observed, but no hatchlings have been reported to survive to adulthood. They are inquisitive, peaceful, and have some "personality". Although nocturnal, reedfish will sometimes come out during the day. Since they have a peaceful nature, other fish may "bully" a reedfish, despite its large size, especially in competition for food or space. Some reedfish also have an inclination to stay close to the water surface, where they will be safe from other fish and will even allow most of their bodies to leave the water at times. They can be difficult to keep; they will jump and enter pumps to escape tanks and frequently die as a result, and they can be sensitive to pH swings and nitrogen chemistry. They will often consume other smaller fish when given the opportunity. Often small feeder goldfish and minnows are eaten in place of bloodworms or nightcrawlers, and other commercially available live fish food.
Biology and health sciences
Polypteriformes
Animals
1331789
https://en.wikipedia.org/wiki/Lepton%20number
Lepton number
In particle physics, lepton number (historically also called lepton charge) is a conserved quantum number representing the difference between the number of leptons and the number of antileptons in an elementary particle reaction. Lepton number is an additive quantum number, so its sum is preserved in interactions (as opposed to multiplicative quantum numbers such as parity, where the product is preserved instead). The lepton number is defined by L = n_ℓ − n_ℓ̄, where n_ℓ is the number of leptons and n_ℓ̄ is the number of antileptons. Lepton number was introduced in 1953 to explain the absence of reactions such as ν̄ + n → p + e⁻ in the Cowan–Reines neutrino experiment, which instead observed ν̄ + p → n + e⁺. This process, inverse beta decay, conserves lepton number, as the incoming antineutrino has lepton number −1, while the outgoing positron (antielectron) also has lepton number −1. Lepton flavor conservation In addition to lepton number, lepton family numbers are defined as the electron number L_e, for the electron and the electron neutrino; the muon number L_μ, for the muon and the muon neutrino; and the tau number L_τ, for the tauon and the tau neutrino. Prominent examples of lepton flavor conservation are the muon decays μ⁻ → e⁻ + ν̄_e + ν_μ and μ⁺ → e⁺ + ν_e + ν̄_μ. In these decay reactions, the creation of an electron is accompanied by the creation of an electron antineutrino, and the creation of a positron is accompanied by the creation of an electron neutrino. Likewise, a decaying negative muon results in the creation of a muon neutrino, while a decaying positive muon results in the creation of a muon antineutrino. Finally, the weak decay of a lepton into a lower-mass lepton always results in the production of a neutrino–antineutrino pair: τ⁻ → μ⁻ + ν̄_μ + ν_τ. One neutrino carries through the lepton number of the decaying heavy lepton (a tauon in this example, whose faint residue is a tau neutrino), and an antineutrino cancels the lepton number of the newly created, lighter lepton that replaced the original (in this example, a muon antineutrino with L_μ = −1 that cancels the muon's L_μ = +1). 
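The bookkeeping described above can be sketched in a few lines of Python: each particle is assigned an (L_e, L_μ) pair, antiparticles carry the opposite signs, and a reaction conserves lepton flavor when the per-family totals match on both sides. The particle table and function names here are illustrative, not a physics library:

```python
# (L_e, L_mu) for each particle; antiparticles carry the opposite signs.
LEPTON_NUMBERS = {
    "e-": (+1, 0),   "e+": (-1, 0),
    "nu_e": (+1, 0), "anti_nu_e": (-1, 0),
    "mu-": (0, +1),  "mu+": (0, -1),
    "nu_mu": (0, +1), "anti_nu_mu": (0, -1),
}

def family_totals(particles):
    """Sum electron number and muon number over a list of particles."""
    l_e = sum(LEPTON_NUMBERS[p][0] for p in particles)
    l_mu = sum(LEPTON_NUMBERS[p][1] for p in particles)
    return l_e, l_mu

# mu- -> e- + anti_nu_e + nu_mu conserves both family numbers
# (and hence the total lepton number L = L_e + L_mu).
assert family_totals(["mu-"]) == family_totals(["e-", "anti_nu_e", "nu_mu"]) == (0, 1)
```

The same check applied to a forbidden final state such as ["e-"] alone would fail, mirroring why a muon never decays to an electron without the accompanying neutrino pair.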
Violations of the lepton number conservation laws Lepton flavor is only approximately conserved, and is notably not conserved in neutrino oscillation. However, both the total lepton number and lepton flavor are still conserved in the Standard Model. Numerous searches for physics beyond the Standard Model incorporate searches for lepton number or lepton flavor violation, such as the hypothetical decay μ⁻ → e⁻ + γ. Experiments such as MEGA and SINDRUM have searched for lepton number violation in muon decays to electrons; MEG set the current branching limit of order 10⁻¹³ and plans to lower the limit to order 10⁻¹⁴ after 2016. Some theories beyond the Standard Model, such as supersymmetry, predict branching ratios of order 10⁻¹² to 10⁻¹⁴. The Mu2e experiment, in construction as of 2017, has a planned sensitivity of order 10⁻¹⁷. Because the lepton number conservation law is in fact violated by chiral anomalies, there are problems applying this symmetry universally over all energy scales. However, the quantum number B − L is commonly conserved in Grand Unified Theory models. If neutrinos turn out to be Majorana fermions, neither individual lepton numbers, nor the total lepton number L, nor B − L would be conserved, e.g. in neutrinoless double beta decay, where two neutrinos colliding head-on might actually annihilate, similar to the (never observed) collision of a neutrino and an antineutrino. Reversed signs convention Some authors prefer to use lepton numbers that match the signs of the charges of the leptons involved, following the convention in use for the sign of weak isospin and the sign of the strangeness quantum number (for quarks), both of which conventionally set the otherwise arbitrary sign of the quantum number to match the sign of the particles' electric charges. 
When following the electric-charge-sign convention, the lepton number (written with an over-bar here, to reduce confusion) of an electron, muon, tauon, and any neutrino counts as L̄ = −1, and the lepton number of the positron, antimuon, antitauon, and any antineutrino counts as L̄ = +1. When this reversed-sign convention is observed, the baryon number is left unchanged, but the difference B − L is replaced with the sum B + L̄, whose numerical value remains unchanged, since L̄ = −L and hence B + L̄ = B − L.
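The sign-flip identity above amounts to one line of arithmetic. This sketch uses toy quantum numbers (B = 1, L = 1, e.g. a proton together with an electron) purely for illustration:

```python
B, L = 1, 1   # baryon number and conventional lepton number
L_bar = -L    # reversed-sign convention: each lepton's number flips sign

# The difference B - L and the sum B + L_bar are numerically identical.
assert B - L == B + L_bar == 0
```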
Physical sciences
Quantum numbers
Physics
1333270
https://en.wikipedia.org/wiki/Cetraria%20islandica
Cetraria islandica
Cetraria islandica, also known as true Iceland lichen or Iceland moss, is an Arctic-alpine lichen whose erect or upright, leaflike habit gives it the appearance of a moss, which is likely the source of its name. Description It is often of a pale chestnut color, but varies considerably, being sometimes almost entirely grayish-white; and grows to a height of from , the branches being channeled into flattened lobes with fringed edges. Chemistry In commerce it is a light-gray, harsh, cartilaginous body, almost colorless, with a slightly bitter taste. It contains about 70% lichenin, or lichen-starch, a polymeric carbohydrate isomeric with common starch. It also yields a peculiar modification of chlorophyll (called thallochlor), as well as fumaric acid, lichenostearic acid, and cetraric acid (which gives it the bitter taste). It also contains lichesterinic and protolichesterinic acids. Distribution and habitat It grows abundantly in the mountainous regions of northern countries, and it is specially characteristic of the lava slopes and plains of the west and north of Iceland. It is found on the mountains of north Wales, northern England, Scotland, and south-west Ireland. In North America its range extends through Arctic regions, from Alaska to Newfoundland, and south in the Rocky Mountains to Colorado, and to the Appalachian Mountains of New England. Ecology Cetraria islandica is a known host to the lichenicolous fungus Lichenopeltella cetrariicola, which is known from Europe and Iceland. Uses All parts of the lichen are edible. It may be dry in winter but can be soaked. Boiling reduces the bitterness. It can be added as a thickener to milk or grains, or dried and stored. It is not in great demand, and even in Iceland it is only occasionally used to make folk medicines and in a few traditional dishes. In earlier times, it was much more widely used in breads, porridges, soups, etc. 
It forms a nutritious and easily digested amylaceous food, being used in place of starch in some preparations of hot chocolate. Cetraric acid or cetrarin, a white micro-crystalline powder with a bitter taste, is readily soluble in alcohol, and slightly soluble in water and ether. It has been recommended for medicinal use by alternative medicine sites, in doses of 2 to 4 grains (0.1 to 0.25 grams), as a bitter tonic and aperient. It is traditionally used to relieve chest ailments, irritation of the oral and pharyngeal mucous membranes and to suppress dry cough. Gallery
Biology and health sciences
Lichens
Plants
1334076
https://en.wikipedia.org/wiki/Interplate%20earthquake
Interplate earthquake
An interplate earthquake occurs at the boundary between two tectonic plates. Earthquakes of this type account for more than 90 percent of the total seismic energy released around the world. If one plate is trying to move past the other, the two will be locked until sufficient stress builds up to cause the plates to slip relative to each other. The slipping process creates an earthquake with relative displacement on either side of the fault, resulting in seismic waves which travel through the Earth and along the Earth's surface. Relative plate motion can be lateral, as along a transform fault boundary; vertical, as along a convergent boundary (i.e. subduction or thrust/reverse faulting) or a divergent boundary (i.e. rift zone or normal faulting); or oblique, with both horizontal and vertical components at the boundary. Interplate earthquakes associated with a subduction boundary are called megathrust earthquakes, which include most of the Earth's largest earthquakes. Intraplate earthquakes are often confused with interplate earthquakes, but are fundamentally different in origin, occurring within a single plate rather than between two tectonic plates on a plate boundary. The specifics of the mechanics by which they occur, as well as the intensity of the stress drop which occurs after the earthquake, also differentiate the two types of events. Intraplate earthquakes have, on average, a higher stress drop than interplate earthquakes and generally higher intensity. Mechanics Mechanically, interplate earthquakes differ from other seismic events in that they are caused by motion at the boundary between two tectonic plates. An interplate earthquake occurs when the accumulated stress at a tectonic plate boundary is released via brittle failure and displacement along the fault. There are three types of plate boundaries to consider in the context of interplate earthquakes: Transform fault: Where two plates slide laterally relative to each other.
Divergent boundary: Where two plates move apart. Convergent boundary: Where one plate moves towards, and potentially subducts beneath, another plate. Precursory tremors Scientists have determined that interplate earthquakes are sometimes preceded by an irregular occurrence of small tremors. Precursory tremors are often associated with slow slip along a plate boundary. These precursory tremors can sometimes be identified within days or weeks of an interplate earthquake event and allow researchers to anticipate interplate earthquakes and introduce strategies to mitigate damage. Differences with intraplate earthquakes Beyond the inherent mechanical differences leading to interplate earthquake events and the location of interplate earthquakes on plate boundaries, these seismic occurrences can be differentiated by other means. Intensity Interplate earthquakes differ from intraplate earthquakes in that the intensity of intraplate earthquakes exceeds that of interplate earthquakes by nearly two points. Using the Modified Mercalli Intensity scale, earthquakes are categorized descriptively on a scale from I (not felt) to XII (total destruction) based on observed effects of the seismic event. While the ground accelerations of these two types of events are similar, the resulting intensity of intraplate earthquakes is significantly greater than that of interplate earthquakes due to the greater energy release (stress drop) across intraplate faults. Stress drop Stress drop is a measure of the stress across a fault before and after an earthquake rupture. While intraplate and interplate earthquakes obey similar length-proportional scaling laws, interplate earthquakes exhibit stress drop values that are systematically smaller by a factor of 6. This suggests that the boundaries between plates are significantly weaker than the plates themselves. The reason for the measurable, systematic difference in stress drop between interplate and intraplate earthquakes is not entirely understood.
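Stress drop itself is routinely estimated from catalogued source parameters. A minimal sketch, assuming the standard Eshelby circular-crack relation Δσ = (7/16)·M0/a³ and purely hypothetical rupture dimensions, illustrates how the same seismic moment over a larger, weaker interplate fault yields the smaller stress drop described here:

```python
# Stress drop from seismic moment and rupture radius
# (Eshelby circular-crack relation: stress_drop = 7/16 * M0 / a^3).
def stress_drop(moment_nm, radius_m):
    """Return stress drop in megapascals (MPa)."""
    return (7.0 / 16.0) * moment_nm / radius_m**3 / 1e6

# Hypothetical values: same seismic moment (~Mw 6.6), but the
# interplate rupture spans a larger fault than the intraplate one.
m0 = 1e19                       # seismic moment, N*m
print(stress_drop(m0, 12_000))  # interplate, larger radius: ~2.5 MPa
print(stress_drop(m0, 6_600))   # intraplate, smaller radius: ~15 MPa
```

With these illustrative radii the two estimates differ by roughly the factor of six reported for interplate versus intraplate events.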
However, intraplate earthquake models show that stress is distributed uniformly across the fault, whereas interplate earthquakes have stress concentrated in specific areas along the boundary. Furthermore, interplate earthquakes release stress immediately, as compared to intraplate earthquakes, which release stress gradually. Effects Subduction erosion Basal erosion, the process of removal of material from the underside of the upper plate by the subducting plate, occurs at numerous, but not all, convergent margins. As the process of subduction erosion is not completely understood, a model has been proposed in which basal erosion is supplemented by cyclical interplate earthquakes. The model suggests that erosion does not occur gradually in subduction zones, but rather in brief episodes of elevated seismicity along the plate boundary. Tsunamis Earthquakes are a major factor in the creation of tsunami waves. As interplate earthquakes result in an immediate release of stress along a fault, they produce significant seismic energy and can cause seafloor uplift, generating large waves as the energy from the sudden slip along the fault is transferred to the overlying water body. However, the majority of interplate earthquakes are not intense enough to create tsunamis, with most tsunamis being caused by intraplate earthquakes or tsunami earthquakes due to their comparatively slow stress release regimes and proximity to the surface of the Earth. Major interplate earthquakes Interplate earthquakes account for over 90% of all seismic energy released worldwide. As such, their effects are widespread and interplate earthquake events are numerous. Earthquakes of magnitude higher than 5 in populated regions are considered highly dangerous and pose a direct threat to human life and property. Some of the largest, most devastating earthquakes that have occurred in the last century have been identified as interplate events.
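The dominance of a handful of great interplate events in the global energy budget is easier to appreciate given how steeply radiated energy grows with magnitude. A rough sketch using the classic Gutenberg–Richter energy relation (log10 E ≈ 1.5·M + 4.8, E in joules):

```python
# Radiated seismic energy from magnitude
# (Gutenberg-Richter relation: log10 E = 1.5*M + 4.8, E in joules).
def radiated_energy_j(magnitude):
    return 10 ** (1.5 * magnitude + 4.8)

# Each whole magnitude step releases ~31.6x more energy, so a single
# great megathrust event dwarfs thousands of moderate earthquakes.
ratio = radiated_energy_j(9.5) / radiated_energy_j(7.0)
print(round(ratio))  # -> 5623: one M 9.5 ~ thousands of M 7.0 events
```

This scaling is why the few largest megathrust earthquakes account for such a disproportionate share of the seismic energy released worldwide.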
Some areas of the world that are particularly prone to interplate earthquakes due to the presence of prominent plate boundaries include the west coast of North America (especially California and Alaska), the northeastern Mediterranean region (Greece, Italy, and Turkey in particular), Iran, New Zealand, Indonesia, India, Japan, and parts of China. Major earthquakes (magnitude ≥ 9.0) since 1900, per the USGS earthquake catalog: M 9.5 – Bio-Bio, Chile (22 May 1960); M 9.2 – Southern Alaska (28 March 1964); M 9.1 – near the east coast of Honshu, Japan (11 March 2011); M 9.1 – off the west coast of northern Sumatra (26 December 2004); M 9.0 – off the east coast of the Kamchatka Peninsula, Russia (4 November 1952).
Physical sciences
Seismology
Earth science
1334113
https://en.wikipedia.org/wiki/Body%20armor
Body armor
Body armor, personal armor (also spelled armour), armored suit (armoured) or coat of armor, among others, is armor for a person's body: protective clothing or close-fitting hands-free shields designed to absorb or deflect physical attacks. Historically used to protect military personnel, today it is also used by various types of police (riot police in particular), private security guards, or bodyguards, and occasionally ordinary citizens. Today there are two main types: regular non-plated body armor for moderate to substantial protection, and hard-plate reinforced body armor for maximum protection, such as that used by combatants. History Many factors have affected the development of personal armor throughout human history, including the economic and technological necessities of armor production. For instance, full plate armor first appeared in medieval Europe when water-powered trip hammers made the forming of plates faster and cheaper. At times the development of armor has run parallel to the development of increasingly effective weaponry on the battlefield, with armorers seeking to create better protection without sacrificing mobility. Ancient The first record of body armor in history was found on the Stele of Vultures in ancient Sumer, in present-day southern Iraq. The oldest known Western armor is the Dendra panoply, dating from the Mycenaean Era around 1400 BC. Mail, also referred to as chainmail, is made of interlocking iron rings, which may be riveted or welded shut. It is believed to have been invented by Celtic people in Europe about 500 BC: most cultures that used mail used the Celtic word or a variant, suggesting the Celts as the originators. The Romans widely adopted mail as the lorica hamata, although they also made use of lorica segmentata and lorica squamata. While no non-metallic armor from the period is known to have survived, it was likely commonplace due to its lower cost.
Eastern armor has a long history, beginning in Ancient China. In East Asian history laminated armor such as lamellar, and styles similar to the coat of plates and brigandine, were commonly used. Later cuirasses and plates were also used. In pre-Qin dynasty times, leather armor was made out of rhinoceros hide. The use of iron plate armor on the Korean peninsula was developed during the Gaya Confederacy (42–562 CE). The iron was mined and refined in the area surrounding Gimhae (Gyeongsangnam Province, South Korea). Using both vertical and triangular plate designs, the plate armor sets consisted of 27 or more individual thick curved plates, which were secured together by nails or hinges. The recovered sets include accessories such as iron arm guards, neck guards, leg guards, and horse armor/bits. These armor types disappeared from use on the Korean Peninsula after the fall of the Gaya Confederacy to the Silla Dynasty, during the Three Kingdoms of Korea period, in 562 CE. Middle Ages In European history, well-known armor types include the mail hauberk of the early medieval age, and the full steel plate harness worn by later medieval and Renaissance knights, and a few key components (breast and back plates) by heavy cavalry in several European countries until the first year of World War I (1914–1915). The Japanese armor known today as samurai armor appeared in the Heian period (794–1185). These early samurai armors are called the ō-yoroi and dō-maru. Plate Gradually, small additional plates or discs of iron were added to the mail to protect vulnerable areas. By the late 13th century, the knees were capped, and two circular discs, called besagews, were fitted to protect the underarms. A variety of methods for improving the protection provided by mail were used as armorers seemingly experimented. Hardened leather and splinted construction were used for arm and leg pieces.
The coat of plates was developed, an armor made of large plates sewn inside a textile or leather coat. Early plate armor in Italy, and elsewhere in the 13th to 15th centuries, was made of iron. Iron armor could be carburized or case hardened to give a surface of harder steel. Plate armor became cheaper than mail by the 15th century as it required much less labor, and labor had become much more expensive after the Black Death, though it did require larger furnaces to produce larger blooms. Mail continued to be used to protect those joints which could not be adequately protected by plate, such as the armpit, crook of the elbow and groin. Another advantage of plate was that a lance rest could be fitted to the breast plate. The small skull cap evolved into a bigger true helmet, the bascinet, as it was lengthened downward to protect the back of the neck and the sides of the head. Additionally, several new forms of fully enclosed helmets were introduced in the late 14th century to replace the great helm, such as the sallet and barbute and later the armet and close helm. The plate armor associated with the knights of the European Late Middle Ages became probably the most recognized style of armor in the world, remaining in use in all European countries into the early 17th century. By about 1400, the full harness of plate armor had been developed in the armories of Lombardy. Heavy cavalry dominated the battlefield for centuries in part because of their armor. In the early 15th century, small "hand cannon" first began to be used, in the Hussite Wars, in combination with Wagenburg tactics, allowing infantry to defeat armored knights on the battlefield. At the same time crossbows were made more powerful to pierce armor, and the development of the Swiss pike square formation also created substantial problems for heavy cavalry. Rather than dooming the use of body armor, the threat of small firearms intensified the use and further refinement of plate armor.
There was a 150-year period in which better and more metallurgically advanced steel armor was being used, precisely because of the danger posed by the gun. Hence, guns and cavalry in plate armor were "threat and remedy" together on the battlefield for almost 400 years. By the 15th century, Italian armor plates were almost always made of steel. In southern Germany, armorers began to harden their steel armor only in the late 15th century. They continued to harden their steel for the next century, quenching and tempering their product, which allowed fire-gilding to be combined with tempering. The quality of the metal used in armor deteriorated as armies became bigger and armor was made thicker, necessitating the breeding of larger cavalry horses. If during the 14th and 15th centuries armor seldom weighed more than , then by the late 16th century it weighed . The increasing weight and thickness of late 16th-century armor therefore gave substantial resistance. In the early years of pistols and arquebuses, black powder muzzleloading firearms were fired at a relatively low velocity (usually below ). Full suits of plate armor, or even breast plates alone, could actually stop bullets fired from a modest distance. The front breast plates were, in fact, commonly shot as a test. The impact point would often be encircled with engraving to point it out. This was called the "proof". Armor often also bore an insignia of the maker, especially if it was of good quality. Crossbow bolts or quarrels, if still used, would seldom penetrate good plate, nor would any bullet unless fired from close range. In effect, rather than making plate armor obsolete, the use of firearms stimulated the development of plate armor into its later stages. For most of that period, plate armor allowed horsemen to fight while being the targets of defending arquebusiers, without being easily killed. Full suits of armor were actually worn by generals and princely commanders until the 1710s.
Horse armor The horse was afforded protection from cavalry and infantry weapons by steel plate barding. This gave the horse protection and enhanced the visual impression of a mounted knight. Late in the era, elaborate barding was used as parade armor. Gunpowder era As gunpowder weapons greatly improved from the 16th century onward, it became cheaper and more effective to have groups of unarmored infantry with early guns than to have expensive knights mounted on horseback, which was the primary reason armor was largely discarded. Most light cavalry units discarded their armor, though some heavy cavalry units continued to use it, such as German reiters, Polish hussars, and French cuirassiers. Late modern use Metal armor remained in limited use long after its general obsolescence. Soldiers in the American Civil War (1861–1865) bought iron and steel vests from peddlers (both sides had considered but rejected them for standard issue). The effectiveness of the vests varied widely: some successfully deflected bullets and saved lives, but others were poorly made and resulted in tragedy for the soldiers. In any case the vests were abandoned by many soldiers due to their weight on long marches, as well as the stigma of cowardice attached to them by their fellow troops. At the start of World War I in 1914, thousands of French cuirassiers rode out to engage the German cavalry, who likewise used helmets and armor. By that period, the shiny armor plate was covered in dark paint and a canvas wrap covered their elaborate Napoleonic-style helmets. Their armor was meant to protect only against sabers and lances. Like the infantry soldiers, the cavalry had to beware of rifles and machine guns, but unlike the infantry they had no trench to give them some protection. Some Arditi assault troops of the Italian army wore body armor in 1916 and 1917. By the end of the war the Germans had made some 400,000 Sappenpanzer suits.
Too heavy and restrictive for infantry, most were worn by spotters, sentries, machine gunners, and other troops who stayed in one place. Modern non-metallic armor Soldiers use metal or ceramic plates in their bullet resistant vests, providing additional protection from pistol and rifle bullets. Metallic components or tightly woven fiber layers can give soft armor resistance to stab and slash attacks from combat knives and knife bayonets. Chain mail gloves continue to be used by butchers and abattoir workers to prevent cuts and wounds while cutting up carcasses. Ceramic Boron carbide is used in hard plate armor capable of defeating rifle and armor-piercing ammunition. The ceramic material is typically layered with Kevlar on one side and a nylon spall shield on the other, optimizing ballistic resistance against different projectile threats, including various calibers of shells and bullets. Boron carbide ceramics were first used in the 1960s in bulletproof vests and in the cockpit floors and pilot seats of gunships. It was used in armor plates like the SAPI series, and today in most civilian-accessible body armor. Other materials include boron suboxide, alumina, and silicon carbide, chosen for reasons ranging from protection against tungsten carbide penetrators to improved weight-to-area ratios. Ceramic body armor is made up of a hard and rigid ceramic strike face bonded to a ductile fiber composite backing layer. The projectile is shattered, turned, or eroded as it impacts the ceramic strike face, and much of its kinetic energy is consumed as it interacts with the ceramic layer; the fiber composite backing layer absorbs residual kinetic energy and catches bullet and ceramic debris (spalling). This allows such armor to defeat armor-piercing 5.56×45mm, 7.62×51mm, and 7.62×39mm bullets, among others, with little or no felt blunt trauma.
High-end ceramic armor plates typically utilize ultra-high-molecular-weight polyethylene fiber composite backing layers, whereas budget plates utilize aramid or fiberglass. Fibers DuPont Kevlar is well known as a component of some bullet resistant vests and bullet resistant face masks. The PASGT helmet and vest used by United States military forces since the early 1980s both have Kevlar as a key component, as do their replacements. Civilian applications include Kevlar-reinforced clothing for motorcycle riders to protect against abrasion injuries. Kevlar in non-woven long strand form is used inside an outer protective cover to form chaps that loggers use while operating a chainsaw. If the moving chain contacts and tears through the outer cover, the long fibers of Kevlar tangle, clog, and stop the chain from moving as they get drawn into the workings of the drive mechanism of the saw. Kevlar is also used in emergency services protective gear where high heat is involved, e.g. firefighting, and in vests for police officers, security guards, and SWAT teams. The latest Kevlar material that DuPont has developed is Kevlar XP. In comparison with "normal" Kevlar, Kevlar XP is lighter and more comfortable to wear, as quilt stitching is not required for its ballistic package. Twaron is similar to Kevlar; both belong to the aramid family of synthetic fibers. The main difference is that Twaron was first developed by Akzo in the 1970s, and first commercially produced in 1986. Today, Twaron is manufactured by Teijin Aramid. Like Kevlar, Twaron is a strong, heat-resistant synthetic fiber with many applications, used to produce materials for the military, construction, automotive, aerospace, and even sports market sectors. Examples of Twaron-made products include body armor, helmets, ballistic vests, speaker woofers, drumheads, tires, turbo hoses, wire ropes, and cables.
Another fiber used to manufacture bullet-resistant vests is Dyneema, an ultra-high-molecular-weight polyethylene. Originating in the Netherlands, Dyneema has an extremely high strength-to-weight ratio (a diameter rope of Dyneema can bear up to a load), is light enough (low density) that it can float on water, and has high energy absorption characteristics. Since the introduction of the Dyneema Force Multiplier Technology in 2013, many body armor manufacturers have switched to Dyneema for their high-end armor solutions. Protected areas Shield A shield is held in the hand or arm. Its purpose is to intercept attacks, either by stopping projectiles such as arrows or by glancing a blow to the side of the shield-user, and it can also be used offensively as a bludgeoning weapon. Shields vary greatly in size, ranging from large shields that protect the user's entire body to small shields that are mostly for use in hand-to-hand combat. Shields also vary a great deal in thickness: whereas some shields were made of thick wooden planking to protect soldiers from spears and crossbow bolts, other shields were thinner and designed mainly for glancing blows away (such as a sword blow). In prehistory, shields were made of wood, animal hide, or wicker. In antiquity and in the Middle Ages, shields were used by foot soldiers and mounted soldiers. Even after the invention of gunpowder and firearms, shields continued to be used. In the 18th century, Scottish clans continued to use small shields, and in the 19th century, some non-industrialized peoples continued to use shields. In the 20th and 21st centuries, ballistic shields are used by military and police units that specialize in anti-terrorist action, hostage rescue, and siege-breaching.
Head A combat helmet is among the oldest forms of personal protective equipment, and is known to have been worn in ancient India around 1700 BC and by the Assyrians around 900 BC, followed by the ancient Greeks and Romans, throughout the Middle Ages, and up to the modern era. Their materials and construction became more advanced as weapons became increasingly powerful. Initially constructed from leather and brass, and then bronze and iron during the Bronze and Iron Ages, they soon came to be made entirely from forged steel in many societies after about AD 950. At that time, they were purely military equipment, protecting the head from cutting blows with swords, flying arrows, and low-velocity musketry. Some late medieval helmets, like the great bascinet, rested on the shoulders and prevented the wearer from turning his head, greatly restricting mobility. During the 18th and 19th centuries, helmets were not widely used in warfare; instead, many armies used unarmored hats that offered no protection against blade or bullet. The arrival of World War I, with its trench warfare and wide use of artillery, led to mass adoption of metal helmets once again, this time with a shape that offered mobility, a low profile, and compatibility with gas masks. Today's militaries often use high-quality helmets made of ballistic materials such as Kevlar and Twaron, which have excellent bullet and fragmentation stopping power. Some helmets also have good non-ballistic protective qualities, though many do not. The two most popular ballistic helmet models are the PASGT and the MICH. The Modular Integrated Communications Helmet (MICH) has slightly reduced coverage at the sides, which accommodates tactical headsets and other communication equipment. The MICH model has standard pad suspension and a four-point chinstrap. The Personal Armor System for Ground Troops (PASGT) helmet has been in use since 1983 and has slowly been replaced by the MICH helmet.
A ballistic face mask is designed to protect the wearer from ballistic threats. Ballistic face masks are usually made of Kevlar or other bullet-resistant materials, and the inside of the mask may be padded for shock absorption, depending on the design. Due to weight restrictions, protection levels range only up to NIJ Level IIIA. Torso A ballistic vest helps absorb the impact from firearm-fired projectiles and shrapnel from explosions, and is worn on the torso. Soft vests are made from many layers of woven or laminated fibers and can be capable of protecting the wearer from small-caliber handgun and shotgun projectiles, and small fragments from explosives such as hand grenades. Metal or ceramic plates can be used with a soft vest, providing additional protection from rifle rounds, and metallic components or tightly woven fiber layers can give soft armor resistance to stab and slash attacks from a bayonet or knife. Soft vests are commonly worn by police forces, private citizens and private security guards or bodyguards, whereas hard-plate reinforced vests are mainly worn by combat soldiers, police tactical units and hostage rescue teams. A modern equivalent may combine a ballistic vest with other items of protective clothing, such as a combat helmet. Vests intended for police and military use may also include ballistic shoulder and side protection armor components, and explosive ordnance disposal technicians wear heavy armor and helmets with face visors and spine protection. Limbs Medieval armor often offered protection for all of the limbs, including metal boots for the lower legs, gauntlets for the hands and wrists, and greaves for the legs. Today, protection of limbs from bombs is provided by a bombsuit. Most modern soldiers sacrifice limb protection for mobility, since armor thick enough to stop bullets would greatly inhibit movement of the arms and legs.
Performance standards Due to the many different types of projectiles, it is often inaccurate to refer to a particular product as "bulletproof", because this suggests that it will protect against any and all projectiles. Instead, the term bullet resistant is generally preferred. Standards are regional. Around the world ammunition varies, and armor testing must reflect the threats found locally. While many standards exist, a few are widely used as models. The US National Institute of Justice ballistic and stab documents are examples of broadly accepted standards. In addition to the NIJ, the United Kingdom's Home Office Scientific Development Branch (HOSDB, formerly the Police Scientific Development Branch (PSDB)) standards are also used by a number of other countries and organizations. These "model" standards are usually adapted by other countries by following the same basic test methodologies, while changing the specific ammunition tested. NIJ Standard-0101.06 sets specific performance standards for bullet resistant vests used by law enforcement, rating vests in levels (IIA through IV) against both penetration and blunt trauma protection (backface deformation). In 2018 or 2019, NIJ was expected to introduce the new NIJ Standard-0101.07, completely replacing NIJ Standard-0101.06. The current system of using Roman numerals (II, IIIA, III, and IV) to indicate the level of threat will disappear and be replaced by a naming convention similar to the standard developed by the UK Home Office Scientific Development Branch: HG (Hand Gun) for soft armor and RF (Rifle) for hard armor. Another important change is that the test-round velocity for conditioned armor will be the same as that for new armor during testing. For example, for NIJ Standard-0101.06 Level IIIA the .44 Magnum round is currently shot at for conditioned armor and at for new armor. For NIJ Standard-0101.07, the velocity for both conditioned and new armor will be the same.
In January 2012, the NIJ introduced BA 9000, body armor quality management system requirements as a quality standard not unlike ISO 9001 (much of the standard was based on ISO 9001). In addition to the NIJ and HOSDB standards, other important standards include: the German Police's Technische Richtlinie (TR) Ballistische Schutzwesten, Draft ISO prEN ISO 14876, and Underwriters Laboratories (UL Standard 752). Textile armor is tested for both penetration resistance by bullets and for the impact energy transmitted to the wearer. The "backface signature" or transmitted impact energy is measured by shooting armor mounted in front of a backing material, typically oil-based modelling clay. The clay is used at a controlled temperature and verified for impact flow before testing. After the armor is impacted with the test bullet the vest is removed from the clay and the depth of the indentation in the clay is measured. The backface signatures allowed by different test standards can be difficult to compare, because neither the clay materials nor the test bullets are common across standards. In general the British, German and other European standards allow of backface signature, while the US-NIJ standards allow for , which can potentially cause internal injury. The allowable backface signature for this standard has been controversial from its introduction in the first NIJ test standard, and the debate as to the relative importance of penetration resistance vs. backface signature continues in the medical and testing communities. In general, a vest's textile material temporarily degrades when wet. Neutral water at room temperature does not affect para-aramid or UHMWPE, but acidic, basic and some other solutions can permanently reduce para-aramid fiber tensile strength. (As a result of this, the major test standards call for wet testing of textile armor.) Mechanisms for this wet loss of performance are not known.
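The backface-signature comparison above can be sketched as a simple pass/fail check. The limit values used here are commonly quoted figures (44 mm for the NIJ standard, roughly 25 mm for some European standards) and are treated as illustrative assumptions rather than authoritative specification values:

```python
# Compare a measured backface signature (clay indentation depth, mm)
# against a standard's allowable limit. The 44 mm NIJ figure and the
# ~25 mm figure for some European standards are commonly quoted values,
# used here as illustrative assumptions.
BFS_LIMITS_MM = {
    "NIJ-0101.06": 44.0,
    "HOSDB-like": 25.0,   # hypothetical label for a stricter European limit
}

def passes_bfs(measured_mm: float, standard: str) -> bool:
    """True if the measured indentation is within the standard's limit."""
    return measured_mm <= BFS_LIMITS_MM[standard]

print(passes_bfs(30.0, "NIJ-0101.06"))  # True: 30 mm <= 44 mm
print(passes_bfs(30.0, "HOSDB-like"))   # False: 30 mm > 25 mm
```

The example illustrates why cross-standard comparison is hard: the same measured indentation can pass one regime and fail another.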
Vests that will be tested after ISO-type water immersion tend to have heat-sealed enclosures and those that are tested under NIJ-type water spray methods tend to have water-resistant enclosures. From 2003 to 2005, a large study of the environmental degradation of Zylon armor was undertaken by the US-NIJ. This concluded that water, long-term use, and temperature exposure significantly affect tensile strength and the ballistic performance of PBO or Zylon fiber. This NIJ study on vests returned from the field demonstrated that environmental effects on Zylon resulted in ballistic failures under standard test conditions. Ballistic testing V50 and V0 Measuring the ballistic performance of armor is based on determining the kinetic energy of a bullet at impact. Because the energy of a bullet is a key factor in its penetrating capacity, velocity is used as the primary independent variable in ballistic testing. For most users the key measurement is the velocity at which no bullets will penetrate the armor. Measuring this zero penetration velocity (V0) must take into account variability in armor performance and test variability. Ballistic testing has a number of sources of variability: the armor, test backing materials, bullet, casing, powder, primer and the gun barrel, to name a few. Variability reduces the predictive power of a determination of V0. If, for example, the V0 of an armor design is measured to be with a 9 mm FMJ bullet based on 30 shots, the test is only an estimate of the real V0 of this armor. The problem is variability. If the V0 is tested again with a second group of 30 shots on the same vest design, the result will not be identical. Only a single low velocity penetrating shot is required to reduce the V0 value. The more shots made the lower the V0 will go. In terms of statistics, the zero penetration velocity is the tail end of the distribution curve. 
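The claim that "the more shots made the lower the V0 will go" follows from order statistics: the estimate tracks the slowest observed penetration, which sits ever deeper in the lower tail as the sample grows. A minimal Monte Carlo sketch, with penetration velocities drawn from a hypothetical normal distribution (the mean and spread are invented):

```python
import random

# Illustrates why the estimated V0 falls as more shots are fired: the
# estimate is the slowest penetrating shot observed, i.e. the lower tail
# of the penetration-velocity distribution. Distribution parameters are
# invented for illustration.
def estimated_v0(n_shots: int, mean: float = 436.0, sd: float = 10.0,
                 seed: int = 1) -> float:
    rng = random.Random(seed)
    penetrating = [rng.gauss(mean, sd) for _ in range(n_shots)]
    return min(penetrating)  # slowest observed penetration

few = estimated_v0(30)
many = estimated_v0(300)
print(f"30-shot estimate:  {few:.1f} m/s")
print(f"300-shot estimate: {many:.1f} m/s")  # never higher than the 30-shot value
```

With a fixed seed the 300-shot sample contains the 30-shot sample, so the larger test can only push the estimated V0 downward, mirroring the tail-of-the-distribution argument in the text.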
If the variability is known and the standard deviation can be calculated, one can rigorously set the V0 at a confidence interval. Test standards now define how many shots must be used to estimate a V0 for the armor certification. This procedure defines a confidence interval of an estimate of V0. (See "NIJ and HOSDB test methods".) V0 is difficult to measure, so a second concept has been developed in ballistic testing called V50. This is the velocity at which 50 percent of the shots go through and 50 percent are stopped by the armor. US military standards define a commonly used procedure for this test. The goal is to get three shots that penetrate and a second group of three shots that are stopped by the armor all within a specified velocity range. It is possible, and desirable, to have a penetration velocity lower than a stop velocity. These three stops and three penetrations can then be used to calculate a V50 velocity. In practice this measurement of V50 often requires 1–2 vest panels and 10–20 shots. A very useful concept in armor testing is the offset velocity between the V0 and V50. If this offset has been measured for an armor design, then V50 data can be used to measure and estimate changes in V0. For vest manufacturing, field evaluation and life testing both V0 and V50 are used. However, as a result of the simplicity of making V50 measurements, this method is more important for control of armor after certification. Cunniff analysis Using dimensionless analysis, Cunniff arrived at a relation connecting the V50 and the system parameters for textile-based body armors. Under the assumption that the energy of impact is dissipated in breaking the yarn, it was shown that V50 / ((σε/2ρ)·√(E/ρ))^(1/3) = f(A_d/A_p). Here, σ, ε, ρ and E are the failure stress, failure strain, density and elastic modulus of the yarn, A_d is the mass per unit area of the armor, and A_p is the mass per unit area of the projectile. Military testing After the Vietnam War, military planners developed a concept of "Casualty Reduction".
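The V50 averaging procedure described above (three penetrations and three stops within a specified velocity spread) can be sketched directly. The shot velocities and the allowed spread are invented for illustration:

```python
# Sketch of the V50 calculation described above: average the velocities
# of three penetrating shots and three stopped shots that fall within a
# narrow velocity spread. Shot data and the spread limit are invented
# example values.
def v50(penetrations, stops, max_spread=40.0):
    shots = sorted(penetrations + stops)
    if shots[-1] - shots[0] > max_spread:
        raise ValueError("shots exceed the allowed velocity spread")
    return sum(shots) / len(shots)

pens = [448.0, 452.0, 455.0]   # m/s, complete penetrations
stps = [445.0, 450.0, 458.0]   # m/s, stops (a stop can be faster than a penetration)
print(f"V50 estimate: {v50(pens, stps):.1f} m/s")
```

Note that the example deliberately includes a stop velocity above a penetration velocity, the overlap the text describes as possible and desirable near V50.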
The large body of casualty data made clear that in a combat situation, fragments, not bullets, were the greatest threat to soldiers. After World War II vests were being developed and fragment testing was in its early stages. Artillery shells, mortar shells, aerial bombs, grenades, and antipersonnel mines are fragmentation devices. They all contain a steel casing that is designed to burst into small steel fragments (shrapnel) when their explosive core detonates. After considerable effort measuring fragment size distribution from various NATO and Soviet Bloc munitions, a fragment test was developed. Fragment simulators were designed, and the most common shape is a Right Circular Cylinder or RCC simulator. This shape has a length equal to its diameter. These RCC Fragment Simulation Projectiles (FSPs) are tested as a group. The test series most often includes 2, 4, 16, and 64 grain mass RCC FSP testing. The 2-4-16-64 series is based on the measured fragment size distributions. The second part of the "Casualty Reduction" strategy is a study of velocity distributions of fragments from munitions. Warhead explosives have blast speeds of to . As a result, they are capable of ejecting fragments at speeds of over , implying very high energy (where the energy of a fragment is ½ × mass × velocity², neglecting rotational energy). The military engineering data showed that, like the fragment size, the fragment velocities had characteristic distributions. It is possible to segment the fragment output from a warhead into velocity groups. For example, 95% of all fragments from a bomb blast under have a velocity of or less. This established a set of goals for military ballistic vest design. The random nature of fragmentation required the military vest specification to trade off mass vs. ballistic-benefit.
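The kinetic energies across the 2-4-16-64 grain series can be worked out from ½mv². The common test velocity chosen below is an illustrative assumption, not a specification value:

```python
# Kinetic energy (0.5 * m * v^2) of the RCC fragment-simulating
# projectiles in the 2-4-16-64 grain series at an assumed common
# velocity. The 450 m/s figure is illustrative, not a specification.
GRAIN_TO_KG = 6.479891e-5  # 1 grain = 64.79891 mg

def fragment_energy_joules(mass_grains: float, velocity_ms: float) -> float:
    mass_kg = mass_grains * GRAIN_TO_KG
    return 0.5 * mass_kg * velocity_ms ** 2

for grains in (2, 4, 16, 64):
    e = fragment_energy_joules(grains, 450.0)  # assumed 450 m/s
    print(f"{grains:2d} gr fragment: {e:6.1f} J")
```

At a fixed velocity the energy scales linearly with mass, so the 64 gr fragment carries 32 times the energy of the 2 gr fragment, which is why the series spans such a wide protection range.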
Hard vehicle armor is capable of stopping all fragments, but military personnel can only carry a limited amount of gear and equipment, so the weight of the vest is a limiting factor in vest fragment protection. The 2-4-16-64 grain series at limited velocity can be stopped by an all-textile vest of approximately . In contrast to deformable lead bullets, fragments do not change shape; they are steel and cannot be deformed by textile materials. The FSP (the smallest fragment projectile commonly used in testing) is about the size of a grain of rice; such small, fast-moving fragments can potentially slip through the vest, moving between yarns. As a result, fabrics optimized for fragment protection are tightly woven, although these fabrics are not as effective at stopping lead bullets. By the 2010s, the development of body armor had stalled with regard to weight: designers had trouble increasing protective capability while maintaining or reducing weight.
Technology
Armour
null
1334312
https://en.wikipedia.org/wiki/Megathrust%20earthquake
Megathrust earthquake
Megathrust earthquakes occur at convergent plate boundaries, where one tectonic plate is forced underneath another. The earthquakes are caused by slip along the thrust fault that forms the contact between the two plates. These interplate earthquakes are the planet's most powerful, with moment magnitudes (Mw) that can exceed 9.0. Since 1900, all earthquakes of magnitude 9.0 or greater have been megathrust earthquakes. The thrust faults responsible for megathrust earthquakes often lie at the bottom of oceanic trenches; in such cases, the earthquakes can abruptly displace the sea floor over a large area. As a result, megathrust earthquakes often generate tsunamis that are considerably more destructive than the earthquakes themselves. Teletsunamis can cross ocean basins to devastate areas far from the original earthquake. Terminology and mechanism The term megathrust refers to an extremely large thrust fault, typically formed at the plate interface along a subduction zone, such as the Sunda megathrust. However, the term is also occasionally applied to large thrust faults in continental collision zones, such as the Himalayan megathrust. A megathrust fault can be long. A thrust fault is a type of reverse fault, in which the rock above the fault is displaced upwards relative to the rock below the fault. This distinguishes reverse faults from normal faults, where the rock above the fault is displaced downwards, or strike-slip faults, where the rock on one side of the fault is displaced horizontally with respect to the other side. Thrust faults are distinguished from other reverse faults because they dip at a relatively shallow angle, typically less than 45°, and show large displacements. In effect, the rocks above the fault have been thrust over the rocks below the fault. Thrust faults are characteristic of areas where the Earth's crust is being compressed by tectonic forces. Megathrust faults occur where two tectonic plates collide. 
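The moment magnitudes (Mw) mentioned above are logarithmic in the seismic moment M0. A minimal sketch using the standard relation Mw = (2/3)(log10 M0 − 9.1), with M0 in newton-metres; the example moments are approximate published values for Tōhoku- and Valdivia-scale events, quoted only for illustration:

```python
import math

# Moment magnitude Mw from seismic moment M0 (N*m), via the standard
# relation Mw = (2/3) * (log10(M0) - 9.1). The example moments are
# approximate values for illustration.
def moment_magnitude(m0_newton_meters: float) -> float:
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

print(round(moment_magnitude(4e22), 1))    # ~9.0, a Tohoku-scale moment
print(round(moment_magnitude(2.5e23), 1))  # ~9.5, a Valdivia-scale moment
```

Because the scale is logarithmic, each whole-magnitude step corresponds to about a 32-fold increase in seismic moment, which is why magnitude-9 megathrust events so dominate the global energy release.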
When one of the plates is composed of oceanic lithosphere, it dives beneath the other plate (called the overriding plate) and sinks into the Earth's mantle as a slab. The contact between the colliding plates is the megathrust fault, where the rock of the overriding plate is displaced upwards relative to the rock of the descending slab. Friction along the megathrust fault can lock the plates together, and the subduction forces then build up strain in the two plates. A megathrust earthquake takes place when the fault ruptures, allowing the plates to abruptly move past each other to release the accumulated strain energy. Occurrence and characteristics Megathrust earthquakes are almost exclusive to tectonic subduction zones and are often associated with the Pacific and Indian Oceans. These subduction zones are also largely responsible for the volcanic activity associated with the Pacific Ring of Fire. Since these earthquakes deform the ocean floor, they often generate strong tsunami waves. Subduction zone earthquakes are also known to produce intense shaking and ground movements that can last for up to 3–5 minutes. In the Indian Ocean region, the Sunda megathrust is located where the Indo-Australian plate subducts under the Eurasian plate along a fault off the coasts of Myanmar, Sumatra, Java and Bali, terminating off the northwestern coast of Australia. This subduction zone was responsible for the 2004 Indian Ocean earthquake and tsunami. In the part of the megathrust south of Java, referred to as the Java Trench, a magnitude 8.9 earthquake is possible in the western segment and a magnitude 8.8 in the eastern segment; if both segments were to rupture at the same time, the magnitude would be 9.1. In the South China Sea lies the Manila Trench, which is capable of producing 9.0 or larger earthquakes, with the maximum magnitude at Mw 9.2 or higher. In Japan, the Nankai megathrust under the Nankai Trough is responsible for Nankai megathrust earthquakes and associated tsunamis.
The largest megathrust event within the last 20 years was the magnitude 9.0–9.1 Tōhoku earthquake along the Japan Trench megathrust. In North America, the Juan de Fuca plate subducts under the North American plate, creating the Cascadia subduction zone from mid Vancouver Island, British Columbia down to Northern California. This subduction zone was responsible for the 1700 Cascadia earthquake. The Aleutian Trench, off the southern coast of Alaska and the Aleutian Islands, where the North American plate overrides the Pacific plate, has generated many major earthquakes throughout history, several of which generated Pacific-wide tsunamis, including the 1964 Alaska earthquake; at magnitude 9.1–9.2, it remains the largest recorded earthquake in North America, and the third-largest earthquake instrumentally recorded in the world. In the Himalayan region, where the Indian plate subducts under the Eurasian plate, the largest recorded earthquake was the 1950 Assam–Tibet earthquake, at magnitude 8.7. Earthquakes of magnitude 9.0 or larger are estimated to occur there at intervals of roughly 800 years; the theoretical upper bound would be a magnitude 10, though this is not considered physically possible. Therefore, the largest possible earthquake in the region is a magnitude 9.7, assuming a single rupture of the whole Himalayan arc and a standard scaling law, which implies an average slip of 50 m. A megathrust earthquake could occur in the Lesser Antilles subduction zone, with a maximum magnitude of 9.3, or potentially even 10.3 through recent evaluations, a value not considered impossible. The largest recorded megathrust earthquake was the 1960 Valdivia earthquake, estimated between magnitudes 9.4–9.6, centered off the coast of Chile along the Peru-Chile Trench, where the Nazca plate subducts under the South American plate. This megathrust region has regularly generated extremely large earthquakes.
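The magnitude-9.7 Himalayan scenario above can be checked with the standard scaling M0 = μ·A·D (rigidity × rupture area × average slip). Only the 50 m average slip comes from the text; the rigidity and rupture dimensions below are assumed round numbers chosen for illustration:

```python
import math

# Checks the magnitude-9.7 whole-arc scenario from the standard scaling
# M0 = mu * A * D, then Mw = (2/3) * (log10(M0) - 9.1). The rigidity and
# rupture dimensions are assumed round numbers; only the 50 m average
# slip is taken from the text.
def mw_from_rupture(mu_pa: float, area_m2: float, slip_m: float) -> float:
    m0 = mu_pa * area_m2 * slip_m  # seismic moment in N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

mu = 3.0e10             # Pa, typical crustal rigidity (assumed)
area = 2000e3 * 150e3   # m^2: ~2000 km arc length x ~150 km width (assumed)
mw = mw_from_rupture(mu, area, 50.0)
print(f"Mw ~ {mw:.1f}")
```

With these round figures the scaling indeed lands near Mw 9.7, consistent with the whole-arc rupture estimate quoted above.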
The largest possible earthquakes are estimated at magnitudes of 10 to 11, most likely caused by a combined rupture of the Japan Trench and Kuril–Kamchatka Trench, or individually the Aleutian Trench or Peru–Chile Trench. Another possible area could be the Lesser Antilles subduction zone. A study reported in 2016 found that the largest megathrust quakes are associated with downgoing slabs with the shallowest dip, so-called flat slab subduction. Compared with other earthquakes of similar magnitude, megathrust earthquakes have a longer duration and slower rupture velocities. The largest megathrust earthquakes occur in subduction zones with thick sediments, which may allow a fault rupture to propagate for great distances unimpeded.
Physical sciences
Seismology
Earth science
26998458
https://en.wikipedia.org/wiki/Manure
Manure
Manure is organic matter that is used as organic fertilizer in agriculture. Most manure consists of animal feces; other sources include compost and green manure. Manures contribute to the fertility of soil by adding organic matter and nutrients, such as nitrogen, that are utilised by bacteria, fungi and other organisms in the soil. Higher organisms then feed on the fungi and bacteria in a chain of life that comprises the soil food web. Types In the 21st century, there are three main classes of manures used in soil management: Animal manure Most animal manure consists of feces. Common forms of animal manure include farmyard manure and farm slurry (liquid manure). Farmyard manure also contains plant material (often straw), which has been used as bedding for animals and has absorbed the feces and urine. Agricultural manure in liquid form, known as slurry, is produced by more intensive livestock rearing systems where concrete or slats are used instead of straw bedding. Manure from different animals has different qualities and requires different application rates when used as fertilizer. For example, the manures of horses, cattle, pigs, sheep, chickens, turkeys and rabbits, and the guano of seabirds and bats, all have different properties. For instance, sheep manure is high in nitrogen and potash, while pig manure is relatively low in both. Horses mainly eat grass and a few weeds, so horse manure can contain grass and weed seeds, as horses do not digest seeds as cattle do. Cattle manure is a good source of nitrogen as well as organic carbon. Chicken litter, coming from a bird, is very concentrated in nitrogen and phosphate and is prized for both properties. Animal manures may be adulterated or contaminated with other animal products, such as wool (shoddy and other hair), feathers, blood, and bone. Livestock feed can be mixed with the manure due to spillage. For example, chickens are often fed meat and bone meal, an animal product, which can end up becoming mixed with chicken litter.
Compost Compost is the decomposed remnants of organic materials. It is usually of plant origin, but often includes some animal dung or bedding. Green manure Green manures are crops grown for the express purpose of plowing them in, thus increasing fertility through the incorporation of nutrients and organic matter into the soil. Leguminous plants such as clover are often used for this, as they fix nitrogen using Rhizobia bacteria in specialized nodes in the root structure. Other types of plant matter used as manure include the contents of the rumens of slaughtered ruminants, spent grain (left over from brewing beer) and seaweed. Uses Animal manure Animal manure, such as chicken manure and cow dung, has been used for centuries as a fertilizer for farming. It can improve the soil structure (aggregation) so that the soil holds more nutrients and water, and therefore becomes more fertile. Animal manure also encourages soil microbial activity which promotes the soil's trace mineral supply, improving plant nutrition. It also contains some nitrogen and other nutrients that assist the growth of plants. Odor is an obvious and major issue with animal manure. Components in swine manure include low molecular weight carboxylic acids, acetic, propionic, butyric, and valeric acids. Other components include skatole and trimethyl amine. Animal manures with a particularly unpleasant odor (such as slurries from intensive pig farming) are usually knifed (injected) directly into the soil to reduce release of the odor. Manure from pigs and cattle is usually spread on fields using a manure spreader. Due to the relatively lower level of proteins in vegetable matter, herbivore manure has a milder smell than the dung of carnivores or omnivores. However, herbivore slurry that has undergone anaerobic fermentation may develop more unpleasant odors, and this can be a problem in some agricultural regions. 
Poultry droppings are harmful to plants when fresh, but after a period of composting are valuable fertilizers. Manure is also commercially composted and bagged and sold as a soil amendment. In 2018, Austrian scientists offered a method of paper production from elephant and cow manure. Dry animal dung is used as a fuel in many countries around the world. Issues Any quantity of animal manure may be a source of pathogens or food spoilage organisms which may be carried by flies, rodents or a range of other vector organisms and cause disease or put food safety at risk. In intensive agricultural land use, animal manure is often not applied in as targeted a manner as mineral fertilizers, and thus nitrogen utilization efficiency is poor. Animal manure can become a problem when used excessively in areas of intensive agriculture with high numbers of livestock and too little available farmland. The greenhouse gas nitrous oxide can be emitted, contributing to climate change. Livestock antibiotics In 2007, a University of Minnesota study indicated that foods such as corn, lettuce, and potatoes have been found to accumulate antibiotics from soils spread with animal manure that contains these drugs. Organic foods may be much more or much less likely to contain antibiotics, depending on their sources and treatment of manure. For instance, by Soil Association Standard 4.7.38, most organic arable farmers either have their own supply of manure (which would, therefore, not normally contain drug residues) or else rely on green manure crops for the extra fertility (if any nonorganic manure is used by organic farmers, then it usually has to be rotted or composted to degrade any residues of drugs and eliminate any pathogenic bacteria—Standard 4.7.38, Soil Association organic farming standards).
On the other hand, as found in the University of Minnesota study, the non-usage of artificial fertilizers, and resulting exclusive use of manure as fertilizer, by organic farmers can result in significantly greater accumulations of antibiotics in organic foods.
Technology
Agronomical techniques
null
26998617
https://en.wikipedia.org/wiki/Field%20%28physics%29
Field (physics)
In science, a field is a physical quantity, represented by a scalar, vector, or tensor, that has a value for each point in space and time. An example of a scalar field is a weather map, with the surface temperature described by assigning a number to each point on the map. A surface wind map, assigning an arrow to each point on a map that describes the wind speed and direction at that point, is an example of a vector field, i.e. a 1-dimensional (rank-1) tensor field. Field theories, mathematical descriptions of how field values change in space and time, are ubiquitous in physics. For instance, the electric field is another rank-1 tensor field, while electrodynamics can be formulated in terms of two interacting vector fields at each point in spacetime, or as a single rank-2 tensor field. In the modern framework of quantum field theory, even without referring to a test particle, a field occupies space, contains energy, and its presence precludes a classical "true vacuum". This has led physicists to consider electromagnetic fields to be a physical entity, making the field concept a supporting paradigm of the edifice of modern physics. Richard Feynman said, "The fact that the electromagnetic field can possess momentum and energy makes it very real, and [...] a particle makes a field, and a field acts on another particle, and the field has such familiar properties as energy content and momentum, just as particles can have." In practice, the strength of most fields diminishes with distance, eventually becoming undetectable. For instance the strength of many relevant classical fields, such as the gravitational field in Newton's theory of gravity or the electrostatic field in classical electromagnetism, is inversely proportional to the square of the distance from the source (i.e. they follow Gauss's law).
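The scalar/vector distinction drawn above is easy to make concrete: a scalar field assigns one number to each point, a vector field assigns a vector. Both example fields below are invented toy functions:

```python
import math

# Minimal illustration of the distinction drawn above: a scalar field
# assigns one number per point (a temperature map), a vector field
# assigns a vector per point (a wind map). Both fields are invented
# toy functions for illustration.
def temperature(x: float, y: float) -> float:
    """Scalar field: one value per point, like a weather map."""
    return 20.0 + 5.0 * math.sin(x) * math.cos(y)

def wind(x: float, y: float) -> tuple[float, float]:
    """Vector field: a direction and magnitude per point, like a wind map."""
    return (-y, x)  # a simple rotational flow about the origin

print(temperature(0.0, 0.0))  # 20.0
print(wind(1.0, 2.0))         # (-2.0, 1.0)
```

In the article's terms, `temperature` is a rank-0 tensor field and `wind` a rank-1 tensor field on the plane.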
A field can be classified as a scalar field, a vector field, a spinor field or a tensor field according to whether the represented physical quantity is a scalar, a vector, a spinor, or a tensor, respectively. A field has a consistent tensorial character wherever it is defined: i.e. a field cannot be a scalar field somewhere and a vector field somewhere else. For example, the Newtonian gravitational field is a vector field: specifying its value at a point in spacetime requires three numbers, the components of the gravitational field vector at that point. Moreover, within each category (scalar, vector, tensor), a field can be either a classical field or a quantum field, depending on whether it is characterized by numbers or quantum operators respectively. In this theory an equivalent representation of field is a field particle, for instance a boson. History To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects. When looking at the motion of many bodies all interacting with each other, such as the planets in the Solar System, dealing with the force between each pair of bodies separately rapidly becomes computationally inconvenient. In the eighteenth century, a new quantity was devised to simplify the bookkeeping of all these gravitational forces. This quantity, the gravitational field, gave at each point in space the total gravitational acceleration which would be felt by a small object at that point. This did not change the physics in any way: it did not matter if all the gravitational forces on an object were calculated individually and then added together, or if all the contributions were first added together as a gravitational field and then applied to an object. The development of the independent concept of a field truly began in the nineteenth century with the development of the theory of electromagnetism. 
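The "bookkeeping" point above (summing pairwise forces versus evaluating a summed field once) can be shown directly: the total force on a test mass equals the test mass times the superposed field. Bodies, masses and positions are invented example values:

```python
# Illustrates the bookkeeping point above: adding pairwise gravitational
# forces on a test mass gives the same result as evaluating the summed
# gravitational field once and multiplying by the test mass. All bodies
# and positions are invented example values.
G = 6.674e-11  # m^3 kg^-1 s^-2

def field_at(point, bodies):
    """Total gravitational field g at `point` from (mass, position) bodies."""
    gx = gy = 0.0
    for m, (bx, by) in bodies:
        dx, dy = bx - point[0], by - point[1]
        r3 = (dx * dx + dy * dy) ** 1.5
        gx += G * m * dx / r3   # field points toward each attracting body
        gy += G * m * dy / r3
    return gx, gy

bodies = [(5.0e24, (1.0e8, 0.0)), (7.0e22, (0.0, 4.0e8))]
test_mass = 1000.0
gx, gy = field_at((0.0, 0.0), bodies)
force = (test_mass * gx, test_mass * gy)  # identical to summing pairwise forces
print(force)
```

Once the field is computed, any number of test masses can reuse it, which is exactly the computational convenience that motivated the field concept in the eighteenth century.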
In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became much more natural to take the field approach and express these laws in terms of electric and magnetic fields; in 1845 Michael Faraday became the first to coin the term "magnetic field". And Lord Kelvin provided a formal definition for a field in 1851. The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields, called electromagnetic waves, propagated at a finite speed. Consequently, the forces on charges and currents no longer just depended on the positions and velocities of other charges and currents at the same time, but also on their positions and velocities in the past. Maxwell, at first, did not adopt the modern concept of a field as a fundamental quantity that could independently exist. Instead, he supposed that the electromagnetic field expressed the deformation of some underlying medium—the luminiferous aether—much like the tension in a rubber membrane. If that were the case, the observed velocity of the electromagnetic waves should depend upon the velocity of the observer with respect to the aether. Despite much effort, no experimental evidence of such an effect was ever found; the situation was resolved by the introduction of the special theory of relativity by Albert Einstein in 1905. This theory changed the way the viewpoints of moving observers were related to each other. They became related to each other in such a way that velocity of electromagnetic waves in Maxwell's theory would be the same for all observers. By doing away with the need for a background medium, this development opened the way for physicists to start thinking about fields as truly independent entities. In the late 1920s, the new rules of quantum mechanics were first applied to the electromagnetic field. 
In 1927, Paul Dirac used quantum fields to successfully explain how the decay of an atom to a lower quantum state led to the spontaneous emission of a photon, the quantum of the electromagnetic field. This was soon followed by the realization (following the work of Pascual Jordan, Eugene Wigner, Werner Heisenberg, and Wolfgang Pauli) that all particles, including electrons and protons, could be understood as the quanta of some quantum field, elevating fields to the status of the most fundamental objects in nature. That said, John Wheeler and Richard Feynman seriously considered Newton's pre-field concept of action at a distance (although they set it aside because of the ongoing utility of the field concept for research in general relativity and quantum electrodynamics). Classical fields There are several examples of classical fields. Classical field theories remain useful wherever quantum properties do not arise, and can be active areas of research. Elasticity of materials, fluid dynamics and Maxwell's equations are cases in point. Some of the simplest physical fields are vector force fields. Historically, the first time that fields were taken seriously was with Faraday's lines of force when describing the electric field. The gravitational field was then similarly described. Newtonian gravitation A classical field theory describing gravity is Newtonian gravitation, which describes the gravitational force as a mutual interaction between two masses. Any body with mass M is associated with a gravitational field g which describes its influence on other bodies with mass. The gravitational field of M at a point r in space corresponds to the ratio between force F that M exerts on a small or negligible test mass m located at r and the test mass itself: g(r) = F(r)/m. Stipulating that m is much smaller than M ensures that the presence of m has a negligible influence on the behavior of M.
According to Newton's law of universal gravitation, F(r) is given by F(r) = −(GMm/r²) r̂, where r̂ is a unit vector lying along the line joining M and m and pointing from M to m. Therefore, the gravitational field of M is g(r) = F(r)/m = −(GM/r²) r̂. The experimental observation that inertial mass and gravitational mass are equal to an unprecedented level of accuracy leads to the identity that gravitational field strength is identical to the acceleration experienced by a particle. This is the starting point of the equivalence principle, which leads to general relativity. Because the gravitational force F is conservative, the gravitational field g can be rewritten in terms of the gradient of a scalar function, the gravitational potential Φ(r): g = −∇Φ. Electromagnetism Michael Faraday first realized the importance of a field as a physical quantity, during his investigations into magnetism. He realized that electric and magnetic fields are not only fields of force which dictate the motion of particles, but also have an independent physical reality because they carry energy. These ideas eventually led to the creation, by James Clerk Maxwell, of the first unified field theory in physics with the introduction of equations for the electromagnetic field. The modern versions of these equations are called Maxwell's equations. Electrostatics A charged test particle with charge q experiences a force F based solely on its charge. We can similarly describe the electric field E so that F = qE. Using this and Coulomb's law tells us that the electric field due to a single charged particle is E(r) = (1/4πε₀)(q/r²) r̂. The electric field is conservative, and hence can be described by a scalar potential, V(r): E = −∇V. Magnetostatics A steady current I flowing along a path ℓ will create a field B, that exerts a force on nearby moving charged particles that is quantitatively different from the electric field force described above.
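The point-mass field and potential discussed above can be checked numerically: the radial field −GM/r² should equal minus the derivative of the potential −GM/r. The mass and radius below are Earth-like example values:

```python
# Sketch of the Newtonian relations above: the field of a point mass M
# and a finite-difference check that g = -dPhi/dr. The mass and radius
# are Earth-like example values used purely for illustration.
G = 6.674e-11   # m^3 kg^-1 s^-2
M = 5.972e24    # kg

def phi(r: float) -> float:
    """Gravitational potential of M at distance r: Phi = -GM/r."""
    return -G * M / r

def g_exact(r: float) -> float:
    """Radial field component: g(r) = -GM/r^2 (negative: pointing inward)."""
    return -G * M / r**2

r = 6.371e6   # m, roughly Earth's radius
h = 1.0       # finite-difference step in metres
g_numeric = -(phi(r + h) - phi(r - h)) / (2 * h)  # -dPhi/dr by central difference
print(f"exact:    {g_exact(r):.4f} m/s^2")
print(f"-dPhi/dr: {g_numeric:.4f} m/s^2")
```

Both values come out near −9.8 m/s², the familiar surface acceleration, illustrating the identity between field strength and test-particle acceleration stated in the text.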
The force exerted by I on a nearby charge q with velocity v is F = q v × B(r), where B(r) is the magnetic field, which is determined from I by the Biot–Savart law: B(r) = (μ₀/4π) ∮ I dℓ × r̂ / r². The magnetic field is not conservative in general, and hence cannot usually be written in terms of a scalar potential. However, it can be written in terms of a vector potential, A(r): B = ∇ × A. Electrodynamics In general, in the presence of both a charge density ρ(r, t) and current density J(r, t), there will be both an electric and a magnetic field, and both will vary in time. They are determined by Maxwell's equations, a set of differential equations which directly relate E and B to ρ and J. Alternatively, one can describe the system in terms of its scalar and vector potentials V and A. A set of integral equations known as retarded potentials allow one to calculate V and A from ρ and J, and from there the electric and magnetic fields are determined via the relations E = −∇V − ∂A/∂t and B = ∇ × A. At the end of the 19th century, the electromagnetic field was understood as a collection of two vector fields in space. Nowadays, one recognizes this as a single antisymmetric 2nd-rank tensor field in spacetime. Gravitation in general relativity Einstein's theory of gravity, called general relativity, is another example of a field theory. Here the principal field is the metric tensor, a symmetric 2nd-rank tensor field in spacetime. This replaces Newton's law of universal gravitation. Waves as fields Waves can be constructed as physical fields, due to their finite propagation speed and causal nature when a simplified physical model of an isolated closed system is set up. They are also subject to the inverse-square law. For electromagnetic waves, there are optical fields, and terms such as near- and far-field limits for diffraction. In practice though, the field theories of optics are superseded by the electromagnetic field theory of Maxwell. Gravity waves are waves in the surface of water, defined by a height field.
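The Biot–Savart law mentioned above can be evaluated numerically. A convenient check is the field at the centre of a circular current loop, which has the closed-form value B = μ₀I/(2R); the loop radius and current below are assumed example values:

```python
import math

# Numerical Biot-Savart integration for the field at the centre of a
# circular current loop, checked against the analytic result
# B = mu0 * I / (2R). The current and radius are assumed example values.
MU0 = 4 * math.pi * 1e-7  # T*m/A, vacuum permeability

def loop_center_field(current: float, radius: float, segments: int = 1000) -> float:
    """Sum mu0/(4 pi) * I * dl / r^2 contributions over the loop (z-component)."""
    bz = 0.0
    dl = 2 * math.pi * radius / segments
    for _ in range(segments):
        # every element sits at distance `radius` from the centre, and dl is
        # perpendicular to the unit vector toward the centre, so |dl x r_hat| = dl
        bz += MU0 / (4 * math.pi) * current * dl / radius**2
    return bz

I, R = 2.0, 0.05  # 2 A through a 5 cm radius loop (assumed)
print(loop_center_field(I, R))  # numeric integral
print(MU0 * I / (2 * R))        # analytic value for comparison
```

The symmetry of the loop makes every segment's contribution identical, so the discretized sum reproduces the analytic value almost exactly; for less symmetric paths the same segment-by-segment summation still applies, which is the practical content of the Biot–Savart law.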
Fluid dynamics Fluid dynamics has fields of pressure, density, and flow rate that are connected by conservation laws for energy and momentum. The mass continuity equation is a continuity equation representing the conservation of mass, and the Navier–Stokes equations represent the conservation of momentum in the fluid, found from Newton's laws applied to the fluid, if the density ρ, pressure p, deviatoric stress tensor τ of the fluid, as well as external body forces b, are all given. The flow velocity u is the vector field to solve for. Elasticity Linear elasticity is defined in terms of constitutive equations between tensor fields, σᵢⱼ = Cᵢⱼₖₗ εₖₗ, where σᵢⱼ are the components of the 3×3 Cauchy stress tensor, εₖₗ the components of the 3×3 infinitesimal strain and Cᵢⱼₖₗ is the elasticity tensor, a fourth-rank tensor with 81 components (usually 21 independent components). Thermodynamics and transport equations Assuming that the temperature T is an intensive quantity, i.e., a single-valued, continuous and differentiable function of three-dimensional space (a scalar field), i.e., that T = T(r), then the temperature gradient is a vector field defined as ∇T. In thermal conduction, the temperature field appears in Fourier's law, q = -k∇T, where q is the heat flux field and k the thermal conductivity. Temperature and pressure gradients are also important for meteorology. Quantum fields It is now believed that quantum mechanics should underlie all physical phenomena, so that a classical field theory should, at least in principle, permit a recasting in quantum mechanical terms; success yields the corresponding quantum field theory. For example, quantizing classical electrodynamics gives quantum electrodynamics. Quantum electrodynamics is arguably the most successful scientific theory; experimental data confirm its predictions to a higher precision (to more significant digits) than any other theory. The two other fundamental quantum field theories are quantum chromodynamics and the electroweak theory. 
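Fourier's law relating the heat-flux field to the temperature gradient is easy to demonstrate on a grid. The following sketch (an added illustration with assumed values, not from the article) builds a simple scalar temperature field T(x, y) and computes q = -k∇T; for a linear field the resulting flux is uniform:

```python
import numpy as np

k = 0.6                                  # thermal conductivity, W/(m K), assumed
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
T = 300.0 + 50.0 * X - 20.0 * Y          # a simple linear temperature field

dTdx, dTdy = np.gradient(T, x, y)        # components of grad(T)
qx, qy = -k * dTdx, -k * dTdy            # heat-flux vector field q = -k grad(T)

# For this linear field the flux is uniform: q = -k * (50, -20)
print(np.allclose(qx, -30.0), np.allclose(qy, 12.0))  # True True
```

Finite differences reproduce the gradient exactly here because the field is linear; for a general T(x, y) the same two lines give the pointwise flux field.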
In quantum chromodynamics, the color field lines are coupled at short distances by gluons, which are polarized by the field and line up with it. This effect increases within a short distance (around 1 fm from the vicinity of the quarks), making the color force increase within a short distance and confining the quarks within hadrons. As the field lines are pulled together tightly by gluons, they do not "bow" outwards as much as an electric field between electric charges. These three quantum field theories can all be derived as special cases of the so-called standard model of particle physics. General relativity, the Einsteinian field theory of gravity, has yet to be successfully quantized. However, an extension, thermal field theory, deals with quantum field theory at finite temperatures, something seldom considered in quantum field theory. In BRST theory one deals with odd fields, e.g. Faddeev–Popov ghosts. There are different descriptions of odd classical fields both on graded manifolds and supermanifolds. As above with classical fields, it is possible to approach their quantum counterparts from a purely mathematical view using similar techniques as before. The equations governing the quantum fields are in fact PDEs (specifically, relativistic wave equations (RWEs)). Thus one can speak of Yang–Mills, Dirac, Klein–Gordon and Schrödinger fields as being solutions to their respective equations. A possible problem is that these RWEs can deal with complicated mathematical objects with exotic algebraic properties (e.g. spinors are not tensors, so may need calculus for spinor fields), but these in theory can still be subjected to analytical methods given appropriate mathematical generalization. Field theory Field theory usually refers to a construction of the dynamics of a field, i.e. a specification of how a field changes with time or with respect to other independent physical variables on which the field depends. 
Usually this is done by writing a Lagrangian or a Hamiltonian of the field, and treating it as a classical or quantum mechanical system with an infinite number of degrees of freedom. The resulting field theories are referred to as classical or quantum field theories. The dynamics of a classical field are usually specified by the Lagrangian density in terms of the field components; the dynamics can be obtained by using the action principle. It is possible to construct simple fields without any prior knowledge of physics using only mathematics from multivariable calculus, potential theory and partial differential equations (PDEs). For example, scalar PDEs might consider quantities such as amplitude, density and pressure fields for the wave equation and fluid dynamics; temperature/concentration fields for the heat/diffusion equations. Outside of physics proper (e.g., radiometry and computer graphics), there are even light fields. All these previous examples are scalar fields. Similarly for vectors, there are vector PDEs for displacement, velocity and vorticity fields in (applied mathematical) fluid dynamics, but vector calculus may now be needed in addition, being calculus for vector fields (as are these three quantities, and those for vector PDEs in general). More generally problems in continuum mechanics may involve for example, directional elasticity (from which comes the term tensor, derived from the Latin word for stretch), complex fluid flows or anisotropic diffusion, which are framed as matrix-tensor PDEs, and then require matrices or tensor fields, hence matrix or tensor calculus. The scalars (and hence the vectors, matrices and tensors) can be real or complex as both are fields in the abstract-algebraic/ring-theoretic sense. In a general setting, classical fields are described by sections of fiber bundles and their dynamics is formulated in the terms of jet manifolds (covariant classical field theory). 
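The wave equation mentioned above is the simplest case of specifying a field's dynamics by a PDE. The sketch below (an added illustration; grid size, wave speed and initial bump are arbitrary assumptions) integrates the 1-D wave equation u_tt = c²u_xx with an explicit leapfrog scheme:

```python
import numpy as np

c, L, nx = 1.0, 1.0, 201
dx = L / (nx - 1)
dt = 0.5 * dx / c                         # respects the CFL stability limit
x = np.linspace(0.0, L, nx)

u_prev = np.exp(-100.0 * (x - 0.5) ** 2)  # initial displacement (a bump)
u = u_prev.copy()                         # zero initial velocity

for _ in range(400):
    u_next = np.zeros_like(u)             # fixed ends: u = 0 at both boundaries
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

print(np.isfinite(u).all())  # True (the scheme stays stable at this time step)
```

The bump splits into left- and right-moving pulses that reflect off the fixed ends; choosing dt above the CFL limit c·dt/dx ≤ 1 would instead make the field values blow up.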
In modern physics, the most often studied fields are those that model the four fundamental forces which one day may lead to the Unified Field Theory. Symmetries of fields A convenient way of classifying a field (classical or quantum) is by the symmetries it possesses. Physical symmetries are usually of two types: Spacetime symmetries Fields are often classified by their behaviour under transformations of spacetime. The terms used in this classification are: scalar fields (such as temperature) whose values are given by a single variable at each point of space. This value does not change under transformations of space. vector fields (such as the magnitude and direction of the force at each point in a magnetic field) which are specified by attaching a vector to each point of space. The components of this vector transform between themselves contravariantly under rotations in space. Similarly, a dual (or co-) vector field attaches a dual vector to each point of space, and the components of each dual vector transform covariantly. tensor fields (such as the stress tensor of a crystal) specified by a tensor at each point of space. Under rotations in space, the components of the tensor transform in a more general way which depends on the number of covariant indices and contravariant indices. spinor fields (such as the Dirac spinor) arise in quantum field theory to describe particles with spin which transform like vectors except for one of their components; in other words, when one rotates a vector field 360 degrees around a specific axis, the vector field returns to itself; however, spinors turn to their negatives in the same case. Internal symmetries Fields may have internal symmetries in addition to spacetime symmetries. In many situations, one needs fields which are a list of spacetime scalars: (φ1, φ2, ... φN). For example, in weather prediction these may be temperature, pressure, humidity, etc. 
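The spacetime classification above can be made concrete with a rotation. This sketch (an added illustration with arbitrary example values) contrasts a scalar, whose value is unchanged in the rotated frame, with a vector, whose components mix through the rotation matrix v' = Rv:

```python
import numpy as np

theta = np.pi / 3                      # rotate by 60 degrees about the z-axis
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

temperature = 21.5                     # scalar: the same number in any frame
v = np.array([1.0, 2.0, 3.0])          # vector: components transform as v' = R v
v_rot = Rz @ v

# The components change, but rotation-invariant quantities (here the length)
# do not - that is what "transforming as a vector" means.
print(np.isclose(np.linalg.norm(v_rot), np.linalg.norm(v)))  # True
```

A tensor field's components would pick up one factor of R per index, and a spinor would instead transform through a 2×2 SU(2) matrix, acquiring a sign flip under a full 360-degree rotation.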
In particle physics, the color symmetry of the interaction of quarks is an example of an internal symmetry, that of the strong interaction. Other examples are isospin, weak isospin, strangeness and any other flavour symmetry. If there is a symmetry of the problem, not involving spacetime, under which these components transform into each other, then this set of symmetries is called an internal symmetry. One may also make a classification of the charges of the fields under internal symmetries. Statistical field theory Statistical field theory attempts to extend the field-theoretic paradigm toward many-body systems and statistical mechanics. As above, it can be approached by the usual infinite number of degrees of freedom argument. Much like statistical mechanics has some overlap between quantum and classical mechanics, statistical field theory has links to both quantum and classical field theories, especially the former with which it shares many methods. One important example is mean field theory. Continuous random fields Classical fields as above, such as the electromagnetic field, are usually infinitely differentiable functions, but they are in any case almost always twice differentiable. In contrast, generalized functions are not continuous. When dealing carefully with classical fields at finite temperature, the mathematical methods of continuous random fields are used, because thermally fluctuating classical fields are nowhere differentiable. Random fields are indexed sets of random variables; a continuous random field is a random field that has a set of functions as its index set. In particular, it is often mathematically convenient to take a continuous random field to have a Schwartz space of functions as its index set, in which case the continuous random field is a tempered distribution. 
We can think about a continuous random field, in a (very) rough way, as an ordinary function that is infinite almost everywhere, but such that when we take a weighted average of all the infinities over any finite region, we get a finite result. The infinities are not well-defined; but the finite values can be associated with the functions used as the weight functions to get the finite values, and that can be well-defined. We can define a continuous random field well enough as a linear map from a space of functions into the real numbers.
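The "linear map from functions to numbers" picture can be imitated numerically. The sketch below is a loose discrete analogy added here (not a rigorous construction): grid white noise has point values whose variance blows up as the grid is refined, yet pairing it with a smooth weight (test) function always returns a finite number.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
dx = 1.0 / n
x = np.linspace(0.0, 1.0, n)
# Discrete white noise: pointwise variance ~ 1/dx, so individual values
# diverge as dx -> 0, mimicking "infinite almost everywhere".
noise = rng.normal(scale=1.0 / np.sqrt(dx), size=n)

def pair(test_fn):
    # <field, f> ~ sum_i f(x_i) * noise_i * dx : the weighted average
    return float(np.sum(test_fn(x) * noise * dx))

value = pair(lambda t: np.exp(-((t - 0.5) ** 2) / 0.01))  # a smooth bump
print(np.isfinite(value))  # True
```

The map is linear in the test function, which is exactly the structure of a tempered distribution indexed by Schwartz functions.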
Physical sciences
Basics_6
null
6534970
https://en.wikipedia.org/wiki/Chrysalidocarpus%20lutescens
Chrysalidocarpus lutescens
Chrysalidocarpus lutescens, also known by its synonym Dypsis lutescens and as golden cane palm, areca palm, yellow palm, butterfly palm, or bamboo palm, is a species of flowering plant in the family Arecaceae, native to Madagascar and naturalized in the Andaman Islands, Thailand, Vietnam, Réunion, El Salvador, Cuba, Puerto Rico, the Canary Islands, southern Florida, Haiti, the Dominican Republic, Jamaica, the Leeward Islands and the Leeward Antilles. Its native names are rehazo and lafahazo (from Malagasy hazo 'tree' with reha 'pride' and lafa 'fibre' respectively). Description Chrysalidocarpus lutescens is a perennial tropical plant that grows to in height and spreads from 3-5 m (8-15ft). Multiple cane-like stems emerge from the base, creating a vase-like shape. The leaves are upward-arching, long, pinnate, with a yellow mid-rib. The petiole is yellow-green in colour and waxy in texture, with a maculate base. The leaves have 40-60 pairs of leaflets. Leaflet arrangement is opposite and their shape is linear to lanceolate. It bears 2-ft-long panicles of yellow flowers in summer. Offsets can be cut off when mature enough, as a propagation method. It bears oblong fruit that is 0.5 in long and ripens from yellow/gold to dark purple/black. It is grown as an ornamental plant in gardens in tropical and subtropical regions, and elsewhere indoors as a houseplant, one of the most important commercially. It has gained the Royal Horticultural Society's Award of Garden Merit. One of several common names, "butterfly palm", refers to the leaves, which curve upwards in multiple stems to create a butterfly look. In its introduced range, this plant acts as a supplier of fruit to some bird species that feed on it opportunistically, such as Pitangus sulphuratus, Coereba flaveola, and Thraupis sayaca species in Brazil. Cultural requirements In its native habitat of Madagascar, C. lutescens grows in moist forested areas. 
It grows best in rich, moist, and well-drained soils and in bright, partly shaded areas. It tolerates full sun, but long periods of direct sunlight may burn the foliage. Overfertilization results in yellowing of the leaves. It is a low-maintenance tropical plant. It is winter hardy to USDA zones 10-11, and does well outdoors in warm climates with medium to high humidity. The plant is highly sensitive to cold temperatures. Pests and diseases C. lutescens has no serious insect or disease problems. It is susceptible to scale, whiteflies, and spider mites. Plants grown outdoors may be subject to phytoplasma disease of palms, which is spread by planthoppers and can cause severe yellowing. Uses In its native climate, the plant may be massed and used as a landscape specimen, privacy screen, or informal hedge. It can be grown as a tree or shrub. In areas of eastern Madagascar, this plant also has environmental and medicinal uses. It was once used as a source of fibre to make fishing nets. Houseplant maintenance Chrysalidocarpus lutescens is a popular, low-maintenance houseplant. If grown indoors, plant in a well-drained potting soil in a pot that has adequate drainage holes. The size of the pot should be twice the size of the root ball. Repotting may be necessary every 2-3 years, and size of the pot should only increase 3-4 inches compared to the size of the old pot. Ensure the plant is placed in an area with bright, indirect sunlight. It will thrive near a window where light is filtered, but will struggle if placed in the path of direct sun, which may cause scorching or yellowing of the foliage. If grown in areas with extreme temperatures, note that the plant will struggle in temperatures that drop below 60°F/15°C. This plant prefers moist soil, but cannot tolerate soggy conditions. To prevent overwatering, check the moisture level regularly and allow soil to dry out between waterings. 
The plant will benefit from fertilization in the summer months when it is experiencing the most growth. C. lutescens prefers medium to high humidity; if the air indoors is too dry, the foliage may exhibit browning at the tips. This can be remedied with manual misting or by adding a humidifier to the room. This plant does not require pruning, though pruning may be done based on owner preference. Gallery
Biology and health sciences
Arecales (inc. Palms)
Plants
6541883
https://en.wikipedia.org/wiki/Fishing%20float
Fishing float
A fishing float or bobber is a lightweight buoy used in angling, usually attached to a fishing line. Angling using a float is sometimes called float fishing. A float can serve several purposes: firstly, it serves as a visual bite indicator that helps the angler assess the underwater status of the baited hook and decide whether to start retrieving the line; secondly, it can suspend the hook and bait at a predetermined depth, which helps the angler target fish in specific depths; thirdly, as a terminal tackle, it adds mass and allows the hook and bait to be cast farther against air resistance; and lastly, due to its buoyancy, it can carry the baited hook to otherwise inaccessible areas of water by drifting along the prevailing current. Design and functions A typical float consists of a body with lower specific gravity than water (which provides the buoyancy to remain afloat at the water surface); a brightly colored rod at the top, which makes it easier to see from afar; and an attachment at the bottom that suspends the hook. Sometimes a small counterweight is also placed at the bottom to help the float stay upright against wind and waves. The float is used to enable the angler to cast out a bait away from the shore or boat while maintaining a reference point to where the bait is, unlike bottom or leger fishing. The angler will select an appropriate float after taking into account the strength of the current (if any), the wind speed, the size of the bait he or she is using, the depth the angler wishes to present that bait at, and the distance the bait is to be cast. Usually, the line between the float and hook will have small weights attached, ensuring that the float sits vertically in the water with only a small brightly coloured tip remaining visible. The rest of the float is usually finished in a dull neutral colour to render it as inconspicuous as possible to the fish. 
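The balance between a float body's buoyancy and the shot, hook and bait it must carry follows directly from Archimedes' principle. The short sketch below is an added illustration with assumed material densities (not figures from the article): it estimates how many grams a fully submerged balsa body of a given volume could support.

```python
# Archimedes: the float supports a load as long as the mass of water the
# submerged body displaces exceeds the body's own mass plus the load.
RHO_WATER = 1000.0       # kg/m^3, fresh water
RHO_BALSA = 160.0        # kg/m^3, a typical balsa body (assumed value)

def max_load_g(body_volume_cm3):
    """Grams of shot/bait a fully submerged balsa body could support."""
    v_m3 = body_volume_cm3 * 1e-6
    return (RHO_WATER - RHO_BALSA) * v_m3 * 1000.0  # kg -> g

print(round(max_load_g(5.0), 2))  # a 5 cm^3 body carries about 4.2 g
```

In practice the float is shotted so that only the tip breaks the surface, i.e. the working load is set just below this maximum.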
Each float style is designed to be used in certain types of conditions such as slow or fast rivers, windy or still water or small confined waters such as canals. History It is impossible to say with any degree of accuracy who first used a float for indicating that a fish had taken the bait, but it can be said with some certainty that people used pieces of twig, bird feather quills or rolled leaves as bite indicators many years before any documented evidence. The first known mention of using a float appears in the book "Treatyse of fysshynge wyth an Angle" written by Juliana Berners in 1496. The method described involved boring a hole through a cork so the line could be passed through and trapped with a quill. Later books such as "The Arte of Angling", a 1577 text edited by Gerald Eades Bentley in 1956, and the classic work "The Compleat Angler", first published in 1653 and written by Izaak Walton, gave greater detail on fishing and using floats. Prior to about 1800, anglers made their own floats, a practice that many still carry on today. As angling became more popular, companies started to make floats in different styles to supply the growing demand. By 1921, companies such as Wadhams had at least 250 mainly celluloid floats in their catalog. Since those early days, the fishing float has become the subject of much practical and theoretical change. English anglers such as Peter Drennan (Drennan International) and Kenneth Middleton (Middy Tackle) and American fishermen like Chicago's ex-World Champion Mick Thill (Thill Floats) have built up large companies designing and marketing fishing floats. The English companies have been supported by major league anglers such as Ivan Marks, Benny Ashurst and Billy Lane. Types Floats come in different sizes and shapes, and can be made from various materials, such as foam, balsa wood, cork, plastic, Indian sarkanda reed, or even bird/porcupine quills. Avon The Avon float is a straight float with a body at the top. 
It was designed to cope with the fast flow conditions of the English River Avon. Many early floats were Avon style, having a cork body pushed onto a crow quill. It is fished attached to the line top and bottom. Bubble Bubble floats are small hollow balls which are used to control the fishing line. They may have the facility to be partially filled with water to control how much float is above the water. They are used in situations where a normal float cannot be cast, such as working close to the edge of reeds or heavy surface plant growth. The bubble float can be allowed to drift into the area without tangling. Dink The Dink float is most commonly made of a cylinder of dark foam with a smaller cylinder of cork on the top painted as an indicator. The line is run through the top, wrapped around the cylinder and through the bottom. Its main advantage is that the float needs no stopper on the main line; the wrap of line between the top and bottom of the float will hold it in place. Popper A popper float, commonly called a 'popping cork', is designed to mimic a large fish feeding at the surface with rod action. There are different styles of popper floats: some use a metal wire with beads at each end to make a clicking noise when pulled through the water, while more modern floats make use of a concave top, which makes a deep chugging sound when pulled through the water, imitating the sound of large predator fish feeding at the surface. Some popping corks also have pellets inside, designed to mimic bait fish jumping at the surface when rattled. Quill The quill is one of the earliest floats; originally it was a bird feather quill, but with the opening up of new worlds, porcupine quills from Africa became a standard for the float. It is fished in the same way as a stick float. Self-cocking Self-cocking floats can be of many styles but they are all weighted so that in the water they automatically stand upright without the use of shot or weights on the fishing line. 
Stick The Stick Float is a straight float with a taper. It is always attached to the line both top and bottom. They are made from two different materials, a light, buoyant top section of balsa wood and a heavy stem of hard grade cane, non-buoyant hardwood, or plastic. Unlike the Avon float, the stick has no body; it is just a tapered rod. Waggler A waggler float is the term given to any float which is attached only at the bottom to the line. They come in two different types, straight or bodied. These two types can come both with and without inserts (antennas). They are made from a variety of materials including quills (such as peacock), balsa wood, cane, plastic and reed. With direction control Floats with direction control change direction by planing or moving to one side when given a tug.
Technology
Hunting and fishing
null
21124968
https://en.wikipedia.org/wiki/Lifeboat%20%28rescue%29
Lifeboat (rescue)
A rescue lifeboat is a rescue craft which is used to attend a vessel in distress, or its survivors, to rescue crew and passengers. It can be hand pulled, sail powered or powered by an engine. Lifeboats may be rigid, inflatable or rigid-inflatable combination-hulled vessels. Overview There are generally three types of boat: inland (used on lakes and rivers), inshore (used closer to shore) and offshore (used in deeper waters and further out to sea). A rescue lifeboat is a boat designed with specialised features for searching for, rescuing and saving the lives of people in peril at sea or other large bodies of water. In the United Kingdom and Ireland rescue lifeboats are typically vessels crewed by volunteers, intended for quick dispatch, launch and transit to reach a ship or individuals in trouble at sea. Offshore boats are referred to as 'all-weather' and generally have a range of 150–250 nautical miles. Characteristics such as capability to withstand heavy weather, fuel capacity, navigation and communication devices carried, vary with size. A vessel and her crew can be used for operation out to away from a place of safe refuge, remaining at or on the scene to search for several hours, with fuel reserves sufficient for returning; operating in up to gale force sea conditions; in daylight, fog and darkness. A smaller inshore rescue boat (IRB) or inshore lifeboat (ILB) and her crew would not be able to withstand (or even survive) these conditions for long. In countries such as Canada and the United States, the term 'motor lifeboat', or its US military acronym MLB, is used to designate shore-based rescue lifeboats which are generally crewed by full-time coast guard service personnel. These vessels stay on standby service rather than patrolling in the water, like a crew of fire fighters standing by for an alarm. In Canada, some lifeboats are 'co-crewed', meaning that the operator and engineer are full-time personnel while the crew members are trained volunteers. 
Types of craft Inflatable boats (IB, RIB and RHIB) Older inflatable boats, such as those introduced by the Royal National Lifeboat Institution (RNLI) and Atlantic College in 1963, were soon made larger and those over often had plywood bottoms and were known as RIBs. These two types were superseded by newer types of RIBs which had purpose-built hulls and flotation tubes. A gap in operations caused the New Zealand Lifeguard Service to reintroduce small two-man IRBs, which have since been adopted by other organisations such as the RNLI as well. Lifeboats Larger non-inflatable boats are also employed as lifeboats. The RNLI fields the Severn class lifeboat and Tamar class lifeboat as all-weather lifeboats (ALB). In the United States and Canada, the term motor life boat (MLB) refers to a similar class of non-inflatable self-righting lifeboats, such as the 47-foot Motor Lifeboat. In France, the SNSM mainly equips all-weather lifeboats of the 17.6 m series of the "Patron Jack Morisseau" class. In 2022, the Canadian Coast Guard launched the 62-foot (19 m) Bay class lifeboat, based on the hull form and topsides style of the shorter RNLI Severn class all-weather lifeboat (ALB) and designed by the naval architects Robert Allan Ltd (RAL) of Vancouver. History China The first recorded rescue attempt by boat is noted in the Record of the Grand Historian of Sima Qian, of the second century BC. It was the unsuccessful attempt to rescue Qu Yuan from drowning in the Miluo River (Hunan Province) by local fishermen. This humanitarian act has ever since been re-enacted in the 2,000-year-old annual dragon boat festival, which is a UNESCO-designated intangible cultural heritage of the world. A regular lifeboat service operated from 1854 to 1940 along the middle reaches of the Chang Jiang or Yangtze, a major river which flows through south central China. These waters are particularly treacherous to waterway travellers owing to the canyon-like gorge conditions along the river shore and the high volume and rate of flow. 
The 'long river' was a principal means of communication between coastal (Shanghai) and interior China (Chongqing, once known as Chungking). These river lifeboats, usually painted red, were of a wooden pulling boat design, with a very narrow length-to-beam ratio and a shallow draft for negotiating shoal waters and turbulent rock-strewn currents. They could thus be maneuvered sideways to negotiate rocks, similar to today's inflated rafts for 'running' fast rivers, and also could be hauled upstream by human haulers, rather than beasts of burden, who walked along narrow catwalks lining the canyon sides. United Kingdom The first lifeboat station in Britain was at Formby beach, established in 1776 by William Hutchinson, Dock Master for the Liverpool Common Council. The first non-submersible ('unimmergible') lifeboat is credited to Lionel Lukin, an Englishman who, in 1784, modified and patented a Norwegian yawl, fitting it with water-tight cork-filled chambers for additional buoyancy and a cast iron keel to keep the boat upright. The first boat specialised as a lifeboat was tested on the River Tyne in England on January 29, 1790, built by Henry Greathead. The design won a competition organised by the private Law House committee, though William Wouldhave and Lionel Lukin both claimed to be the inventor of the first lifeboat. Greathead's boat, the Original (combined with some features of Wouldhave's), entered service in 1790 and another 31 of the same design were constructed. The boat was rowed by up to 12 crew for whom cork jackets were provided. In 1807 Lukin designed the Frances Ann for the Lowestoft service, which was not satisfied with Greathead's design, and this boat saved 300 lives over 42 years of service. The first self-righting design was developed by William Wouldhave and also entered in the Law House competition, but was only awarded a half-prize. Self-righting designs were not deployed until the 1840s. 
These lifeboats were crewed by 6 to 10 volunteers who would row out from shore when a ship was in distress. In the case of the UK the crews were generally local boatmen. One example of this was the Newhaven Lifeboat, established in 1803 in response to the wrecking of HMS Brazen in January 1800, when only one of her crew of 105 could be saved. The UK combined many of these local efforts into a national organisation in 1824 with the establishment of the Royal National Lifeboat Institution. One example of an early lifeboat was the Landguard Fort Lifeboat of 1821, designed by Richard Hall Gower. In 1851, James Beeching and James Peake produced the design for the Beeching–Peake SR (self-righting) lifeboat which became the standard model for the new Royal National Lifeboat Institution fleet. The first motorised boat, the Duke of Northumberland, was built in 1890 and was steam powered. In 1929 the motorised lifeboat Princess Mary was commissioned and was the largest oceangoing lifeboat at that time, able to carry over 300 persons on rescue missions. The Princess Mary was stationed at Padstow in Cornwall, England. United States The United States Life Saving Service (USLSS) was established in 1848. This was a United States government agency that grew out of private and local humanitarian efforts to save the lives of shipwrecked mariners and passengers. In 1915 the USLSS merged with the Revenue Cutter Service to form the United States Coast Guard (USCG). In 1899 the Lake Shore Engine Company, at the behest of the Marquette Life Saving Station, fitted a two-cylinder engine to a lifeboat on Lake Superior, Michigan. Its operation marked the introduction of the term motor life boat (MLB). By 1909, 44 boats had been fitted with engines whose power had increased to . The sailors of the MLBs are called "surfmen", after the name given to the volunteers of the original USLSS. 
The main school for training USCG surfmen is the National Motor Lifeboat School (NMLBS) located at the Coast Guard Station Cape Disappointment at the mouth of the Columbia River, which is also the boundary separating Washington State from Oregon State. The sand bars which form at the entrance are treacherous and provide a tough training environment for surf lifesavers. Canada Canada established its first lifeboat stations in the mid-to-late 19th century along the Atlantic and Pacific coasts, as well as along the shores of the Canadian side of the Great Lakes. The original organisation was called the "Canadian Lifesaving Service", not to be confused with the Royal Life Saving Society of Canada, which came later at the turn of the 20th century. In 1908, Canada had the first lifeboat (a pulling sailing boat design) to be equipped with a motor in North America, at Bamfield, British Columbia. France The Société Nationale de Sauvetage en Mer (SNSM) is a French voluntary organisation founded in 1967 by merging the Société Centrale de Sauvetage des Naufragés (founded in 1865) and the Hospitaliers Sauveteurs Bretons (1873). Its task is saving lives at sea around the French coast, including the overseas départments and territories. Modern lifeboats Lifeboats have been modified by the addition of an engine since 1890 which provides more power to get in and out of the swell area inside the surf. They can be launched from shore in any weather and perform rescues further out. Older lifeboats relied on sails and oars which are slower and dependent on wind conditions or manpower. Modern lifeboats generally have electronic devices such as radios and radar to help locate the party in distress and carry medical and food supplies for the survivors. The Rigid Hulled Inflatable Boat (RHIB) is now seen as the best type of craft for in-shore rescues as they are less likely to be tipped over by the wind or breakers. Specially designed jet rescue boats have also been used successfully. 
Unlike ordinary pleasure craft these small to medium-sized rescue craft often have a very low freeboard so that victims can be taken aboard without lifting. This means that the boats are designed to operate with water inside the boat hull and rely on flotation tanks rather than hull displacement to stay afloat and upright. Inflatables (IBs) fell out of general use after the introduction of RIBs during the 1970s. A need for small craft in New Zealand and other large surf zones was identified, and Inflatable Rescue Boats (IRB), small non-rigid powered boats, were introduced by New Zealand at Piha Beach and have been put into use in many other countries including Australia and by the RNLI in the UK. Australasia In Australasia surf lifesaving clubs operate inflatable rescue boats (IRB) for in-shore rescues of swimmers and surfers. These boats are best typified by the rubber Zodiac and are powered by a 25-horsepower outboard motor. In the off season, these boats are used in competitive rescue racing. In addition to this, most states have a power craft rescue service. RWCs (Rescue Water Craft, jetskis) are common to many beaches, providing lifesaving service. The state of New South Wales operates dual-hull fiberglass offshore boats, while Queensland, Tasmania and South Australia operate aluminum-hull Jet Rescue Boats of about 6 m in length. Some regions such as North Queensland and the Northern Territory operate RNLI-style rigid hull inflatables. In Auckland, New Zealand two 15-foot surf jet rescue boats powered by three-stage Hamilton jet units were stationed in the 1970s and 1980s at Piha Beach, the home of the Piha Surf Life Saving Club. Canada The Canadian Coast Guard operates motor lifeboats that are modified RNLI and USCG designs, such as the Arun and the 47-footer respectively. France The SNSM operates over 500 boats crewed by more than 3200 volunteers, from all-weather lifeboats to jetskis, dispersed in 218 stations (including 15 in overseas territories). 
In 2009 the SNSM was responsible for about half of all sea rescue operations, saving 5,400 lives in 2,816 call-outs and assisting 2,140 boats in distress. The service has 41 all-weather rescue boats, 34 first-class rescue boats, 76 second-class lifeboats, 20 light rescue boats (and an amphibious rescue boat), and many inflatable boats. All these boats are made unsinkable by injecting very light material (closed-cell polyurethane foam) into the hull: with these buoyancy reserves, the boat remains positively buoyant even when full of water; the boats also have a watertight sealed compartment. The all-weather lifeboats, from 15 meters to 18 meters, are self-righting. The first-class lifeboats have capabilities close to those of the all-weather rescue boats, while the second-class lifeboats are intended for slightly less difficult conditions. The first- and second-class boats, respectively 14 meters and 12 meters and the most recent designs, are self-righting. The boats are dispersed among 185 stations (including 15 in overseas territories). Germany In Germany, the German Maritime Search and Rescue Service (DGzRS) has provided naval rescue service since 1865. It is a civilian, non-profit organisation which relies entirely on individual funding (no government support) and has a variety of boats and ships, the biggest being the Hermann Marwede with 400 tons displacement, the largest lifeboat in the world, operating from the island of Helgoland. The DGzRS operates from 54 stations in the North Sea and the Baltic Sea. It has 20 rescue cruisers (usually piggybacking a smaller rescue boat), mostly operated by its own full-time personnel, and 40 rescue boats operated by volunteers. Voluntary organisations such as the German Red Cross (Wasserwacht) and the DLRG provide lifeguarding and emergency response for rivers, lakes, coasts and the like. Netherlands The Dutch lifeboat association Koninklijke Nederlandse Redding Maatschappij (KNRM) has developed jet-driven RIB lifeboats.
This has resulted in 3 classes, the largest being the Arie Visser class: length 18.80 m, twin jet, 2 x , max. speed , capacity 120 persons. Some local lifeguard organisations also respond to SAR call-outs. Scandinavia Most Scandinavian countries also have volunteer lifeboat societies. UK and Ireland Royal National Lifeboat Institution The Royal National Lifeboat Institution (or RNLI) maintains lifeboats around the coasts of Great Britain and Ireland crewed largely by unpaid volunteers, most part-time, with equipment funded through voluntary donations. In Britain, the RNLI designs and builds several types of all-weather motor lifeboats: the Arun class, kept permanently afloat; the Tyne class, a slipway-launched boat; and the Mersey class, a carriage-launched boat. More recently, prototypes of the Trent and Severn classes, the Arun's replacements, were delivered in 1992, with the first production Trent arriving in 1994 and the Severn in 1996. The first production Tamar class, the replacement for the Tyne, went into service in December 2005, and the FCB2 class, the replacement for the Mersey, is being developed for deployment sometime in 2013. The FCB2 class of lifeboat was accepted as a proven design on 11 April 2011 and given the class name Shannon, continuing the RNLI tradition of naming all-weather lifeboat classes after rivers in the British Isles. Scarborough lifeboat station in North Yorkshire and Hoylake lifeboat station on the Wirral are two of the first stations to be allocated one of the new boats. Scarborough's Shannon class lifeboat will be named Frederick William Plaxton, in memory of the benefactor of that name who left a substantial legacy to the RNLI specifically to purchase Scarborough's next all-weather lifeboat. Independent services There are at least 70 lifeboat services in Britain and Ireland that are independent of the RNLI, providing lifeboats and crews 24 hours a day all year round, manned by unpaid volunteers. They operate inland, inshore or offshore, according to local needs.
United States The United States Life Saving Service began using motorised lifeboats in 1899. Models derived from this hull design remained in use until 1987. Today in U.S. waters rescue-at-sea is part of the duties of the United States Coast Guard. The coast guard's MLBs, an integral part of the USCG's fleet, are built to withstand the most severe conditions at sea. Designed to be self-bailing, self-righting and practically unsinkable, MLBs are used for surf rescue in heavy weather. 36' (foot) The T model was introduced in 1929. At length overall, beam and with a two-ton lead keel, she was powered by a Sterling gas engine and had a speed of nine knots (17 km/h). From the early days of the 20th century the 36' MLB was the mainstay of coastal rescue operations for over 30 years, until the 44' MLB was introduced in 1962. A total of 218 36' T, TR and TRS MLBs were built at the Coast Guard Yard in Curtis Bay, Maryland, between 1929 and 1956. Based on a hull design from the 1880s, the 36' TRS and her predecessors remain the longest-serving hull design in the Coast Guard, serving the Coast Guard and the Life Saving Service for almost 100 years; the last one, CG-36535, served Depoe Bay MLB Station in Oregon until 1987. 52' (foot) In the mid-1930s the USCG ordered two 52-foot wooden-hulled motor lifeboats (MLBs) for service in areas with heavy merchant-ship traffic and heavy seas; they had a high rescue capacity of approximately 100 persons and could tow ten fully loaded standard lifeboats of the kind used by most merchant vessels. Unlike the older 36-footers, the 52-foot MLBs had diesel engines. The 52-foot wooden-hulled MLBs were the only Coast Guard vessels less than in length that were given names, CG-52300 Invincible and CG-52301 Triumph. Both were built at the United States Coast Guard Yard; Invincible was initially assigned to Station Sandy Hook, New Jersey, and Triumph was assigned to Station Point Adams in Oregon.
In time Invincible was also transferred to the Pacific Northwest, to Station Grays Harbor. Triumph later capsized and sank during a rescue mission on January 12, 1961. By that time, the Coast Guard had already built two of the four steel-hulled successor 52-foot Motor Lifeboats. The steel-hulled 52' MLBs continue in service. 44' (foot) During the 1960s the Coast Guard replaced the 36' MLB with the newly designed 44' boat. These steel-hulled boats were more capable and more complicated than the wooden lifeboats they replaced. In all, 110 vessels were built by the Coast Guard Yard in Curtis Bay between 1962 and 1972, with an additional 52 built by the RNLI, the Canadian Coast Guard and others under licence from the USCG. The last active 44' MLB in the United States Coast Guard was retired in May 2009; however, these boats are still in active service elsewhere around the globe. The 44' MLB can be found in many third-world countries and is faithfully serving the Royal Volunteer Coastal Patrol in Australia and the Royal New Zealand Coastguard Federation. The current engine configuration is twin Detroit Diesel 6V53s that each put out at a max RPM of 2800. 30' (foot) surf rescue boat Another surf-capable boat that the Coast Guard has used in recent years is the 30' surf rescue boat (SRB), introduced in 1983. The 30' SRB was self-righting and self-bailing and designed with marked differences from the typical lifeboats used by the Coast Guard up until the early 1980s. The 30' SRB is not considered to be an MLB, but was generally used in a similar capacity. Designed to perform search and rescue in adverse weather, the vessel is generally operated with a crew of two, a surfman and an engineer. The crew both stand on the coxswain flat, protected by the superstructure on the bow and stern. The boat's appearance has caused many to comment that it looks like a "Nike Tennis Shoe".
The introduction of the faster 47' MLB from 1997, together with the phasing out of the 44' MLBs, made the 30-footers obsolete. The class underwent an overhaul in the early nineties to extend its life until the newer and faster 47' motor lifeboats came into service, and in the late 1990s most of the 30-footers were decommissioned. One still remains on active duty at Motor Lifeboat Station Depoe Bay in Depoe Bay, Oregon, and is used almost daily. This station was also host to the last 36' motor lifeboat in the late 1980s. 47' (foot) The USCG has since designed and built new aluminum lifeboats; the first production boat was delivered to the USCG in 1997. The 47-foot Motor Lifeboat is able to withstand impacts of three times the acceleration of gravity, can survive a complete roll-over, and is self-righting in less than 10 seconds with all machinery and instruments remaining fully operational. The 47' MLB can travel at to reach her destination. There are 117 operational, with a total of 200 scheduled to be delivered to the USCG. A further 27 models are being built by MetalCraft Marine under licence for the Canadian Coast Guard. Response Boat – Medium The Response Boat – Medium is a replacement for the 41' boats, and the USCG plans a fleet of 180 in the USA.
Technology
Naval transport
null
143320
https://en.wikipedia.org/wiki/PCI%20Express
PCI Express
PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe or PCI-E, is a high-speed serial computer expansion bus standard, meant to replace the older PCI, PCI-X and AGP bus standards. It is the common motherboard interface for personal computers' graphics cards, capture cards, sound cards, hard disk drive host adapters, SSDs, Wi-Fi, and Ethernet hardware connections. PCIe has numerous improvements over the older standards, including higher maximum system bus throughput, lower I/O pin count, smaller physical footprint, better performance scaling for bus devices, a more detailed error detection and reporting mechanism (Advanced Error Reporting, AER), and native hot-swap functionality. More recent revisions of the PCIe standard provide hardware support for I/O virtualization. The PCI Express electrical interface is measured by the number of simultaneous lanes. (A lane is a single send/receive line of data, analogous to a "one-lane road" having one lane of traffic in both directions.) The interface is also used in a variety of other standards — most notably the laptop expansion card interface called ExpressCard. It is also used in the storage interfaces of SATA Express, U.2 (SFF-8639) and M.2. Formal specifications are maintained and developed by the PCI-SIG (PCI Special Interest Group) — a group of more than 900 companies that also maintains the conventional PCI specifications. Architecture Conceptually, the PCI Express bus is a high-speed serial replacement of the older PCI/PCI-X bus. One of the key differences between the PCI Express bus and the older PCI is the bus topology; PCI uses a shared parallel bus architecture, in which the PCI host and all devices share a common set of address, data, and control lines. In contrast, PCI Express is based on point-to-point topology, with separate serial links connecting every device to the root complex (host). 
Because of its shared bus topology, access to the older PCI bus is arbitrated (in the case of multiple masters), and limited to one master at a time, in a single direction. Furthermore, the older PCI clocking scheme limits the bus clock to the slowest peripheral on the bus (regardless of the devices involved in the bus transaction). In contrast, a PCI Express bus link supports full-duplex communication between any two endpoints, with no inherent limitation on concurrent access across multiple endpoints. In terms of bus protocol, PCI Express communication is encapsulated in packets. The work of packetizing and de-packetizing data and status-message traffic is handled by the transaction layer of the PCI Express port (described later). Radical differences in electrical signaling and bus protocol require the use of a different mechanical form factor and expansion connectors (and thus, new motherboards and new adapter boards); PCI slots and PCI Express slots are not interchangeable. At the software level, PCI Express preserves backward compatibility with PCI; legacy PCI system software can detect and configure newer PCI Express devices without explicit support for the PCI Express standard, though new PCI Express features are inaccessible. The PCI Express link between two devices can vary in size from one to 16 lanes. In a multi-lane link, the packet data is striped across lanes, and peak data throughput scales with the overall link width. The lane count is automatically negotiated during device initialization and can be restricted by either endpoint. For example, a single-lane PCI Express (x1) card can be inserted into a multi-lane slot (x4, x8, etc.), and the initialization cycle auto-negotiates the highest mutually supported lane count. The link can dynamically down-configure itself to use fewer lanes, providing a failure tolerance in case bad or unreliable lanes are present. The PCI Express standard defines link widths of x1, x2, x4, x8, and x16. 
Up to and including PCIe 5.0, x12 and x32 links were defined as well, but were virtually never used. This allows the PCI Express bus to serve both cost-sensitive applications where high throughput is not needed, and performance-critical applications such as 3D graphics, networking (10 Gigabit Ethernet or multiport Gigabit Ethernet), and enterprise storage (SAS or Fibre Channel). Slots and connectors are only defined for a subset of these widths, with link widths in between using the next larger physical slot size. As a point of reference, a PCI-X (133 MHz 64-bit) device and a PCI Express 1.0 device using four lanes (x4) have roughly the same peak single-direction transfer rate of 1064 MB/s. The PCI Express bus has the potential to perform better than the PCI-X bus in cases where multiple devices are transferring data simultaneously, or if communication with the PCI Express peripheral is bidirectional. Interconnect PCI Express devices communicate via a logical connection called an interconnect or link. A link is a point-to-point communication channel between two PCI Express ports allowing both of them to send and receive ordinary PCI requests (configuration, I/O or memory read/write) and interrupts (INTx, MSI or MSI-X). At the physical level, a link is composed of one or more lanes. Low-speed peripherals (such as an 802.11 Wi-Fi card) use a single-lane (x1) link, while a graphics adapter typically uses a much wider and therefore faster 16-lane (x16) link. Lane A lane is composed of two differential signaling pairs, with one pair for receiving data and the other for transmitting. Thus, each lane is composed of four wires or signal traces. Conceptually, each lane is used as a full-duplex byte stream, transporting data packets in eight-bit "byte" format simultaneously in both directions between endpoints of a link. Physical PCI Express links may contain 1, 4, 8 or 16 lanes.
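The per-lane rates behind comparisons like the PCI-X one above are straightforward to compute. A minimal sketch, using the widely published per-generation transfer rates and line encodings (8b/10b for PCIe 1.x/2.x, 128b/130b from 3.0 onward):

```python
# Peak one-direction PCIe data throughput, before packet/protocol overhead.
# Transfer rates are in gigatransfers per second per lane; the encoding
# factor converts raw line rate into usable data bits.
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130, 4: 128 / 130, 5: 128 / 130}
GTRANSFERS = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

def throughput_gbs(gen: int, lanes: int) -> float:
    """Peak single-direction throughput in GB/s (decimal) for a link."""
    return GTRANSFERS[gen] * ENCODING[gen] * lanes / 8

print(throughput_gbs(1, 4))   # 1.0 GB/s, roughly the PCI-X figure quoted above
print(throughput_gbs(3, 16))  # ~15.75 GB/s for a Gen 3 x16 graphics link
```

The same arithmetic explains why a Gen 1 lane is usually quoted as 250 MB/s: 2.5 GT/s times the 8b/10b efficiency of 0.8 yields 2 Gbit/s, or 250 MB/s of data.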
Lane counts are written with an "x" prefix (for example, "x8" represents an eight-lane card or slot), with x16 being the largest size in common use. Lane sizes are also referred to via the terms "width" or "by" e.g., an eight-lane slot could be referred to as a "by 8" or as "8 lanes wide." For mechanical card sizes, see below. Serial bus The bonded serial bus architecture was chosen over the traditional parallel bus because of the inherent limitations of the latter, including half-duplex operation, excess signal count, and inherently lower bandwidth due to timing skew. Timing skew results from separate electrical signals within a parallel interface traveling through conductors of different lengths, on potentially different printed circuit board (PCB) layers, and at possibly different signal velocities. Despite being transmitted simultaneously as a single word, signals on a parallel interface have different travel duration and arrive at their destinations at different times. When the interface clock period is shorter than the largest time difference between signal arrivals, recovery of the transmitted word is no longer possible. Since timing skew over a parallel bus can amount to a few nanoseconds, the resulting bandwidth limitation is in the range of hundreds of megahertz. A serial interface does not exhibit timing skew because there is only one differential signal in each direction within each lane, and there is no external clock signal since clocking information is embedded within the serial signal itself. As such, typical bandwidth limitations on serial signals are in the multi-gigahertz range. PCI Express is one example of the general trend toward replacing parallel buses with serial interconnects; other examples include Serial ATA (SATA), USB, Serial Attached SCSI (SAS), FireWire (IEEE 1394), and RapidIO. In digital video, examples in common use are DVI, HDMI, and DisplayPort. 
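The skew limit described above can be made concrete with a back-of-the-envelope calculation. A sketch, where the 2 ns figure is an assumed example skew and real buses would also budget setup/hold margin:

```python
# If the signals making up one parallel word can arrive up to `skew_s`
# seconds apart, the clock period must be at least that long for the word
# to be sampled coherently, so the usable clock frequency is bounded by
# the reciprocal of the skew.
def max_parallel_clock_mhz(skew_s: float) -> float:
    return 1.0 / skew_s / 1e6

# An assumed skew of 2 ns caps the bus in the hundreds-of-MHz range,
# matching the limitation described in the text:
print(max_parallel_clock_mhz(2e-9))  # 500.0
```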
Multichannel serial design increases flexibility with its ability to allocate fewer lanes for slower devices. Form factors PCI Express (standard) A PCI Express card fits into a slot of its physical size or larger (with x16 as the largest used), but may not fit into a smaller PCI Express slot; for example, a x16 card may not fit into a x4 or x8 slot. Some slots use open-ended sockets to permit physically longer cards and negotiate the best available electrical and logical connection. The number of lanes actually connected to a slot may also be fewer than the number supported by the physical slot size. An example is a x16 slot that runs at x4, which accepts any x1, x2, x4, x8 or x16 card, but provides only four lanes. Its specification may read as "x16 (x4 mode)", while "mechanical @ electrical" notation (e.g. "x16 @ x4") is also common. The advantage is that such slots can accommodate a larger range of PCI Express cards without requiring motherboard hardware to support the full transfer rate. Standard mechanical sizes are x1, x4, x8, and x16. Cards using a number of lanes other than the standard mechanical sizes need to physically fit the next larger mechanical size (e.g. an x2 card uses the x4 size, or an x12 card uses the x16 size). The cards themselves are designed and manufactured in various sizes. For example, solid-state drives (SSDs) that come in the form of PCI Express cards often use HHHL (half height, half length) and FHHL (full height, half length) to describe the physical dimensions of the card. Non-standard video card form factors Modern (since ) gaming video cards usually exceed the height as well as thickness specified in the PCI Express standard, due to the need for more capable and quieter cooling fans, as gaming video cards often emit hundreds of watts of heat. Modern computer cases are often wider to accommodate these taller cards, but not always. Since full-length cards (312 mm) are uncommon, modern cases sometimes cannot fit those. 
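The slot-compatibility behaviour described at the start of this section ("x16 (x4 mode)" slots, open-ended sockets) can be sketched as a small helper. This is a hypothetical illustration of the rules, not part of any real driver API:

```python
# A card fits a closed slot only if the slot's mechanical size is at least
# the card's; an open-ended socket accepts any length. The negotiated link
# width is the smaller of the card's lanes and the lanes actually wired to
# the slot.
def link_width(card_lanes: int, slot_mech: int, slot_elec: int,
               open_ended: bool = False):
    if card_lanes > slot_mech and not open_ended:
        return None  # card physically does not fit
    return min(card_lanes, slot_elec)

print(link_width(16, 16, 4))        # an "x16 (x4 mode)" slot runs the card at x4
print(link_width(16, 4, 4))         # x16 card in a closed x4 slot: None (no fit)
print(link_width(16, 4, 4, True))   # same slot open-ended: runs at x4
```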
The thickness of these cards also typically occupies the space of 2 to 5 PCIe slots. In fact, even the methodology of how to measure the cards varies between vendors, with some including the metal bracket size in dimensions and others not. For instance, comparing three high-end video cards released in 2020: a Sapphire Radeon RX 5700 XT card measures 135 mm in height (excluding the metal bracket), which exceeds the PCIe standard height by 28 mm, another Radeon RX 5700 XT card by XFX measures 55 mm thick (i.e. 2.7 PCI slots at 20.32 mm), taking up 3 PCIe slots, while an Asus GeForce RTX 3080 video card takes up two slots and measures 140.1 mm × 318.5 mm × 57.8 mm, exceeding PCI Express's maximum height, length, and thickness respectively. Pinout The following table identifies the conductors on each side of the edge connector on a PCI Express card. The solder side of the printed circuit board (PCB) is the A-side, and the component side is the B-side. PRSNT1# and PRSNT2# pins must be slightly shorter than the rest, to ensure that a hot-plugged card is fully inserted. The WAKE# pin uses full voltage to wake the computer, but must be pulled high from the standby power to indicate that the card is wake capable. Power Slot power All PCI Express cards may consume up to at (). The amount of +12 V and total power they may consume depends on the form factor and the role of the card: x1 cards are limited to 0.5 A at +12 V (6 W) and 10 W combined. x4 and wider cards are limited to 2.1 A at +12 V (25 W) and 25 W combined. A full-sized x1 card may draw up to the 25 W limit after initialization and software configuration as a high-power device. A full-sized x16 graphics card may draw up to 5.5 A at +12 V (66 W) and 75 W combined after initialization and software configuration as a high-power device. 6- and 8-pin power connectors Optional connectors add (6-pin) or (8-pin) of +12 V power for up to total ().
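The slot and connector budgets can be combined into a quick check. A sketch using the 75 W high-power slot limit quoted above together with the commonly published connector ratings of 75 W (6-pin) and 150 W (8-pin); those connector figures are not spelled out in the surrounding text and should be treated as assumptions here:

```python
# Total +12 V power available to a high-power x16 card is the slot limit
# plus whatever the auxiliary connectors supply.
SLOT_LIMIT_W = 75                      # full-sized x16 card, high-power config
CONNECTOR_W = {"6-pin": 75, "8-pin": 150}  # commonly published ratings (assumed)

def board_power_budget(connectors: list) -> int:
    return SLOT_LIMIT_W + sum(CONNECTOR_W[c] for c in connectors)

print(board_power_budget([]))                  # 75 W, slot power only
print(board_power_budget(["6-pin", "8-pin"]))  # 300 W
print(board_power_budget(["8-pin", "8-pin"]))  # 375 W
```

The dual 8-pin case reproduces the 375 W total mentioned later in this section for cards carrying two 8-pin connectors.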
Sense0 pin is connected to ground by the cable or power supply, or float on board if cable is not connected. Sense1 pin is connected to ground by the cable or power supply, or float on board if cable is not connected. Some cards use two 8-pin connectors, but this has not been standardized yet , therefore such cards must not carry the official PCI Express logo. This configuration allows 375 W total () and will likely be standardized by PCI-SIG with the PCI Express 4.0 standard. The 8-pin PCI Express connector could be confused with the EPS12V connector, which is mainly used for powering SMP and multi-core systems. The power connectors are variants of the Molex Mini-Fit Jr. series connectors. 12VHPWR connector PCI Express Mini Card PCI Express Mini Card (also known as Mini PCI Express, Mini PCIe, Mini PCI-E, mPCIe, and PEM), based on PCI Express, is a replacement for the Mini PCI form factor. It is developed by the PCI-SIG. The host device supports both PCI Express and USB 2.0 connectivity, and each card may use either standard. Most laptop computers built after 2005 use PCI Express for expansion cards; however, , many vendors are moving toward using the newer M.2 form factor for this purpose. Due to different dimensions, PCI Express Mini Cards are not physically compatible with standard full-size PCI Express slots; however, passive adapters exist that let them be used in full-size slots. Physical dimensions Dimensions of PCI Express Mini Cards are 30 mm × 50.95 mm (width × length) for a Full Mini Card. There is a 52-pin edge connector, consisting of two staggered rows on a 0.8 mm pitch. Each row has eight contacts, a gap equivalent to four contacts, then a further 18 contacts. Boards have a thickness of 1.0 mm, excluding the components. A "Half Mini Card" (sometimes abbreviated as HMC) is also specified, having approximately half the physical length of 26.8 mm. 
There are also half-size Mini PCIe cards, measuring 30 mm × 31.90 mm, about half the length of a full-size Mini PCIe card. Electrical interface PCI Express Mini Card edge connectors provide multiple connections and buses: PCI Express x1 (with SMBus) USB 2.0 Wires to diagnostic LEDs for wireless network (i.e., Wi-Fi) status on computer's chassis SIM card for GSM and WCDMA applications (UIM signals on spec.) Future extension for another PCIe lane 1.5 V and 3.3 V power Mini-SATA (mSATA) variant Despite sharing the Mini PCI Express form factor, an mSATA slot is not necessarily electrically compatible with Mini PCI Express. For this reason, only certain notebooks are compatible with mSATA drives. Most compatible systems are based on Intel's Sandy Bridge processor architecture, using the Huron River platform. Notebooks such as Lenovo's ThinkPad T, W and X series, released in March–April 2011, have support for an mSATA SSD card in their WWAN card slot. The ThinkPad Edge E220s/E420s, and the Lenovo IdeaPad Y460/Y560/Y570/Y580 also support mSATA. In contrast, the L-series among others can only support M.2 cards using the PCIe standard in the WWAN slot. Some notebooks (notably the Asus Eee PC, the Apple MacBook Air, and the Dell mini9 and mini10) use a variant of the PCI Express Mini Card as an SSD. This variant uses the reserved and several non-reserved pins to implement SATA and IDE interface passthrough, keeping only USB, ground lines, and sometimes the core PCIe x1 bus intact. This makes the "miniPCIe" flash and solid-state drives sold for netbooks largely incompatible with true PCI Express Mini implementations. Also, the typical Asus miniPCIe SSD is 71 mm long, causing the Dell 51 mm model to often be (incorrectly) referred to as half length. A true 51 mm Mini PCIe SSD was announced in 2009, with two stacked PCB layers that allow for higher storage capacity. The announced design preserves the PCIe interface, making it compatible with the standard mini PCIe slot.
No working product has yet been developed. Intel has numerous desktop boards with the PCIe x1 Mini-Card slot that typically do not support mSATA SSD. A list of desktop boards that natively support mSATA in the PCIe x1 Mini-Card slot (typically multiplexed with a SATA port) is provided on the Intel Support site. PCI Express M.2 M.2 replaces the mSATA standard and Mini PCIe. Computer bus interfaces provided through the M.2 connector are PCI Express 3.0 (up to four lanes), Serial ATA 3.0, and USB 3.0 (a single logical port for each of the latter two). It is up to the manufacturer of the M.2 host or device to choose which interfaces to support, depending on the desired level of host support and device type. PCI Express External Cabling PCI Express External Cabling (also known as External PCI Express, Cabled PCI Express, or ePCIe) specifications were released by the PCI-SIG in February 2007. Standard cables and connectors have been defined for x1, x4, x8, and x16 link widths, with a transfer rate of 250 MB/s per lane. The PCI-SIG also expects the norm to evolve to reach 500 MB/s, as in PCI Express 2.0. An example of the uses of Cabled PCI Express is a metal enclosure, containing a number of PCIe slots and PCIe-to-ePCIe adapter circuitry. This device would not be possible had it not been for the ePCIe specification. PCI Express OCuLink OCuLink (standing for "optical-copper link", since Cu is the chemical symbol for copper) is an extension for the "cable version of PCI Express". Version 1.0 of OCuLink, released in Oct 2015, supports up to 4 PCIe 3.0 lanes (3.9 GB/s) over copper cabling; a fiber optic version may appear in the future. The most recent version of OCuLink, OCuLink-2, supports up to 16 GB/s (PCIe 4.0 x8) while the maximum bandwidth of a USB 4 cable is 10GB/s. 
While initially intended for use in laptops for the connection of powerful external GPU boxes, OCuLink's popularity lies primarily in its use for PCIe interconnections in servers, a more prevalent application. Derivative forms Numerous other form factors use, or are able to use, PCIe. These include: Low-height card ExpressCard: Successor to the PC Card form factor (with x1 PCIe and USB 2.0; hot-pluggable) PCI Express ExpressModule: A hot-pluggable modular form factor defined for servers and workstations XQD card: A PCI Express-based flash card standard by the CompactFlash Association with x2 PCIe CFexpress card: A PCI Express-based flash card by the CompactFlash Association in three form factors supporting 1 to 4 PCIe lanes SD card: The SD Express bus, introduced in version 7.0 of the SD specification uses a x1 PCIe link XMC: Similar to the CMC/PMC form factor (VITA 42.3) AdvancedTCA: A complement to CompactPCI for larger applications; supports serial based backplane topologies AMC: A complement to the AdvancedTCA specification; supports processor and I/O modules on ATCA boards (x1, x2, x4 or x8 PCIe). FeaturePak: A tiny expansion card format (43mm × 65 mm) for embedded and small-form-factor applications, which implements two x1 PCIe links on a high-density connector along with USB, I2C, and up to 100 points of I/O Universal IO: A variant from Super Micro Computer Inc designed for use in low-profile rack-mounted chassis. It has the connector bracket reversed so it cannot fit in a normal PCI Express socket, but it is pin-compatible and may be inserted if the bracket is removed. M.2 (formerly known as NGFF) M-PCIe brings PCIe 3.0 to mobile devices (such as tablets and smartphones), over the M-PHY physical layer. U.2 (formerly known as SFF-8639) SlimSAS The PCIe slot connector can also carry protocols other than PCIe. 
Some 9xx series Intel chipsets support Serial Digital Video Out, a proprietary technology that uses a slot to transmit video signals from the host CPU's integrated graphics instead of PCIe, using a supported add-in. The PCIe transaction-layer protocol can also be used over some other interconnects, which are not electrically PCIe: Thunderbolt: A royalty-free interconnect standard by Intel that combines DisplayPort and PCIe protocols in a form factor compatible with Mini DisplayPort. Thunderbolt 3.0 also combines USB 3.1 and uses the USB-C form factor as opposed to Mini DisplayPort. USB4 History and revisions While in early development, PCIe was initially referred to as HSI (for High Speed Interconnect), and underwent a name change to 3GIO (for 3rd Generation I/O) before finally settling on its PCI-SIG name PCI Express. A technical working group named the Arapaho Work Group (AWG) drew up the standard. For initial drafts, the AWG consisted only of Intel engineers; subsequently, the AWG expanded to include industry partners. Since, PCIe has undergone several large and smaller revisions, improving on performance and other features. Comparison table
Technology
Computer hardware
null
143335
https://en.wikipedia.org/wiki/Celestial%20navigation
Celestial navigation
Celestial navigation, also known as astronavigation, is the practice of position fixing using stars and other celestial bodies that enables a navigator to accurately determine their actual current physical position in space or on the surface of the Earth without relying solely on estimated positional calculations, commonly known as dead reckoning. Celestial navigation is performed without using satellite navigation or other similar modern electronic or digital positioning means. Celestial navigation uses "sights," or timed angular measurements, taken typically between a celestial body (e.g., the Sun, the Moon, a planet, or a star) and the visible horizon. Celestial navigation can also take advantage of measurements between celestial bodies without reference to the Earth's horizon, such as when the Moon and other selected bodies are used in the practice called "lunars" or the lunar distance method, used for determining precise time when time is unknown. Taking sights of the Sun and the horizon whilst on the surface of the Earth is the most common application, providing various methods of determining position. One popular and simple method is "noon sight navigation": a single observation of the exact altitude of the Sun at "local noon", the highest point of the Sun above the horizon from the position of the observer on any given day, together with the exact time of that altitude. This angular observation, combined with its simultaneous precise time, referred to the time at the prime meridian, directly renders a latitude and longitude fix at the time and place of the observation by simple mathematical reduction. Sights of the Moon, a planet, Polaris, or one of the 57 other navigational stars whose coordinates are tabulated in the published nautical and air almanacs can accomplish the same goal.
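The longitude half of a noon sight follows directly from the Earth's rotation rate of 15° per hour: the GMT at which the Sun crosses the observer's meridian fixes longitude. A minimal sketch, which ignores the equation of time (the few-minute offset between mean and apparent solar time that a real reduction would take from the almanac):

```python
# Longitude from the Greenwich time of local apparent noon.
# The Earth turns 15 degrees of longitude per hour, so each hour of
# difference from 12:00 GMT corresponds to 15 degrees east or west.
def longitude_from_noon(gmt_of_local_noon_hours: float) -> float:
    """Longitude in degrees; negative values are west of the prime meridian."""
    return (12.0 - gmt_of_local_noon_hours) * 15.0

print(longitude_from_noon(12.0))  # 0.0: local noon at 12:00 GMT -> prime meridian
print(longitude_from_noon(17.0))  # -75.0: noon five hours late -> 75 degrees west
```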
Celestial navigation accomplishes its purpose by using angular measurements (sights) between celestial bodies and the visible horizon to locate one's position on the Earth, whether on land, in the air, or at sea. In addition, observations between stars and other celestial bodies accomplish the same results while in space; this technique was used in the Apollo space program and is still used on many contemporary satellites. Equally, celestial navigation may be used while on other planetary bodies to determine position on their surface, using their local horizon and suitable celestial bodies with matching reduction tables and knowledge of local time. For navigation by celestial means, when on the surface of the Earth at any given instant in time, a celestial body is located directly over a single point on the Earth's surface. The latitude and longitude of that point are known as the celestial body's geographic position (GP), the location of which can be determined from tables in the nautical or air almanac for that year. The measured angle between the celestial body and the visible horizon is directly related to the distance between the celestial body's GP and the observer's position. After some computations, referred to as "sight reduction," this measurement is used to plot a line of position (LOP) on a navigational chart or plotting worksheet, with the observer's position being somewhere on that line. The LOP is actually a short segment of a very large circle on Earth that surrounds the GP of the observed celestial body. (An observer located anywhere on the circumference of this circle on Earth, measuring the angle of the same celestial body above the horizon at that instant of time, would observe that body to be at the same angle above the horizon.)
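The relationship between observed altitude and distance from the GP can be stated exactly: the zenith distance (90° minus the altitude) expressed in minutes of arc equals the distance in nautical miles, because one minute of arc along a great circle is one nautical mile. A small sketch of that rule:

```python
# Radius of the circle of equal altitude around a body's geographic
# position (GP): zenith distance in arc-minutes = distance in nautical miles.
def distance_to_gp_nm(observed_altitude_deg: float) -> float:
    zenith_distance_deg = 90.0 - observed_altitude_deg
    return zenith_distance_deg * 60.0  # 60 arc-minutes per degree

print(distance_to_gp_nm(90.0))  # 0.0: the body is directly overhead
print(distance_to_gp_nm(40.0))  # 3000.0: the LOP circle has a 3,000 nm radius
```

This is why the LOP circles are "very large": even a body 40° high places the observer 3,000 nautical miles from its GP.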
Sights on two celestial bodies give two such lines on the chart, intersecting at the observer's position (strictly, the two circles described above intersect at two points, but one can be discarded since it lies far from the estimated position; see the figure in the example below). Most navigators will use sights of three to five stars, if available, since that results in only one common intersection and minimizes the chance of error. That premise is the basis for the most commonly used method of celestial navigation, the "altitude-intercept method." Plotting at least three position lines usually produces a small triangle, with the exact position inside it; the size of the triangle indicates the accuracy of the sights. Joshua Slocum used both noon sight and star sight navigation to determine his current position during his voyage, the first recorded single-handed circumnavigation of the world. In addition, he used the lunar distance method (or "lunars") to determine and maintain known time at Greenwich (the prime meridian), thereby keeping his "tin clock" reasonably accurate and therefore his position fixes accurate. Celestial navigation can only determine longitude when the time at the prime meridian is accurately known. The more accurately time at the prime meridian (0° longitude) is known, the more accurate the fix; indeed, every four seconds of error in the time source (commonly a chronometer or, in aircraft, an accurate "hack watch") can lead to a positional error of one nautical mile. When time is unknown or not trusted, the lunar distance method can be used to determine time at the prime meridian; a functioning timepiece with a second hand or digit, an almanac with lunar corrections, and a sextant are all that is required.
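The four-seconds-per-nautical-mile rule quoted above follows directly from the Earth's rotation rate: 15° of longitude per hour is one arcminute every 4 seconds, and one arcminute of longitude is one nautical mile at the equator. A small sketch with a hypothetical function name; the cosine scaling for other latitudes is a standard refinement, not stated in the passage above:

```python
import math

def longitude_error_nm(clock_error_seconds, latitude_deg=0.0):
    """Positional error, in nautical miles, caused by a clock error.

    The Earth turns 15 degrees of longitude per hour, i.e. one
    arcminute every 4 seconds, and one arcminute of longitude is one
    nautical mile at the equator, shrinking with cos(latitude).
    """
    arcminutes_of_longitude = clock_error_seconds / 4.0
    return arcminutes_of_longitude * math.cos(math.radians(latitude_deg))

# A 4-second clock error at the equator costs one nautical mile.
print(longitude_error_nm(4))  # 1.0
```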
With no knowledge of time at all, a lunar calculation (given an observable Moon of respectable altitude) can provide time accurate to within a second or two after about 15 to 30 minutes of observation and mathematical reduction from the almanac tables. After practice, an observer can regularly derive and prove time using this method to within about one second, or one nautical mile of navigational error attributable to the time source. Example An example illustrating the concept behind the intercept method for determining position is shown to the right. (Two other common methods for determining one's position using celestial navigation are the longitude by chronometer and ex-meridian methods.) In the adjacent image, the two circles on the map represent lines of position for the Sun and Moon at 12:00 GMT on October 29, 2005. At this time, a navigator on a ship at sea measured the Moon to be 56° above the horizon using a sextant. Ten minutes later, the Sun was observed to be 40° above the horizon. Lines of position were then calculated and plotted for each of these observations. Since both the Sun and Moon were observed at their respective angles from the same location, the navigator would have to be located at one of the two points where the circles cross. In this case, the navigator is either located on the Atlantic Ocean, west of Madeira, or in South America, southwest of Asunción, Paraguay. In most cases, determining which of the two intersections is the correct one is obvious to the observer because they are often thousands of miles apart. As it is unlikely that the ship is sailing across South America, the position in the Atlantic is the correct one. Note that the lines of position in the figure are distorted because of the map's projection; they would be circular if plotted on a globe.
An observer at the Gran Chaco point would see the Moon at the left of the Sun, and an observer at the Madeira point would see the Moon at the right of the Sun. Angular measurement Accurate angle measurement has evolved over the years. One simple method is to hold the hand above the horizon with one's arm stretched out. The angular width of the little finger is just over 1.5 degrees at extended arm's length and can be used to estimate the elevation of the Sun from the horizon plane and therefore estimate the time until sunset. The need for more accurate measurements led to the development of a number of increasingly accurate instruments, including the kamal, astrolabe, octant, and sextant. The sextant and octant are most accurate because they measure angles from the horizon, eliminating errors caused by the placement of an instrument's pointers, and because their dual-mirror system cancels relative motions of the instrument, showing a steady view of the object and horizon. Navigators measure distance on the Earth in degrees, arcminutes, and arcseconds. A nautical mile is defined as 1,852 meters but is also (not accidentally) one arc minute of angle along a meridian on the Earth. Sextants can be read accurately to within 0.1 arcminutes, so the observer's position can be determined within (theoretically) 0.1 nautical miles (185.2 meters, or about 203 yards). Most ocean navigators, measuring from a moving platform under fair conditions, can achieve a practical accuracy of approximately 1.5 nautical miles (2.8 km), enough to navigate safely when out of sight of land or other hazards. Practical navigation Practical celestial navigation usually requires a marine chronometer to measure time, a sextant to measure the angles, an almanac giving schedules of the coordinates of celestial objects, a set of sight reduction tables to help perform the height and azimuth computations, and a chart of the region. 
With sight reduction tables, the only calculations required are addition and subtraction. Small handheld computers, laptops, and even scientific calculators enable modern navigators to "reduce" sextant sights in minutes, by automating all the calculation and/or data lookup steps. Most people can master simpler celestial navigation procedures after a day or two of instruction and practice, even using manual calculation methods. Modern practical navigators usually use celestial navigation in combination with satellite navigation to correct a dead reckoning track, that is, a position estimated from the vessel's previous position, course, and speed. Using multiple methods helps the navigator detect errors and simplifies procedures. When used this way, a navigator, from time to time, measures the Sun's altitude with a sextant, then compares that with a precalculated altitude based on the exact time and estimated position of the observation. On the chart, the straight edge of a plotter can mark each position line. If the position line indicates a location more than a few miles from the estimated position, more observations can be taken to restart the dead-reckoning track. In the event of equipment or electrical failure, taking Sun lines a few times a day and advancing them by dead reckoning allows a vessel to get a crude running fix sufficient to return to port. One can also use the Moon, a planet, Polaris, or one of the 57 other navigational stars to track celestial positioning. Latitude Latitude was measured in the past either by measuring the altitude of the Sun at noon (the "noon sight"), by measuring the altitude of any other celestial body when crossing the meridian (reaching its maximum altitude when due north or south), or frequently by measuring the altitude of Polaris, the north star (assuming it is sufficiently visible above the horizon, which it is not in the Southern Hemisphere). Polaris always stays within 1 degree of the celestial north pole.
If a navigator measures the angle to Polaris and finds it to be 10 degrees from the horizon, then he is about 10 degrees north of the equator. This approximate latitude is then corrected using simple tables or almanac corrections to determine a latitude that is theoretically accurate to within a fraction of a mile. Angles are measured from the horizon because locating the point directly overhead, the zenith, is not normally possible. When haze obscures the horizon, navigators use artificial horizons, which are horizontal mirrors or pans of reflective fluid, especially mercury. In the latter case, the angle between the reflected image in the mirror and the actual image of the object in the sky is exactly twice the required altitude. Longitude If the angle to Polaris can be accurately measured, a similar measurement of a star near the eastern or western horizon would provide the longitude. The problem is that the Earth turns 15 degrees per hour, making such measurements dependent on time. A measurement taken a few minutes earlier or later than the corresponding measurement the day before introduces serious navigation errors. Before good chronometers were available, longitude measurements were based on the transit of the Moon or the positions of the moons of Jupiter. For the most part, these were too difficult to be used by anyone except professional astronomers. The invention of the modern chronometer by John Harrison in 1761 vastly simplified longitude calculation. The longitude problem took centuries to solve and depended on the construction of a non-pendulum clock (as pendulum clocks cannot function accurately on a tilting ship, or indeed a moving vehicle of any kind). Two useful methods evolved during the 18th century and are still practiced today: lunar distance, which does not involve the use of a chronometer, and the use of an accurate timepiece or chronometer.
Presently, a layperson's calculation of longitude can be made by noting the exact local time (ignoring any daylight saving time) when the Sun is at its highest point in the sky. The moment of local noon can be determined more easily and accurately with a small, exactly vertical rod driven into level ground: take the time reading when the shadow points due north (in the northern hemisphere). Then take the local time reading and subtract it from GMT (Greenwich Mean Time), the time in London, England. For example, a noon reading (12:00) near central Canada or the US would occur at approximately 6 p.m. (18:00) in London. The 6-hour difference is one quarter of a 24-hour day, or 90 degrees of a 360-degree circle (the Earth). The calculation can also be made by taking the number of hours (using decimals for fractions of an hour) multiplied by 15, the number of degrees in one hour. Either way, it can be demonstrated that much of central North America is at or near 90 degrees west longitude. Eastern longitudes can be determined by adding the local time to GMT, with similar calculations. Lunar distance An older but still useful and practical method of determining accurate time at sea before the advent of precise timekeeping and satellite-based time systems is called "lunar distances," or "lunars," which was used extensively for a short period and refined for daily use on board ships in the 18th century. Use declined through the middle of the 19th century as better and better timepieces (chronometers) became available to the average vessel at sea. Although most recently used only by sextant hobbyists and historians, the method is now becoming more common in celestial navigation courses, to reduce total dependence on GNSS systems as potentially the only accurate time source aboard a vessel.
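The layperson noon-longitude calculation described above (hours of difference from GMT, multiplied by 15 degrees per hour) can be sketched as follows. The function name is hypothetical, and the sketch ignores the equation of time, which shifts apparent noon by up to roughly a quarter of an hour over the year:

```python
def longitude_from_noon(local_noon_gmt_hours):
    """Longitude in degrees (negative = west) from the GMT time at
    which local apparent noon is observed.

    Local noon at 18:00 GMT means the Sun crossed this meridian six
    hours after Greenwich: 6 * 15 = 90 degrees of west longitude.
    (Ignores the equation of time, as does the rough method in the
    text above.)
    """
    return (12.0 - local_noon_gmt_hours) * 15.0

# Local noon observed at 18:00 GMT, as in central North America:
print(longitude_from_noon(18.0))  # -90.0 (90 degrees west)
```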
The method is designed for use when an accurate timepiece is not available or timepiece accuracy is suspect during a long sea voyage. The navigator precisely measures the angle between the Moon and the Sun, or between the Moon and one of several stars near the ecliptic. The observed angle must be corrected for the effects of refraction and parallax, like any celestial sight. To make this correction, the navigator measures the altitudes of the Moon and the Sun (or other star) at about the same time as the lunar distance angle. Only rough values for the altitudes are required. A calculation with suitable published tables (or longhand with logarithms and graphical tables) requires about 10 to 15 minutes' work to convert the observed angle(s) to a geocentric lunar distance. The navigator then compares the corrected angle against those listed in the appropriate almanac pages for every three hours of Greenwich time, using interpolation tables to derive intermediate values. The result is the difference between the time source (of unknown time) used for the observations and the actual prime meridian time (that of the "Zero Meridian" at Greenwich, also known as UTC or GMT). Knowing UTC/GMT, a further set of sights can be taken and reduced by the navigator to calculate their exact position on the Earth as a local latitude and longitude. Use of time The considerably more popular method was (and still is) to use an accurate timepiece to directly measure the time of a sextant sight. The need for accurate navigation led to the development of progressively more accurate chronometers in the 18th century (see John Harrison). Today, time is measured with a chronometer, a quartz watch, a shortwave radio time signal broadcast from an atomic clock, or the time displayed on a satellite time signal receiver. A quartz wristwatch normally keeps time within a half-second per day.
If it is worn constantly, keeping it near body heat, its rate of drift can be measured against the radio, and by compensating for this drift, a navigator can keep time to better than a second per month. When time at the prime meridian (or another starting point) is accurately known, celestial navigation can determine longitude, and the more accurately latitude and time are known, the more accurate the longitude determination. The angular speed of the Earth's surface is latitude-dependent: at the poles, or latitude 90°, the rotation velocity of the Earth's surface reaches zero, so a time error causes no displacement in longitude. At 45° latitude, one second of time is equivalent in longitude to about 330 meters, and one-tenth of a second to about 33 meters. At the slightly bulged-out equator, or latitude 0°, the rotation velocity of the Earth's surface, and hence the longitude equivalent of one second of time, reaches its maximum of roughly 460 meters. Traditionally, a navigator checked their chronometer(s) with their sextant at a geographic marker surveyed by a professional astronomer. This is now a rare skill, and most harbormasters cannot locate their harbor's marker. Ships often carried more than one chronometer. Chronometers were kept on gimbals in a dry room near the center of the ship and were used to set a hack watch for the actual sight, so that no chronometer was ever exposed to the wind and salt water on deck. Winding and comparing the chronometers was a crucial duty of the navigator; even today, it is still logged daily in the ship's deck log and reported to the captain before eight bells on the forenoon watch (shipboard noon). Navigators also set the ship's clocks and calendar. Two chronometers provided dual modular redundancy, allowing a backup if one ceased to work, but no error correction if the two displayed different times, since it would be impossible to know which one was wrong; the error detection obtained would be the same as having only one chronometer and checking it periodically, every day at noon, against dead reckoning.
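The latitude dependence of the longitude equivalent of a time error can be computed directly: one second of time is 15 arcseconds of longitude, one arcminute of longitude spans 1852 m at the equator, and the distance scales with the cosine of the latitude. A sketch with a hypothetical function name:

```python
import math

def longitude_metres_per_time_second(latitude_deg):
    """Ground distance spanned by one second of time error.

    One second of time is 15 arcseconds of longitude; one arcminute
    of longitude is 1852 m at the equator, scaled by cos(latitude).
    """
    metres_at_equator = 15.0 / 60.0 * 1852.0  # 463 m per time-second
    return metres_at_equator * math.cos(math.radians(latitude_deg))

print(round(longitude_metres_per_time_second(0), 1))   # 463.0 (equator)
print(round(longitude_metres_per_time_second(45), 1))  # 327.4
print(round(longitude_metres_per_time_second(90), 1))  # 0.0 (poles)
```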
Three chronometers provided triple modular redundancy, allowing error correction if one of the three was wrong, in which case the navigator would take the average of the two with closer readings (average precision vote). There is an old adage to this effect: "Never go to sea with two chronometers; take one or three." Vessels engaged in survey work generally carried many more than three chronometers; HMS Beagle, for example, carried 22. Modern celestial navigation The celestial line of position concept was discovered in 1837 by Thomas Hubbard Sumner when, after one observation, he computed and plotted his longitude at more than one trial latitude in his vicinity and noticed that the positions lay along a line. Using this method with two bodies, navigators were finally able to cross two position lines and obtain their position, in effect determining both latitude and longitude. Later in the 19th century came the development of the modern (Marcq St. Hilaire) intercept method; with this method, the body's altitude and azimuth are calculated for a convenient trial position and compared with the observed altitude. The difference in arcminutes is the "intercept" distance, in nautical miles, by which the position line needs to be shifted toward or away from the direction of the body's subpoint. (The intercept method uses the concept illustrated in the example above.) Two other methods of reducing sights are the longitude by chronometer and the ex-meridian method. While celestial navigation is becoming increasingly redundant with the advent of inexpensive and highly accurate satellite navigation receivers (GNSS), it was used extensively in aviation until the 1960s and in marine navigation until quite recently.
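The Marcq St. Hilaire intercept computation described above reduces to standard spherical trigonometry: a calculated altitude Hc and azimuth Zn follow from the trial position and the body's geographic position, and the intercept is the arcminute difference between the observed and calculated altitudes. A simplified sketch with a hypothetical function name; refraction, dip, and other sextant corrections are assumed to have been applied to the observed altitude already:

```python
import math

def intercept(assumed_lat, assumed_lon, gha, dec, observed_alt):
    """Marcq St. Hilaire sight reduction (all angles in degrees).

    assumed_lat/assumed_lon: trial position (longitude east-positive);
    gha/dec: the body's Greenwich hour angle and declination from the
    almanac; observed_alt: the fully corrected sextant altitude.
    Returns (intercept_nm, azimuth_deg); a positive intercept is
    plotted from the trial position toward the body.
    """
    lat = math.radians(assumed_lat)
    dec_r = math.radians(dec)
    lha = math.radians(gha + assumed_lon)  # local hour angle
    # Calculated altitude Hc from the navigational triangle:
    sin_hc = (math.sin(lat) * math.sin(dec_r)
              + math.cos(lat) * math.cos(dec_r) * math.cos(lha))
    hc = math.asin(sin_hc)
    # Azimuth angle Z, then true azimuth Zn measured from north:
    cos_z = ((math.sin(dec_r) - math.sin(lat) * sin_hc)
             / (math.cos(lat) * math.cos(hc)))
    z = math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))
    zn = z if math.sin(lha) < 0 else 360.0 - z
    return (observed_alt - math.degrees(hc)) * 60.0, zn

# Trial position 45N 0E; body at GHA 30, declination 20N:
intercept_nm, azimuth = intercept(45.0, 0.0, 30.0, 20.0, 54.816)
print(intercept_nm, azimuth)  # near-zero intercept, azimuth to the southwest
```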
However, since a prudent mariner never relies on any sole means of fixing their position, many national maritime authorities still require deck officers to show knowledge of celestial navigation in examinations, primarily as a backup for electronic or satellite navigation. One of the most common current uses of celestial navigation aboard large merchant vessels is for compass calibration and error checking at sea when no terrestrial references are available. In 1980, French Navy regulations still required an independently operated timepiece on board so that, in combination with a sextant, a ship's position could be determined by celestial navigation. The U.S. Air Force and U.S. Navy continued instructing military aviators in celestial navigation until 1997, because celestial navigation can be used independently of ground aids, has global coverage, cannot be jammed (although it can be obscured by clouds), and does not give off any signals that could be detected by an enemy. The United States Naval Academy (USNA) announced that it was discontinuing its course on celestial navigation (considered to be one of its most demanding non-engineering courses) from the formal curriculum in the spring of 1998. In October 2015, citing concerns about the reliability of GNSS systems in the face of potential hostile hacking, the USNA reinstated instruction in celestial navigation in the 2015 to 2016 academic year. At another federal service academy, the US Merchant Marine Academy, there was no break in instruction in celestial navigation, as it is required to pass the US Coast Guard License Exam to enter the Merchant Marine. It is also taught at Harvard, most recently as Astronomy 2. Celestial navigation continues to be used by private yachtsmen, and particularly by long-distance cruising yachts around the world.
For small cruising boat crews, celestial navigation is generally considered an essential skill when venturing beyond visual range of land. Although satellite navigation technology is reliable, offshore yachtsmen use celestial navigation as either a primary navigational tool or as a backup. Celestial navigation was used in commercial aviation up until the early part of the jet age; early Boeing 747s had a "sextant port" in the roof of the cockpit. It was only phased out in the 1960s with the advent of inertial navigation and Doppler navigation systems, and today's satellite-based systems, which can locate an aircraft's position to within a 3-meter sphere with several updates per second. A variation on terrestrial celestial navigation was used to help orient the Apollo spacecraft en route to and from the Moon. To this day, space missions such as the Mars Exploration Rover use star trackers to determine the attitude of the spacecraft. As early as the mid-1960s, advanced electronic and computer systems had evolved that enabled navigators to obtain automated celestial sight fixes. These systems were used aboard both ships and US Air Force aircraft, and were highly accurate, able to lock onto up to 11 stars (even in daytime) and resolve the craft's position with high accuracy. The SR-71 high-speed reconnaissance aircraft was one example of an aircraft that used a combination of automated celestial and inertial navigation. These rare systems were expensive, however, and the few that remain in use today are regarded as backups to more reliable satellite positioning systems. Intercontinental ballistic missiles use celestial navigation to check and correct their course (initially set using internal gyroscopes) while flying outside the Earth's atmosphere. The immunity to jamming is the main driver behind this seemingly archaic technique.
X-ray pulsar-based navigation and timing (XNAV) is an experimental navigation technique for space whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. Similar to GNSS, this comparison would allow the vehicle to triangulate its position accurately (±5 km). The advantage of using X-ray signals over radio waves is that X-ray telescopes can be made smaller and lighter. On 9 November 2016, the Chinese Academy of Sciences launched an experimental pulsar navigation satellite called XPNAV 1. SEXTANT (Station Explorer for X-ray Timing and Navigation Technology) is a NASA-funded project developed at the Goddard Space Flight Center that is testing XNAV in orbit on board the International Space Station in connection with the NICER project, launched on 3 June 2017 on the SpaceX CRS-11 ISS resupply mission. Training Celestial navigation training equipment for aircraft crews combines a simple flight simulator with a planetarium. An early example is the Link Celestial Navigation Trainer, used in the Second World War. Housed in a high building, it featured a cockpit accommodating a whole bomber crew (pilot, navigator, and bombardier). The cockpit offered a full array of instruments, which the pilot used to fly the simulated airplane. Fixed to a dome above the cockpit was an arrangement of lights, some collimated, simulating constellations, from which the navigator determined the plane's position. The dome's movement simulated the changing positions of the stars with the passage of time and the movement of the plane around the Earth. The navigator also received simulated radio signals from various positions on the ground.
Below the cockpit moved "terrain plates"—large, movable aerial photographs of the land below—which gave the crew the impression of flight and enabled the bomber to practice lining up bombing targets. A team of operators sat at a control booth on the ground below the machine, from which they could simulate weather conditions such as wind or clouds. This team also tracked the airplane's position by moving a "crab" (a marker) on a paper map. The Link Celestial Navigation Trainer was developed in response to a request made by the Royal Air Force (RAF) in 1939. The RAF ordered 60 of these machines, and the first one was built in 1941. The RAF used only a few of these, leasing the rest back to the US, where eventually hundreds were in use.
Medical ultrasound
Medical ultrasound includes diagnostic techniques (mainly imaging techniques) using ultrasound, as well as therapeutic applications of ultrasound. In diagnosis, it is used to create an image of internal body structures such as tendons, muscles, joints, blood vessels, and internal organs, to measure some characteristics (e.g., distances and velocities), or to generate an informative audible sound. The use of ultrasound to produce visual images for medicine is called medical ultrasonography or simply sonography, or echography. The practice of examining pregnant women using ultrasound is called obstetric ultrasonography and was an early development of clinical ultrasonography. The machine used is called an ultrasound machine, a sonograph, or an echograph. The visual image formed using this technique is called an ultrasonogram, a sonogram, or an echogram. Ultrasound is composed of sound waves with frequencies greater than 20,000 Hz, the approximate upper threshold of human hearing. Ultrasonic images, also known as sonograms, are created by sending pulses of ultrasound into tissue using a probe. The ultrasound pulses echo off tissues with different reflection properties and are returned to the probe, which records and displays them as an image. A general-purpose ultrasonic transducer may be used for most imaging purposes, but some situations may require a specialized transducer. Most ultrasound examinations are done using a transducer on the surface of the body, but improved visualization is often possible if a transducer can be placed inside the body. For this purpose, special-use transducers, including transvaginal, endorectal, and transesophageal transducers, are commonly employed. At the extreme, very small transducers can be mounted on small-diameter catheters and placed within blood vessels to image the walls and disease of those vessels.
Types The imaging mode refers to probe and machine settings that result in specific dimensions of the ultrasound image. Several modes of ultrasound are used in medical imaging: A-mode: Amplitude mode refers to the mode in which the amplitude of the transducer voltage is recorded as a function of the two-way travel time of an ultrasound pulse. A single pulse is transmitted through the body and scatters back to the same transducer element. The voltage amplitudes recorded correlate linearly to acoustic pressure amplitudes. A-mode is one-dimensional. B-mode: In brightness mode, an array of transducer elements scans a plane through the body, resulting in a two-dimensional image. Each pixel value of the image correlates to the voltage amplitude registered from the backscattered signal. The dimensions of B-mode images are voltage as a function of angle and two-way time. M-mode: In motion mode, A-mode pulses are emitted in succession. The backscattered signal is converted to lines of bright pixels, whose brightness linearly correlates to the backscattered voltage amplitudes. Each new line is plotted adjacent to the previous one, resulting in an image that looks like a B-mode image. The M-mode image dimensions are, however, voltage as a function of two-way time and recording time. This mode is an ultrasound analogy to streak video recording in high-speed photography. As moving tissue transitions produce backscattering, this can be used to determine the displacement of specific organ structures, most commonly the heart. Most machines convert two-way time to imaging depth using an assumed speed of sound of 1540 m/s. As the actual speed of sound varies greatly in different tissue types, an ultrasound image is therefore not a true tomographic representation of the body. Three-dimensional imaging is done by combining B-mode images, using dedicated rotating or stationary probes. This has also been referred to as C-mode.
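The time-to-depth conversion mentioned above is a one-line calculation: the depth assigned to an echo is the assumed sound speed times half the two-way travel time. A minimal sketch with a hypothetical function name:

```python
def echo_depth_mm(two_way_time_us, speed_of_sound_m_s=1540.0):
    """Depth, in mm, that a scanner assigns to an echo received
    two_way_time_us microseconds after the pulse was transmitted,
    assuming the conventional 1540 m/s tissue sound speed.

    The pulse travels to the reflector and back, so the one-way
    depth is speed * time / 2.
    """
    one_way_time_s = two_way_time_us * 1e-6 / 2.0
    return speed_of_sound_m_s * one_way_time_s * 1000.0

# An echo arriving 65 microseconds after transmission is drawn at ~50 mm:
print(round(echo_depth_mm(65.0), 2))  # 50.05
```

Because the real speed of sound differs between tissues, depths computed this way are only nominal, which is why the text above notes that an ultrasound image is not a true tomographic representation.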
An imaging technique refers to a method of signal generation and processing that results in a specific application. Most imaging techniques operate in B-mode. Doppler sonography: This imaging technique makes use of the Doppler effect in detecting and measuring moving targets, typically blood. Harmonic imaging: The backscattered signal from tissue is filtered to retain only frequency content of at least twice the centre frequency of the transmitted ultrasound. Harmonic imaging is used for perfusion detection with ultrasound contrast agents and for the detection of tissue harmonics. Common pulse schemes for creating a harmonic response without the need for real-time Fourier analysis are pulse inversion and power modulation. B-flow is an imaging technique that digitally highlights moving reflectors (mainly red blood cells) while suppressing the signals from the surrounding stationary tissue. It aims to visualize flowing blood and surrounding stationary tissues simultaneously, and is thus an alternative or complement to Doppler ultrasonography in visualizing blood flow. Therapeutic ultrasound aimed at a specific tumor or calculus is not an imaging mode; however, for positioning a treatment probe to focus on a specific region of interest, A-mode and B-mode are typically used, often during treatment. Advantages and drawbacks Compared to other medical imaging modalities, ultrasound has several advantages. It provides images in real time, is portable, and can consequently be brought to the bedside. It is substantially lower in cost than other imaging strategies. Drawbacks include various limits on its field of view, the need for patient cooperation, dependence on patient physique, difficulty imaging structures obscured by bone, air, or gases, and the necessity of a skilled operator, usually with professional training. Uses Sonography (ultrasonography) is widely used in medicine.
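The pulse-inversion scheme mentioned under imaging techniques above can be illustrated with a toy numerical model: transmitting a pulse and its inverted copy and summing the two received echoes cancels the linear (fundamental) response and keeps the even-harmonic content. This is an illustrative sketch with an assumed quadratic nonlinearity, not an acoustic simulation:

```python
import math

def echo(sample, nonlinearity=0.1):
    """Toy echo model: a linear term plus a small quadratic term that
    stands in for the second-harmonic response of tissue or contrast
    agents (illustrative assumption only)."""
    return sample + nonlinearity * sample ** 2

# Transmit a pulse and its inverted copy, then sum the echo pairs:
pulse = [math.sin(2 * math.pi * 0.1 * n) for n in range(50)]
summed = [echo(s) + echo(-s) for s in pulse]

# The linear (fundamental) parts cancel sample by sample; what remains
# is twice the quadratic term, i.e. pure harmonic content.
assert all(abs(t - 0.2 * s ** 2) < 1e-12 for t, s in zip(summed, pulse))
```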
It is possible to perform both diagnostic and therapeutic procedures, using ultrasound to guide interventional procedures such as biopsies or the drainage of fluid collections, which can be both diagnostic and therapeutic. Sonographers are medical professionals who perform scans, which are traditionally interpreted by radiologists, physicians who specialize in the application and interpretation of medical imaging modalities, or by cardiologists in the case of cardiac ultrasonography (echocardiography). Sonography is effective for imaging soft tissues of the body. Superficial structures such as muscle, tendon, testis, breast, thyroid and parathyroid glands, and the neonatal brain are imaged at higher frequencies (7–18 MHz), which provide better linear (axial) and horizontal (lateral) resolution. Deeper structures such as the liver and kidney are imaged at lower frequencies (1–6 MHz), with lower axial and lateral resolution as the price of deeper tissue penetration. Anesthesiology In anesthesiology, ultrasound is commonly used to guide the placement of needles when injecting local anesthetic solutions in the proximity of nerves identified within the ultrasound image (nerve block). It is also used for vascular access, such as cannulation of large central veins, and for difficult arterial cannulation. Transcranial Doppler is frequently used by neuro-anesthesiologists for obtaining information about flow velocity in the basal cerebral vessels. Angiology (vascular) In angiology or vascular medicine, duplex ultrasound (B-mode imaging combined with Doppler flow measurement) is used to diagnose arterial and venous disease. This is particularly important in potential neurologic problems, where carotid ultrasound is commonly used for assessing blood flow and potential or suspected stenosis in the carotid arteries, while transcranial Doppler is used for imaging flow in the intracerebral arteries.
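The Doppler flow measurement mentioned above rests on the standard Doppler equation, v = c * df / (2 * f0 * cos(theta)), where df is the measured frequency shift, f0 the transmitted frequency, and theta the beam-to-flow angle. A sketch with a hypothetical function name and assumed example numbers:

```python
import math

def doppler_velocity(shift_hz, transmit_hz, angle_deg, c=1540.0):
    """Flow velocity (m/s) from a measured Doppler frequency shift.

    Standard Doppler equation: v = c * df / (2 * f0 * cos(theta)),
    where theta is the angle between the beam and the flow direction
    and c is the assumed tissue sound speed of 1540 m/s.
    """
    return (c * shift_hz
            / (2.0 * transmit_hz * math.cos(math.radians(angle_deg))))

# A 1.95 kHz shift on a 5 MHz beam at a 60-degree beam-to-flow angle:
print(round(doppler_velocity(1950.0, 5e6, 60.0), 3))  # 0.601
```

The cosine term in the denominator is why sonographers keep the beam-to-flow angle well below 90 degrees: as theta approaches 90 degrees, the measured shift vanishes and the velocity estimate becomes unstable.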
Intravascular ultrasound (IVUS) uses a specially designed catheter with a miniaturized ultrasound probe attached to its distal end, which is then threaded inside a blood vessel. The proximal end of the catheter is attached to computerized ultrasound equipment and allows the application of ultrasound technology, such as a piezoelectric transducer or capacitive micromachined ultrasonic transducer, to visualize the endothelium of blood vessels in living individuals. In the case of the common and potentially serious problem of blood clots in the deep veins of the leg, ultrasound plays a key diagnostic role, while ultrasonography of chronic venous insufficiency of the legs focuses on the more superficial veins to assist with planning of suitable interventions to relieve symptoms or improve cosmetics. Cardiology (heart) Echocardiography is an essential tool in cardiology, assisting in the evaluation of heart valve function, such as stenosis or insufficiency, the strength of cardiac muscle contraction, and hypertrophy or dilatation of the main chambers (ventricles and atria). Emergency medicine Point-of-care ultrasound has many applications in emergency medicine. These include differentiating cardiac from pulmonary causes of acute breathlessness, and the Focused Assessment with Sonography for Trauma (FAST) exam, extended to include assessment for significant hemoperitoneum or pericardial tamponade after trauma (EFAST). Other uses include assisting with differentiating causes of abdominal pain such as gallstones and kidney stones. Emergency medicine residency programs have a substantial history of promoting the use of bedside ultrasound during physician training. Gastroenterology/Colorectal surgery Both abdominal and endoanal ultrasound are frequently used in gastroenterology and colorectal surgery. In abdominal sonography, the major organs of the abdomen, such as the pancreas, aorta, inferior vena cava, liver, gall bladder, bile ducts, kidneys, and spleen, may be imaged.
However, sound waves may be blocked by gas in the bowel and attenuated to differing degrees by fat, sometimes limiting diagnostic capabilities. The appendix can sometimes be seen when inflamed (e.g., appendicitis), and ultrasound is the initial imaging choice, avoiding radiation if possible, although it frequently needs to be followed by other imaging methods such as CT. Endoanal ultrasound is used particularly in the investigation of anorectal symptoms such as fecal incontinence or obstructed defecation. It images the immediate perianal anatomy and is able to detect occult defects such as tearing of the anal sphincter. Hepatology Ultrasonography of liver tumors allows for both detection and characterization. Ultrasound imaging studies are often obtained during the evaluation of fatty liver disease, in which ultrasonography reveals a "bright" liver with increased echogenicity. Pocket-sized ultrasound devices might be used as point-of-care screening tools to diagnose liver steatosis. Gynecology and obstetrics Gynecologic ultrasonography examines female pelvic organs (specifically the uterus, ovaries, and fallopian tubes) as well as the bladder, adnexa, and pouch of Douglas. It uses transducers designed for approaches through the lower abdominal wall (curvilinear and sector) as well as specialty transducers such as transvaginal probes. Obstetrical sonography was originally developed in the late 1950s and 1960s by Sir Ian Donald and is commonly used during pregnancy to check the development and presentation of the fetus. It can be used to identify many conditions potentially harmful to the mother and/or baby that might otherwise remain undiagnosed, or be diagnosed only with delay, in the absence of sonography. It is currently believed that the risk of delayed diagnosis is greater than the small risk, if any, associated with undergoing an ultrasound scan. However, its use for non-medical purposes such as fetal "keepsake" videos and photos is discouraged.
Obstetric ultrasound is primarily used to date the pregnancy (gestational age); confirm fetal viability; determine the location of the fetus (intrauterine vs. ectopic); check the location of the placenta in relation to the cervix; check the number of fetuses (multiple pregnancy); check for major physical abnormalities; assess fetal growth (for evidence of intrauterine growth restriction (IUGR)); check fetal movement and heartbeat; and determine the sex of the baby. The European Committee of Medical Ultrasound Safety (ECMUS) nonetheless advises that care should be taken to use low power settings and to avoid pulsed-wave scanning of the fetal brain unless specifically indicated in high-risk pregnancies. Figures released for the period 2005–2006 by the UK Government (Department of Health) show that non-obstetric ultrasound examinations constituted more than 65% of the total number of ultrasound scans conducted. Hemodynamics (blood circulation) Blood velocity can be measured in various blood vessels, such as the middle cerebral artery or descending aorta, by relatively inexpensive and low-risk ultrasound Doppler probes attached to portable monitors. These provide non-invasive (transcutaneous) or minimally invasive blood flow assessment. Common examples are transcranial Doppler, esophageal Doppler, and suprasternal Doppler. Otolaryngology (head and neck) Most structures of the neck, including the thyroid and parathyroid glands, lymph nodes, and salivary glands, are well visualized by high-frequency ultrasound with exceptional anatomic detail. Ultrasound is the preferred imaging modality for thyroid tumors and lesions, and its use is important in the evaluation, preoperative planning, and postoperative surveillance of patients with thyroid cancer. Many other benign and malignant conditions in the head and neck can be differentiated, evaluated, and managed with the help of diagnostic ultrasound and ultrasound-guided procedures.
Neonatology In neonatology, transcranial Doppler can be used for basic assessment of intracerebral structural abnormalities, suspected hemorrhage, ventriculomegaly or hydrocephalus, and anoxic insults (periventricular leukomalacia). It can be performed through the soft spots in the skull of a newborn infant (the fontanelles) until these completely close at about one year of age, by which time they form a virtually impenetrable acoustic barrier to ultrasound. The most common site for cranial ultrasound is the anterior fontanelle. The smaller the fontanelle, the more the image is compromised. Lung ultrasound has been found to be useful in diagnosing common neonatal respiratory diseases such as transient tachypnea of the newborn, respiratory distress syndrome, congenital pneumonia, meconium aspiration syndrome, and pneumothorax. A neonatal lung ultrasound score, first described by Brat et al., has been found to correlate highly with oxygenation in the newborn. Ophthalmology (eyes) In ophthalmology and optometry, there are two major forms of eye exam using ultrasound: A-scan ultrasound biometry, commonly referred to as an A-scan (amplitude scan). A-mode provides data on the length of the eye, which is a major determinant in common sight disorders, especially for determining the power of an intraocular lens after cataract extraction. B-scan ultrasonography (brightness scan), a B-mode scan that produces a cross-sectional view of the eye and the orbit. It is an essential tool in ophthalmology for diagnosing and managing a wide array of conditions affecting the posterior segment of the eye. It is non-invasive and uses frequencies of 10–15 MHz. It is often used in conjunction with other imaging techniques (such as OCT or fluorescein angiography) for a more comprehensive evaluation of ocular conditions. Pulmonology (lungs) Ultrasound is used to assess the lungs in a variety of settings including critical care, emergency medicine, and trauma surgery, as well as general medicine.
This imaging modality is used at the bedside or examination table to evaluate a number of different lung abnormalities as well as to guide procedures such as thoracentesis (drainage of pleural fluid (effusion)), needle aspiration biopsy, and catheter placement. Although air present in the lungs does not allow good penetration of ultrasound waves, interpretation of specific artifacts created on the lung surface can be used to detect abnormalities. Lung ultrasound basics The normal lung surface: The lung surface is composed of the visceral and parietal pleura. These two surfaces are typically pushed together and make up the pleural line, which is the basis of lung (or pleural) ultrasound. This line is visible less than a centimeter below the rib line in most adults. On ultrasound, it is visualized as a hyperechoic (bright white) horizontal line if the ultrasound probe is applied perpendicularly to the skin. Artifacts: Lung ultrasound relies on artifacts, which would otherwise be considered a hindrance in imaging. Air blocks the ultrasound beam, so visualizing healthy lung tissue itself with this mode of imaging is not practical. Consequently, physicians and sonographers have learned to recognize the patterns that ultrasound beams create when imaging healthy versus diseased lung tissue. Three commonly seen and utilized artifacts in lung ultrasound are lung sliding, A-lines, and B-lines. Lung sliding: The presence of lung sliding, the shimmering of the pleural line that occurs with movement of the visceral and parietal pleura against one another with respiration (sometimes described as 'ants marching'), is the most important finding in normal aerated lung. Lung sliding indicates both that the lung is present at the chest wall and that the lung is functioning. A-lines: When the ultrasound beam makes contact with the pleural line, it is reflected back, creating a bright white horizontal line.
The subsequent reverberation artifacts that appear as equally spaced horizontal lines deep to the pleura are A-lines. Ultimately, A-lines are a reflection of the ultrasound beam from the pleura, with the space between A-lines corresponding to the distance between the parietal pleura and the skin surface. A-lines indicate the presence of air, which means that these artifacts can be present in normal healthy lung (and also in patients with pneumothorax). B-lines: B-lines are also reverberation artifacts. They are visualized as hyperechoic vertical lines extending from the pleura to the edge of the ultrasound screen. These lines are sharply defined and laser-like and typically do not fade as they progress down the screen. A few B-lines that move along with the sliding pleura can be seen in normal lung due to acoustic impedance differences between water and air. However, excessive B-lines (three or more) are abnormal and are typically indicative of underlying lung pathology. Lung pathology assessed with ultrasound Pulmonary edema: Lung ultrasound has been shown to be very sensitive for the detection of pulmonary edema. It allows for improvement in diagnosis and management of critically ill patients, particularly when used in combination with echocardiography. The sonographic feature of pulmonary edema is multiple B-lines. B-lines can occur in a healthy lung; however, the presence of three or more in the anterior or lateral lung regions is always abnormal. In pulmonary edema, B-lines indicate an increase in the amount of water contained in the lungs outside of the pulmonary vasculature. B-lines can also be present in a number of other conditions including pneumonia, pulmonary contusion, and lung infarction. Additionally, it is important to note that there are multiple types of interactions between the pleural surface and the ultrasound wave that can generate artifacts with some similarity to B-lines but which do not have pathologic significance.
Pneumothorax: In clinical settings where pneumothorax is suspected, lung ultrasound can aid in diagnosis. In pneumothorax, air is present between the two layers of the pleura, and lung sliding on ultrasound is therefore absent. The negative predictive value for lung sliding on ultrasound is reported as 99.2–100%: briefly, if lung sliding is present, a pneumothorax is effectively ruled out. The absence of lung sliding, however, is not necessarily specific for pneumothorax, as other conditions also cause this finding, including acute respiratory distress syndrome, lung consolidations, pleural adhesions, and pulmonary fibrosis. Pleural effusion: Lung ultrasound is a cost-effective, safe, and non-invasive imaging method that can aid in the prompt visualization and diagnosis of pleural effusions. Effusions can be diagnosed by a combination of physical exam, percussion, and auscultation of the chest. However, these exam techniques can be complicated by a variety of factors including the presence of mechanical ventilation, obesity, or patient positioning, all of which reduce the sensitivity of the physical exam. Consequently, lung ultrasound can be an additional tool to augment plain chest X-ray and chest CT. Pleural effusions on ultrasound appear as structural images within the thorax rather than as artifacts. They will typically have four distinct borders: the pleural line, two rib shadows, and a deep border. In critically ill patients with pleural effusion, ultrasound may guide procedures including needle insertion, thoracentesis, and chest-tube insertion. Lung cancer staging: In pulmonology, endobronchial ultrasound (EBUS) probes are applied to standard flexible endoscopic probes and used by pulmonologists to allow for direct visualization of endobronchial lesions and lymph nodes prior to transbronchial needle aspiration. Among its many uses, EBUS aids in lung cancer staging by allowing for lymph node sampling without the need for major surgery.
COVID-19: Lung ultrasound has proved useful in the diagnosis of COVID-19, especially in cases where other investigations are not available. Urinary tract Ultrasound is routinely used in urology to determine the amount of fluid retained in a patient's bladder. In a pelvic sonogram, images include the uterus and ovaries or urinary bladder in females. In males, a sonogram will provide information about the bladder, prostate, or testicles (for example, to urgently distinguish epididymitis from testicular torsion). In young males, it is used to distinguish more benign testicular masses (varicocele or hydrocele) from testicular cancer, which is curable but must be treated to preserve health and fertility. There are two methods of performing pelvic sonography: externally or internally. The internal pelvic sonogram is performed either transvaginally (in a woman) or transrectally (in a man). Sonographic imaging of the pelvic floor can produce important diagnostic information regarding the precise relationship of abnormal structures to other pelvic organs, and it can help guide treatment of patients with symptoms related to pelvic prolapse, double incontinence, and obstructed defecation. Ultrasound is also used to diagnose and, at higher frequencies, to treat (break up) kidney stones or kidney crystals (nephrolithiasis). Penis and scrotum Scrotal ultrasonography is used in the evaluation of testicular pain and can help identify solid masses. Ultrasound is an excellent method for the study of the penis, as indicated in trauma, priapism, erectile dysfunction, or suspected Peyronie's disease. Musculoskeletal Musculoskeletal ultrasound is used to examine tendons, muscles, nerves, ligaments, soft tissue masses, and bone surfaces. It is helpful in diagnosing ligament sprains, muscle strains, and joint pathology. It is an alternative or supplement to X-ray imaging in detecting fractures of the wrist, elbow, and shoulder for patients up to 12 years of age (fracture sonography).
Quantitative ultrasound is an adjunct musculoskeletal test for myopathic disease in children; it also provides estimates of lean body mass in adults and proxy measures of muscle quality (i.e., tissue composition) in older adults with sarcopenia. Ultrasound can also be used for needle guidance in muscle or joint injections, as in ultrasound-guided hip joint injection. Kidneys In nephrology, ultrasonography of the kidneys is essential in the diagnosis and management of kidney-related diseases. The kidneys are easily examined, and most pathological changes are distinguishable with ultrasound. It is an accessible, versatile, relatively economical, and fast aid for decision-making in patients with renal symptoms and for guidance in renal intervention. Using B-mode imaging, assessment of renal anatomy is easily performed, and ultrasound is often used as image guidance for renal interventions. Furthermore, novel applications in renal ultrasound have been introduced with contrast-enhanced ultrasound (CEUS), elastography, and fusion imaging. However, renal ultrasound has certain limitations, and other modalities, such as CT (CECT) and MRI, should be considered for supplementary imaging in assessing renal disease. Venous access Intravenous access, for the collection of blood samples to assist in diagnosis or laboratory investigation including blood culture, or for administration of intravenous fluids for fluid maintenance or replacement or for blood transfusion in sicker patients, is a common medical procedure. The need for intravenous access occurs in the outpatient laboratory, in inpatient hospital units, and most critically in the emergency room and intensive care unit. In many situations, intravenous access may be required repeatedly or over a significant time period. In these latter circumstances, a needle with an overlying catheter is introduced into the vein, and the catheter is then inserted securely into the vein while the needle is withdrawn.
The chosen veins are most frequently selected from the arm, but in challenging situations, a deeper vein from the neck (external jugular vein) or beneath the clavicle (subclavian vein) may need to be used. There are many reasons why the selection of a suitable vein may be problematic. These include, but are not limited to, obesity, previous injury to veins from inflammatory reactions to previous blood draws, and previous injury to veins from recreational drug use. In these challenging situations, the insertion of a catheter into a vein has been greatly assisted by the use of ultrasound. The ultrasound unit may be cart-based or handheld, using a linear transducer with a frequency of 10 to 15 MHz. In most circumstances, the choice of vein will be limited by the requirement that the vein lie within 1.5 cm of the skin surface. The transducer may be placed longitudinally or transversely over the chosen vein. Ultrasound training for intravenous cannulation is offered in most ultrasound training programs. Mechanism The creation of an image from sound has three steps: transmitting a sound wave, receiving echoes, and interpreting those echoes. Producing a sound wave A sound wave is typically produced by a piezoelectric transducer encased in a plastic housing. Strong, short electrical pulses from the ultrasound machine drive the transducer at the desired frequency. The frequencies can vary between 1 and 18 MHz, though frequencies up to 50–100 MHz have been used experimentally in a technique known as biomicroscopy in special regions, such as the anterior chamber of the eye. Older technology transducers focused their beam with physical lenses. Contemporary technology transducers use digital antenna array techniques (piezoelectric elements in the transducer are pulsed at different times) to enable the ultrasound machine to change the direction and depth of focus.
Near the transducer, the width of the ultrasound beam is nearly equal to the width of the transducer. After a certain distance from the transducer (the near zone length, or Fresnel zone), the beam width narrows to half of the transducer width, and beyond that point (the far zone, or Fraunhofer zone) the beam width increases and the lateral resolution decreases. Therefore, the wider the transducer and the higher the ultrasound frequency, the longer the Fresnel zone, and the greater the depth at which lateral resolution can be maintained. Ultrasound waves travel in pulses; a shorter pulse length requires a higher bandwidth (a greater number of frequencies) to constitute the ultrasound pulse. As stated, the sound is focused either by the shape of the transducer, a lens in front of the transducer, or a complex set of control pulses from the ultrasound scanner, in the beamforming or spatial filtering technique. This focusing produces an arc-shaped sound wave from the face of the transducer. The wave travels into the body and comes into focus at a desired depth. Materials on the face of the transducer enable the sound to be transmitted efficiently into the body (often a rubbery coating, a form of impedance matching). In addition, a water-based gel is placed between the patient's skin and the probe to facilitate ultrasound transmission into the body, because air causes nearly total reflection of ultrasound, impeding its transmission into the body. The sound wave is partially reflected from the layers between different tissues or scattered from smaller structures. Specifically, sound is reflected anywhere there are acoustic impedance changes in the body: e.g. blood cells in blood plasma, small structures in organs, etc. Some of the reflections return to the transducer. Receiving the echoes The return of the sound wave to the transducer results in the same process as sending the sound wave, in reverse.
The returned sound wave vibrates the transducer and the transducer turns the vibrations into electrical pulses that travel to the ultrasonic scanner where they are processed and transformed into a digital image. Forming the image To make an image, the ultrasound scanner must determine two characteristics from each received echo: How long it took the echo to be received from when the sound was transmitted. (Time and distance are equivalent.) How strong the echo was. Once the ultrasonic scanner determines these two, it can locate which pixel in the image to illuminate and with what intensity. Transforming the received signal into a digital image may be explained by using a blank spreadsheet as an analogy. First picture a long, flat transducer at the top of the sheet. Send pulses down the 'columns' of the spreadsheet (A, B, C, etc.). Listen at each column for any return echoes. When an echo is heard, note how long it took for the echo to return. The longer the wait, the deeper the row (1,2,3, etc.). The strength of the echo determines the brightness setting for that cell (white for a strong echo, black for a weak echo, and varying shades of grey for everything in between.) When all the echoes are recorded on the sheet, a greyscale image has been accomplished. In modern ultrasound systems, images are derived from the combined reception of echoes by multiple elements, rather than a single one. These elements in the transducer array work together to receive signals, a process essential for optimizing the ultrasonic beam's focus and producing detailed images. One predominant method for this is "delay-and-sum" beamforming. The time delay applied to each element is calculated based on the geometrical relationship between the imaging point, the transducer, and receiver positions. By integrating these time-adjusted signals, the system pinpoints focus onto specific tissue regions, enhancing image resolution and clarity. 
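The delay-and-sum principle described above can be sketched in a few lines of code. The element geometry, sampling rate, and function names below are hypothetical values chosen only for illustration, not from any specific scanner:

```python
import math

# Illustrative delay-and-sum receive beamforming sketch (assumed
# values: C, FS, and the array geometry are invented for the example).
C = 1540.0   # assumed speed of sound in tissue, m/s
FS = 40e6    # sampling rate of the received echo signals, Hz

def delay_and_sum(rf, element_x, focus_x, focus_z):
    """Sum the element signals after aligning each one on the
    round-trip travel time from the array to one focal point.

    rf        : list of per-element sample lists (echo amplitudes)
    element_x : lateral position of each element, in meters
    focus_x, focus_z : focal-point coordinates, in meters
    """
    total = 0.0
    for signal, x in zip(rf, element_x):
        # Geometric distance from this element to the focal point.
        d = math.hypot(x - focus_x, focus_z)
        # Two-way travel time converted to a sample index
        # (simplified to 2*d/C, i.e. transmit and receive co-located).
        idx = round(2 * d / C * FS)
        if idx < len(signal):
            total += signal[idx]
    return total
```

Summing the time-aligned samples reinforces echoes that originate at the chosen focal point, while echoes from elsewhere add incoherently, which is what sharpens the focus.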
The utilization of multiple-element reception combined with the delay-and-sum principle underpins the high-quality images characteristic of contemporary ultrasound scans. Displaying the image Images from the ultrasound scanner are transferred and displayed using the DICOM standard. Normally, very little post-processing is applied. Sound in the body Ultrasonography (sonography) uses a probe containing multiple acoustic transducers to send pulses of sound into a material. Whenever a sound wave encounters a material with a different density (acoustical impedance), some of the sound wave is scattered but part is reflected back to the probe and is detected as an echo. The time it takes for the echo to travel back to the probe is measured and used to calculate the depth of the tissue interface causing the echo. The greater the difference between acoustic impedances, the larger the echo is. If the pulse hits gases or solids, the density difference is so great that most of the acoustic energy is reflected and it becomes impossible to progress further. The frequencies used for medical imaging are generally in the range of 1 to 18 MHz. Higher frequencies have a correspondingly smaller wavelength and can be used to make more detailed sonograms. However, the attenuation of the sound wave increases at higher frequencies, so penetration of deeper tissues necessitates a lower frequency (3–5 MHz). Penetrating deep into the body with sonography is difficult. Some acoustic energy is lost each time an echo is formed, but most of it is lost from acoustic absorption. (See Acoustic attenuation for further details on modeling of acoustic attenuation and absorption.) The speed of sound varies as it travels through different materials and is dependent on the acoustical impedance of the material. However, the sonographic instrument assumes that the acoustic velocity is constant at 1540 m/s.
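The range and wavelength arithmetic behind these statements follows directly from the assumed constant speed of sound. A minimal sketch (the function names are invented for illustration):

```python
# Minimal sketch of the scanner's depth and wavelength arithmetic,
# using the conventional assumed speed of sound in tissue.
C = 1540.0  # assumed constant speed of sound, m/s

def echo_depth_mm(round_trip_s):
    # The pulse travels to the reflector and back, so halve the path.
    return C * round_trip_s / 2 * 1000

def wavelength_mm(freq_hz):
    # Higher frequency -> shorter wavelength -> finer detail, but
    # stronger attenuation and therefore shallower penetration.
    return C / freq_hz * 1000

depth = echo_depth_mm(65e-6)  # an echo 65 us after transmit: ~50 mm
wl = wavelength_mm(5e6)       # wavelength at 5 MHz: ~0.31 mm
```

This also illustrates the trade-off stated above: at 15 MHz the wavelength shrinks to about 0.1 mm, giving finer detail, but the stronger attenuation restricts such frequencies to superficial structures.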
An effect of this assumption is that in a real body with non-uniform tissues, the beam becomes somewhat de-focused and image resolution is reduced. To generate a 2-D image, the ultrasonic beam is swept. A transducer may be swept mechanically by rotating or swinging or a 1-D phased array transducer may be used to sweep the beam electronically. The received data is processed and used to construct the image. The image is then a 2-D representation of the slice into the body. 3-D images can be generated by acquiring a series of adjacent 2-D images. Commonly a specialized probe that mechanically scans a conventional 2-D image transducer is used. However, since the mechanical scanning is slow, it is difficult to make 3D images of moving tissues. Recently, 2-D phased array transducers that can sweep the beam in 3-D have been developed. These can image faster and can even be used to make live 3-D images of a beating heart. Doppler ultrasonography is used to study blood flow and muscle motion. The different detected speeds are represented in color for ease of interpretation, for example leaky heart valves: the leak shows up as a flash of unique color. Colors may alternatively be used to represent the amplitudes of the received echoes. Expansions An additional expansion of ultrasound is bi-planar ultrasound, in which the probe has two 2D planes perpendicular to each other, providing more efficient localization and detection. Furthermore, an omniplane probe can rotate 180° to obtain multiple images. In 3D ultrasound, many 2D planes are digitally added together to create a 3-dimensional image of the object. Doppler ultrasonography Doppler ultrasonography employs the Doppler effect to assess whether structures (usually blood) are moving towards or away from the probe, and their relative velocity. 
By calculating the frequency shift of a particular sample volume, flow in an artery or a jet of blood flow over a heart valve, its speed and direction can be determined and visualized, as an example. Color Doppler is the measurement of velocity by color scale. Color Doppler images are generally combined with gray scale (B-mode) images to display duplex ultrasonography images. Uses include: Doppler echocardiography is the use of Doppler ultrasonography to examine the heart. An echocardiogram can, within certain limits, produce accurate assessment of the direction of blood flow and the velocity of blood and cardiac tissue at any arbitrary point using the Doppler effect. Velocity measurements allow assessment of cardiac valve areas and function, abnormal communications between the left and right side of the heart, leaking of blood through the valves (valvular regurgitation), and calculation of the cardiac output and E/A ratio (a measure of diastolic dysfunction). Contrast-enhanced ultrasound using gas-filled microbubble contrast media can be used to improve velocity or other flow-related measurements of interest. Transcranial Doppler (TCD) and transcranial color Doppler (TCCD), measure the velocity of blood flow through the brain's blood vessels through the cranium. They are useful in the diagnosis of emboli, stenosis, vasospasm from a subarachnoid hemorrhage (bleeding from a ruptured aneurysm), and other problems. Doppler fetal monitors use the Doppler effect to detect the fetal heartbeat during prenatal care. These are hand-held, and some models also display the heart rate in beats per minute (BPM). Use of this monitor is sometimes known as Doppler auscultation. The Doppler fetal monitor is commonly referred to simply as a Doppler or fetal Doppler and provides information similar to that provided by a fetal stethoscope. 
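The velocity estimate underlying these Doppler techniques follows the standard Doppler equation, v = c·Δf / (2·f₀·cos θ), where θ is the angle between the beam and the flow. A sketch with illustrative numbers (the function name and example values are invented):

```python
import math

# Sketch of converting a measured Doppler frequency shift into a
# blood velocity; the numeric example is illustrative only.
C = 1540.0  # assumed speed of sound in tissue, m/s

def doppler_velocity(f_shift_hz, f0_hz, angle_deg):
    """v = c * delta_f / (2 * f0 * cos(theta)).

    f_shift_hz : measured frequency shift
    f0_hz      : transmitted ultrasound frequency
    angle_deg  : angle between the beam and the flow direction
    """
    return C * f_shift_hz / (2 * f0_hz * math.cos(math.radians(angle_deg)))

# A 1.3 kHz shift at a 2 MHz transmit frequency and a 60 degree angle:
v = doppler_velocity(1300, 2e6, 60)  # about 1.0 m/s
```

Note the cos θ term: as the beam approaches 90° to the flow, the measurable shift vanishes, which is why Doppler examinations are performed at an angle to the vessel.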
Contrast ultrasonography (ultrasound contrast imaging) A contrast medium for medical ultrasonography is a formulation of encapsulated gaseous microbubbles that increases the echogenicity of blood, discovered by Dr. Raymond Gramiak in 1968; its use is named contrast-enhanced ultrasound. This contrast imaging modality is used throughout the world, for echocardiography in particular in the United States and for ultrasound radiology in Europe and Asia. Microbubble-based contrast media are administered intravenously into the patient's bloodstream during the ultrasonography examination. Due to their size, the microbubbles remain confined in blood vessels without extravasating into the interstitial fluid. An ultrasound contrast medium is therefore purely intravascular, making it an ideal agent to image organ microvasculature for diagnostic purposes. A typical clinical use of contrast ultrasonography is detection of a hypervascular metastatic tumor, which exhibits faster contrast uptake (kinetics of microbubble concentration in the blood circulation) than the healthy biological tissue surrounding the tumor. Other clinical applications exist, such as echocardiography, where contrast improves delineation of the left ventricle for visualizing contractility of the heart muscle after a myocardial infarction. Finally, applications in quantitative perfusion (relative measurement of blood flow) have emerged for identifying early patient response to anticancer drug treatment (methodology and clinical study by Dr. Nathalie Lassau in 2011), enabling the best oncological therapeutic options to be determined. In oncological practice of medical contrast ultrasonography, clinicians use 'parametric imaging of vascular signatures', invented by Dr. Nicolas Rognin in 2010. This method was conceived as an aid to cancer diagnosis, facilitating characterization of a suspicious tumor (malignant versus benign) in an organ.
This method is based on medical computational science to analyze a time sequence of ultrasound contrast images, a digital video recorded in real time during the patient examination. Two consecutive signal-processing steps are applied to each pixel of the tumor: calculation of a vascular signature (the contrast-uptake difference with respect to the healthy tissue surrounding the tumor), and automatic classification of the vascular signature into a unique parameter, the latter coded in one of four colors: green for continuous hyper-enhancement (contrast uptake higher than that of healthy tissue), blue for continuous hypo-enhancement (contrast uptake lower than that of healthy tissue), red for fast hyper-enhancement (contrast uptake before that of healthy tissue), or yellow for fast hypo-enhancement (contrast uptake after that of healthy tissue). Once signal processing in each pixel is completed, a color spatial map of the parameter is displayed on a computer monitor, summarizing all the vascular information of the tumor in a single image called a parametric image (see the last figure of the press article for clinical examples). This parametric image is interpreted by clinicians based on the predominant colorization of the tumor: red indicates a suspicion of malignancy (risk of cancer); green or yellow, a high probability of benignity. In the first case (suspicion of a malignant tumor), the clinician typically prescribes a biopsy to confirm the diagnosis, or a CT scan examination as a second opinion. In the second case (a tumor that is almost certainly benign), only follow-up is needed, with a contrast ultrasonography examination a few months later. The main clinical benefits are avoiding a systematic biopsy (with the inherent risks of invasive procedures) of benign tumors or a CT scan examination exposing the patient to X-ray radiation. The parametric imaging of vascular signatures method proved to be effective in humans for characterization of tumors in the liver.
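The per-pixel color coding described above can be sketched as a simple decision rule on two quantities: the uptake difference relative to healthy tissue and the relative arrival time of the contrast. The threshold below, which separates "fast" from "continuous" enhancement, is invented for illustration and is not part of the published method:

```python
# Hedged sketch of the four-color vascular-signature classification.
# The "fast" threshold is an assumed value, chosen only to make the
# example concrete.

def classify_pixel(uptake_diff, timing_diff_s, fast_threshold_s=1.0):
    """Map a pixel's vascular signature to one of four colors.

    uptake_diff   : contrast uptake minus that of healthy tissue
    timing_diff_s : contrast arrival time minus that of healthy
                    tissue (negative = contrast arrives earlier)
    """
    if abs(timing_diff_s) >= fast_threshold_s:
        # Fast enhancement: distinguished by arrival order.
        return "red" if timing_diff_s < 0 else "yellow"
    # Continuous enhancement: distinguished by uptake amplitude.
    return "green" if uptake_diff > 0 else "blue"
```

Applying this rule to every pixel of the recorded sequence and painting the results produces the parametric image that the clinician reads for predominant color.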
In a cancer screening context, this method might be applicable to other organs such as the breast or prostate. Molecular ultrasonography (ultrasound molecular imaging) The future of contrast ultrasonography is in molecular imaging, with potential clinical applications expected in cancer screening to detect malignant tumors at their earliest stage of appearance. Molecular ultrasonography (or ultrasound molecular imaging) uses targeted microbubbles, originally designed by Dr. Alexander Klibanov in 1997; such targeted microbubbles specifically bind or adhere to tumoral microvessels by targeting biomolecular cancer expression (overexpression of certain biomolecules that occurs during neo-angiogenesis or inflammation in malignant tumors). As a result, a few minutes after their injection into the blood circulation, the targeted microbubbles accumulate in the malignant tumor, facilitating its localization in a unique ultrasound contrast image. In 2013, the very first exploratory clinical trial in humans for prostate cancer was completed in Amsterdam, the Netherlands, by Dr. Hessel Wijkstra. In molecular ultrasonography, the technique of acoustic radiation force (also used for shear wave elastography) is applied in order to literally push the targeted microbubbles toward the microvessel walls, as first demonstrated by Dr. Paul Dayton in 1999. This maximizes binding to the malignant tumor, the targeted microbubbles being in more direct contact with cancerous biomolecules expressed at the inner surface of tumoral microvessels. At the stage of scientific preclinical research, the technique of acoustic radiation force was implemented as a prototype in clinical ultrasound systems and validated in vivo in 2D and 3D imaging modes. Elastography (ultrasound elasticity imaging) Ultrasound is also used for elastography, a relatively new imaging modality that maps the elastic properties of soft tissue. This modality emerged in the last two decades.
Elastography is useful in medical diagnosis, as it can discern healthy from unhealthy tissue for specific organs or growths. For example, cancerous tumors are often harder than the surrounding tissue, and diseased livers are stiffer than healthy ones. There are many ultrasound elastography techniques. Interventional ultrasonography Interventional ultrasonography involves biopsy, fluid drainage, and intrauterine blood transfusion (as in hemolytic disease of the newborn). Thyroid cysts: High-frequency thyroid ultrasound (HFUS) can be used to treat several gland conditions. The recurrent thyroid cyst, usually treated in the past with surgery, can be treated effectively by a newer procedure called percutaneous ethanol injection (PEI). With ultrasound-guided placement of a 25-gauge needle within the cyst, and after evacuation of the cyst fluid, ethanol equal to about 50% of the cyst volume is injected back into the cavity, under strict operator visualization of the needle tip. The procedure is 80% successful in reducing the cyst to minute size. Metastatic thyroid cancer neck lymph nodes: HFUS may also be used to treat metastatic thyroid cancer neck lymph nodes in patients who either refuse surgery or are no longer candidates for it. Small amounts of ethanol are injected under ultrasound-guided needle placement. A power Doppler blood flow study is done prior to injection. The visualized blood flow can be eradicated, rendering the node inactive, and there may be a drop in the cancer blood marker thyroglobulin (TG) as the node becomes non-functional. Another interventional use of HFUS is to mark a cancer node prior to surgery, to help locate the node cluster during the operation. A minute amount of methylene blue dye is injected, under careful ultrasound-guided placement of the needle on the anterior surface of, but not in, the node. The dye will be evident to the thyroid surgeon when opening the neck.
A similar localization procedure with methylene blue can be done to locate parathyroid adenomas. Joint injections can be guided by medical ultrasound, such as in ultrasound-guided hip joint injections. Compression ultrasonography In compression ultrasonography, the probe is pressed against the skin. This can bring the target structure closer to the probe, improving its spatial resolution. Comparison of the shape of the target structure before and after compression can aid in diagnosis. It is used in ultrasonography of deep venous thrombosis, wherein absence of vein compressibility is a strong indicator of thrombosis. Compression ultrasonography has both high sensitivity and specificity for detecting proximal deep vein thrombosis in symptomatic patients. Results are not reliable when the patient is asymptomatic, for example in high-risk postoperative orthopedic patients. Panoramic ultrasonography Panoramic ultrasonography is the digital stitching of multiple ultrasound images into a broader one. It can display an entire abnormality and show its relationship to nearby structures on a single image. Multiparametric ultrasonography Multiparametric ultrasonography (mpUSS) combines multiple ultrasound techniques to produce a composite result. For example, one study combined B-mode, colour Doppler, real-time elastography, and contrast-enhanced ultrasound, achieving an accuracy similar to that of multiparametric MRI. Speed-of-sound imaging Speed-of-sound (SoS) imaging aims to find the spatial distribution of the SoS within the tissue. The idea is to find relative delay measurements for different transmission events and solve the limited-angle tomographic reconstruction problem using the delay measurements and the transmission geometry. Compared to shear-wave elastography, SoS imaging has shown better ex vivo differentiation of benign and malignant tumors. Attributes As with all imaging modalities, ultrasonography has positive and negative attributes.
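The reconstruction idea behind SoS imaging, described above, can be illustrated with a toy linear model. Everything here is assumed for illustration (the grid size, the random path geometry, the plain least-squares solver); a real system derives the system matrix from the transducer geometry and must regularize the ill-posed limited-angle problem.

```python
import numpy as np

# Toy illustration of the SoS reconstruction idea (not a clinical algorithm):
# each transmit event yields a relative delay equal to the path integral of
# "slowness" (1/speed) over the pixels it crosses. With the transmission
# geometry encoded in a system matrix A, the slowness map x is recovered from
# the delay measurements b by solving the linear problem A x ~= b.

rng = np.random.default_rng(0)
n_pix = 16                                   # 4x4 slowness map, flattened
true_slowness = np.full(n_pix, 1 / 1540.0)   # background soft tissue, s/m
true_slowness[5] = 1 / 1600.0                # a faster (stiffer) inclusion

# Hypothetical geometry: each row holds the path length (m) through each pixel.
A = rng.uniform(0.0, 1e-3, size=(40, n_pix))
b = A @ true_slowness                        # simulated relative delays (s)

# Least-squares reconstruction; here the problem is well-posed, whereas the
# real limited-angle geometry would require regularization.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
speed_map = 1.0 / x_hat.reshape(4, 4)        # back to speed of sound, m/s
```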
Strengths Muscle, soft tissue, and bone surfaces are imaged very well, including the delineation of interfaces between solid and fluid-filled spaces. "Live" images can be dynamically selected, often permitting rapid diagnosis and documentation. Live images also permit ultrasound-guided biopsies or injections, which can be cumbersome with other imaging modalities. Organ structure can be demonstrated. There are no known long-term side effects when used according to guidelines, and discomfort is minimal. Ultrasound can image local variations in the mechanical properties of soft tissue. Equipment is widely available and comparatively flexible. Small, easily carried scanners are available which permit bedside examinations. Transducers have become relatively inexpensive compared to other modes of investigation, such as computed X-ray tomography, DEXA or magnetic resonance imaging. Spatial resolution is better with high-frequency ultrasound transducers than with most other imaging modalities. Use of an ultrasound research interface can offer a relatively inexpensive, real-time, and flexible method for capturing the data required for specific research purposes of tissue characterization and development of new image-processing techniques. Weaknesses Sonographic devices have trouble penetrating bone. For example, sonography of the adult brain is currently very limited. Sonography performs very poorly when there is gas between the transducer and the organ of interest, due to the extreme differences in acoustic impedance. For example, overlying gas in the gastrointestinal tract often makes ultrasound scanning of the pancreas difficult. Lung ultrasound, however, can be useful in demarcating pleural effusions and detecting heart failure and pneumonia. Even in the absence of bone or air, the depth penetration of ultrasound may be limited depending on the imaging frequency. Consequently, there might be difficulties imaging structures deep in the body, especially in obese patients.
Image quality and accuracy of diagnosis are limited in obese patients, as overlying subcutaneous fat attenuates the sound beam. A lower-frequency transducer is then required, with subsequently lower resolution. The method is operator-dependent: skill and experience are needed to acquire good-quality images and make accurate diagnoses. There is no scout image as there is with CT and MRI; once an image has been acquired, there is no exact way to tell which part of the body was imaged. 80% of sonographers experience repetitive strain injuries (RSI), or so-called work-related musculoskeletal disorders (WMSD), because of poor ergonomic positioning. Risks and side-effects Ultrasonography is generally considered a safe imaging modality, with the World Health Organization stating: "Diagnostic ultrasound is recognized as a safe, effective, and highly flexible imaging modality capable of providing clinically relevant information about most parts of the body in a rapid and cost-effective fashion". Diagnostic ultrasound studies of the fetus are generally considered to be safe during pregnancy. However, this diagnostic procedure should be performed only when there is a valid medical indication, and the lowest possible ultrasonic exposure setting should be used to gain the necessary diagnostic information, under the "as low as reasonably practicable" (ALARP) principle. Although there is no evidence that ultrasound could be harmful to the fetus, medical authorities typically strongly discourage the promotion, selling, or leasing of ultrasound equipment for making "keepsake fetal videos". Studies on the safety of ultrasound A meta-analysis of several ultrasonography studies published in 2000 found no statistically significant harmful effects from ultrasonography. It was noted that there is a lack of data on long-term substantive outcomes such as neurodevelopment.
A study at the Yale School of Medicine published in 2006 found a small but significant correlation between prolonged and frequent use of ultrasound and abnormal neuronal migration in mice. A study performed in Sweden in 2001 suggested subtle neurological effects linked to ultrasound, indicated by an increased incidence of left-handedness in boys (a marker for brain problems when not hereditary) and speech delays. These findings, however, were not confirmed in a follow-up study. A later study, performed on a larger sample of 8,865 children, did establish a statistically significant, albeit weak, association between ultrasonography exposure and being non-right-handed later in life. Regulation Diagnostic and therapeutic ultrasound equipment is regulated in the US by the Food and Drug Administration, and worldwide by other national regulatory agencies. The FDA limits acoustic output using several metrics; generally, other agencies accept the FDA-established guidelines. Currently, New Mexico, Oregon, and North Dakota are the only US states that regulate diagnostic medical sonographers. Certification examinations for sonographers are available in the US from three organizations: the American Registry for Diagnostic Medical Sonography, Cardiovascular Credentialing International and the American Registry of Radiologic Technologists. The primary regulated metrics are the Mechanical Index (MI), a metric associated with the cavitation bio-effect, and the Thermal Index (TI), a metric associated with the tissue-heating bio-effect. The FDA requires that the machine not exceed established limits, which are reasonably conservative in an effort to maintain diagnostic ultrasound as a safe imaging modality. This requires self-regulation on the part of the manufacturer in terms of machine calibration. Ultrasound-based prenatal care and sex-screening technologies were launched in India in the 1980s.
With concerns about its misuse for sex-selective abortion, the Government of India passed the Pre-natal Diagnostic Techniques Act (PNDT) in 1994 to distinguish and regulate legal and illegal uses of ultrasound equipment. The law was further amended as the Pre-Conception and Pre-natal Diagnostic Techniques (Regulation and Prevention of Misuse) (PCPNDT) Act in 2004 to deter and punish prenatal sex screening and sex selective abortion. It is currently illegal and a punishable crime in India to determine or disclose the sex of a fetus using ultrasound equipment. Use in other animals Ultrasound is also a valuable tool in veterinary medicine, offering the same non-invasive imaging that helps in the diagnosis and monitoring of conditions in animals. History After the French physicist Pierre Curie's discovery of piezoelectricity in 1880, ultrasonic waves could be deliberately generated for industry. In 1940, the American acoustical physicist Floyd Firestone devised the first ultrasonic echo imaging device, the Supersonic Reflectoscope, to detect internal flaws in metal castings. In 1941, Austrian neurologist Karl Theo Dussik, in collaboration with his brother, Friedrich, a physicist, was likely the first person to image the human body ultrasonically, outlining the ventricles of a human brain. Ultrasonic energy was first applied to the human body for medical purposes by Dr George Ludwig at the Naval Medical Research Institute, Bethesda, Maryland, in the late 1940s. English-born physicist John Wild (1914–2009) first used ultrasound to assess the thickness of bowel tissue as early as 1949; he has been described as the "father of medical ultrasound". Subsequent advances took place concurrently in several countries but it was not until 1961 that David Robinson and George Kossoff's work at the Australian Department of Health resulted in the first commercially practical water bath ultrasonic scanner. 
In 1963 Meyerdirk & Wright launched production of the first commercial, hand-held, articulated-arm, compound contact B-mode scanner, which made ultrasound generally available for medical use. France Léandre Pourcelot, a researcher and teacher at INSA (Institut National des Sciences Appliquées), Lyon, co-published a report in 1965 at the Académie des sciences, "Effet Doppler et mesure du débit sanguin" ("Doppler effect and measurement of blood flow"), the basis of his design of a Doppler flow meter in 1967. Scotland Parallel developments in Glasgow, Scotland by Professor Ian Donald and colleagues at the Glasgow Royal Maternity Hospital (GRMH) led to the first diagnostic applications of the technique. Donald was an obstetrician with a self-confessed "childish interest in machines, electronic and otherwise", who, having treated the wife of one of the directors of the boilermakers Babcock & Wilcox, was invited to visit their Research Department at Renfrew. He adapted their industrial ultrasound equipment to conduct experiments on various anatomical specimens and assess their ultrasonic characteristics. Together with the medical physicist Tom Brown and fellow obstetrician John MacVicar, Donald refined the equipment to enable differentiation of pathology in live volunteer patients. These findings were reported in The Lancet on 7 June 1958 as "Investigation of Abdominal Masses by Pulsed Ultrasound" – possibly one of the most important papers published in the field of diagnostic medical imaging. At GRMH, Professor Donald and James Willocks then refined their techniques for obstetric applications, including fetal head measurement to assess the size and growth of the fetus. With the opening of the new Queen Mother's Hospital in Yorkhill in 1964, it became possible to improve these methods even further. Stuart Campbell's pioneering work on fetal cephalometry led to it acquiring long-term status as the definitive method of study of fetal growth.
As the technical quality of the scans was further developed, it soon became possible to study pregnancy from start to finish and diagnose its many complications, such as multiple pregnancy, fetal abnormality and placenta praevia. Diagnostic ultrasound has since been imported into practically every other area of medicine. Sweden Medical ultrasonography was used in 1953 at Lund University by cardiologist Inge Edler and Gustav Ludwig Hertz's son Carl Hellmuth Hertz, who was then a graduate student at the university's department of nuclear physics. Edler had asked Hertz if it was possible to use radar to look into the body, but Hertz said this was impossible. However, he said, it might be possible to use ultrasonography. Hertz was familiar with the ultrasonic reflectoscopes invented by the American acoustical physicist Floyd Firestone for nondestructive materials testing, and together Edler and Hertz developed the idea of applying this methodology in medicine. The first successful measurement of heart activity was made on October 29, 1953, using a device borrowed from the ship construction company Kockums in Malmö. On December 16 the same year, the method was applied to generate an echo-encephalogram (an ultrasonic probe of the brain). Edler and Hertz published their findings in 1954. United States In 1962, after about two years of work, Joseph Holmes, William Wright, and Ralph Meyerdirk developed the first compound contact B-mode scanner. Their work had been supported by the U.S. Public Health Services and the University of Colorado. Wright and Meyerdirk left the university to form Physionic Engineering Inc., which launched the first commercial hand-held articulated-arm compound contact B-mode scanner in 1963. This was the start of the most popular design in the history of ultrasound scanners. In the late 1960s Gene Strandness and the bio-engineering group at the University of Washington conducted research on Doppler ultrasound as a diagnostic tool for vascular disease.
Eventually, they developed technologies to use duplex imaging, or Doppler in conjunction with B-mode scanning, to view vascular structures in real time while also providing hemodynamic information. The first demonstration of color Doppler was by Geoff Stevenson, who was involved in the early developments and medical use of Doppler-shifted ultrasonic energy. Manufacturers Major manufacturers of medical ultrasound devices and equipment include Canon Medical Systems Corporation, Esaote, GE Healthcare, Fujifilm, Mindray Medical International Limited, Koninklijke Philips N.V., Samsung Medison, and Siemens Healthineers.
Umbra, penumbra and antumbra
The umbra, penumbra and antumbra are three distinct parts of a shadow, created by any light source after impinging on an opaque object. Assuming no diffraction, a collimated beam of light, or light from a point source, casts only the umbra. These names are most often used for the shadows cast by celestial bodies, though they are sometimes used to describe levels of darkness, such as in sunspots. Umbra The umbra (Latin for "shadow") is the innermost and darkest part of a shadow, where the light source is completely blocked by the occluding body. An observer within the umbra experiences a total occultation. The umbra of a round body occluding a round light source forms a right circular cone. When viewed from the cone's apex, the two bodies appear the same size. The distance from the Moon to the apex of its umbra is roughly equal to the distance between the Moon and Earth. Since Earth's diameter is 3.7 times the Moon's, its umbra extends correspondingly farther, roughly 3.7 times as far. Penumbra The penumbra (from the Latin paene "almost, nearly" and umbra "shadow") is the region in which only a portion of the light source is obscured by the occluding body. An observer in the penumbra experiences a partial eclipse. An alternative definition is that the penumbra is the region where some or all of the light source is obscured (i.e., the umbra is a subset of the penumbra). For example, NASA's Navigation and Ancillary Information Facility defines that a body in the umbra is also within the penumbra. Antumbra The antumbra (from the Latin ante "before" and umbra "shadow") is the region from which the occluding body appears entirely within the disc of the light source. An observer in this region experiences an annular eclipse, in which a bright ring is visible around the eclipsing body. If the observer moves closer to the light source, the apparent size of the occluding body increases until it causes a full umbra.
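The cone geometry above follows from similar triangles: an occluder of radius r at distance d from a light source of radius R casts an umbra of length L = d·r/(R − r) behind the occluder. A short sketch with rounded mean values (the function name and constants are illustrative):

```python
# Umbra length from similar triangles: L = d * r / (R - r), where R is the
# source radius, r the occluder radius, and d their separation.

R_SUN = 696_000.0        # km, solar radius (rounded)
D_SUN = 149_600_000.0    # km, mean Earth-Sun distance (rounded)

def umbra_length(occluder_radius_km, source_distance_km=D_SUN, source_radius_km=R_SUN):
    """Distance from the occluding body to the apex of its umbral cone, in km."""
    return source_distance_km * occluder_radius_km / (source_radius_km - occluder_radius_km)

moon_umbra = umbra_length(1_737.4)    # ~374,000 km, about the Earth-Moon distance
earth_umbra = umbra_length(6_371.0)   # ~1,380,000 km, roughly 3.7x the Moon's
```

Since both bodies are lit by the same Sun, the ratio of the two umbra lengths is essentially the ratio of the radii, which is why Earth's umbra is about 3.7 times longer.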
Stereographic projection
In mathematics, a stereographic projection is a perspective projection of the sphere, through a specific point on the sphere (the pole or center of projection), onto a plane (the projection plane) perpendicular to the diameter through the point. It is a smooth, bijective function from the entire sphere except the center of projection to the entire plane. It maps circles on the sphere to circles or lines on the plane, and is conformal, meaning that it preserves angles at which curves meet and thus locally approximately preserves shapes. It is neither isometric (distance preserving) nor equiareal (area preserving). The stereographic projection gives a way to represent a sphere by a plane. The metric induced by the inverse stereographic projection from the plane to the sphere defines a geodesic distance between points in the plane equal to the spherical distance between the spherical points they represent. A two-dimensional coordinate system on the stereographic plane is an alternative setting for spherical analytic geometry instead of spherical polar coordinates or three-dimensional cartesian coordinates. This is the spherical analog of the Poincaré disk model of the hyperbolic plane. Intuitively, the stereographic projection is a way of picturing the sphere as the plane, with some inevitable compromises. Because the sphere and the plane appear in many areas of mathematics and its applications, so does the stereographic projection; it finds use in diverse fields including complex analysis, cartography, geology, and photography. Sometimes stereographic computations are done graphically using a special kind of graph paper called a stereographic net, shortened to stereonet, or Wulff net. History The origin of the stereographic projection is not known, but it is believed to have been discovered by Ancient Greek astronomers and used for projecting the celestial sphere to the plane so that the motions of stars and planets could be analyzed using plane geometry. 
Its earliest extant description is found in Ptolemy's Planisphere (2nd century AD), but it was ambiguously attributed to Hipparchus (2nd century BC) by Synesius, and Apollonius's Conics contains a theorem which is crucial in proving the property that the stereographic projection maps circles to circles. Hipparchus, Apollonius, Archimedes, and even Eudoxus (4th century BC) have sometimes been speculatively credited with inventing or knowing of the stereographic projection, but some experts consider these attributions unjustified. Ptolemy refers to the use of the stereographic projection in a "horoscopic instrument", perhaps the anaphoric clock described by Vitruvius (1st century BC). By the time of Theon of Alexandria (4th century), the planisphere had been combined with a dioptra to form the planispheric astrolabe ("star taker"), a capable portable device which could be used for measuring star positions and performing a wide variety of astronomical calculations. The astrolabe was in continuous use by Byzantine astronomers, and was significantly further developed by medieval Islamic astronomers. It was transmitted to Western Europe during the 11th–12th century, with Arabic texts translated into Latin. In the 16th and 17th centuries, the equatorial aspect of the stereographic projection was commonly used for maps of the Eastern and Western Hemispheres. It is believed that the map created in 1507 by Gualterius Lud was already in stereographic projection, as were later the maps of Jean Roze (1542), Rumold Mercator (1595), and many others. In star charts, this equatorial aspect had been utilised already by ancient astronomers like Ptolemy. François d'Aguilon gave the stereographic projection its current name in his 1613 work Opticorum libri sex philosophis juxta ac mathematicis utiles (Six Books of Optics, useful for philosophers and mathematicians alike).
In the late 16th century, Thomas Harriot proved that the stereographic projection is conformal; however, this proof was never published and sat among his papers in a box for more than three centuries. In 1695, Edmond Halley, motivated by his interest in star charts, was the first to publish a proof. He used the recently established tools of calculus, invented by his friend Isaac Newton. Definition First formulation The unit sphere S² in three-dimensional space is the set of points (x, y, z) such that x² + y² + z² = 1. Let N = (0, 0, 1) be the "north pole", and let M be the rest of the sphere. The plane z = 0 runs through the center of the sphere; the "equator" is the intersection of the sphere with this plane. For any point P on M, there is a unique line through N and P, and this line intersects the plane z = 0 in exactly one point P′, known as the stereographic projection of P onto the plane. In Cartesian coordinates (x, y, z) on the sphere and (X, Y) on the plane, the projection and its inverse are given by the formulas (X, Y) = (x/(1 − z), y/(1 − z)) and (x, y, z) = (2X/(1 + X² + Y²), 2Y/(1 + X² + Y²), (−1 + X² + Y²)/(1 + X² + Y²)). In spherical coordinates (φ, θ) on the sphere (with φ the zenith angle and θ the azimuth) and polar coordinates (R, Θ) on the plane, the projection and its inverse are (R, Θ) = (sin φ/(1 − cos φ), θ) = (cot(φ/2), θ) and (φ, θ) = (2 arctan(1/R), Θ). Here, φ is understood to have value π when R = 0. Also, there are many ways to rewrite these formulas using trigonometric identities. In cylindrical coordinates (r, θ, z) on the sphere and polar coordinates (R, Θ) on the plane, the projection and its inverse are (R, Θ) = (r/(1 − z), θ) and (r, θ, z) = (2R/(1 + R²), Θ, (R² − 1)/(R² + 1)). Other conventions Some authors define stereographic projection from the north pole (0, 0, 1) onto the plane z = −1, which is tangent to the unit sphere at the south pole (0, 0, −1). This can be described as a composition of a projection onto the equatorial plane described above, and a homothety from it to the polar plane. The homothety scales the image by a factor of 2 (a ratio of a diameter to a radius of the sphere), hence the values X and Y produced by this projection are exactly twice those produced by the equatorial projection described in the preceding section.
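The standard equatorial-plane formulas, projection (X, Y) = (x/(1 − z), y/(1 − z)) and its inverse, can be checked numerically with a minimal sketch:

```python
import numpy as np

# Stereographic projection from the north pole N = (0, 0, 1) of the unit
# sphere onto the equatorial plane z = 0, and its inverse (standard formulas).

def project(p):
    x, y, z = p
    return np.array([x / (1 - z), y / (1 - z)])

def unproject(q):
    X, Y = q
    s = 1 + X**2 + Y**2
    return np.array([2 * X / s, 2 * Y / s, (-1 + X**2 + Y**2) / s])

# Round trip: any point on the sphere other than N itself is recovered.
p = np.array([0.321, 0.557, -0.766])
p = p / np.linalg.norm(p)                    # normalize onto the unit sphere
assert np.allclose(unproject(project(p)), p)

# The south pole (0, 0, -1) maps to the origin; the equator maps to itself.
assert np.allclose(project(np.array([0.0, 0.0, -1.0])), [0.0, 0.0])
assert np.allclose(project(np.array([1.0, 0.0, 0.0])), [1.0, 0.0])
```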
For example, this projection sends the equator to the circle of radius 2 centered at the origin. While the equatorial projection produces no infinitesimal area distortion along the equator, this pole-tangent projection instead produces no infinitesimal area distortion at the south pole. Other authors use a sphere of radius 1/2 and the plane z = −1/2, in which case the formulae are rescaled accordingly. In general, one can define a stereographic projection from any point Q on the sphere onto any plane E such that E is perpendicular to the diameter through Q, and E does not contain Q. As long as E meets these conditions, then for any point P other than Q the line through P and Q meets E in exactly one point P′, which is defined to be the stereographic projection of P onto E. Generalizations More generally, stereographic projection may be applied to the unit n-sphere S^n in (n + 1)-dimensional Euclidean space R^(n+1). If Q is a point of S^n and E a hyperplane in R^(n+1), then the stereographic projection of a point P ∈ S^n − {Q} is the point P′ of intersection of the line QP with E. In Cartesian coordinates (x_i, i from 0 to n) on S^n and (X_i, i from 1 to n) on E, the projection from Q = (1, 0, 0, ..., 0) is given by X_i = x_i/(1 − x_0). Defining s² = X_1² + ⋯ + X_n², the inverse is given by x_0 = (s² − 1)/(s² + 1) and x_i = 2X_i/(s² + 1). Still more generally, suppose that S is a (nonsingular) quadric hypersurface in the projective space P^(n+1). In other words, S is the locus of zeros of a non-singular quadratic form f(x_0, ..., x_(n+1)) in the homogeneous coordinates. Fix any point Q on S and a hyperplane E in P^(n+1) not containing Q. Then the stereographic projection of a point P in S − {Q} is the unique point of intersection of the line QP with E. As before, the stereographic projection is conformal and invertible on a non-empty Zariski open set. The stereographic projection presents the quadric hypersurface as a rational hypersurface. This construction plays a role in algebraic geometry and conformal geometry.
Properties The first stereographic projection defined in the preceding section sends the "south pole" (0, 0, −1) of the unit sphere to (0, 0), the equator to the unit circle, the southern hemisphere to the region inside the circle, and the northern hemisphere to the region outside the circle. The projection is not defined at the projection point N = (0, 0, 1). Small neighborhoods of this point are sent to subsets of the plane far away from (0, 0). The closer P is to (0, 0, 1), the more distant its image is from (0, 0) in the plane. For this reason it is common to speak of (0, 0, 1) as mapping to "infinity" in the plane, and of the sphere as completing the plane by adding a point at infinity. This notion finds utility in projective geometry and complex analysis. On a merely topological level, it illustrates how the sphere is homeomorphic to the one-point compactification of the plane. In Cartesian coordinates, a point on the sphere and its image on the plane are either both rational points or neither is. Stereographic projection is conformal, meaning that it preserves the angles at which curves cross each other (see figures). On the other hand, stereographic projection does not preserve area; in general, the area of a region of the sphere does not equal the area of its projection onto the plane. The area element is given in (X, Y) coordinates by dA = 4/(1 + X² + Y²)² dX dY. Along the unit circle, where X² + Y² = 1, there is no inflation of area in the limit, giving a scale factor of 1. Near (0, 0) areas are inflated by a factor of 4, and near infinity areas are inflated by arbitrarily small factors. The metric is given in (X, Y) coordinates by ds² = 4/(1 + X² + Y²)² (dX² + dY²) and is the formula found in Bernhard Riemann's Habilitationsschrift on the foundations of geometry, delivered at Göttingen in 1854 and entitled Über die Hypothesen welche der Geometrie zu Grunde liegen. No map from the sphere to the plane can be both conformal and area-preserving. If it were, then it would be a local isometry and would preserve Gaussian curvature.
The sphere and the plane have different Gaussian curvatures, so this is impossible. Circles on the sphere that do not pass through the point of projection are projected to circles on the plane. Circles on the sphere that do pass through the point of projection are projected to straight lines on the plane. These lines are sometimes thought of as circles through the point at infinity, or circles of infinite radius. These properties can be verified by substituting the expressions for x, y, z in terms of X and Y, given by the inverse projection formulas, into the equation of the plane containing a circle on the sphere; after clearing denominators, one obtains the equation of a circle, that is, a second-degree equation with X² + Y² as its quadratic part. The equation becomes linear if the quadratic part vanishes, that is, if the plane passes through the point of projection. All lines in the plane, when transformed to circles on the sphere by the inverse of stereographic projection, meet at the projection point. Parallel lines, which do not intersect in the plane, are transformed to circles tangent at the projection point. Intersecting lines are transformed to circles that intersect transversally at two points on the sphere, one of which is the projection point. (Similar remarks hold about the real projective plane, but the intersection relationships are different there.) The loxodromes of the sphere map to curves on the plane of the form R = e^(Θ/a), where the parameter a measures the "tightness" of the loxodrome. Thus loxodromes correspond to logarithmic spirals. These spirals intersect radial lines in the plane at equal angles, just as the loxodromes intersect meridians on the sphere at equal angles. The stereographic projection relates to the plane inversion in a simple way. Let P and Q be two points on the sphere with projections P′ and Q′ on the plane. Then P′ and Q′ are inversive images of each other in the image of the equatorial circle if and only if P and Q are reflections of each other in the equatorial plane.
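The circle-to-circle property described above can be verified numerically: sample a circle on the sphere that avoids the projection point, project it, and check that the image points are concyclic. The particular plane and sampling used here are arbitrary illustrations:

```python
import numpy as np

# Check (illustrative): a circle on the unit sphere that avoids the projection
# point N = (0, 0, 1) projects to a circle in the plane.

def project(p):
    x, y, z = p
    return np.array([x / (1 - z), y / (1 - z)])

# A circle on the unit sphere: its intersection with the plane n . p = d.
n = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
d = 0.3                                       # chosen so N is not on the circle
center = d * n
rho = np.sqrt(1 - d**2)                       # radius of the spherical circle
u = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)   # orthonormal basis of the plane
v = np.array([0.0, 1.0, 0.0])

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = center + rho * (np.outer(np.cos(t), u) + np.outer(np.sin(t), v))
proj = np.array([project(p) for p in pts])

# Circumcenter from three well-separated image points, then verify that every
# image point is equidistant from it, i.e. the image really is a circle.
A = 2 * (proj[[70, 140]] - proj[0])
b = (proj[[70, 140]] ** 2).sum(axis=1) - (proj[0] ** 2).sum()
c = np.linalg.solve(A, b)
radii = np.linalg.norm(proj - c, axis=1)
assert np.allclose(radii, radii[0])
```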
In other words, if P is a point on the sphere, but not the "north pole" N and not its antipode, the "south pole" S, if P′ is the image of P in a stereographic projection with projection point N, and P″ is the image of P in a stereographic projection with projection point S, then P′ and P″ are inversive images of each other in the unit circle. Wulff net Stereographic projection plots can be carried out by a computer using the explicit formulas given above. However, for graphing by hand these formulas are unwieldy. Instead, it is common to use graph paper designed specifically for the task. This special graph paper is called a stereonet or Wulff net, after the Russian mineralogist George (Yuri Viktorovich) Wulff. The Wulff net shown here is the stereographic projection of the grid of parallels and meridians of a hemisphere centred at a point on the equator (such as the Eastern or Western hemisphere of a planet). In the figure, the area-distorting property of the stereographic projection can be seen by comparing a grid sector near the center of the net with one at the far right or left. The two sectors have equal areas on the sphere. On the disk, the latter has nearly four times the area of the former. If the grid is made finer, this ratio approaches exactly 4. On the Wulff net, the images of the parallels and meridians intersect at right angles. This orthogonality property is a consequence of the angle-preserving property of the stereographic projection. (However, the angle-preserving property is stronger than this property. Not all projections that preserve the orthogonality of parallels and meridians are angle-preserving.) For an example of the use of the Wulff net, imagine two copies of it on thin paper, one atop the other, aligned and tacked at their mutual center. Let P be the point on the lower unit hemisphere whose spherical coordinates are (140°, 60°) and whose Cartesian coordinates are (0.321, 0.557, −0.766).
This point lies on a line oriented 60° counterclockwise from the positive x-axis (or 30° clockwise from the positive y-axis) and 50° below the horizontal plane z = 0. Once these angles are known, there are four steps to plotting P: Using the grid lines, which are spaced 10° apart in the figures here, mark the point on the edge of the net that is 60° counterclockwise from the point (1, 0) (or 30° clockwise from the point (0, 1)). Rotate the top net until this point is aligned with (1, 0) on the bottom net. Using the grid lines on the bottom net, mark the point that is 50° toward the center from that point. Rotate the top net oppositely to how it was oriented before, to bring it back into alignment with the bottom net. The point marked in step 3 is then the projection that we wanted. To plot other points, whose angles are not such round numbers as 60° and 50°, one must visually interpolate between the nearest grid lines. It is helpful to have a net with finer spacing than 10°. Spacings of 2° are common. To find the central angle between two points on the sphere based on their stereographic plot, overlay the plot on a Wulff net and rotate the plot about the center until the two points lie on or near a meridian. Then measure the angle between them by counting grid lines along that meridian. Applications within mathematics Complex analysis Although any stereographic projection misses one point on the sphere (the projection point), the entire sphere can be mapped using two projections from distinct projection points. In other words, the sphere can be covered by two stereographic parametrizations (the inverses of the projections) from the plane. The parametrizations can be chosen to induce the same orientation on the sphere. Together, they describe the sphere as an oriented surface (or two-dimensional manifold). This construction has special significance in complex analysis. The point (X, Y) in the real plane can be identified with the complex number ξ = X + iY.
The stereographic projection from the north pole onto the equatorial plane is then ζ = (x + iy)/(1 − z). Similarly, letting ξ = (x − iy)/(1 + z) be another complex coordinate, these functions define a stereographic projection from the south pole onto the equatorial plane. The transition maps between the ζ- and ξ-coordinates are then ξ = 1/ζ and ζ = 1/ξ, with ζ approaching 0 as ξ goes to infinity, and vice versa. This facilitates an elegant and useful notion of infinity for the complex numbers and indeed an entire theory of meromorphic functions mapping to the Riemann sphere. The standard metric on the unit sphere agrees with the Fubini–Study metric on the Riemann sphere. Visualization of lines and planes The set of all lines through the origin in three-dimensional space forms a space called the real projective plane. This plane is difficult to visualize, because it cannot be embedded in three-dimensional space. However, one can visualize it as a disk, as follows. Any line through the origin intersects the southern hemisphere z ≤ 0 in a point, which can then be stereographically projected to a point on a disk in the XY plane. Horizontal lines through the origin intersect the southern hemisphere in two antipodal points along the equator, which project to the boundary of the disk. Either of the two projected points can be considered part of the disk; it is understood that antipodal points on the equator represent a single line in 3-space and a single point on the boundary of the projected disk (see quotient topology). So any set of lines through the origin can be pictured as a set of points in the projected disk. But the boundary points behave differently from the boundary points of an ordinary 2-dimensional disk, in that any one of them is simultaneously close to interior points on opposite sides of the disk (just as two nearly horizontal lines through the origin can project to points on opposite sides of the disk). Also, every plane through the origin intersects the unit sphere in a great circle, called the trace of the plane.
This circle maps to a circle under stereographic projection. So the projection lets us visualize planes as circular arcs in the disk. Prior to the availability of computers, stereographic projections with great circles often involved drawing large-radius arcs that required use of a beam compass. Computers now make this task much easier. Further associated with each plane is a unique line, called the plane's pole, that passes through the origin and is perpendicular to the plane. This line can be plotted as a point on the disk just as any line through the origin can. So the stereographic projection also lets us visualize planes as points in the disk. For plots involving many planes, plotting their poles produces a less-cluttered picture than plotting their traces. This construction is used to visualize directional data in crystallography and geology, as described below. Other visualization Stereographic projection is also applied to the visualization of polytopes. In a Schlegel diagram, an n-dimensional polytope in R^(n+1) is projected onto an n-dimensional sphere, which is then stereographically projected onto R^n. The reduction from R^(n+1) to R^n can make the polytope easier to visualize and understand. Arithmetic geometry In elementary arithmetic geometry, stereographic projection from the unit circle provides a means to describe all primitive Pythagorean triples. Specifically, stereographic projection from the north pole (0, 1) onto the x-axis gives a one-to-one correspondence between the rational number points (x, y) on the unit circle (with y ≠ 1) and the rational points of the x-axis. If (m/n, 0) is a rational point on the x-axis, then its inverse stereographic projection is the point (2mn/(m² + n²), (m² − n²)/(m² + n²)), which gives Euclid's formula for a Pythagorean triple. Tangent half-angle substitution The pair of trigonometric functions (cos θ, sin θ) can be thought of as parametrizing the unit circle.
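The arithmetic-geometry correspondence above can be made concrete in code. Here is a sketch using exact rational arithmetic (the function names are mine, not from the article):

```python
from fractions import Fraction

def inverse_stereographic(t):
    """Inverse of the stereographic projection of the unit circle from the
    north pole (0, 1) onto the x-axis: send the point (t, 0) back to the circle."""
    return (2 * t / (1 + t * t), (t * t - 1) / (1 + t * t))

def pythagorean_triple(m, n):
    """For integers m > n > 0, map t = m/n back to the circle exactly and
    clear the common denominator m^2 + n^2, yielding Euclid's triple
    (2mn, m^2 - n^2, m^2 + n^2)."""
    x, y = inverse_stereographic(Fraction(m, n))
    c = m * m + n * n
    return (int(x * c), int(y * c), c)

print(pythagorean_triple(2, 1))   # (4, 3, 5)
```

With m = 2, n = 1 this recovers the (3, 4, 5) triangle; any pair of integers m > n > 0 yields a Pythagorean triple the same way.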
The stereographic projection gives an alternative parametrization of the unit circle: x = (1 − t²)/(1 + t²), y = 2t/(1 + t²), where t = tan(θ/2). Under this reparametrization, the length element dθ of the unit circle goes over to 2 dt/(1 + t²). This substitution can sometimes simplify integrals involving trigonometric functions. Applications to other disciplines Cartography The fundamental problem of cartography is that no map from the sphere to the plane can accurately represent both angles and areas. In general, area-preserving map projections are preferred for statistical applications, while angle-preserving (conformal) map projections are preferred for navigation. Stereographic projection falls into the second category. When the projection is centered at the Earth's north or south pole, it has additional desirable properties: It sends meridians to rays emanating from the origin and parallels to circles centered at the origin. Planetary science The stereographic projection is the only projection that maps all circles on a sphere to circles on a plane. This property is valuable in planetary mapping, where craters are typical features. Circles passing through the point of projection have unbounded radius, and therefore degenerate into lines. Crystallography In crystallography, the orientations of crystal axes and faces in three-dimensional space are a central geometric concern, for example in the interpretation of X-ray and electron diffraction patterns. These orientations can be visualized as in the section Visualization of lines and planes above. That is, crystal axes and poles to crystal planes are intersected with the northern hemisphere and then plotted using stereographic projection. A plot of poles is called a pole figure. In electron diffraction, Kikuchi line pairs appear as bands decorating the intersection between lattice plane traces and the Ewald sphere, thus providing experimental access to a crystal's stereographic projection.
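The circle-preserving property noted in the planetary-science paragraph above can be checked numerically. The following is an illustrative sketch (not from the article): it samples a tilted small circle on the unit sphere, projects each sample from the north pole, and verifies that all the images lie on a single circle in the plane:

```python
import math

def stereographic(x, y, z):
    """Project (x, y, z) from the north pole (0, 0, 1) onto the plane z = 0."""
    return (x / (1 - z), y / (1 - z))

def circumcenter(p, q, r):
    """Center of the circle through three plane points (standard formula)."""
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# A tilted small circle on the unit sphere: axis n, angular radius 40 degrees.
s3, s2, s6 = math.sqrt(3), math.sqrt(2), math.sqrt(6)
n  = (1 / s3, 1 / s3, 1 / s3)
e1 = (1 / s2, -1 / s2, 0.0)
e2 = (1 / s6, 1 / s6, -2 / s6)      # n x e1, completing an orthonormal frame
rho = math.radians(40)

images = []
for k in range(24):
    a = 2 * math.pi * k / 24
    x = math.cos(rho) * n[0] + math.sin(rho) * (math.cos(a) * e1[0] + math.sin(a) * e2[0])
    y = math.cos(rho) * n[1] + math.sin(rho) * (math.cos(a) * e1[1] + math.sin(a) * e2[1])
    z = math.cos(rho) * n[2] + math.sin(rho) * (math.cos(a) * e1[2] + math.sin(a) * e2[2])
    images.append(stereographic(x, y, z))    # the circle avoids z = 1, so this is safe

# All projected points are equidistant from one center: the image is a circle.
c = circumcenter(images[0], images[1], images[2])
radii = [math.hypot(X - c[0], Y - c[1]) for X, Y in images]
assert max(radii) - min(radii) < 1e-9
```

Note that the image circle's center is generally not the projection of the spherical circle's center; only the circle itself is preserved.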
Model Kikuchi maps in reciprocal space, and fringe visibility maps for use with bend contours in direct space, thus act as road maps for exploring orientation space with crystals in the transmission electron microscope. Geology Researchers in structural geology are concerned with the orientations of planes and lines for a number of reasons. The foliation of a rock is a planar feature that often contains a linear feature called lineation. Similarly, a fault plane is a planar feature that may contain linear features such as slickensides. These orientations of lines and planes at various scales can be plotted using the methods of the Visualization of lines and planes section above. As in crystallography, planes are typically plotted by their poles. Unlike crystallography, the southern hemisphere is used instead of the northern one (because the geological features in question lie below the Earth's surface). In this context the stereographic projection is often referred to as the equal-angle lower-hemisphere projection. The equal-area lower-hemisphere projection defined by the Lambert azimuthal equal-area projection is also used, especially when the plot is to be subjected to subsequent statistical analysis such as density contouring. Rock mechanics The stereographic projection is one of the most widely used methods for evaluating rock slope stability. It allows for the representation and analysis of three-dimensional orientation data in two dimensions. Kinematic analysis within stereographic projection is used to assess the potential for various modes of rock slope failures—such as plane, wedge, and toppling failures—which occur due to the presence of unfavorably oriented discontinuities. This technique is particularly useful for visualizing the orientation of rock slopes in relation to discontinuity sets, facilitating the assessment of the most likely failure type. 
For instance, plane failure is more likely when the strike of a discontinuity set is parallel to the slope, and the discontinuities dip towards the slope at an angle steep enough to allow sliding, but not steeper than the slope itself. Additionally, some authors have developed graphical methods based on stereographic projection to easily calculate geometrical correction parameters—such as those related to the parallelism between the slope and discontinuities, the dip of the discontinuity, and the relative angle between the discontinuity and the slope—for rock mass classifications in slopes, including slope mass rating (SMR) and rock mass rating. Photography Some fisheye lenses use a stereographic projection to capture a wide-angle view. Compared to more traditional fisheye lenses which use an equal-area projection, areas close to the edge retain their shape, and straight lines are less curved. However, stereographic fisheye lenses are typically more expensive to manufacture. Image remapping software, such as Panotools, allows the automatic remapping of photos from an equal-area fisheye to a stereographic projection. The stereographic projection has been used to map spherical panoramas, starting with Horace Bénédict de Saussure's in 1779. This results in effects known as a little planet (when the center of projection is the nadir) and a tube (when the center of projection is the zenith). The popularity of using stereographic projections to map panoramas over other azimuthal projections is attributed to the shape preservation that results from the conformality of the projection.
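The fisheye comparison above can be illustrated with the standard radial mapping functions, r = 2f·tan(θ/2) for a stereographic fisheye and r = 2f·sin(θ/2) for an equisolid-angle (equal-area) fisheye; this is a sketch with helper names of my own choosing:

```python
import math

def r_stereographic(theta, f=1.0):
    """Radial image height of a stereographic fisheye: r = 2 f tan(theta / 2)."""
    return 2.0 * f * math.tan(theta / 2.0)

def r_equisolid(theta, f=1.0):
    """Radial image height of an equisolid-angle (equal-area) fisheye:
    r = 2 f sin(theta / 2)."""
    return 2.0 * f * math.sin(theta / 2.0)

# Compare the local radial stretching dr/dtheta at the edge of a 180-degree
# field of view: the stereographic mapping keeps magnifying toward the edge,
# which is why shapes there are better preserved.
eps = 1e-6
theta = math.radians(90.0)
slope_st = (r_stereographic(theta + eps) - r_stereographic(theta)) / eps
slope_eq = (r_equisolid(theta + eps) - r_equisolid(theta)) / eps
```

At θ = 90° the stereographic slope is sec²(45°) = 2, while the equisolid slope has fallen to cos(45°) ≈ 0.71, matching the text's claim that the equal-area design compresses the edge of the frame.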
https://en.wikipedia.org/wiki/Tuber
Tuber
Tubers are a type of enlarged structure that plants use as storage organs for nutrients, derived from stems or roots. Tubers help plants perennate (survive winter or dry months), provide energy and nutrients, and are a means of asexual reproduction. Stem tubers manifest as thickened rhizomes (underground stems) or stolons (horizontal connections between organisms); examples include the potato and yam. The term root tuber describes modified lateral roots, as in sweet potatoes, cassava, and dahlias. Terminology The term originates from the Latin tuber, meaning 'lump, bump, or swelling'. Some writers limit the definition of tuber to structures derived from stems, while others also apply the term to structures derived from roots. Stem tubers A stem tuber forms from thickened rhizomes or stolons. The top sides of the tuber produce shoots that grow into typical stems and leaves and the undersides produce roots. They tend to form at the sides of the parent plant and are most often located near the soil surface. The underground tuber is normally a short-lived storage and regenerative organ developing from a shoot that branches off a mature plant. The offspring or new tubers are attached to a parent tuber or form at the end of a hypogeogenous (initiated below ground) rhizome. In the autumn the plant dies, except for the new offspring tubers, which have one dominant bud that in spring regrows a new shoot producing stems and leaves; in summer the tubers decay and new tubers begin to grow. Some plants also form smaller tubers or tubercules that act like seeds, producing small plants that resemble (in morphology and size) seedlings. Some stem tubers are long-lived, such as those of tuberous begonias, but many plants have tubers that survive only until the plants have fully leafed out, at which point the tuber is reduced to a shriveled-up husk.
Stem tubers generally start off as enlargements of the hypocotyl section of a seedling, but sometimes also include the first node or two of the epicotyl and the upper section of the root. The tuber has a vertical orientation, with one or a few vegetative buds on the top and fibrous roots produced on the bottom from a basal section. Typically the tuber has an oblong rounded shape. Tuberous begonias, yams, and cyclamens are commonly grown stem tubers. Mignonette vine (Anredera cordifolia) produces aerial stem tubers on vines; the tubers fall to the ground and grow. Plectranthus esculentus, of the mint family Lamiaceae, produces tuberous underground organs from the base of the stem, weighing up to per tuber, forming from axillary buds producing short stolons that grow into tubers. Even though legumes are not commonly associated with forming stem tubers, Lathyrus tuberosus is an example native to Asia and Europe, where it was once grown as a crop. Potatoes Potatoes are stem tubers: enlarged stolons thicken to develop into storage organs. The tuber has all the parts of a normal stem, including nodes and internodes. The nodes are the eyes and each has a leaf scar. The nodes or eyes are arranged around the tuber in a spiral fashion beginning on the end opposite the attachment point to the stolon. The terminal bud is produced at the farthest point away from the stolon attachment, and tubers thus show the same apical dominance as a normal stem. Internally, a tuber is filled with starch stored in enlarged parenchyma-like cells. The inside of a tuber has the typical cell structures of any stem, including a pith, vascular zones, and a cortex. The tuber is produced in one growing season and used to perennate the plant and as a means of propagation. When fall comes, the above-ground structure of the plant dies, but the tubers survive underground over winter until spring, when they regenerate new shoots that use the stored food in the tuber to grow.
As the main shoot develops from the tuber, the base of the shoot close to the tuber produces adventitious roots and lateral buds on the shoot. The shoot also produces stolons that are long etiolated stems. The stolon elongates during long days with the presence of high auxin levels that prevent root growth off the stolon. Before new tuber formation begins, the stolon must be a certain age. The enzyme lipoxygenase makes a hormone, jasmonic acid, which is involved in the control of potato tuber development. The stolons are easily recognized when potato plants are grown from seeds. As the plants grow, stolons are produced around the soil surface from the nodes. The tubers form close to the soil surface and sometimes even on top of the ground. When potatoes are cultivated, the tubers are cut into pieces and planted much deeper into the soil. Planting the pieces deeper gives the plants more room to generate tubers, and their size increases. The pieces sprout shoots that grow to the surface. These shoots are rhizome-like and generate short stolons from the nodes while in the ground. When the shoots reach the soil surface, they produce roots and shoots that grow into the green plant. Root tubers A root tuber, tuberous root or storage root is a modified lateral root, enlarged to function as a storage organ. The enlarged area of the tuber can be produced at the end or middle of a root or involve the entire root. It is thus different in origin, but similar in function and appearance, to a stem tuber. Plants with tuberous roots include the sweet potato (Ipomoea batatas), cassava, dahlia, and Sagittaria (arrowhead) species. Root tubers are perennating organs, thickened roots that store nutrients over periods when the plant cannot actively grow, thus permitting survival from one year to the next.
The massively enlarged secondary roots, typically represented by the sweet potato, have the internal and external cell and tissue structures of a normal root; they produce adventitious roots and stems, which again produce adventitious roots. In root tubers, there are no nodes and internodes or reduced leaves. The proximal end of the tuber, which was attached to the old plant, has crown tissue that produces buds which grow into new stems and foliage. The distal end of the tuber normally produces unmodified roots. In stem tubers the order is reversed, with the distal end producing stems. Tuberous roots are biennial in duration: the plant produces tubers the first year, and at the end of the growing season, the shoots often die, leaving the newly generated tubers; the next growing season, the tubers produce new shoots. As the shoots of the new plant grow, the stored reserves of the tuber are consumed in the production of new roots, stems, and reproductive organs; any remaining root tissue dies concurrently with the plant's regeneration of the next generation of tubers. Hemerocallis fulva (orange daylily) and a number of daylily hybrids have large root tubers; H. fulva spreads by underground stolons that end with a new fan that grows roots that produce thick tubers and then send out more stolons. Plants with root tubers can be propagated from late summer to late winter by digging up the tubers and separating them, making sure that each piece has some crown tissue for replanting. Root tubers are a rich source of nutrients for humans and wild animals, e.g. those of Sagittaria plants, which are eaten by ducks.
https://en.wikipedia.org/wiki/Indira%20Gandhi%20International%20Airport
Indira Gandhi International Airport
Indira Gandhi International Airport is the primary international airport serving New Delhi, the capital of India, and the National Capital Region (NCR). The airport, spread over an area of , is situated in Palam, Delhi, southwest of the New Delhi Railway Station and from New Delhi city centre. Named after Indira Gandhi (1917–1984), the former Prime Minister of India, it has been the busiest airport in India in terms of passenger traffic since 2009. It is also the busiest airport in the country in terms of cargo traffic. In the financial year 2023–24, the airport handled 7.36 crore (73.6 million) passengers, the highest ever in its history. As of 2024, it is the tenth-busiest airport in the world, as per the latest rankings issued by the UK-based air consultancy firm OAG. It is the second-busiest airport in the world by seating capacity, with over 36 lakh (3.6 million) seats, and the busiest airport in Asia by passenger traffic, handling over 6.55 crore (65.5 million) passengers in 2023. It is routinely one of the busiest airports in the world, according to the Airports Council International rankings. The airport was operated by the Indian Air Force before its management was transferred to the Airports Authority of India. In May 2006, the management of the airport was passed over to Delhi International Airport Limited (DIAL), a consortium led by the GMR Group. In September 2008, the airport inaugurated a runway. With the commencement of operations at Terminal 3 in 2010, it became India's and South Asia's largest aviation hub. The Terminal 3 building has a capacity to handle 3.4 crore (34 million) passengers annually and was the world's 8th largest passenger terminal upon completion. The airport inaugurated a runway and the Eastern Cross Taxiways (ECT) with dual parallel taxiways in July 2023.
The airport uses an advanced system called Airport Collaborative Decision Making (A-CDM) to help keep takeoffs and landings timely and predictable. The other airport serving the NCR is Hindon Airport, which is much smaller in size and primarily handles regional flights out of the city under the UDAN Scheme. Safdarjung Airport, which used to be the primary airport of the NCR, is now used mainly by VVIP helicopters and small charter helicopters due to its short runway. To offset the burgeoning traffic, the construction of a new airport, Noida International Airport, is currently underway. History Palam Airport had a peak capacity of around 1,300 passengers per hour. In 1979–80, a total of 30 lakh (3 million) domestic and international passengers flew into and out of Palam Airport. Owing to an increase in air traffic in the '70s and '80s, an additional terminal with nearly four times the area of the old Palam terminal was constructed. With the inauguration of this new international terminal, Terminal 2, on 2 May 1986, the airport was renamed Indira Gandhi International Airport (IGIA). The old domestic airport (Palam) is known as Terminal 1 and was divided into separate buildings – 1A, 1B, and 1C. Blocks 1A and 1B were used to handle international operations while domestic operations took place in Block 1C. Blocks 1A and 1B later became dedicated terminals for domestic airlines and are currently closed down. It is planned that they will be demolished after the construction of newer terminals. Block 1C was also turned into a domestic arrivals terminal, and was rebuilt and opened on 24 February 2022. The newly constructed domestic departures block 1D is now used by all domestic low-cost airlines (IndiGo and SpiceJet). There is also a separate technical area for VIP passengers. The domestic arrivals terminal 1C was demolished and rebuilt into a brand-new domestic arrivals terminal.
For this expansion work, GoAir and select flights of IndiGo were moved to Terminal 2, as well as select flights of SpiceJet and IndiGo to Terminal 3. In October 2001, Canada 3000 commenced a flight to Toronto. This was the first nonstop service between India and North America. Russia's decision to open its airspace after the Cold War allowed the airline to save time by flying a direct route over the Arctic. Even though the 11 September attacks had precipitated a global decline in air travel, Canada 3000 was hoping that the service would help it improve its financial position. Nevertheless, the company collapsed one month later. Significant growth in the Indian aviation industry led to a major increase in passenger traffic. The capacity of Terminal 1 was estimated to be 71.5 lakh (7.15 million) passengers per annum. The actual throughput for 2005/06 was an estimated 1.04 crore (10.4 million) passengers. Including the then closed-down international terminal (Terminal 2), the airport had a total capacity of 1.25 crore (12.5 million) passengers per year, whereas the total passenger traffic in 2006/07 was 1.65 crore (16.5 million) passengers. In 2008, the total passenger count at the airport reached 2.4 crore (23.97 million). To ease the traffic congestion at the existing terminals and in preparation for the 2010 Commonwealth Games, a much larger Terminal 3 was constructed and inaugurated on 3 July 2010. The new terminal took 37 months to complete and increased the airport's total passenger capacity by 3.4 crore (34 million). Apart from the three budget domestic airlines handled by Terminals 1 and 2, all other airlines operate their flights from Terminal 3. In June 2022, Delhi International Airport became India's first to run entirely on hydropower and solar energy.
Ownership On 31 January 2006, the aviation minister Praful Patel announced that the empowered Group of Ministers had agreed to sell the management rights of Delhi Airport to the DIAL consortium and of Mumbai Airport to the GVK Group. On 2 May 2006, the management of the Delhi and Mumbai airports was handed over to the private consortia. Delhi International Airport Limited (DIAL) is a consortium of the GMR Group (54%, since increased to 64%), Fraport (10%) and Malaysia Airports (10%, since divested), with the Airports Authority of India retaining a 26% stake. Nine years later, in May 2015, Malaysia Airports chose to exit the DIAL venture and sold its entire 10% stake to majority shareholder GMR Infra for $79 million. Following this, GMR Group's stake in DIAL increased to 64%. Earlier, GMR had indicated that it was interested in buying out the 10% stake of Fraport. Facilities Runways Delhi Airport has four near-parallel runways: runway 11R/29L, , runway 11L/29R, , runway 10/28, , and runway 09/27, . The 09/27 runway was the airport's first-ever runway; the British constructed the 2,816 metre-long and 60 metre-wide runway in the pre-independence era and used it during World War II. As of 2017, Delhi Airport was, along with Chaudhary Charan Singh International Airport in Lucknow and Jaipur Airport in Jaipur, one of only three airports in India equipped with the CAT III-B ILS. In the winter of 2005, there were a record number of disruptions at Delhi Airport due to fog/smog. Since then, some domestic airlines have trained their pilots to operate under CAT-II conditions of a minimum visibility. On 31 March 2006, IGI became the first Indian airport to operate two runways simultaneously, following a test run involving a SpiceJet plane landing on runway 28 and a Jet Airways plane taking off from runway 27 at the same time.
The initially proposed mode involving simultaneous takeoffs in westerly flow, intended to increase traffic handling capacity, caused several near misses over the west side of the airport where the centrelines of runways 10/28 and 9/27 intersect. The runway use was changed to a segregated dependent mode on 25 December 2007, a few days after a near miss involving an Airbus A330-200 of Qatar Airways and an IndiGo A320 aircraft. The new method involved the use of runway 28 for all departures and runway 27 for all arrivals. This more streamlined model was adopted during day hours (0600–2300 IST) until 24 September 2008. On 21 August 2008, the airport inaugurated its third runway, 11R/29L, costing ₹1,000 crore and long. The runway has one of the world's longest paved threshold displacements of . This, in turn, decreases the available landing length on runway 29L to . The reason for the long threshold displacement is the presence of a 263 m high Shiv statue, which is located near runway 29L. The runway increased the airport's capacity to handle up to 100 flights per hour, up from the previous 45–60 flights per hour. The new runway was opened for commercial operations on 25 September 2008 and gradually began full round-the-clock operations by the end of October of the same year. Since 2012, all three runways have been operated simultaneously to handle traffic during day hours. Only runways 11R/29L and 10/28 are operated during night (2300–0600 IST) hours, with a single-runway landing restriction during westerly traffic flow that is rotated late night (0300 IST) and reversed weekly to distribute and mitigate night-time landing noise over nearby residential areas. To cater for the demand of increasing air traffic, the master plan for the construction of a fourth parallel runway next to the existing runway 11R/29L was cleared in 2017, along with the Eastern Cross Taxiways (ECT) - a pair of elevated parallel taxiways linking the northern part of the airport with the southern runways.
It will be elevated as it will pass over the airport approach roads. It will be long and both the taxiways will be wide, with a wide gap separating the taxiways, making it capable of handling Airbus A380 and Boeing 747 type aircraft. It will help flights reduce the time needed to reach the southern runways from 9–10 minutes to only two minutes, as well as reducing pollution and traffic. The fourth runway and the ECT were inaugurated on 14 July 2023. Terminals IGI Airport serves as a major hub or a focus destination for several Indian carriers including Air India, Alliance Air, IndiGo, and SpiceJet. Approximately 80 airlines serve this airport. At present, there are three actively scheduled passenger terminals, as well as a cargo terminal. In 2021, DIAL introduced an e-boarding facility for passengers at all three terminals of the airport, by which all boarding gates will have contactless e-boarding gates with boarding card scanners, which will allow passengers to flash their physical or e-boarding cards to verify flight details in order to proceed for security checks. Terminal 3 is an integrated terminal used for both international and domestic flights. The Indian carriers operating international flights are Air India, IndiGo, and SpiceJet. The domestic side of Terminal 3 is used by Air India, Air India Express, and select flights of SpiceJet and IndiGo. Select flights of IndiGo use Terminal 2 for their domestic operations. Currently operational terminals Terminal 1 Terminal 1 is used by low-cost domestic carriers, such as SpiceJet and IndiGo. In 2022, Terminal 1D was fully expanded with an arrivals hall, with the goal of enhancing its annual passenger handling capacity from the previous 1.8 crore (18 million) to 4 crore (40 million). Terminal 2 Terminal 2 was opened on 1 May 1986, at a cost of 95 crores and was used for international flights until July 2010, when operations shifted to Terminal 3.
After this, the terminal remained operational for only three months per year, catering to Hajj flights. In 2017, after revamping Terminal 2 at a cost of 100 crores, DIAL shifted all operations of GoAir and select operations of IndiGo to that terminal in order to continue expansion work of Terminal 1. Terminal 3 Designed by HOK working in consultation with Mott MacDonald, Terminal 3 is a two-tier building spread over an area of 54 lakh (5.4 million) square feet (approx. 502,000 square metres), making it the 15th largest passenger terminal in the world, with the lower floor being the arrivals area and the upper floor being a departures area. This terminal has 168 check-in counters, 78 aerobridges at 48 contact stands, 54 parking bays, 95 immigration counters, 18 X-ray screening areas, shorter waiting times, duty-free shops, and other features. International flights leave from gates 1–26 (gates 2, 4, and 6 are bus gates) and domestic flights leave from gates 27–62 (gates 42 and 44 are bus gates). This new terminal was timed to be completed for the 2010 Commonwealth Games, which were held in Delhi, and is connected to Delhi by an eight-lane Delhi–Gurgaon Expressway and the Delhi Metro through its Airport Express (Orange Line). The terminal was officially inaugurated on 3 July 2010. All international airlines shifted their operations to the new terminal in late July 2010 and all full-service domestic carriers in November 2010. The arrival area is equipped with 14 baggage carousels. Terminal 3 has India's first automated parking management and guidance system in a multi-level car park, which comprises seven levels and a capacity of 4,300 cars. Terminal 3 forms the first phase of the airport expansion, which tentatively includes the construction of additional passenger and cargo terminals (Terminals 4, 5, and 6). Domestic full-service airlines Air India operates from Terminal 3. Air India Express, although a low cost airline, also operates its domestic flights from this terminal.
Some flights of SpiceJet and IndiGo were also shifted to Terminal 3 temporarily for the expansion of Terminal 1. On 16 December 2024, Indira Gandhi International Airport became the first in India to connect directly to 150 airports or destinations, both domestic and international, with the launch of a Thai AirAsia X direct flight between Delhi and Bangkok's Don Mueang airport. General Aviation Terminal India's first general aviation terminal was commissioned at this airport in September 2020. The terminal supports the movement and processing of passengers flying on chartered flights or private jets from the airport. Air cargo complex The air cargo complex is located at a distance of from Terminal 3. It consists of separate brownfield and greenfield cargo terminals. The cargo operations at the brownfield terminal are managed by Celebi Delhi Cargo Management India Pvt. Ltd., which is a joint venture between Delhi International Airport Private Ltd (DIAL) and the Turkish company Celebi Ground Handling (CGH). CGH was awarded the contract to develop, modernise, and finance the existing cargo terminal and to operate the terminal for a period of twenty-five years by DIAL in November 2009. It started its operations in June 2010. In addition to the existing terminal, a new greenfield terminal is being developed in phases by Delhi Cargo Service Centre (DCSC), also a joint venture between DIAL and Cargo Service Center (CSC). The greenfield cargo terminal project consists of two terminals built over plots of 48,000 square metres and 28,500 square metres, respectively. Phase 1A of the project has been completed and is fully operational. Once the entire project is completed, these two new terminals will have an annual handling capacity of 12.5 lakh (1.25 million) tonnes. The cargo operations of the airport received the "e-Asia 2007" award for "Implementation of e-Commerce / Electronic Data Interchange in Air Cargo Sector".
Previous terminals

Terminal 1A

Terminal 1A was built in 1982 as a temporary structure for international VIPs arriving for the 1983 Commonwealth Heads of Government Meeting held in Delhi. After the event, the building was unused until Indian Airlines began Airbus A320 operations in 1988. It had to be refurbished after a fire gutted the interiors in October 1996, and DIAL later significantly upgraded the terminal. The terminal was closed after Air India shifted operations to the new Terminal 3 on 11 November 2010. DIAL had earlier planned to use the terminal for Hajj operations as well as for charter planes; however, this never materialised. The terminal lay unused until 2018, when DIAL decided to demolish it.

Terminal 1B

Terminal 1B was also built in the late 1980s and was used only for domestic departures. Upon the opening of the new domestic departures Terminal 1D in 2009, Terminal 1B was closed; it is expected to be demolished on the completion of newer terminals.

Terminal 1C

Terminal 1C was also built in the late 1980s and was used only for domestic arrivals. The terminal had been upgraded with a newly expanded greeting area and a larger luggage reclaim area with eight belts. Terminal 1C was shut down, torn down, and rebuilt into a brand-new domestic arrivals hall on 24 February 2022.

Terminal 1D

Terminal 1D was developed by DIAL and inaugurated on 27 February 2009 as a domestic departures terminal with a total floor space of and a capacity to handle 1.5 crore (15 million) passengers per year. The terminal commenced operations on 19 April 2009. It has 72 Common Use Terminal Equipment (CUTE) enabled check-in counters, 16 self check-in counters, and 16 security channels.

Airlines and destinations

Passenger

Cargo

Statistics

Connectivity

The IGI complex has four passenger terminals, one cargo terminal and a commercial Aerocity.
These are Terminal 1 in the northeast corner (domestic flights); the co-located Terminal 2 (domestic budget airlines) and Terminal 3 (international flights) in the southwest corner; the Aerocity commercial hub in the southeast corner; and the cargo terminal between Terminal 3 and Aerocity. Delhi Aerocity metro station is the main interconnectivity hub for IGI, on the Orange Line (operational) and the Golden Line (expected completion by March 2026), with the existing NH48 and the existing Dwarka Expressway next to it. Also adjacent to it are the proposed Aerocity ISBT (west of the Aerocity metro station); the underground Delhi Aerocity RRTS station on the Delhi–Alwar Regional Rapid Transit System (expected completion by December 2024, east of the Aerocity metro station); the proposed at-grade Automated People Mover (APM) light rail for moving passengers between the terminals within the restricted area; and the under-construction Aerocity Passenger Transport Centre (PTC) (east of the Aerocity metro station) for connectivity via autorickshaws, ride-hailing bikes and cars, etc. The upgraded Bijwasan railway station (expected completion by December 2024) is adjacent to the Dwarka Sector 21 metro interchange station on the Orange and Blue Lines, and Bijwasan railway station will connect to the Haryana Orbital Rail Corridor (expected completion by March 2025) via the Patli railway station to the south.

Air train

In September 2024, DIAL issued tenders for an elevated-cum-at-grade Automated People Mover (APM) system to be completed by the end of 2027. The 7.7 km line will have four stops: T2/3, T1, Aerocity and cargo city. This line will be the first APM at an Indian airport and is proposed to be implemented on a design, build, finance, operate and transfer (DBFOT) model.

Metro rail

The IGI complex has three metro stations. Terminal 1, in the northeast corner of the complex, is served by the Terminal 1-IGI Airport metro station on the Magenta Line of the Delhi Metro.
Terminal 2 and Terminal 3 are co-located in the southwest. Both are served by the same IGI Airport metro station on the Orange Line (Airport Express Line), which runs from New Delhi metro station (connecting to the Yellow Line and New Delhi railway station) to Dwarka Sector 21 metro station (connecting to the Blue Line, Bijwasan railway station and the Dwarka ISBT bus terminal) and on to IICC - Dwarka Sector 25 metro station (India International Convention and Expo Centre; to be further extended to Gurgaon), with trains running every 10 minutes. Dwarka Sector 21 metro station, west of IGI, is the interchange between the Orange and Blue Lines. The proposed Kirti Nagar–Bamnoli Metrolite light metro will interchange at IICC - Dwarka Sector 25 metro station for connectivity to the airport. Bamnoli will also be connected further south to Rapid Metro Gurgaon (at Rezang La Chowk in Palam Vihar) via the existing IICC - Dwarka Sector 25 metro station. East of IGI, the line connects to the Yellow Line and New Delhi railway station at New Delhi station. The line also links to the Pink Line at Dhaula Kuan (via a walkover bridge between the Dhaula Kuan and Durgabai Deshmukh South Campus stations). Delhi Aerocity metro station, in the southeast corner of IGI between the Terminal 1 and Terminals 2/3 metro stations, is the interchange between the Orange and Magenta Lines. Under Metro Phase-IV, the Magenta Line is being extended further east from Aerocity to Tughlakabad, via Vasant Kunj and Mehrauli Archaeological Park, with expected completion by 2026.

Railways

Bijwasan railway station, immediately to the west of IGI on the Delhi–Jaipur line, is being upgraded to a major world-class regional multimodal transport hub. Construction of the ₹270.83 crore project started in 2022 and is scheduled to be completed in 2024. The Hisar International Airport-IGI Airport line (HIAIGI Line) will directly connect IGI with Hisar Airport.
In the first phase, the missing Garhi Harsaru–Farukhnagar–Jhajjar rail link will be constructed. In the second phase, a short rail spur from the Jakhal–Hisar line to Hisar Airport will be constructed. The Haryana Orbital Rail Corridor (HORC) connects to the Delhi–Jaipur line at Patli railway station, a few kilometres south of Bijwasan. HORC will also provide direct rail connectivity to the Noida Airport via the Palwal–Jewar rail spur. Another, smaller station near IGI on the Delhi–Jaipur line is Palam railway station, located north of Bijwasan station and northeast of IGI, and from Terminals 1 and 3 respectively. Several suburban passenger trains run regularly between these stations.

Roads and expressways

The airport, which lies in south Delhi near the border with Haryana state, is connected to Delhi in the north and Gurgaon in Haryana in the south by two expressways, both with eight lanes: the older and busier 27.7 km long at-grade Delhi–Gurgaon Expressway NH 48 (part of the Delhi–Jaipur National Highway), which runs through Gurgaon, and the newer 26.7 km long elevated Dwarka Expressway NH-248BB, which passes west of Gurgaon. The Dwarka Expressway begins and ends at NH-48 (Delhi–Jaipur), acting as a western bypass to Gurgaon. It begins immediately east of IGI airport at Shiv Murti and terminates in Haryana near the Kherki Daula toll plaza, south of Gurgaon, near the Western Peripheral Expressway (WPE). The WPE in turn connects IGI, listed from west to east, to the Delhi–Ambala–Amritsar NH 1, the Delhi–Amritsar–Katra Expressway, NH9 Delhi–Hisar (Hisar Airport is 150 km west of IGI), the Delhi–Jaipur NH-48, the Gurgaon–Sohna Elevated Expressway, the Delhi–Mumbai Expressway, the Faridabad–Noida–Ghaziabad Expressway (FNG), the Palwal–Jewar Airport Expressway, the Eastern Peripheral Expressway (EPE), etc. Urban Extension Road-II, a 75.7 km long six-lane expressway, connects the IGI airport to the south, southwest and western suburbs of Delhi as well as to the Delhi–Hisar NH-9.
Buses

As of 2024, two Inter-State Bus Terminals (ISBT) for long-distance buses are being constructed for IGI.

Aerocity Inter State Bus Terminus (Aerocity ISBT): adjacent to the Aerocity metro interchange station near Terminal 1 of IGI, proposed in 2023, within the IGI complex.

Dwarka Inter State Bus Terminus (Dwarka ISBT): adjacent to and west of Dwarka Sector 21 metro station; construction started on 27 acres in 2022. It will cater to buses from Haryana and Punjab. It is also close to Bijwasan railway station and is 11 km west of IGI T3.

Gurgaon Inter State Bus Terminus: announced in 2023, over 15 acres in Sihi village near the Kherki Daula toll plaza, where the Dwarka Expressway meets the Delhi–Jaipur Highway NH48. It will cater to buses from Haryana, Punjab, Rajasthan and Uttar Pradesh. It is 28 km south of IGI.

Local transport

Air-conditioned low-floor buses operated by the Delhi Transport Corporation (DTC) regularly run between the airport and the city. Metered taxis are also available from Terminals 1 and 3 to all areas of Delhi.

Alternate airports nearby

Under the National Capital Region Transport Plan, the following international airports are being developed as alternates to IGI:

Hisar International Airport, 190 km west of IGI. In April 2023, Haryana Chief Minister Manohar Lal Khattar approved the Hisar International Airport-IGI Airport line (HIAIGI Line) rail link between IGI and Hisar airport via Bijwasan–Gurgaon–Garhi Harsaru–Sultanpur–Farukhnagar–Jhajjar–Rohtak–Hansi–Hisar.

Noida International Airport, 100 km southeast.

Awards

In 2010, IGIA was conferred the fourth-best airport award in the world in the 1.5–2.5 crore (15–25 million) category, and Most Improved Airport in the Indo-Pacific Region, by Airports Council International. The airport was rated the best airport in the world in the 2.5–4 crore (25–40 million) passengers category in 2015 by Airports Council International.
It was awarded Best Airport in Central Asia and Best Airport Staff in Central Asia at the Skytrax World Airport Awards 2015. It also stood first in the new rankings for the 2015 Airport Service Quality (ASQ) Awards conducted by Airports Council International. The airport, along with Mumbai Airport, was adjudged the "World's Best Airport" at the Airport Service Quality Awards 2017, in the highest category of airports handling more than 4 crore (40 million) passengers annually. The airport was awarded "best airport" in Asia-Pacific in 2020 (over 4 crore (40 million) passengers per annum) by Airports Council International. In 2023, the airport was named the Cleanest Airport in the Asia-Pacific Region and again stood first in the rankings for the 2022 Airport Service Quality (ASQ) Awards in the category of over 4 crore (40 million) passengers per annum, conducted by Airports Council International.

Future expansion

The newer domestic arrivals and departures terminals, 1C and 1D respectively, have been connected and expanded into a single domestic terminal, now known simply as Terminal 1, capable of handling up to 40 million annual passengers. Terminals 4, 5, and 6 will be built at later stages, triggered by growth in passenger traffic. Once they are completed, all international flights will move to these three new terminals, and Terminal 3 will then be used solely for handling domestic air traffic. A new cargo handling building is also planned. According to Delhi International Airport Limited (DIAL), the new terminals will increase the airport's annual passenger capacity to 10 crore (100 million). DIAL submitted a plan in 2016 to the then aviation secretary R. N. Choubey for the expansion of the airport with a new fourth runway and Terminal 4 in a phased manner. The airport's 2016 Master Plan was then reviewed and updated by DIAL in consultation with the Airports Authority of India.
According to the plan, terminal construction was to start after the fourth runway was completed and Terminal 1 was expanded. However, the conversion and expansion of Terminal 2 into a fully international terminal has been put on hold and postponed.

Accidents and incidents

1970: The pilot of a Royal Nepal Airlines Fokker F27-200 (9N-AAR) lost control due to severe thunderstorms and downdrafts, crashing just short of the runway. The plane was landing after a flight from Kathmandu, Nepal. Of the five crew and 18 passengers, one crew member was killed.

1972: Japan Air Lines Flight 471 crashed outside Palam Airport, killing 82 of the 87 occupants: ten of eleven crew members and 72 of 76 passengers died, as well as three people on the ground.

1973: Indian Airlines Flight 440 crashed while on approach to Palam Airport, killing 48 of the 65 passengers and crew on board.

On 29 August 1978, Air India Flight 123, a Boeing 747-237B (registered VT-EBO) flying from Delhi to Frankfurt carrying 377 passengers and crew, aborted its take-off at 150 knots due to a No. 3 engine failure. Although the crew applied the brakes and deployed the thrust reversers, the plane veered off the runway onto soft ground, resulting in the collapse of the left-hand wing landing gear and substantial damage, as the No. 3 and No. 4 reversers were not effective. The No. 3 engine had failed after ingesting tyre pieces. The plane was repaired and returned to service.

1988: On 24 July 1988 at 01:24, an Air France Boeing 747 flying as flight AF187 from Delhi to Paris Charles de Gaulle, carrying 275 people (260 passengers and 15 crew), suffered an accident during take-off at Indira Gandhi International Airport. The copilot was the pilot flying. During the take-off roll the aircraft attained V1 speed (156 knots); 2.5 seconds later the No. 4 engine fire warning came on.
The copilot rejected the take-off at 172 knots, past the safe limit for the aircraft, which was close to its maximum take-off weight. The aircraft overran the runway, veered left at the end of it, slid, and struck lighting and radar equipment, causing the main gear to collapse and damaging the nose section and undercarriage. No fire was found in the No. 4 engine. There were no fatalities and only one minor injury as passengers evacuated the aircraft on slides. The aircraft was repaired over a period of six months on site at Delhi and put back into service.

1990: An Air India Boeing 747 flying the London–Delhi–Mumbai route and carrying 215 people (195 passengers and 20 crew) touched down at Indira Gandhi International Airport after a flight from London Heathrow Airport. On application of reverse thrust, a failure of the No. 1 engine pylon-to-wing attachment caused the engine to tilt nose down. Hot exhaust gases started a fire on the left wing. There were no casualties, but the aircraft was damaged beyond repair and written off.

1993: An Uzbekistan Airlines Tupolev Tu-154 that had been leased by Indian Airlines due to an ongoing pilot strike flipped over and caught fire while landing in bad weather. There were no fatalities, but the aircraft was destroyed by a post-crash fire.

1994: A Sahara Airlines Boeing 737-2R4C (registered VT-SIA) crashed while performing a training flight, killing all four people on board and one person on the ground. Wreckage struck an Aeroflot Ilyushin Il-86 (registered RA-86119) parked nearby, killing four people inside.

1995: Indian Airlines Flight 492 (IC 492), a Boeing 737-2A8 (registered VT-ECS), was damaged beyond repair when the aircraft overshot the runway at Delhi Airport due to pilot error, on its scheduled flight from Jaipur to Delhi.
1996: The airport was involved in the Charkhi Dadri mid-air collision, when a Saudia Boeing 747-100B climbing out after take-off collided with an incoming Kazakhstan Airlines Ilyushin Il-76 chartered by a fashion company, causing the deaths of all 349 people on board the two planes.

On 24 December 1999, Indian Airlines Flight 814, bound for Delhi, was hijacked. The plane was taken to Pakistan, Afghanistan and the UAE. After the turn of the millennium, the plane was allowed to return to Delhi. One passenger was killed.

On 17 December 2009, Air India One, a Boeing 747-400 (registered VT-EVA) operating as an executive flight for Prime Minister Manmohan Singh from Delhi to Copenhagen, was hit by a food delivery trolley shortly before it was scheduled to take off. The Prime Minister departed on a substitute Boeing 747-400 after a delay of three hours.

On 10 November 2016, Air India Flights 142 from Paris and 154 from Vienna, both Boeing 787-8 Dreamliners heading to Delhi, were nearly involved in a mid-air collision 12 nautical miles from the airport, due to conflicting instructions from TCAS and ATC. The incident prompted a DGCA and AAIB investigation, which concluded that the breach of separation between the two aircraft occurred due to incorrect label management, the wrong separation technique for sequencing arriving aircraft, and inadequate surveillance.

28 June 2024: A portion of the roof of Terminal 1 collapsed onto parked vehicles amid heavy rains in the early morning. One person was killed and eight were injured.
Technology
Asia
https://en.wikipedia.org/wiki/Poinsettia
Poinsettia
The poinsettia (Euphorbia pulcherrima) is a commercially important flowering plant species of the diverse spurge family Euphorbiaceae. Indigenous to Mexico and Central America, the poinsettia was first described by Europeans in 1834. It is particularly well known for its red and green foliage and is widely used in Christmas floral displays. It derives its common English name from Joel Roberts Poinsett, the first United States minister to Mexico, who is credited with introducing the plant to the US in the 1820s. Poinsettias are shrubs or small trees, with heights of . Though often stated to be highly toxic, the poinsettia is not dangerous to pets or children. Exposure to the plant, even consumption, most often results in no effect, though it can cause nausea, vomiting, or diarrhea. Wild poinsettias occur from Mexico to southern Guatemala, growing on mid-elevation, Pacific-facing slopes. One population in the Mexican state of Guerrero is much further inland, however, and is thought to be the ancestor of most cultivated populations. Wild poinsettia populations are highly fragmented, as their habitat is experiencing largely unregulated deforestation. They were cultivated by the Aztecs for use in traditional medicine. They became associated with the Christmas holiday and are popular seasonal decorations. Every year in the United States, approximately 70 million poinsettias of many cultivated varieties are sold in a six-week period. Many of these poinsettias are grown by the Paul Ecke Ranch, which serves half the worldwide market and 70 percent of the US market.

Taxonomy

The poinsettia was described as a new species in 1834 by the German scientist Johann Friedrich Klotzsch. Klotzsch credited Carl Ludwig Willdenow with the species name "pulcherrima", and the authority is given as Willd. ex Klotzsch. The holotype had been collected in Mexico during an 1803–1804 expedition by Alexander von Humboldt and Aimé Bonpland.
It was known by the common name "poinsettia" as early as 1836, derived from Joel Roberts Poinsett, a botanist and the first US Minister to Mexico. Possibly as early as 1826, Poinsett began sending poinsettias from Mexico back to his greenhouses in South Carolina. Before "poinsettia" came into use, the plant was known as "Mexican flame flower" or "painted leaf".

Description

Euphorbia pulcherrima is a shrub or small tree, typically reaching a height of . The plant bears dark green dentate leaves that measure in length. The colored bracts—which are normally flaming red, with cultivars in orange, pale green, cream, pink, white, or marbled—are often mistaken for flower petals because of their groupings and colors, but are actually leaves. The colors of the bracts are created through photoperiodism: the plants require darkness (at least fourteen hours at a time for 6–8 weeks in a row) to change color, and also require abundant light during the day for the brightest color. Semi-evergreen, they generally lose most of their leaves during winter. The flowers of the poinsettia are unassuming. They are grouped within the cyathia (small yellow structures found in the center of each leaf bunch, or false flowers). Nothing is known about pollination in wild poinsettias, though wasps are noted to occasionally visit the cyathia. All flowers in the Euphorbiaceae are unisexual (either male or female only), and they are often very small. In Euphorbia, the flowers are reduced even further and aggregated into an inflorescence or cluster of flowers.

Toxicity

Poinsettias are popularly, though incorrectly, said to be toxic to humans and other animals. This misconception was spread by a 1919 urban legend of a two-year-old child dying after consuming a poinsettia leaf. In 1944, the plant was included in H. R. Arnold's book Poisonous Plants of Hawaii on this premise.
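The photoperiod requirement described above (at least 14 hours of darkness per day, sustained for 6–8 weeks) can be expressed as a simple schedule check. A minimal sketch; the helper and its threshold constants are illustrative assumptions drawn from the figures in the text, not a horticultural model:

```python
# Thresholds taken from the article: >= 14 h darkness daily, for >= 6 weeks.
DARK_HOURS_NEEDED = 14
WEEKS_NEEDED = 6

def schedule_triggers_color(daily_dark_hours, weeks):
    """True if every day in the schedule offers enough uninterrupted
    darkness and the schedule is held for at least six weeks."""
    return weeks >= WEEKS_NEEDED and all(
        h >= DARK_HOURS_NEEDED for h in daily_dark_hours
    )

# A 10 h light / 14 h dark photoperiod held for 7 weeks qualifies:
print(schedule_triggers_color([14] * 7, weeks=7))        # True
# A single night with only 9 h of darkness breaks the requirement:
print(schedule_triggers_color([14] * 6 + [9], weeks=7))  # False
```

This mirrors growers' practice of blacking out greenhouses nightly in the weeks before Christmas to force bract coloring.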
Though Arnold later admitted that the story was hearsay and that poinsettias were not proven to be poisonous, the plant was thus thought deadly. In 1970 the US Food and Drug Administration published a newsletter stating erroneously that "one poinsettia leaf can kill a child", and in 1980 the plants were prohibited from nursing homes in a county in North Carolina due to this supposed toxicity. An attempt to determine a poisonous dose of poinsettia in rats failed, even after reaching experimental doses equivalent to consuming 500 leaves, or nearly of sap. Contact with any part of the plant by children or pets often has no effect, though it may cause nausea, diarrhea, or vomiting if swallowed. External exposure to the plant may result in a skin rash for some. A survey of more than 20,000 calls to the American Association of Poison Control Centers from 1985 to 1992 related to poinsettia exposure showed no fatalities. In 92.4% of calls there was no effect from exposure, and in 3.4% of calls there were minor effects, defined as "minimally bothersome". Similarly, a cat or dog's exposure to poinsettias rarely necessitates medical treatment. If the plant is ingested, mild drooling or vomiting can occur, or, rarely, diarrhea. In rare cases, exposure to the eye may result in eye irritation. Skin exposure to the sap may cause itchiness, redness, or swelling. It can induce asthma and allergic rhinitis in certain groups of people.

Chemical composition

Pulcherrol and pulcherryl acetate are among the components of its latex. Triterpenes are found in the aerial parts of the plant, including its latex and leaves. One such triterpenoid skeleton is being investigated for its anti-Alzheimer's disease bioactivity.

Range and habitat

The poinsettia occurs in North and Central America, from Mexico to southern Guatemala. Its range is about long, encompassing mid-elevation tropical dry forests. Most wild populations are on Pacific-facing slopes in steep canyons.
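To put the survey percentages above into absolute terms: the source states only that there were "more than 20,000" calls, so the arithmetic below uses 20,000 as an illustrative lower bound, not a figure reported by the survey itself.

```python
# Lower-bound arithmetic on the 1985-1992 poison-control survey figures.
calls = 20_000          # "more than 20,000" calls; 20,000 used as a floor
no_effect = calls * 0.924   # 92.4% of calls: no effect from exposure
minor = calls * 0.034       # 3.4% of calls: "minimally bothersome" effects

print(round(no_effect))  # at least 18480 calls with no effect
print(round(minor))      # at least 680 calls with minor effects
```

Either way, the survey recorded no fatalities, which is the article's central point about the plant's supposed toxicity.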
Populations were once found in rolling hill areas, though many have gone extinct. It has been hypothesized that the inaccessibility of the canyons may protect the wild populations from human disturbance. There is a somewhat anomalous population of wild poinsettias in the northern parts of the Mexican states of Guerrero and Oaxaca, much further inland in the hot and seasonally dry forests than the rest of the species' range. Genetic analyses showed that the wild populations in northern Guerrero are the likely ancestors of most cultivated poinsettias.

Conservation

The tropical dry forests where wild poinsettias grow experience largely unregulated deforestation, resulting in habitat loss. The species' natural habitat is thus highly fragmented, particularly near metropolitan areas such as Taxco. Population sizes are frequently very small, with as few as a dozen individuals; populations can reach several hundred individuals, but this is not typical. A conservation risk typical for species with wild and cultivated populations is contamination of the wild gene pool by hybridization with cultivated individuals. This has not been documented in wild poinsettias, though, as cultivars seldom flower and do not produce fruits. As of 2012, wild poinsettias were not protected by Mexican law.

In culture

Aztec people used the plant to produce red dye and as an antipyretic medication. In Nahuatl, the language of the Aztecs, the plant is called , meaning "flower that grows in residues or soil", or, literally, "excrement flower", because "birds would eat the seeds and deposit them somewhere, and so it seemed that the seeds would germinate and grow from bird droppings." Today it is known in Mexico and Guatemala as or simply , meaning "Christmas Eve flower". In Spain it is known as or , meaning "Easter flower". In Chile and Peru, the plant became known as the "crown of the Andes".
From the 17th century, friars of the Franciscan Christian religious order in Mexico included the plants in their Christmas celebrations. The star-shaped leaf pattern is said to symbolize the Star of Bethlehem, the red color represents the blood shed during the sacrifice of Jesus' crucifixion, and the white leaves represent the purity of Jesus. The use of the poinsettia during Christmastide is additionally related to a Christian folk story in Mexico about a poor girl named Pepita. Poinsettias are popular Christmas decorations in homes, churches, offices, and elsewhere across North America, as a result of an extensive marketing campaign by the Ecke family that began by shipping free poinsettias to television stations for use on-air. In the US, December 12 is National Poinsettia Day, marking the anniversary of Joel Roberts Poinsett's death.

Cultivation

The Aztecs were the first to cultivate poinsettias. Cultivation in the US began when diplomat Joel Roberts Poinsett sent some of the plants back to his greenhouses in South Carolina in the 1820s. Specific details about its spread from there are largely unverifiable, but it was exhibited at the Pennsylvania Horticultural Society's 1829 Philadelphia Flower Show by Colonel Robert Carr. Carr described it as "a new Euphorbia with bright scarlet bracts or floral leaves, presented to the Bartram Collection by Mr. Poinsett, United States Minister of Mexico." The poinsettia is the world's most economically important potted plant. Each year in the US, approximately 70 million poinsettias are sold in a period of six weeks, at a value of US$250 million. In Puerto Rico, where poinsettias are grown extensively in greenhouses, the industry is valued at $5 million annually. There are over 100 cultivated varieties of poinsettia that have been patented in the US.
To produce the extra axillary buds necessary for plants with multiple flowers, a phytoplasma infection—whose symptoms include the proliferation of axillary buds—is used. The discovery of the role phytoplasmas play in the growth of axillary buds is credited to Ing-Ming Lee of the USDA Agricultural Research Service.

American industry

Albert Ecke emigrated from Germany to Los Angeles in 1900, opening a dairy and orchard in the Eagle Rock area. He became intrigued by the plant and sold poinsettias from street stands. His son, Paul Ecke, developed the grafting technique, but it was the third generation of Eckes, Paul Ecke Jr., who was responsible for advancing the association between the plant and Christmas. Besides changing the market from mature plants shipped by rail to cuttings sent by air, he sent free plants to television stations for them to display on air from Thanksgiving to Christmas. He also appeared on television programs like The Tonight Show and Bob Hope's Christmas specials to promote the plants. Until the 1990s, the Ecke family, who had moved their operation to Encinitas, California, in 1923, had a virtual monopoly on poinsettias owing to a technique that made their plants much more attractive: they produced a fuller, more compact plant by grafting two varieties of poinsettia together. A poinsettia left to grow on its own will naturally take on an open, somewhat weedy look; the Eckes' technique made it possible to get every seedling to branch, resulting in a bushier plant. In the late 1980s, university researcher John Dole discovered the grafting method (grafting rarer densely-branched cultivars onto more common sparsely-branched cultivars), previously known only to the Eckes, and published it. This allowed competitors to flourish, particularly those using low-cost labor in Latin America.
The Ecke family's business, now led by Paul Ecke III, decided to stop producing plants in the US, but as of 2008 they still served about 70 percent of the domestic market and 50 percent of the worldwide market.

Diseases

Poinsettias are susceptible to several diseases, mostly fungal, but also bacterial and parasitic. Conditions that promote poinsettia propagation also favor certain diseases. Fungal diseases affecting greenhouse poinsettia operations include Pythium root rot, Rhizoctonia root and stem rot, black root rot, scab, powdery mildew, and Botrytis blight. Bacterial diseases include bacterial soft rot and bacterial canker, while a viral disease is Poinsettia mosaic virus. Infection by the poinsettia branch-inducing phytoplasma is actually desirable, as it keeps the plants shorter with more flowers; it is the first known phytoplasma with economically advantageous effects.
Biology and health sciences
Malpighiales
https://en.wikipedia.org/wiki/Tears
Tears
Tears are a clear liquid secreted by the lacrimal glands (tear glands) found in the eyes of all land mammals. Tears are made up of water, electrolytes, proteins, lipids, and mucins that form layers on the surface of the eyes. The different types of tears—basal, reflex, and emotional—vary significantly in composition. The functions of tears include lubricating the eyes (basal tears), removing irritants (reflex tears), and aiding the immune system. Tears also occur as part of the body's natural pain response. Emotional secretion of tears may serve a biological function by excreting stress-inducing hormones built up through times of emotional distress. Tears have symbolic significance among humans.

Physiology

Chemical composition

Tears are made up of three layers: lipid, aqueous, and mucous. Tears are composed of water, salts, antibodies, and lysozymes (antibacterial enzymes), though composition varies among tear types. The composition of tears caused by an emotional reaction differs from that of tears produced as a reaction to irritants such as onion fumes, dust, or allergens. Emotional tears contain higher concentrations of stress hormones such as adrenocorticotropic hormone and leucine enkephalin (a natural painkiller), which suggests that emotional tears play a biological role in balancing stress hormone levels.

Drainage of tear film

The lacrimal glands secrete lacrimal fluid, which flows through the main excretory ducts into the space between the eyeball and the lids. When the eyes blink, the lacrimal fluid is spread across the surface of the eye. Lacrimal fluid gathers in the lacrimal lake, which is found in the medial part of the eye. The lacrimal papilla is an elevation on the inner side of the eyelid, at the edge of the lacrimal lake. The lacrimal canaliculi open into the papilla; the opening of each canaliculus is the lacrimal punctum.
From the punctum, tears enter the lacrimal sac, then the nasolacrimal duct, and finally the nasal cavity. An excess of tears, as caused by strong emotion, can make the nose run. Quality of vision is affected by the stability of the tear film.

Types

There are three basic types of tears: basal, reflex and emotional.

Nictitating membrane

Some mammals, such as cats, camels, polar bears, seals and aardvarks, have a full translucent third eyelid called a nictitating membrane, while others have a vestigial nictitating membrane. The membrane works to protect and moisten the eye while maintaining visibility. It also contributes to the aqueous portion of the tear film and possibly immunoglobulins. Humans and some primates have a much smaller nictitating membrane; this may be because they do not capture prey or root vegetation with their teeth, so that there is no evolutionary advantage to a third eyelid.

Neurology

The trigeminal V1 (fifth cranial) nerve bears the sensory pathway of the tear reflexes. When the trigeminal nerve is cut, reflex tears stop, while emotional tears do not. The greater (superficial) petrosal nerve from cranial nerve VII provides autonomic innervation to the lacrimal gland and is responsible for the production of much of the aqueous portion of the tear film.

Human culture

In nearly all human cultures, crying is associated with tears trickling down the cheeks and accompanied by characteristic sobbing sounds. Emotional triggers are most often sadness and grief, but crying can also be triggered by anger, happiness, fear, laughter or humor, frustration, remorse, or other strong, intense emotions. Emotional tears can also be triggered by listening to music or by reading, watching or listening to various forms of media. Crying is often associated with babies and children.
Some cultures consider crying to be undignified and infantile, casting aspersions on those who cry publicly, except if it is due to the death of a close friend or relative. In most Western cultures, it is more socially acceptable for women and children to cry than men, reflecting masculine sex-role stereotypes. In some Latin regions, crying among men is more acceptable. There is evidence for an interpersonal function of crying as tears express a need for help and foster willingness to help in an observer. Some modern psychotherapy movements such as Re-evaluation Counseling encourage crying as beneficial to health and mental well-being. An insincere display of grief or dishonest remorse is sometimes called crocodile tears in reference to an Ancient Greek anecdote that crocodiles would pretend to weep while luring or devouring their prey. In addition, "crocodile tears syndrome" is a colloquialism for Bogorad's syndrome, an uncommon consequence of recovery from Bell's palsy in which faulty regeneration of the facial nerve causes people to shed tears while eating. Pathology Bogorad's syndrome Bogorad's syndrome, also known as "Crocodile Tears Syndrome", is an uncommon consequence of nerve regeneration subsequent to Bell's palsy or other damage to the facial nerve. Efferent fibers from the superior salivary nucleus become improperly connected to nerve axons projecting to the lacrimal glands, causing one to shed tears (lacrimate) on the side of the palsy during salivation while smelling foods or eating. It is presumed that this would cause salivation while crying due to the inverse improper connection of the lacrimal nucleus to the salivary glands, but this would be less noticeable. The condition was first described in 1926 by its namesake, Russian neuropathologist F. A. Bogorad, in an article titled "Syndrome of the Crocodile Tears" (alternatively, "The Symptom of the Crocodile Tears") that argued the tears were caused by the act of salivation. 
Keratoconjunctivitis sicca (dry eye) Keratoconjunctivitis sicca, known in the vernacular as dry eye, is a very common disorder of the tear film. Despite the eyes being dry, those affected can still experience watering of the eyes, which is, in fact, a response to irritation caused by the original tear film deficiency. Lack of Meibomian gland secretion can mean that the tears are not enveloped in a hydrophobic film coat, leading to tears spilling onto the face. Treatments for dry eyes, which compensate for the loss of tear film, include eye drops composed of methyl cellulose, carboxymethyl cellulose, or hemicellulose in strengths of either 0.5% or 1%, depending upon the severity of corneal drying. Familial dysautonomia Familial dysautonomia is a genetic condition that can be associated with a lack of overflow tears (alacrima) during emotional crying. Obstruction of the punctum, nasolacrimal canal, or nasolacrimal duct can cause even normal levels of basal tears to overflow onto the face (epiphora), giving the appearance of constant psychic tearing. This can have significant social consequences. Pseudobulbar affect Pseudobulbar affect (PBA) is a condition involving episodic uncontrollable laughter or crying. PBA mostly occurs in people with neurological injuries affecting how the brain controls emotions, and is believed to result from damage to the prefrontal cortex. Because PBA often involves crying, it can be mistaken for depression; however, PBA is neurological rather than psychological, and patients with PBA do not experience typical depression symptoms such as sleep disturbances or appetite loss.
https://en.wikipedia.org/wiki/Pudu
Pudu
The pudus (Mapudungun püdü or püdu) are two species of South American deer from the genus Pudu, and are the world's smallest deer. The chevrotains (mouse-deer; Tragulidae) are smaller, but they are not true deer. The name is a loanword from Mapudungun, the language of the indigenous Mapuche people of central Chile and south-western Argentina. The two species of pudus are the northern pudu (Pudu mephistophiles) from Venezuela, Colombia, Ecuador, and Peru, and the southern pudu (Pudu puda; sometimes incorrectly modified to Pudu pudu) from southern Chile and south-western Argentina. Pudus range in size from tall, and up to long. The southern pudu is classified as near threatened, while the northern pudu is classified as Data Deficient in the IUCN Red List. Taxonomy The genus Pudu was first erected by English naturalist John Edward Gray in 1850. Pudua was a Latinized version of the name proposed by Alfred Henry Garrod in 1877, but was ruled invalid. Pudus are classified in the New World deer subfamily Capreolinae within the deer family Cervidae. The term "pudú" itself is derived from the language of the Mapuche people of the Los Lagos Region of south-central Chile. Because they live on the slopes of the Andes Mountain Range, they are also known as the "Chilean mountain goat". Two similar species of pudú are recognised: the northern pudu and the southern pudu. Description The pudus are the world's smallest deer, with the southern pudu being slightly larger than the northern pudu. It has a stocky frame supported by short and slender legs. It is high at the shoulder and up to in length. Pudus normally weigh up to , but the highest recorded weight of a pudu is . Pudus have small, black eyes, black noses, and rounded ears with lengths of . Sexual dimorphism in the species includes an absence of antlers in females. Males have short, spiked antlers that are not forked as they are in most species of deer. The antlers, which are shed annually, can extend from in length and protrude from between the ears.
Also on the head are large preorbital glands. Pudus have small hooves, dewclaws, and short tails about in length when measured without hair. Coat coloration varies with season, sex, and individual genes. The fur is long and stiff, typically pressed close to the body, with a reddish-brown to dark-brown hue. The neck and shoulders of an aged pudu turn a dark gray-brown in the winter. Habitat and distribution The pudú inhabits temperate rainforests in South America, where the dense underbrush and bamboo thickets offer protection from predators. Southern Chile, south-west Argentina, Chiloé Island, and northwest South America are home to the deer. The northern pudú is found in the northern Andes of Colombia, Venezuela, Ecuador, and Peru, from above sea level. The southern species is found in the slope of the southern Andes from sea level to . The climate of the pudú's habitat is composed of two main seasons: a damp, moderate winter and an arid summer. Annual precipitation in these areas of Argentina and Chile ranges from . Behavior Social The pudú is a solitary animal whose behavior in the wild is largely unknown because of its secretive nature. Pudús are crepuscular, most active in the morning, late afternoon, and evening. Their home range generally extends about , much of which consists of crisscrossing pudú-trodden paths. Each pudú has its own home range, or territory. A single animal's territory is marked with sizable dung piles found on paths and near eating and resting areas. Large facial glands for scent communication allow correspondence with other pudú deer. Pudús do not interact socially, other than to mate. An easily frightened animal, the deer barks when in fear. Its fur bristles and the pudú shivers when angered. Predators of the pudús include the horned owl, Andean fox, Magellan fox, cougar, and other small cats. The pudú is a wary animal that moves slowly and stops often, smelling the air for scents of predators. 
Being a proficient climber, jumper, and sprinter, the deer flees in a zigzag path when being pursued. The lifespan of the pudús ranges from 8 to 10 years in the wild. The longest recorded lifespan is 15 years and 9 months. However, such longevity is rare and most pudús die at a much younger age, from a wide range of causes. Maternal neglect of newborns, as well as a wide range of diseases, can decrease the population. A popular rumor is that if alarmed to a high degree, pudús die from fear-induced cardiac complications. Diet The pudús are herbivorous, consuming vines, leaves from low trees, shrubs, succulent sprouts, herbs, ferns, blossoms, buds, tree bark, and fallen fruit. They can survive without drinking water for long periods due to the high water content of the succulent foliage in their diets. Pudús have various methods of obtaining the foliage they need. Their small stature and cautious nature create obstacles in attaining food. They stop often while searching for food to stand on their hind legs and smell the wind, detecting food scents. Females and fawns peel bark from saplings using their teeth, but mature males may use their spikelike antlers. The deer may use their front legs to press down on saplings until they snap or become low enough to the ground so they can reach the leaves. Forced to stand on their hind legs due to their small size, the deer climb branches and tree stumps to reach higher foliage. They bend bamboo shoots horizontally in order to walk on them and eat from higher branches. Reproduction Pudús are solitary and only come together for rut. Mating season is in the Southern Hemisphere autumn, from April to May. Pudú DNA is arranged into 70 chromosomes. To mate, the pudú male rests his chin on the female's back, then sniffs her rear before mounting her from behind, holding her with his fore legs. The gestation period ranges from 202 to 223 days (around 7 months) with the average being 210 days. 
A single offspring or sometimes twins are born in austral spring, from November to January. Newborns weigh with the average birth weight being . Newborns less than or more than die. Females and males weigh the same at birth. Fawns have reddish-brown fur and southern pudú fawns have white spots running the length of their backs. Young are weaned after 2 months. Females mature sexually in 6 months, while males mature in 8–12 months. Fawns are fully grown in 3 months, but may stay with their mothers for 8 to 12 months. Status and conservation The southern pudu is currently listed as near threatened on the IUCN Red List, mainly because of overhunting and habitat loss, while the northern pudu is currently classified as being 'Data deficient'. Pudu puda is listed in CITES Appendix I, and Pudu mephistophiles is listed in CITES Appendix II. The southern species is more easily maintained in captivity than the northern, though small populations of the northern formerly existed in zoos. More than 100 southern pudús are kept at Species360-registered institutions, with the vast majority in European and US zoos. Pudús are difficult to transport because they are easily overheated and stressed. Pudús are protected in various national parks; parks require resources to enforce protection of the deer. Efforts are being made to preserve the pudú species and prevent extinction. An international captive-breeding program for the southern pudú led by Universidad de Concepción in Chile has been started. Some deer have been bred in captivity and reintroduced into Nahuel Huapi National Park in Argentina. Reintroduction efforts include the use of radio collars for tracking. The Convention on International Trade in Endangered Species has banned the international trading of pudús. The Wildlife Conservation Society protects their natural habitat and works to recreate it for pudús in captivity.
Despite efforts made by the World Wildlife Fund, the size of the pudú population remains unknown, and threats persist despite various conservation efforts. Threats Pudús are threatened by the destruction of their rainforest habitat, as land is cleared for human development, cattle ranching, agriculture, logging, and exotic tree plantations. Habitat fragmentation and road accidents cause pudú deaths. They are taken from the wild as pets, as well as exported illegally. They are overhunted for food, often with specially trained hunting dogs. The recently introduced red deer compete with pudús for food. Domestic dogs prey upon pudús and transfer parasites through contact. Pudús are very susceptible to parasites such as bladder worms, lungworms, roundworms, and heartworms.
https://en.wikipedia.org/wiki/Reindeer
Reindeer
The reindeer or caribou (Rangifer tarandus) is a species of deer with circumpolar distribution, native to Arctic, subarctic, tundra, boreal, and mountainous regions of Northern Europe, Siberia, and North America. It is the only representative of the genus Rangifer. More recent studies suggest the splitting of reindeer and caribou into six distinct species over their range. Reindeer occur in both migratory and sedentary populations, and their herd sizes vary greatly in different regions. The tundra subspecies are adapted for extreme cold, and some are adapted for long-distance migration. Reindeer vary greatly in size and color from the smallest, the Svalbard reindeer (R. (t.) platyrhynchus), to the largest, Osborn's caribou (R. t. osborni). Although reindeer are quite numerous, some species and subspecies are in decline and considered vulnerable. They are unique among deer (Cervidae) in that females may have antlers, although the prevalence of antlered females varies by subspecies. Reindeer are the only successfully semi-domesticated deer on a large scale in the world. Both wild and domestic reindeer have been an important source of food, clothing, and shelter for Arctic people from prehistorical times. They are still herded and hunted today. In some traditional Christmas legends, Santa Claus's reindeer pull a sleigh through the night sky to help Santa Claus deliver gifts to good children on Christmas Eve. Description Names follow international convention before the recent revision (see Reindeer#Taxonomy below). Reindeer / caribou (Rangifer) vary in size from the smallest, the Svalbard reindeer (R. (t.) platyrhynchus), to the largest, Osborn's caribou (R. t. osborni). They also vary in coat color and antler architecture. The North American range of caribou extends from Alaska through the Yukon, the Northwest Territories and Nunavut throughout the tundra, taiga (boreal forest) and south through the Canadian Rocky Mountains. 
Of the eight subspecies classified by Harding (2022) into the Arctic caribou (R. arcticus), the migratory mainland barren-ground caribou of Arctic Alaska and Northern Canada (R. t. arcticus), summer in tundra and winter in taiga, a transitional forest zone between boreal forest and tundra; the nomadic Peary caribou (R. t. pearyi) lives in the polar desert of the high Arctic Archipelago and Grant's caribou (R. t. granti also called the Porcupine caribou) lives in the western end of the Alaska Peninsula and the adjacent islands; the other four subspecies, Osborn's caribou (R. t. osborni), Stone's caribou (R. t. stonei), the Rocky Mountain caribou (R. t. fortidens) and the Selkirk Mountains caribou (R. t. montanus) are all montane. The extinct insular Queen Charlotte Islands caribou (R. t. dawsoni), lived on Graham Island in Haida Gwaii (formerly known as the Queen Charlotte Islands). The boreal woodland caribou (R. t. caribou), lives in the boreal forest of northeastern Canada: the Labrador or Ungava caribou of northern Quebec and northern Labrador (R. t. caboti), and the Newfoundland caribou of Newfoundland (R. t. terranovae) have been found to be genetically in the woodland caribou lineage. In Eurasia, both wild and domestic reindeer are distributed across the tundra and into the taiga. Eurasian mountain reindeer (R. t. tarandus) are close to North American caribou genetically and visually, but with sufficient differences to warrant division into two species. The unique, insular Svalbard reindeer inhabits the Svalbard Archipelago. The Finnish forest reindeer (R. t. fennicus) is spottily distributed in the coniferous forest zones from Finland to east of Lake Baikal: the Siberian forest reindeer (R. t. valentinae, formerly called the Busk Mountains reindeer (R. t. buskensis) by American taxonomists) occupies the Altai and Ural Mountains. 
Male ("bull") and female ("cow") reindeer can grow antlers annually, although the proportion of females that grow antlers varies greatly between populations. Antlers are typically larger on males. Antler architecture varies by species and subspecies and, together with pelage differences, can often be used to distinguish between species and subspecies (see illustrations in Geist, 1991 and Geist, 1998). Status About 25,000 mountain reindeer (R. t. tarandus) still live in the mountains of Norway, notably in Hardangervidda. In Sweden there are approximately 250,000 reindeer in herds managed by Sámi villages. Russia manages 19 herds of Siberian tundra reindeer (R. t. sibiricus) that total about 940,000. The Taimyr herd of Siberian tundra reindeer is the largest wild reindeer herd in the world, varying between 400,000 and 1,000,000; it is a metapopulation consisting of several subpopulations — some of which are phenotypically different — with different migration routes and calving areas. The Kamchatkan reindeer (R. t. phylarchus), a forest subspecies, formerly included reindeer west of the Sea of Okhotsk which, however, are indistinguishable genetically from the Jano-Indigirka, East Siberian taiga and Chukotka populations of R. t. sibiricus. Siberian tundra reindeer herds have been in decline but are stable or increasing since 2000. Insular (island) reindeer, classified as the Novaya Zemlya reindeer (R. t. pearsoni) occupy several island groups: the Novaya Zemlya Archipelago (about 5,000 animals at last count, but most of these are either domestic reindeer or domestic-wild hybrids), the New Siberia Archipelago (about 10,000 to 15,000), and Wrangel Island (200 to 300 feral domestic reindeer). What was once the second largest herd is the migratory Labrador caribou (R. t. caboti) George River herd in Canada, with former variations between 28,000 and 385,000. 
As of January 2018, there are fewer than 9,000 animals estimated to be left in the George River herd, as reported by the Canadian Broadcasting Corporation. The New York Times reported in April 2018 of the disappearance of the only herd of southern mountain woodland caribou in the contiguous United States, with an expert calling it "functionally extinct" after the herd's size dwindled to a mere three animals. After the last individual, a female, was translocated to a wildlife rehabilitation center in Canada, caribou were considered extirpated from the contiguous United States. The Committee on the Status of Endangered Wildlife in Canada (COSEWIC) classified both the Southern Mountain population DU9 (R. t. montanus) and the Central Mountain population DU8 (R. t. fortidens) as Endangered and the Northern Mountain population DU7 (R. t. osborni) as Threatened. Some species and subspecies are rare and three subspecies have already become extinct: the Queen Charlotte Islands caribou (R. t. dawsoni) from western Canada, the Sakhalin reindeer (R. t. setoni) from Sakhalin and the East Greenland caribou from eastern Greenland, although some authorities believe that the latter, R. t. eogroenlandicus Degerbøl, 1957, is a junior synonym of the Peary caribou. Historically, the range of the sedentary boreal woodland caribou covered more than half of Canada and into the northern states of the contiguous United States from Maine to Washington. Boreal woodland caribou have disappeared from most of their original southern range and were designated as Threatened in 2002 by the Committee on the Status of Endangered Wildlife in Canada (COSEWIC). Environment and Climate Change Canada reported in 2011 that there were approximately 34,000 boreal woodland caribou in 51 ranges remaining in Canada (Environment Canada, 2011b), although those numbers included montane populations classified by Harding (2022) into subspecies of the Arctic caribou. 
Siberian tundra reindeer herds are also in decline, and Rangifer as a whole is considered to be Vulnerable by the International Union for Conservation of Nature (IUCN). Naming Charles Hamilton Smith is credited with the name Rangifer for the reindeer genus, which Albertus Magnus used in his , fol. Liber 22, Cap. 268: "Dicitur Rangyfer quasi ramifer". This word may go back to the Sámi word . Carl Linnaeus chose the word tarandus as the specific epithet, making reference to Ulisse Aldrovandi's fol. 859–863, Cap. 30: De Tarando (1621). However, Aldrovandi and Conrad Gessner thought that rangifer and tarandus were two separate animals. In any case, the tarandos name goes back to Aristotle and Theophrastus. The use of the terms reindeer and caribou for essentially the same animal can cause confusion, but the IUCN clearly delineates the issue: "Reindeer is the European name for the species of Rangifer, while in North America, Rangifer species are known as Caribou." The word reindeer is an anglicized version of the Old Norse words ("reindeer") and ("animal") and has nothing to do with reins. The word caribou comes through French, from the Mi'kmaq , meaning "snow shoveler", and refers to its habit of pawing through the snow for food. Because of its importance to many cultures, Rangifer and some of its species and subspecies have names in many languages. Inuvialuit of the western Canadian Arctic and Inuit of the eastern Canadian Arctic, who speak different dialects of the Inuit languages, both call the barren-ground caribou . The Wekʼèezhìi (Tłı̨chǫ) people, a Dene (Athapascan) group, call the Arctic caribou and the boreal woodland caribou . The Gwichʼin (also a Dene group) have over 24 distinct caribou-related words. Reindeer are also called by the Greenlandic Inuit and , sometimes , by the Icelanders. Evolution The "glacial-interglacial cycles of the upper Pleistocene had a major influence on the evolution" of Rangifer species and other Arctic and sub-Arctic species.
Isolation of tundra-adapted species Rangifer in Last Glacial Maximum refugia during the last glacial – the Wisconsin glaciation in North America and the Weichselian glaciation in Eurasia – shaped "intraspecific genetic variability" particularly between the North American and Eurasian parts of the Arctic. Reindeer / caribou (Rangifer) are in the subfamily Odocoileinae, along with roe deer (Capreolus), Eurasian elk / moose (Alces), and water deer (Hydropotes). These antlered cervids split from the horned ruminants Bos (cattle and yaks), Ovis (sheep) and Capra (goats) about 36 million years ago. The Eurasian clade of Odocoileinae (Capreolini, Hydropotini and Alcini) split from the New World tribes of Capreolinae (Odocoileini and Rangiferini) in the Late Miocene, 8.7–9.6 million years ago. Rangifer "evolved as a mountain deer, ...exploiting the subalpine and alpine meadows...". Rangifer originated in the Late Pliocene and diversified in the Early Pleistocene, a 2+ million-year period of multiple glacier advances and retreats. Several named Rangifer fossils in Eurasia and North America predate the evolution of modern tundra reindeer. Archaeologists distinguish "modern" tundra reindeer and barren-ground caribou from primitive forms – living and extinct – that did not have adaptations to extreme cold and to long-distance migration. They include a broad, high muzzle to increase the volume of the nasal cavity to warm and moisten the air before it enters the throat and lungs, bez tines set close to the brow tines, distinctive coat patterns, short legs and other adaptations for running long distances, and multiple behaviors suited to tundra, but not to forest (such as synchronized calving and aggregation during rutting and post-calving). As well, many genes, including those for vitamin D metabolism, fat metabolism, retinal development, circadian rhythm, and tolerance to cold temperatures, are found in tundra caribou that are lacking or rudimentary in forest types. 
For this reason, forest-adapted reindeer and caribou could not survive in tundra or polar deserts. The oldest undoubted Rangifer fossil is from Omsk, Russia, dated to 2.1–1.8 Ma. The oldest North American Rangifer fossil is from the Yukon, 1.6 million years before present (BP). A fossil skull fragment from Süßenborn, Germany, R. arcticus stadelmanni (which is probably misnamed), with "rather thin and cylinder-shaped" antlers, dates to the Middle Pleistocene (Günz) Period, 680,000–620,000 BP. Rangifer fossils become increasingly frequent in circumpolar deposits beginning with the Riss glaciations, the second youngest of the Pleistocene Epoch, roughly 300,000–130,000 BP. By the Würm period (110,000–70,000 to 12,000–10,000 BP), its European range was extensive, supplying a major food source for prehistoric Europeans. North American fossils outside of Beringia that predate the Last Glacial Maximum (LGM) are of Rancholabrean age (240,000–11,000 years BP) and occur along the fringes of the Rocky Mountain and Laurentide ice sheets as far south as northern Alabama, and in Sangamonian deposits (~100,000 years BP) from western Canada. An R. t. pearyi-sized caribou occupied Greenland before and after the LGM and persisted in a relict enclave in northeastern Greenland until it went extinct about 1900 (see discussion of R. t. eogroenlandicus below). Archaeological excavations showed that larger barren-ground-sized caribou appeared in western Greenland about 4,000 years ago. The late Valerius Geist (1998) dates the Eurasian reindeer radiation to the large Riss glaciation (347,000 to 128,000 years ago), based on the Norwegian-Svalbard split 225,000 years ago. Finnish forest reindeer (R. t. fennicus) likely evolved from Cervus [Rangifer] geuttardi Desmarest, 1822, a reindeer that adapted to forest habitats in Eastern Europe as forests expanded during an interglacial period before the LGM (the Würmian or Weichsel glaciation).
The fossil species geuttardi was later replaced by R. constantini, which was adapted for grasslands, in a second immigration 19,000–20,000 years ago when the LGM turned its forest habitats into tundra, while fennicus survived in isolation in southwestern Europe. R. constantini was then replaced by modern tundra / barren-ground caribou adapted to extreme cold, probably in Beringia, before dispersing west (R. t. tarandus in the Scandinavian mountains and R. t. sibiricus across Siberia) and east (R. t. arcticus in the North American Barrenlands) when rising seas isolated them. Likewise in North America, DNA analysis shows that woodland caribou (R. caribou) diverged from primitive ancestors of tundra / barren-ground caribou not during the LGM, 26,000–19,000 years ago, as previously assumed, but in the Middle Pleistocene around 357,000 years ago. At that time, modern tundra caribou had not even evolved. Woodland caribou are likely more closely related to extinct North American forest caribou than to barren-ground caribou. For example, the extinct caribou Torontoceros [Rangifer] hypogaeus had features (robust and short pedicles, smooth antler surface, and high position of the second tine) that relate it to forest caribou. Humans started hunting reindeer in the Mesolithic and Neolithic periods, and humans are today the main predator in many areas. Norway and Greenland have unbroken traditions of hunting wild reindeer from the Last Glacial Period until the present day. In the non-forested mountains of central Norway, such as Jotunheimen, it is still possible to find remains of stone-built trapping pits, guiding fences and bow rests, built especially for hunting reindeer. These can, with some certainty, be dated to the Migration Period, although it is not unlikely that they have been in use since the Stone Age. Cave paintings by ancient Europeans depict both tundra and forest types of reindeer.
A 2022 study of ancient environmental DNA from the Early Pleistocene (2 million years ago) Kap København Formation of northern Greenland identified preserved DNA fragments of Rangifer, assessed as basal but potentially ancestral to modern reindeer. This suggests that reindeer have inhabited Greenland since at least the Early Pleistocene. Around this time, northern Greenland was warmer than during the Holocene, with a boreal forest hosting a species assemblage with no modern analogue. These are among the oldest DNA fragments ever sequenced. Taxonomy Naming and research on museum collections Carl Linnaeus in 1758 named the Eurasian tundra species Cervus tarandus, the genus Rangifer being credited to Smith, 1827. Rangifer has had a convoluted history because of the similarity in antler architecture (brow tines asymmetrical and often palmate, bez tines, a back tine sometimes branched, and branched at the distal end, often palmate). Because of individual variability, early taxonomists were unable to discern consistent patterns among populations, nor could they, examining collections in Europe, appreciate the difference in habitats and the differing function they imposed on antler architecture. Comparative morphometrics, the measurement of skulls, is often seen as more objective than description of differences of color or antler patterns, but actually confounds genetic variance with epistatic and statistical variance as well as compounded environment-based variance. For example, woodland caribou males, rutting in boreal forest where only a few females can be found, collect harems and defend them against other males, for which they have short, straight, strong, much-branched antlers, beams flattened in cross-section, designed for combat — and not too large, so as not to impede them in forested winter ranges. By contrast, modern tundra caribou (see Evolution above) have synchronized calving as a predator-avoidance strategy, which requires large rutting aggregations.
Males cannot defend a harem because, while a male is busy fighting, the females disappear into the mass of the herd. Males therefore tend individual females; their fights are infrequent and brief. Their antlers are thin, with beams round in cross-section that sweep back and then forward to a cluster of branches at the top; these are designed more for visual stimulation of the females. Their bez tines are set low, just above the brow tine, which is vertically flattened to protect the eyes while the buck "threshes" low brush, a courtship display. The low bez tines help the wide, flat brow tines dig craters in the hard-packed tundra snow for forage, for which reason brow tines are often called "shovels" in North America and "ice tines" in Europe. The differences in antler architecture reflect fundamental differences in ecology and behavior, and in turn deep divisions in ancestry that were not apparent to the early taxonomists. Similarly, working on museum collections where skins were often faded and in poor states of preservation, early taxonomists could not readily perceive differences in coat patterns that are consistent within a subspecies, but variable among them. Geist calls these "nuptial" characteristics: sexually selected characters that are highly conserved and diagnostic among subspecies. Biological exploration expeditions Towards the end of the 19th century, national museums began sending out biological exploration expeditions, and collections accumulated. Taxonomists, usually working for the museums, began naming subspecies more rigorously, based on statistical differences in detailed cranial, dental and skeletal measurements rather than on antlers and pelage, supplemented by better knowledge of differences in ecology and behavior.
From 1898 to 1937, mammalogists named 12 new species (other than barren-ground and woodland, which had been named earlier) of caribou in Canada and Alaska, and three new species and nine new subspecies in Eurasia, each properly described according to the evolving rules of zoological nomenclature, with type localities designated and type specimens deposited in museums (see table in Species and subspecies below). Reclassification In the mid-20th century, as definitions of "species" evolved, mammalogists in Europe and North America made all Rangifer species conspecific with R. tarandus, and synonymized most of the subspecies. Alexander William Francis Banfield's often-cited A Revision of the Reindeer and Caribou, Genus Rangifer (1961), eliminated R. t. caboti (the Labrador caribou), R. t. osborni (Osborn's caribou — from British Columbia) and R. t. terranovae (the Newfoundland caribou) as invalid and included only barren-ground caribou, renamed as R. t. groenlandicus (formerly R. arcticus) and woodland caribou as R. t. caribou. However, Banfield made multiple errors, eliciting a scathing review by Ian McTaggart-Cowan in 1962. Most authorities continued to consider all or most subspecies valid; some were quite distinct. In his chapter in the authoritative 2005 reference work Mammal Species of the World, referenced by the American Society of Mammalogists, English zoologist Peter Grubb agreed with Valerius Geist, a specialist on large mammals, that these subspecies were valid (i.e., before the recent revision): In North America, R. t. caboti, R. t. caribou, R. t. dawsoni, R. t. groenlandicus, R. t. osborni, R. t. pearyi, and R. t. terranovae; and in Eurasia, R. t. tarandus, R. t. buskensis (called R. t. valentinae in Europe; see below), R. t. phylarchus, R. t. pearsoni, R. t. sibiricus and R. t. platyrhynchus. These subspecies were retained in the 2011 replacement work Handbook of the Mammals of the World Vol. 2: Hoofed Mammals. Most Russian authors also recognized R. t. 
angustirostris, a forest reindeer from east of Lake Baikal. However, since 1991, many genetic studies have revealed deep divergence between modern tundra reindeer and woodland caribou. Geist (2007) and others continued arguing that the woodland caribou was incorrectly classified, noting that "true woodland caribou, the uniformly dark, small-maned type with the frontally emphasized, flat-beamed antlers", is "scattered thinly along the southern rim of North American caribou distribution". He affirms that the "true woodland caribou is very rare, in very great difficulties and requires the most urgent of attention."

Ecotypes

In 2011, noting that the former classifications of Rangifer tarandus (whether based on the prevailing subspecies taxonomy, on ecotype designations, or on natural population groupings) failed to capture "the variability of caribou across their range in Canada" needed for effective subspecies conservation and management, COSEWIC developed Designatable Unit (DU) attribution, an adaptation of "evolutionarily significant units". The 12 designatable units for caribou in Canada (that is, excluding Alaska and Greenland), based on ecology, behavior and, importantly, genetics (but excluding morphology and archaeology), essentially followed the previously named subspecies distributions, without naming them as such, plus some ecotypes. Ecotypes are not phylogenetically based and cannot substitute for taxonomy.

Genetic, molecular, and archaeological evidence

Meanwhile, genetic data continued to accumulate, revealing sufficiently deep divisions to easily separate Rangifer back into six previously named species and to resurrect several previously named subspecies. Molecular data showed that the Greenland caribou (R. t. groenlandicus) and the Svalbard reindeer (R. t. platyrhynchus), although not closely related to each other, were the most genetically divergent among Rangifer clades; that modern (see Evolution above) Eurasian tundra reindeer (R. t. tarandus and R. t.
sibiricus) and North American barren-ground caribou (R. t. arcticus), although sharing ancestry, were separable at the subspecies level; that Finnish forest reindeer (R. t. fennicus) clustered well apart from both wild and domestic tundra reindeer; and that boreal woodland caribou (R. t. caribou) were separable from all others. Meanwhile, archaeological evidence was accumulating that Eurasian forest reindeer descended from an extinct forest-adapted reindeer and not from tundra reindeer (see Evolution above); since they do not share a direct common ancestor, they cannot be conspecific. Similarly, woodland caribou diverged from the ancestors of Arctic caribou before modern barren-ground caribou had evolved, and were more likely related to extinct North American forest reindeer (see Evolution above). Lacking a direct shared ancestor, barren-ground and woodland caribou cannot be conspecific. Molecular data also revealed that the four western Canadian montane ecotypes are not woodland caribou: they share a common ancestor with modern barren-ground caribou / tundra reindeer, but distantly, having diverged more than 60,000 years ago, before the modern ecotypes had evolved their cold- and darkness-adapted physiologies and mass-migration and aggregation behaviors (see Evolution above). Before Banfield (1961), taxonomists using cranial, dental and skeletal measurements had unequivocally allied these western montane ecotypes with barren-ground caribou, naming them (as in Osgood 1909, Murie 1935 and Anderson 1946, among others) R. t. stonei, R. t. montanus, R. t. fortidens and R. t. osborni, respectively, and this phylogeny was confirmed by genetic analysis.

Novel genetics-based clades

DNA also revealed three unnamed clades that, based on genetic distance, genetic divergence and shared vs.
private haplotypes and alleles, together with ecological and behavioral differences, may justify separation at the subspecies level: the Atlantic-Gaspésie caribou (COSEWIC DU11), an eastern montane ecotype of the boreal woodland caribou, and the Baffin Island caribou. None of these clades has yet been formally described or named. Jenkins et al. (2012) said that "[Baffin Island] caribou are unique compared to other Barrenground herds, as they do not overwinter in forested habitat, nor do all caribou undertake long seasonal migrations to calving areas." It also shares a mtDNA haplotype with Labrador caribou, in the North American lineage (i.e., woodland caribou). Røed et al. (1991) had noted:

Among Baffin Island caribou the TFL2 allele was the most common allele (p = 0.521), while this allele was absent, or present in very low frequencies, in other caribou populations (Table 1), including the Canadian barren-ground caribou from the Beverly herd. A large genetic difference between Baffin Island caribou and the Beverly herd was also indicated by eight alleles found in the Beverly herd which were absent from the Baffin Island samples.

Jenkins et al. (2018) also reported genetic distinctiveness of Baffin Island caribou from all other barren-ground caribou; its genetic signature was not found on the mainland or on other islands; nor were Beverly herd (the nearest mainland barren-ground caribou) alleles present in Baffin Island caribou, evidence of reproductive isolation. These advances in Rangifer genetics were brought together with previous morphology-based descriptions, ecology, behavior and archaeology to propose a new revision of the genus.
Species and subspecies

Abbreviations:
AMNH - American Museum of Natural History
BCPM - British Columbia Provincial Museum (= RBCM, the Royal British Columbia Museum)
NHMUK - British Museum (Natural History) (originally the BMNH)
DMNH - Denver Museum of Natural History
MCZ - Museum of Comparative Zoology
MSI - Museum of the Smithsonian Institution
NMC - National Museum of Canada (originally the CGS Canadian Geological Survey Museum, now the CMN Canadian Museum of Nature)
NR - Naturhistoriska Riksmuseet
RSMNH - Royal Swedish Museum of Natural History
USNM - United States National Museum
ZMASL - Zoological Museum of the Zoological Institute of the Russian Academy of Sciences (formerly the Zoological Museum of the Academy of Sciences), Leningrad

The table above includes, as per the recent revision, R. t. caboti (the Labrador caribou, the Eastern Migratory population DU4) and R. t. terranovae (the Newfoundland caribou, the Newfoundland population DU5), which molecular analyses have shown to be of North American (i.e., woodland caribou) lineage; and four mountain ecotypes now known to be of distant Beringian-Eurasian lineage (see Taxonomy above). The scientific name Tarandus rangifer buskensis Millais, 1915 (the Busk Mountains reindeer) was selected as the senior synonym of R. t. valentinae Flerov, 1933, in Mammal Species of the World, but Russian authors do not recognize Millais, and Millais' articles in a hunting travelogue, The Gun at Home and Abroad, seem short of a taxonomic authority. The scientific name groenlandicus is fraught with problems. Edwards (1743) illustrated and claimed to have seen a male specimen ("head of perfect horns...") from Greenland and said that a Captain Craycott had brought a live pair from Greenland to England in 1738. He named it Capra groenlandicus, the Greenland reindeer. Linnaeus, in the 12th edition of Systema naturae, gave grœnlandicus as a synonym for Cervus tarandus.
Borowski disagreed (and again changed the spelling), saying Cervus grönlandicus was morphologically distinct from Eurasian tundra reindeer. Baird placed it under the genus Rangifer as R. grœnlandicus. It went back and forth as a full species, a subspecies of the barren-ground caribou (R. arcticus), or a subspecies of the tundra reindeer (R. tarandus), but always as the Greenland reindeer / caribou. Taxonomists consistently documented morphological differences between Greenland and other caribou / reindeer in cranial measurements, dentition, antler architecture, etc. Then Banfield (1961), in his famously flawed revision, gave the name groenlandicus to all the barren-ground caribou in North America, Greenland included, because groenlandicus pre-dates Richardson's R. arcticus. However, because genetic data show the Greenland caribou to be the most distantly related of any caribou to all the others (genetic distance FST = 44%, whereas most cervid (deer family) species have a genetic distance of 2% to 5%), as well as because of behavioral and morphological differences, a recent revision returned it to species status as R. groenlandicus. Although it has been assumed that the larger caribou that appeared in Greenland 4,000 years ago originated from Baffin Island (itself unique; see Taxonomy above), a reconstruction of LGM glacial retreat and caribou advance (Yannic et al. 2013) shows colonization by NAL lineage caribou more likely. Their PCA and tree diagrams show Greenland caribou clustering outside of the Beringian-Eurasian lineage. The scientific name R. t. granti has a very interesting history. Allen (1902) named it as a distinct species, R. granti, from the "western end of Alaska Peninsula, opposite Popoff Island", noting that:

Rangifer granti is a representative of the Barren Ground group of Caribou, which includes R. arcticus of the Arctic Coast and R. grœnlandicus of Greenland. It is not closely related to R.
stonei of the Kenai Peninsula, from which it differs not only in its very much smaller size, but in important cranial characters and in coloration. ...The external and cranial differences between R. granti and the various forms of the Woodland Caribou are so great in almost every respect that no detailed comparison is necessary. ...According to Mr. Stone, Rangifer granti inhabits the "barren land of Alaska Peninsula, ranging well up into the mountains in summer, but descending to the lower levels in winter, generally feeding on the low flat lands near the coast and in the foothills... As regards cranial characters no comparison is necessary with R. montanus or with any of the woodland forms."

Osgood and Murie (1935), agreeing with granti's close relationship with the barren-ground caribou, brought it under R. arcticus as a subspecies, R. t. granti. Anderson (1946) and Banfield (1961), based on statistical analysis of cranial, dental and other characters, agreed. But Banfield (1961) also synonymized Alaska's large R. stonei with other mountain caribou of British Columbia and the Yukon as invalid subspecies of woodland caribou, then R. t. caribou. This left the small, migratory barren-ground caribou of Alaska and the Yukon, including the Porcupine caribou herd, without a name, which Banfield rectified in his 1974 Mammals of Canada by extending to them the name "granti". The late Valerius Geist (1998), in the only error in his whole illustrious career, re-analyzed Banfield's data with additional specimens found in an unpublished report he cites as "Skal, 1982", but was "not able to find diagnostic features that could segregate this form from the western barren ground type." But Skal (1982) had included specimens from the eastern end of the Alaska Peninsula and the Kenai Peninsula, the range of the larger Stone's caribou. Later, geneticists comparing barren-ground caribou of Alaska with those of mainland Canada found little difference, and they all became the former R. t.
groenlandicus (now R. t. arcticus). R. t. granti was lost in the oblivion of invalid taxonomy until Alaskan researchers sampled some small, pale caribou from the western end of the Alaska Peninsula, their range enclosing the type locality designated by Allen (1902), and found them to be genetically distinct from all other caribou in Alaska. Thus, granti was rediscovered, its range restricted to that originally described. Stone's caribou (R. t. stonei), a large montane type, was described from the Kenai Peninsula (where, apparently, it was never common except in years of great abundance), the eastern end of the Alaska Peninsula, and mountains throughout southern and eastern Alaska. It was placed under R. arcticus as a subspecies, R. t. stonei, and later synonymized as noted above. The same genetic analyses mentioned above for R. t. granti resulted in resurrecting R. t. stonei as well. The Sakhalin reindeer (R. t. setoni), endemic to Sakhalin, was described as Rangifer tarandus setoni Flerov, 1933, but Banfield (1961) brought it under R. t. fennicus as a junior synonym. The wild reindeer on the island are apparently extinct, having been replaced by domestic reindeer. Some of the Rangifer species and subspecies may be further divided by ecotype depending on several behavioral factors: predominant habitat use (northern, tundra, mountain, forest, boreal forest, forest-dwelling, woodland, woodland (boreal), woodland (migratory) or woodland (mountain)), spacing (dispersed or aggregated) and migration patterns (sedentary or migratory). North American examples of this are the Torngat Mountain population DU10, an ecotype of R. t. caboti; a recently discovered and unnamed clade between the Mackenzie River and Great Bear Lake, of Beringian-Eurasian lineage, an ecotype of R. t. osborni; the Atlantic-Gaspésie population DU11, an eastern montane ecotype of the boreal woodland caribou (R. t. caribou); the Baffin Island caribou, an ecotype of the barren-ground caribou (R. t.
arcticus); and the Dolphin-Union "herd", another ecotype of R. t. arcticus. The last three of these likely qualify as subspecies, but they have not yet been formally described or named.

Physical characteristics

Naming in this and following sections follows the taxonomy in the authoritative 2011 reference work Handbook of the Mammals of the World Vol. 2: Hoofed Mammals.

Antlers

In most cervid species, only males grow antlers; the reindeer is the only cervid species in which females also normally grow them. Androgens play an essential role in the antler formation of cervids. The antlerogenic genes in reindeer are more sensitive to androgens than those of other cervids. There is considerable variation among species and subspecies in the size of the antlers (e.g., they are rather small and spindly in the northernmost species and subspecies), but on average the bull's antlers are the second largest of any extant deer, after those of the male moose. In the largest subspecies, the antlers of large bulls can range up to in width and in beam length. Reindeer have the largest antlers relative to body size among living deer species. Antler size, measured in number of points, reflects the nutritional status of the reindeer and climate variation of its environment. The number of points on male reindeer increases from birth to 5 years of age and remains relatively constant from then on. "In male caribou, antler mass (but not the number of tines) varies in concert with body mass." While antlers of male woodland caribou are typically smaller than those of male barren-ground caribou, they can be over across. They are flattened in cross-section, compact and relatively dense. Geist describes them as frontally emphasized, flat-beamed antlers. Woodland caribou antlers are thicker and broader than those of the barren-ground caribou, and their legs and heads are longer. Quebec-Labrador male caribou antlers can be significantly larger and wider than those of other woodland caribou.
Central barren-ground male caribou antlers are perhaps the most diverse in configuration and can grow to be very high and wide. Osborn's caribou antlers are typically the most massive, with the largest circumference measurements. The antlers' main beams begin at the brow, "extending posterior over the shoulders and bowing so that the tips point forward. The prominent, palmate brow tines extend forward, over the face." The antlers typically have two separate groups of points, lower and upper. Antlers begin to grow on male reindeer in March or April and on female reindeer in May or June. This process is called antlerogenesis. Antlers grow very quickly every year on the bulls. As the antlers grow, they are covered in thick velvet, a highly vascularised skin filled with blood vessels and spongy in texture. This velvet is dark chocolate brown on the barren-ground caribou and the boreal woodland caribou, and slate-grey on Peary caribou and the Dolphin-Union caribou herd. Velvet lumps in March can develop into a rack measuring more than a in length by August. When the antler is fully grown and hardened, the velvet is shed or rubbed off. To Inuit, for whom the caribou is a "culturally important keystone species", the months are named after landmarks in the caribou life cycle. For example, amiraijaut in the Igloolik region is "when velvet falls off caribou antlers." Male reindeer use their antlers to compete with other males during the mating season. Butler (1986) showed that the social requirements of caribou females during the rut determine the mating strategies of males and, consequently, the form of male antlers. In describing woodland caribou, which have a harem-defense mating system, SARA wrote, "During the rut, males engage in frequent and furious sparring battles with their antlers. Large males with large antlers do most of the mating."
Reindeer continue to migrate until the bulls have spent their back fat. By contrast, barren-ground caribou males tend individual females and their fights are brief and much less intense; consequently, their antlers are long, thin, round in cross-section and less branched, and are designed more for show (or sexual attraction) than for fighting. In late autumn or early winter after the rut, male reindeer lose their antlers, growing a new pair the next summer with a larger rack than the previous year. Female reindeer keep their antlers until they calve. In the Scandinavian and Arctic Circle populations, old bulls' antlers fall off in late December, young bulls' antlers fall off in the early spring, and cows' antlers fall off in the summer. When male reindeer shed their antlers in early to mid-winter, the antlered cows acquire the highest ranks in the feeding hierarchy, gaining access to the best forage areas. These cows are healthier than those without antlers. Calves whose mothers do not have antlers are more prone to disease and have a significantly higher mortality. Cows in good nutritional condition, for example during a mild winter with good winter range quality, may grow new antlers earlier, as antler growth requires a high nutrient intake. According to a respected Igloolik elder, Noah Piugaattuk, who was one of the last outpost camp leaders, caribou (tuktu) antlers
According to the Igloolik Oral History Project (IOHP), "Caribou antlers provided the Inuit with a myriad of implements, from snow knives and shovels to drying racks and seal-hunting tools. A complex set of terms describes each part of the antler and relates it to its various uses". Currently, the larger racks of antlers are used by Inuit in Inuit art as materials for carving.
Iqaluit-based Jackoposie Oopakak's 1989 carving, entitled Nunali, which means "place where people live", and which is part of the permanent collection of the National Gallery of Canada, includes a massive set of caribou antlers on which he has intricately carved the miniaturized world of the Inuit, where "Arctic birds, caribou, polar bears, seals, and whales are interspersed with human activities of fishing, hunting, cleaning skins, stretching boots, and travelling by dog sled and kayak...from the base of the antlers to the tip of each branch".

Pelt

The color of the fur varies considerably, both between individuals and depending on season and species. Northern populations, which usually are relatively small, are whiter, while southern populations, which typically are relatively large, are darker. This can be seen well in North America, where the northernmost subspecies, the Peary caribou, is the whitest and smallest subspecies of the continent, while the Selkirk Mountains caribou (Southern Mountain population DU9) is the darkest and nearly the largest, exceeded in size only by Osborn's caribou (Northern Mountain population DU7). The coat has two layers of fur: a dense woolly undercoat and a longer-haired overcoat consisting of hollow, air-filled hairs. According to Inuit elder Marie Kilunik of the Aivilingmiut, Canadian Inuit preferred the skins of caribou taken in the late summer or autumn, when their coats had thickened, and used them for winter clothing "because each hair is hollow and fills with air trapping heat." Fur is the primary insulation factor that allows reindeer to regulate their core body temperature in relation to their environment, the thermogradient, even if the temperature rises to . In 1913, Dugmore noted how the woodland caribou swim so high out of the water, unlike any other mammal, because their hollow, "air-filled, quill-like hair" acts as a supporting "life jacket". A darker belly color may be caused by two mutations of MC1R.
They appear to be more common in domestic reindeer herds.

Heat exchange

Blood moving into the legs is cooled by blood returning to the body in a countercurrent heat exchange (CCHE), a highly efficient means of minimizing heat loss through the skin's surface. In the CCHE mechanism, in cold weather, blood vessels are closely knotted and intertwined: arteries carrying warm blood to the skin and appendages run alongside veins carrying cold blood back to the body, so that the warm arterial blood exchanges heat with the cold venous blood. In this way their legs, for example, are kept cool, maintaining the core body temperature nearly higher with less heat lost to the environment. Heat is thus recycled instead of being dissipated. The "heart does not have to pump blood as rapidly in order to maintain a constant body core temperature and thus, metabolic rate." CCHE is present in animals such as reindeer, fox and moose living in extreme conditions of cold or hot weather as a mechanism for retaining heat in (or keeping it out of) the body. These are countercurrent exchange systems with the same fluid, usually blood, in a circuit, used for both directions of flow. Reindeer have specialized counter-current vascular heat exchange in their nasal passages. The temperature gradient along the nasal mucosa is under physiological control. Incoming cold air is warmed by body heat before entering the lungs, and water is condensed from the expired air and captured before the reindeer's breath is exhaled, then used to moisten dry incoming air and possibly be absorbed into the blood through the mucous membranes. Like moose, caribou have specialized noses featuring nasal turbinate bones that dramatically increase the surface area within the nostrils.

Hooves

The reindeer has large feet with crescent-shaped cloven hooves for walking in snow or swamps.
According to the Species at Risk Public Registry (SARA), woodland caribou hooves adapt to the season: in the summer, when the tundra is soft and wet, the footpads become sponge-like and provide extra traction. In the winter, the pads shrink and tighten, exposing the rim of the hoof, which cuts into the ice and crusted snow to keep the animal from slipping. This also enables them to dig down (an activity known as "cratering") through the snow to their favourite food, a lichen known as reindeer lichen (Cladonia rangiferina).

Size

The females (or "cows", as they are often called) usually measure in length and weigh . The males (or "bulls", as they are often called) are typically larger (to an extent which varies between the different species and subspecies), measuring in length and usually weighing . Exceptionally large bulls have weighed as much as . Weight varies drastically between the seasons, with bulls losing as much as 40% of their pre-rut weight. The shoulder height is usually , and the tail is long. The reindeer from Svalbard are the smallest of all. They are also relatively short-legged and may have a shoulder height of as little as , thereby following Allen's rule.

Clicking sound

The knees of many species and subspecies of reindeer are adapted to produce a clicking sound as they walk. The sounds originate in the tendons of the knees and may be audible from several hundred meters away. The frequency of the knee-clicks is one of a range of signals that establish relative positions on a dominance scale among reindeer. "Specifically, loud knee-clicking is discovered to be an honest signal of body size, providing an exceptional example of the potential for non-vocal acoustic communication in mammals." The clicking sound made by reindeer as they walk is caused by small tendons slipping over bone protuberances (sesamoid bones) in their feet.
The sound is made when a reindeer is walking or running, occurring when the full weight of the foot is on the ground or just after it is relieved of the weight.

Eyes

A study by researchers from University College London in 2011 revealed that reindeer can see light with wavelengths as short as 320 nm (i.e., in the ultraviolet range), considerably below the human threshold of 400 nm. It is thought that this ability helps them to survive in the Arctic, because many objects that blend into the landscape in light visible to humans, such as urine and fur, produce sharp contrasts in ultraviolet. It has been proposed that UV flashes on power lines are responsible for reindeer avoiding power lines because "...in darkness these animals see power lines not as dim, passive structures but, rather, as lines of flickering light stretching across the terrain." In 2023, researchers studying reindeer living in Cairngorms National Park, Scotland, suggested that UV visual sensitivity in reindeer helps them detect UV-absorbing lichens against a background of UV-reflecting snow. The tapetum lucidum of Arctic reindeer eyes changes in color from gold in summer to blue in winter to improve their vision during times of continuous darkness, and perhaps enable them to better spot predators.

Biology and behaviors

Seasonal body composition

Reindeer have developed adaptations for optimal metabolic efficiency during warm months as well as during cold months. The body composition of reindeer varies highly with the seasons. Of particular interest is the body composition and diet of breeding and non-breeding females between the seasons. Breeding females have more body mass than non-breeding females between the months of March and September, with a difference of around more than non-breeding females.
From November to December, non-breeding females have more body mass than breeding females, as non-breeding females are able to direct their energy toward storage during the colder months rather than toward lactation and reproduction. Body masses of both breeding and non-breeding females peak in September. During the months of March through April, breeding females have more fat mass than non-breeding females, with a difference of almost . After this, however, non-breeding females on average have a higher body fat mass than do breeding females. Environmental variation plays a large part in reindeer nutrition, as winter nutrition is crucial to adult and neonatal survival rates. Lichens are a staple during the winter months, as they are a readily available food source, which reduces the reliance on stored body reserves. Lichens are a crucial part of the reindeer diet; however, they are less prevalent in the diet of pregnant reindeer than in that of non-pregnant individuals, owing to their low nutritional value: although lichens are high in carbohydrates, they lack the essential proteins that vascular plants provide. The amount of lichen in the diet decreases with latitude, so nutritional stress is higher in areas with low lichen abundance. In a study of the effect of seasonal light-dark cycles on the sleep patterns of female reindeer, researchers performed non-invasive electroencephalography (EEG) on reindeer kept in a stable at UiT The Arctic University of Norway. The EEG recordings showed that: (1) the more time reindeer spend ruminating, the less time they spend in non-rapid eye movement sleep (NREM sleep); and (2) reindeer's brainwaves during rumination resemble the brainwaves present during NREM sleep.
These results suggest that, by reducing the time requirement for NREM sleep, reindeer are able to spend more time feeding during the summer months, when food is abundant.

Reproduction and life cycle

Reindeer mate in late September to early November, and the gestation period is about 228–234 days. During the mating season, bulls battle for access to cows. Two bulls will lock each other's antlers together and try to push each other away. The most dominant bulls can collect as many as 15–20 cows to mate with. A bull will stop eating during this time and lose much of his body fat reserves. To calve, "females travel to isolated, relatively predator-free areas such as islands in lakes, peatlands, lake-shores, or tundra." Because females select the habitat for the birth of their calves, they are warier than males. Dugmore noted that, in their seasonal migrations, the herd follows a female for that reason. The calves are born in May or June, weighing on average . After 45 days, the calves are able to graze and forage, but they continue suckling until the following autumn, when they become independent of their mothers. Bulls live four years less than the cows, whose maximum longevity is about 17 years. Cows with a normal body size that have had sufficient summer nutrition can begin breeding anytime between the ages of 1 and 3 years. When a cow has undergone nutritional stress, she may not reproduce for the year. Dominant bulls, those with larger body size and antler racks, inseminate more than one cow a season.

Social structure, migration and range

Some populations of North American caribou, for example many herds of the barren-ground caribou subspecies and some woodland caribou in Ungava and northern Labrador, migrate the farthest of any terrestrial mammal, traveling up to a year, and covering . Other North American populations, the boreal woodland caribou for example, are largely sedentary. The European populations are known to have shorter migrations.
Island populations, such as the Novaya Zemlya and Svalbard reindeer and the Peary caribou, make local movements both within and among islands. Migrating reindeer can be negatively affected by parasite loads. Severely infected individuals are weak and probably have shortened lifespans, but parasite levels vary between populations. Infections create an effect known as culling: infected migrating animals are less likely to complete the migration. Normally travelling about a day while migrating, the caribou can run at speeds of . Young calves can already outrun an Olympic sprinter when only 1 day old. During the spring migration, smaller herds will group together to form larger herds of 50,000 to 500,000 animals, but during autumn migrations, the groups become smaller and the reindeer begin to mate. During winter, reindeer travel to forested areas to forage under the snow. By spring, groups leave their winter grounds to go to the calving grounds. A reindeer can swim easily and quickly, normally at about but, if necessary, at and migrating herds will not hesitate to swim across a large lake or broad river. The barren-ground caribou form large herds and undertake lengthy seasonal migrations from winter feeding grounds in taiga to spring calving grounds and summer range in the tundra. The migrations of the Porcupine herd of barren-ground caribou are among the longest of any mammal. Greenland caribou, found in southwestern Greenland, are "mixed migrators" and many individuals do not migrate; those that do migrate less than 60 km. Unlike the individual-tending mating system, aggregated rutting, synchronized calving and aggregated post-calving of barren-ground caribou, Greenland caribou have a harem-defense mating system and dispersed calving and they do not aggregate. Although most wild tundra reindeer migrate between their winter range in taiga and summer range in tundra, some ecotypes or herds are more or less sedentary. Novaya Zemlya reindeer (R. t. 
pearsoni) formerly wintered on the mainland and migrated across the ice to the islands for summer, but only a few now migrate. Finnish forest reindeer (R. t. fennicus) were formerly distributed in most of the coniferous forest zones south of the tree line, including some mountains, but are now spottily distributed within this zone. As an adaptation to their Arctic environment, reindeer have lost their circadian rhythm. Ecology Distribution and habitat Originally, the reindeer was found in Scandinavia, Eastern Europe, Greenland, Russia, Mongolia and northern China north of the 50th parallel. In North America, it was found in Canada, Alaska, and the northern contiguous United States from Maine to Washington. In the 19th century, it was still present in southern Idaho. Even in historical times, it probably occurred naturally in Ireland, and it is believed to have lived in Scotland until the 12th century, when the last reindeer were hunted in Orkney. During the Late Pleistocene Epoch, reindeer occurred further south in North America, such as in Nevada, Tennessee, and Alabama, and as far south as Spain in Europe. Though their range retreated northwards during the terminal Pleistocene, reindeer returned to Northern Europe during the Younger Dryas. Today, wild reindeer have disappeared from many of these areas, especially from the southern parts, where they have vanished almost everywhere. Large populations of wild reindeer are still found in Norway, Finland, Siberia, Greenland, Alaska and Canada. According to Grubb (2005), Rangifer is "circumboreal in the tundra and taiga" from "Svalbard, Norway, Finland, Russia, Alaska (USA) and Canada including most Arctic islands, and Greenland, south to northern Mongolia, China (Inner Mongolia), Sakhalin Island, and USA (northern Idaho and Great Lakes region)." Reindeer were introduced to, and are feral in, "Iceland, Kerguelen Islands, South Georgia Island, Pribilof Islands, St.
Matthew Island"; a free-ranging semi-domesticated herd is also present in Scotland. There is strong regional variation in Rangifer herd size. There are large population differences among individual herds and the size of individual herds has varied greatly since 1970. The largest of all herds (in Taimyr, Russia) has varied between 400,000 and 1,000,000; the second largest herd (at the George River in Canada) has varied between 28,000 and 385,000. While Rangifer is a widespread and numerous genus in the northern Holarctic, being present in both tundra and taiga (boreal forest), by 2013, many herds had "unusually low numbers" and their winter ranges in particular were smaller than they used to be. Caribou and reindeer numbers have fluctuated historically, but many herds are in decline across their range. This global decline is linked to climate change for northern migratory herds and industrial disturbance of habitat for non-migratory herds. Barren-ground caribou are susceptible to the effects of climate change due to a phenological mismatch between the timing of calving and the availability of food. In November 2016, it was reported that more than 81,000 reindeer in Russia had died as a result of climate change. Longer autumns, leading to increased amounts of freezing rain, created a few inches of ice over lichen, causing many reindeer to starve to death. Diet Reindeer are ruminants, having a four-chambered stomach. They mainly eat lichens in winter, especially reindeer lichen (Cladonia rangiferina); they are the only large mammal able to metabolize lichen owing to specialised bacteria and protozoa in their gut. They are also the only animals (except for some gastropods) in which the enzyme lichenase, which breaks down lichenin to glucose, has been found. However, they also eat the leaves of willows and birches, as well as sedges and grasses.
Reindeer are osteophagous; they are known to gnaw and partly consume shed antlers as a dietary supplement and in some extreme cases will cannibalise each other's antlers before shedding. There is also some evidence to suggest that on occasion, especially in the spring when they are nutritionally stressed, they will feed on small rodents (such as lemmings), fish (such as the Arctic char (Salvelinus alpinus)), and bird eggs. Reindeer herded by the Chukchis have been known to devour mushrooms enthusiastically in late summer. During the Arctic summer, when there is continuous daylight, reindeer change their sleeping pattern from one synchronised with the sun to an ultradian pattern, in which they sleep when they need to digest food. δ13C values indicate reindeer living in the region around Biśnik Cave exhibited minimal ecological change during the transition from MIS 3 to MIS 2. Dental mesowear indicates that during the Late Pleistocene, reindeer living in central Alaska had highly abrasive diets similar to those of wild horses. Predators A variety of predators prey heavily on reindeer, and overhunting by people in some areas also contributes to the decline of populations. Golden eagles prey on calves and are the most prolific hunters on the calving grounds. Wolverines will take newborn calves or birthing cows, as well as (less commonly) infirm adults. Brown bears and polar bears prey on reindeer of all ages but, like wolverines, are most likely to attack weaker animals, such as calves and sick reindeer, since healthy adult reindeer can usually outpace a bear. The gray wolf is the most effective natural predator of adult reindeer and sometimes takes large numbers, especially during the winter. Some gray wolf packs, as well as individual grizzly bears in Canada, may follow and live off of a particular reindeer herd year-round.
In 2020, scientists on Svalbard witnessed, and were able to film for the first time, a polar bear attacking reindeer, driving one into the ocean, where the polar bear caught up with and killed it. The same bear successfully repeated this hunting technique the next day. On Svalbard, reindeer remains account for 27.3% of polar bear scats, suggesting that they "may be a significant part of the polar bear's diet in that area". Additionally, as carrion, reindeer may be scavenged opportunistically by red and Arctic foxes, various species of eagles, hawks and falcons, and common ravens. Bloodsucking insects, such as mosquitoes, black flies, and especially the reindeer warble fly or reindeer botfly (Hypoderma tarandi) and the reindeer nose botfly (Cephenemyia trompe), are a plague to reindeer during the summer and can cause enough stress to inhibit feeding and calving behaviors. An adult reindeer will lose perhaps about of blood to biting insects for every week it spends in the tundra. The population numbers of some of these predators are influenced by the migration of reindeer. Tormenting insects keep caribou on the move, searching for windy areas like hilltops and mountain ridges, rock reefs, lakeshores and forest openings, or snow patches that offer respite from the buzzing horde. Gathering in large herds is another strategy that caribou use to block insects. Reindeer are good swimmers and, in one case, the entire body of a reindeer was found in the stomach of a Greenland shark (Somniosus microcephalus), a species found in the far North Atlantic. Other threats White-tailed deer (Odocoileus virginianus) commonly carry meningeal worm or brainworm (Parelaphostrongylus tenuis), a nematode parasite that causes reindeer, moose (Alces alces), elk (Cervus canadensis), and mule deer (Odocoileus hemionus) to develop fatal neurological symptoms, which include a loss of fear of humans. White-tailed deer that carry this worm are partially immune to it.
Changes in climate and habitat beginning in the 20th century have expanded range overlap between white-tailed deer and caribou, increasing the frequency of infection within the reindeer population. This increase in infection is a concern for wildlife managers. Human activities, such as "clear-cutting forestry practices, forest fires, and the clearing for agriculture, roadways, railways, and power lines," favor the conversion of habitats into the preferred habitat of the white-tailed deer – "open forest interspersed with meadows, clearings, grasslands, and riparian flatlands." Towards the end of the Soviet Union, there was increasingly open admission from the Soviet government that reindeer numbers were being negatively affected by human activity, and that this needed to be remedied, especially by supporting reindeer breeding by native herders. Conservation Current status According to the IUCN, Rangifer tarandus, as a species, is not endangered because of its overall large population and its widespread range but, as of 2015, the IUCN has classified the reindeer as Vulnerable due to an observed population decline of 40% over roughly the last 25 years. Some reindeer subspecies are rare and three subspecies have already become extinct. In North America, the Queen Charlotte Islands caribou and the East Greenland caribou both became extinct in the early 20th century, the Peary caribou is designated as Endangered, the boreal woodland caribou is designated as Threatened and some individual populations are endangered as well. While the barren-ground caribou is not designated as Threatened, many individual herds — including some of the largest — are declining and there is much concern at the local level. Grant's caribou, a small, pale subspecies endemic to the western end of the Alaska Peninsula and the adjacent islands, has not been assessed as to its conservation status. The status of the Dolphin-Union "herd" was upgraded to Endangered in 2017.
In the Northwest Territories, Dolphin-Union caribou were listed as Special Concern under the territorial Species at Risk (NWT) Act (2013). Both the Selkirk Mountains caribou (Southern Mountain population DU9) and the Rocky Mountain caribou (Central Mountain population DU8) are classified as Endangered in Canada in regions such as southeastern British Columbia at the Canada–United States border, along the Columbia and Kootenay Rivers and around Kootenay Lake. Rocky Mountain caribou are extirpated from Banff National Park, but a small population remains in Jasper National Park and in mountain ranges to the northwest into British Columbia. Montane caribou are now considered extirpated in the contiguous United States, including Washington and Idaho. Osborn's caribou (Northern Mountain population DU7) is classified as Threatened in Canada. In Eurasia, the Sakhalin reindeer is extinct (and has been replaced by domestic reindeer) and reindeer on most of the Novaya Zemlya islands have also been replaced by domestic reindeer, although some wild reindeer still persist on the northern islands. Many Siberian tundra reindeer herds have declined, some dangerously, but the Taimyr herd remains strong, and in total about 940,000 wild Siberian tundra reindeer were estimated in 2010. Caribou numbers have fluctuated historically, but many herds are in decline across their range, and many factors contribute to the decline in numbers. Boreal woodland caribou Ongoing human development of their habitat has caused populations of boreal woodland caribou to disappear from their original southern range. In particular, boreal woodland caribou were extirpated in many areas of eastern North America in the beginning of the 20th century.
Professor Marco Musiani of the University of Calgary said in a statement that "The woodland caribou is already an endangered subspecies in southern Canada and the United States...[The] warming of the planet means the disappearance of their critical habitat in these regions. Caribou need undisturbed lichen-rich environments and these types of habitats are disappearing." Boreal woodland caribou were designated as Threatened in 2002 by the Committee on the Status of Endangered Wildlife in Canada (COSEWIC). Environment Canada reported in 2011 that there were approximately 34,000 boreal woodland caribou in 51 ranges remaining in Canada (Environment Canada, 2011b). According to Geist, the "woodland caribou is highly endangered throughout its distribution right into Ontario." In 2002, the Atlantic-Gaspésie population DU11 of the boreal woodland caribou was designated as Endangered by COSEWIC. The small isolated population of 200 animals was at risk from predation and habitat loss. Peary caribou In 1991, COSEWIC assigned "endangered status" to the Banks Island and High Arctic populations of the Peary caribou. The Low Arctic population of the Peary caribou was designated as Threatened. In 2004, all three were designated as "endangered." In 2015, COSEWIC returned the status to Threatened. Relationship with humans Arctic peoples have depended on caribou for food, clothing, and shelter. European prehistoric cave paintings represent both tundra and forest forms, the latter either the Finnish forest reindeer or the narrow-nosed reindeer, an eastern Siberian forest form. Canadian examples include the Caribou Inuit, the inland-dwelling Inuit of the Kivalliq Region in northern Canada, the Caribou Clan in the Yukon, the Iñupiat, the Inuvialuit, the Hän, the Northern Tutchone, and the Gwichʼin (who followed the Porcupine caribou herd for millennia).
Hunting wild reindeer and herding of semi-domesticated reindeer are important to several Arctic and sub-Arctic peoples such as the Duhalar for meat, hides, antlers, milk, and transportation. Reindeer have been domesticated at least two and probably three times, in each case from wild Eurasian tundra reindeer after the Last Glacial Maximum (LGM). Recognizably different domestic reindeer breeds include those of the Evenk, Even, and Chukotka-Khargin people of Yakutia; the Nenets breed from the Nenets Autonomous District and Murmansk region; those of the Tuvans, Todzhans, Tofa (Tofalars, in the Irkutsk Region) and the Soyots (in the Republic of Buryatia); and that of the Dukha (also known as Tsaatan) in Khövsgöl Province, Mongolia. The Sámi (Sápmi) have also depended on reindeer herding and fishing for centuries. In Sápmi, reindeer are used to pull a pulk, a Nordic sled. In traditional British and United States Christmas legend, Santa Claus's reindeer pull a sleigh through the night sky to help Santa Claus deliver gifts to good children on Christmas Eve. The reindeer has an important economic role for all circumpolar peoples, including the Sámi, the Swedes, the Norwegians, the Finns and the Northwestern Russians in Europe, the Nenets, the Khanty, the Evenks, the Yukaghirs, the Chukchi and the Koryaks in Asia and the Inuit in North America. It is believed that domestication started between the Bronze and Iron Ages. Siberian reindeer owners also use the reindeer to ride on (Siberian reindeer are larger than their Scandinavian relatives). A single owner may have hundreds or even thousands of animals. The numbers of Russian and Scandinavian reindeer herders have been drastically reduced since 1990. The sale of fur and meat is an important source of income. Reindeer were introduced into Alaska near the end of the 19th century; they interbred with the native caribou subspecies there.
Reindeer herders on the Seward Peninsula have experienced significant losses to their herds from predators (such as wolves) following the wild caribou during their migrations. Reindeer meat is popular in the Scandinavian countries. Reindeer meatballs are sold canned. Sautéed reindeer is the best-known dish in Sápmi. In Alaska and Finland, reindeer sausage is sold in supermarkets and grocery stores. Reindeer meat is very tender and lean. It can be prepared fresh, but also dried, salted and hot- and cold-smoked. In addition to meat, almost all of the internal organs of reindeer can be eaten, some being traditional dishes. Furthermore, Lapin Poron liha, fresh reindeer meat completely produced and packed in Finnish Sápmi, is protected in Europe with PDO classification. Reindeer antlers are powdered and sold as an aphrodisiac, or as a nutritional or medicinal supplement, to Asian markets. The blood of the caribou was supposedly mixed with alcohol as a drink by hunters and loggers in colonial Quebec to counter the cold. This drink is now enjoyed without the blood as a wine and whiskey drink known as Caribou. Indigenous North Americans Caribou are still hunted in Greenland and in North America. In the traditional lifestyles of some of Canada's Inuit peoples and northern First Nations peoples, Alaska Natives, and the Kalaallit of Greenland, caribou is an important source of food, clothing, shelter and tools. The Caribou Inuit are inland-dwelling Inuit in present-day Nunavut's Kivalliq Region (formerly the Keewatin Region, Northwest Territories), Canada. They subsisted on caribou year-round, eating dried caribou meat in the winter. The Ahiarmiut are Caribou Inuit who followed the Qamanirjuaq barren-ground caribou herd. There is an Inuit saying in the Kivalliq Region. Benedict Jones, or Kʼughtoʼoodenoolʼoʼ, Elder Chief of Koyukuk and chair of the Western Arctic Caribou Herd Working Group, represents the Middle Yukon River, Alaska.
His grandmother was a member of the Caribou Clan, who travelled with the caribou as a means to survive. In 1939, they were living their traditional lifestyle at one of their hunting camps in Koyukuk, near the location of what is now the Koyukuk National Wildlife Refuge. His grandmother made a pair of new mukluks in one day. Kʼughtoʼoodenoolʼoʼ recounted a story told by an elder who "worked on the steamboats during the gold rush days out on the Yukon." In late August, the caribou migrated from the Alaska Range up north to Huslia, Koyukuk and the Tanana area. One year, the steamboat ran into a caribou herd estimated to number a million animals migrating across the Yukon and was unable to continue. "They tied up for seven days waiting for the caribou to cross. They ran out of wood for the steamboats, and had to go back down 40 miles to the wood pile to pick up some more wood. On the tenth day, they came back and they said there was still caribou going across the river night and day." The Gwichʼin, an indigenous people of northwestern Canada and northeastern Alaska, have been dependent on the international migratory Porcupine caribou herd for millennia. To the Gwichʼin, caribou — vadzaih — is a cultural symbol and a keystone subsistence species, just as the American buffalo is to the Plains Native Americans. Innovative language revitalisation projects are underway to document the language and to enhance the writing and translation skills of younger Gwich'in speakers. In one project, lead research associate and fluent Gwich'in speaker elder Kenneth Frank works with linguists, including young Gwich'in speakers affiliated with the Alaska Native Language Center at the University of Alaska Fairbanks, to document traditional knowledge of caribou anatomy.
The main goal of the research was to "elicit not only what the Gwich'in know about caribou anatomy, but how they see caribou and what they say and believe about caribou that defines themselves, their dietary and nutritional needs, and their subsistence way of life." Elders have identified at least 150 descriptive Gwich'in names for all of the bones, organs and tissues. Associated with the caribou's anatomy are not just these descriptive names for the body parts but also "an encyclopedia of stories, songs, games, toys, ceremonies, traditional tools, skin clothing, personal names and surnames, and a highly developed ethnic cuisine." In the 1980s, Gwich'in Traditional Management Practices were established to protect the Porcupine caribou, upon which the Gwich'in depend. They "codified traditional principles of caribou management into tribal law", which includes "limits on the harvest of caribou and procedures to be followed in processing and transporting caribou meat" as well as limits on the number of caribou to be taken per hunting trip. Indigenous Eurasians Reindeer herding has been vital for the subsistence of several Eurasian nomadic indigenous peoples living in the circumpolar Arctic zone, such as the Sámi, Nenets, and Komi. Reindeer are used to provide renewable sources of food and reliable transportation. In Mongolia, the Dukha are known as the reindeer people and are credited as one of the world's earliest peoples to domesticate reindeer. The Dukha diet consists mainly of reindeer dairy products. Reindeer husbandry is common in northern Fennoscandia (northern Norway, Sweden and Finland) and the Russian North. In some human groups, such as the Eveny, wild reindeer and domestic reindeer are treated as different kinds of beings. Husbandry The reindeer is the only deer to have been semi-domesticated successfully on a large scale anywhere in the world.
Reindeer in northern Fennoscandia (northern Norway, Sweden and Finland), as well as in the Kola Peninsula and Yakutia in Russia, are mostly semi-domesticated reindeer, ear-marked by their owners. Some reindeer in the area are truly domesticated, mostly used as draught animals (nowadays commonly for tourist entertainment and races, traditionally important for the nomadic Sámi). Domestic reindeer have also been used for milk, e.g., in Norway. There are only two genetically pure populations of wild reindeer in Northern Europe: wild mountain reindeer (R. t. tarandus) that live in central Norway, with a population in 2007 of between 6,000 and 8,400 animals; and wild Finnish forest reindeer (R. t. fennicus) that live in central and eastern Finland and in Russian Karelia, with a population of about 4,350, plus 1,500 in Arkhangelsk Oblast and 2,500 in Komi. East of Arkhangelsk, both wild Siberian tundra reindeer (R. t. sibiricus) (some herds are very large) and domestic reindeer (R. t. domesticus) occur, with almost no gene flow from wild reindeer into domestic clades and none in the other direction (Kharzinova et al. 2018; Rozhkov et al. 2020). DNA analysis indicates that reindeer were independently domesticated at least twice: in Fennoscandia and Western Russia (and possibly also Eastern Russia). Reindeer have been herded for centuries by several Arctic and sub-Arctic peoples, including the Sámi, the Nenets and the Yakuts. They are raised for their meat, hides and antlers and, to a lesser extent, for milk and transportation. Reindeer are not considered fully domesticated, as they generally roam free on pasture grounds. In traditional nomadic herding, reindeer herders migrate with their herds between coastal and inland areas according to an annual migration route, and herds are keenly tended. However, reindeer were not bred in captivity, though they were tamed for milking as well as for use as draught animals or beasts of burden.
Millais (1915), for example, shows a photograph (Plate LXXX) of an "Okhotsk Reindeer" saddled for riding (the rider standing behind it) beside an officer astride a steppe pony that is only slightly larger. Domestic reindeer are shorter-legged and heavier than their wild counterparts. In Scandinavia, management of reindeer herds is primarily conducted through siida, a traditional Sámi form of cooperative association. The use of reindeer for transportation is common among the nomadic peoples of the Russian North (but not anymore in Scandinavia). Although a sled drawn by 20 reindeer will cover no more than a day (compared to on foot, by a dog sled loaded with cargo and by a dog sled without cargo), it has the advantage that the reindeer will discover their own food, while a pack of 5–7 sled dogs requires of fresh fish a day. The use of reindeer as semi-domesticated livestock in Alaska was introduced in the late 19th century by the United States Revenue Cutter Service, with assistance from Sheldon Jackson, as a means of providing a livelihood for Alaska Natives. Reindeer were imported first from Siberia and later also from Norway. A regular mail run in Wales, Alaska, used a sleigh drawn by reindeer. In Alaska, reindeer herders use satellite telemetry to track their herds, using online maps and databases to chart the herd's progress. Domestic reindeer are mostly found in northern Fennoscandia and the Russian North, with a herd of approximately 150–170 reindeer living around the Cairngorms region in Scotland. The last remaining wild tundra reindeer in Europe are found in portions of southern Norway. The International Centre for Reindeer Husbandry (ICR), a circumpolar organisation, was established in 2005 by the Norwegian government. ICR represents over 20 indigenous reindeer peoples and about 100,000 reindeer herders in nine different national states. 
In Finland, there are about 6,000 reindeer herders, most of whom keep small herds of fewer than 50 reindeer to earn additional income. With 185,000 reindeer, the industry produces of reindeer meat and generates 35 million euros annually. 70% of the meat is sold to slaughterhouses. Reindeer herders are eligible for national and EU agricultural subsidies, which constitute about 15% of their income. Reindeer herding is of central importance for the local economies of small communities in sparsely populated rural Sápmi. Currently, many reindeer herders are heavily dependent on diesel fuel to provide for electric generators and snowmobile transportation, although solar photovoltaic systems can be used to reduce diesel dependency. History Reindeer hunting by humans has a very long history. Both Aristotle and Theophrastus give short accounts – probably based on the same source – of an ox-sized deer species, named tarandos, living in the land of the Budini in Scythia, which was able to change the colour of its fur to obtain camouflage. The latter claim is probably a misunderstanding of the seasonal change in reindeer fur colour. The descriptions have been interpreted as being of reindeer living in the southern Ural Mountains in c. 350 BC. A deer-like animal described by Julius Caesar in his Commentarii de Bello Gallico (chapter 6.26) from the Hercynian Forest in the year 53 BC is most certainly to be interpreted as a reindeer. According to Olaus Magnus's Historia de Gentibus Septentrionalibus – printed in Rome in the year 1555 – Gustav I of Sweden sent 10 reindeer to Albert, Duke of Prussia, in the year 1533. It may be these animals that Conrad Gessner had seen or heard of. During World War II, the Soviet Army used reindeer as pack animals to transport food, ammunition and post from Murmansk to the Karelian front and to bring wounded soldiers, pilots and equipment back to the base. About 6,000 reindeer and more than 1,000 reindeer herders were part of the operation.
Most herders were Nenets, who were mobilised from the Nenets Autonomous Okrug, but reindeer herders from the Murmansk, Arkhangelsk and Komi regions also participated. Santa Claus Around the world, public interest in reindeer peaks during the Christmas season. According to folklore, Santa Claus's sleigh is pulled by flying reindeer. These reindeer were first named in the 1823 poem "A Visit from St. Nicholas". Mythology and art Among the Inuit, there is a story of the origin of the caribou. Inuit artists from the Barrenlands incorporate depictions of caribou — and items made from caribou antlers and skin — in carvings, drawings, prints and sculpture. Contemporary Canadian artist Brian Jungen, of Dane-zaa First Nations ancestry, created an installation entitled "The ghosts on top of my head" (2010–11) in Banff, Alberta, which depicts the antlers of caribou, elk and moose. Tomson Highway, CM, is a Cree Canadian playwright, novelist, and children's author who was born in a remote area north of Brochet, Manitoba. His father, Joe Highway, was a caribou hunter. His 2001 children's book entitled Caribou Song/atíhko níkamon was selected as one of the "Top 10 Children's Books" by the Canadian newspaper The Globe and Mail. The young protagonists of Caribou Song, like Tomson himself, followed the caribou herd with their families. Heraldry and symbols Several Norwegian municipalities have one or more reindeer depicted in their coats of arms: Eidfjord Municipality, Porsanger Municipality, Rendalen Municipality, Tromsø Municipality, Vadsø Municipality, and Vågå Municipality. The historic province of Västerbotten in Sweden has a reindeer in its coat of arms. The present Västerbotten County has very different borders and uses the reindeer combined with other symbols in its coat of arms. The city of Piteå also has a reindeer. The logo for Umeå University features three reindeer. The Canadian 25-cent coin, or "quarter", features a depiction of a caribou on one face.
The caribou is the official provincial animal of Newfoundland and Labrador, Canada, and appears on the coat of arms of Nunavut. A caribou statue was erected at the centre of the Beaumont-Hamel Newfoundland Memorial, marking the spot in France where hundreds of soldiers from Newfoundland were killed and wounded in World War I. There is a replica in Bowring Park in St. John's, Newfoundland's capital city. Two municipalities in Finland have reindeer motifs in their coats of arms: Kuusamo has a running reindeer, and Inari has a fish with reindeer antlers.
Fluorescent lamp
A fluorescent lamp, or fluorescent tube, is a low-pressure mercury-vapor gas-discharge lamp that uses fluorescence to produce visible light. An electric current in the gas excites mercury vapor, which produces short-wave ultraviolet light that causes a phosphor coating on the inside of the lamp to glow. Fluorescent lamps convert electrical energy into useful light much more efficiently than incandescent lamps, but are less efficient than most LED lamps. The typical luminous efficacy of fluorescent lamps is 50–100 lumens per watt, several times the efficacy of incandescent bulbs with comparable light output (the luminous efficacy of an incandescent lamp may be only about 16 lm/W). Fluorescent lamp fixtures are more costly than incandescent lamps because, among other things, they require a ballast to regulate current through the lamp, but the initial cost is offset by a much lower running cost. Compact fluorescent lamps made in the same sizes as incandescent lamp bulbs are used as an energy-saving alternative to incandescent lamps in homes. In the United States, fluorescent lamps are classified as universal waste. The United States Environmental Protection Agency recommends that fluorescent lamps be segregated from general waste for recycling or safe disposal, and some jurisdictions require recycling of them. History Physical discoveries Fluorescence The fluorescence of certain rocks and other substances had been observed for hundreds of years before its nature was understood. One of the first to explain it was the Irish scientist Sir George Stokes of the University of Cambridge, who in 1852 named the phenomenon "fluorescence" after fluorite, a mineral many of whose samples glow strongly because of impurities. Discharge tubes By the mid-19th century, experimenters had observed a radiant glow emanating from partially evacuated glass vessels through which an electric current passed.
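The "several times" comparison of luminous efficacy can be checked with simple arithmetic. The sketch below uses only the figures quoted above (50–100 lm/W for fluorescent tubes, roughly 16 lm/W for incandescent bulbs); the function name is an illustrative choice, not a standard API.

```python
# Sanity check of the efficacy comparison: how many times more light per watt
# a fluorescent lamp produces compared with an incandescent baseline.

def efficacy_ratio(lamp_lm_per_w: float, baseline_lm_per_w: float = 16.0) -> float:
    """Return the ratio of a lamp's luminous efficacy to a baseline efficacy."""
    return lamp_lm_per_w / baseline_lm_per_w

# A 50-100 lm/W fluorescent tube vs. a ~16 lm/W incandescent bulb:
low, high = efficacy_ratio(50), efficacy_ratio(100)
print(f"{low:.2f}x to {high:.2f}x")  # 3.12x to 6.25x
```

The result, roughly 3 to 6 times the light output per watt, is what the lead paragraph summarises as "several times the efficacy."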
The explanation relied on the nature of electricity and light phenomena as developed by the British scientists Michael Faraday in the 1840s and James Clerk Maxwell in the 1860s. Little more was done with this phenomenon until 1856 when German glassblower Heinrich Geissler created a mercury vacuum pump that evacuated a glass tube to an extent not previously possible. Geissler invented the first gas-discharge lamp, the Geissler tube, consisting of a partially evacuated glass tube with a metal electrode at either end. When a high voltage was applied between the electrodes, the inside of the tube illuminated with a glow discharge. By putting different chemicals inside, the tubes could be made to produce a variety of colors, and elaborate Geissler tubes were sold for entertainment. More important was its contribution to scientific research. One of the first scientists to experiment with a Geissler tube was Julius Plücker, who systematically described in 1858 the luminescent effects that occurred in a Geissler tube. He also made the important observation that the glow in the tube shifted position when in proximity to an electromagnetic field. Alexandre Edmond Becquerel observed in 1859 that certain substances gave off light when they were placed in a Geissler tube. He went on to apply thin coatings of luminescent materials to the surfaces of these tubes. Fluorescence occurred, but the tubes were inefficient and had a short operating life. Inquiries that began with the Geissler tube continued as better vacuums were produced. The most famous was the evacuated tube used for scientific research by William Crookes. That tube was evacuated by the highly effective mercury vacuum pump created by Hermann Sprengel. Research conducted by Crookes and others ultimately led to the discovery of the electron in 1897 by J. J. Thomson and X-rays in 1895 by Wilhelm Röntgen. 
The Crookes tube, as it came to be known, produced little light because the vacuum in it was too great and thus lacked the trace amounts of gas that are needed for electrically stimulated luminescence.

Early discharge lamps

Thomas Edison briefly pursued fluorescent lighting for its commercial potential. He invented a fluorescent lamp in 1896 that used a coating of calcium tungstate as the fluorescing substance, excited by X-rays. Although it received a patent in 1907, it was not put into production. As with a few other attempts to use Geissler tubes for illumination, it had a short operating life, and given the success of the incandescent light, Edison had little reason to pursue an alternative means of electrical illumination. Nikola Tesla made similar experiments in the 1890s, devising high-frequency powered fluorescent bulbs that gave a bright greenish light, but as with Edison's devices, no commercial success was achieved. One of Edison's former employees created a gas-discharge lamp that achieved a measure of commercial success. In 1895 Daniel McFarlan Moore demonstrated lamps in length that used carbon dioxide or nitrogen to emit white or pink light, respectively. They were considerably more complicated than an incandescent bulb, requiring both a high-voltage power supply and a pressure-regulating system for the fill gas. Moore invented an electromagnetically controlled valve that maintained a constant gas pressure within the tube, to extend the working life. Although Moore's lamp was complicated, expensive, and required very high voltages, it was considerably more efficient than incandescent lamps, and it produced a closer approximation to natural daylight than contemporary incandescent lamps. From 1904 onwards Moore's lighting system was installed in a number of stores and offices. Its success contributed to General Electric's motivation to improve the incandescent lamp, especially its filament.
GE's efforts came to fruition with the invention of a tungsten-based filament. The extended lifespan and improved efficacy of incandescent bulbs negated one of the key advantages of Moore's lamp, but GE purchased the relevant patents in 1912. These patents and the inventive efforts that supported them were of considerable value when the firm took up fluorescent lighting more than two decades later. At about the same time that Moore was developing his lighting system, Peter Cooper Hewitt invented the mercury-vapor lamp, patented in 1901. Hewitt's lamp glowed when an electric current was passed through mercury vapor at a low pressure. Unlike Moore's lamps, Hewitt's were manufactured in standardized sizes and operated at low voltages. The mercury-vapor lamp was superior to the incandescent lamps of the time in terms of energy efficiency, but the blue-green light it produced limited its applications. It was, however, used for photography and some industrial processes. Mercury vapor lamps continued to be developed at a slow pace, especially in Europe. By the early 1930s they received limited use for large-scale illumination. Some of them employed fluorescent coatings, but these were used primarily for color correction and not for enhanced light output. Mercury vapor lamps also anticipated the fluorescent lamp in their incorporation of a ballast to maintain a constant current. Cooper-Hewitt had not been the first to use mercury vapor for illumination, as earlier efforts had been mounted by Way, Rapieff, Arons, and Bastian and Salisbury. Of particular importance was the mercury-vapor lamp invented by Küch and Retschinsky in Germany. The lamp used a smaller bore bulb and higher current operating at higher pressures. As a consequence of the current, the bulb operated at a higher temperature, which necessitated the use of a quartz bulb.
Although its light output relative to electrical consumption was better than that of other sources of light, the light it produced was similar to that of the Cooper-Hewitt lamp in that it lacked the red portion of the spectrum, making it unsuitable for ordinary lighting. Due to difficulties in sealing the electrodes to the quartz, the lamp had a short life.

Neon lamps

The next step in gas-based lighting took advantage of the luminescent qualities of neon, an inert gas that had been discovered in 1898 by isolation from the atmosphere. Neon glowed a brilliant red when used in Geissler tubes. By 1910, Georges Claude, a Frenchman who had developed a technology and a successful business for air liquefaction, was obtaining enough neon as a byproduct to support a neon lighting industry. While neon lighting was used around 1930 in France for general illumination, it was no more energy-efficient than conventional incandescent lighting. Neon tube lighting, which also includes the use of argon and mercury vapor as alternative gases, came to be used primarily for eye-catching signs and advertisements. Neon lighting was relevant to the development of fluorescent lighting, however, as Claude's improved electrode (patented in 1915) overcame "sputtering", a major source of electrode degradation. Sputtering occurred when ionized particles struck an electrode and tore off bits of metal. Although Claude's invention required electrodes with a lot of surface area, it showed that a major impediment to gas-based lighting could be overcome. The development of the neon light also was significant for the last key element of the fluorescent lamp, its fluorescent coating. In 1926 Jacques Risler received a French patent for the application of fluorescent coatings to neon light tubes. The main use of these lamps, which can be considered the first commercially successful fluorescents, was for advertising, not general illumination.
This, however, was not the first use of fluorescent coatings; Becquerel had earlier used the idea and Edison used calcium tungstate for his unsuccessful lamp. Other efforts had been mounted, but all were plagued by low efficiency and various technical problems. Of particular importance was the invention in 1927 of a low-voltage "metal vapor lamp" by Friedrich Meyer, Hans-Joachim Spanner, and Edmund Germer, who were employees of a German firm in Berlin. A German patent was granted but the lamp never went into commercial production.

Commercialization of fluorescent lamps

All the major features of fluorescent lighting were in place at the end of the 1920s. Decades of invention and development had provided the key components of fluorescent lamps: economically manufactured glass tubing, inert gases for filling the tubes, electrical ballasts, long-lasting electrodes, mercury vapor as a source of luminescence, effective means of producing a reliable electrical discharge, and fluorescent coatings that could be energized by ultraviolet light. At this point, intensive development was more important than basic research. In 1934, Arthur Compton, a renowned physicist and GE consultant, reported to the GE lamp department on successful experiments with fluorescent lighting at General Electric Co., Ltd. in Great Britain (unrelated to General Electric in the United States). Stimulated by this report, and with all of the key elements available, a team led by George E. Inman built a prototype fluorescent lamp in 1934 at General Electric's Nela Park (Ohio) engineering laboratory. This was not a trivial exercise; as noted by Arthur A. Bright, "A great deal of experimentation had to be done on lamp sizes and shapes, cathode construction, gas pressures of both argon and mercury vapor, colors of fluorescent powders, methods of attaching them to the inside of the tube, and other details of the lamp and its auxiliaries before the new device was ready for the public."
In addition to having engineers and technicians along with facilities for R&D work on fluorescent lamps, General Electric controlled what it regarded as the key patents covering fluorescent lighting, including the patents originally issued to Hewitt, Moore, and Küch. More important than these was a patent covering an electrode that did not disintegrate at the gas pressures that ultimately were employed in fluorescent lamps. Albert W. Hull of GE's Schenectady Research Laboratory filed for a patent on this invention in 1927, which was issued in 1931. General Electric used its control of the patents to prevent competition with its incandescent lights and probably delayed the introduction of fluorescent lighting by 20 years. Eventually, war production required 24-hour factories with economical lighting, and fluorescent lights became available. While the Hull patent gave GE a basis for claiming legal rights over the fluorescent lamp, a few months after the lamp went into production the firm learned of a U.S. patent application that had been filed in 1927 for the aforementioned "metal vapor lamp" invented in Germany by Meyer, Spanner, and Germer. The patent application indicated that the lamp had been created as a superior means of producing ultraviolet light, but the application also contained a few statements referring to fluorescent illumination. Efforts to obtain a U.S. patent had met with numerous delays, but were it to be granted, the patent might have caused serious difficulties for GE. At first, GE sought to block the issuance of a patent by claiming that priority should go to one of their employees, Leroy J. Buttolph, who according to their claim had invented a fluorescent lamp in 1919 and whose patent application was still pending. GE also had filed a patent application in 1936 in Inman's name to cover the “improvements” wrought by his group. 
In 1939 GE decided that the claim of Meyer, Spanner, and Germer had some merit, and that in any event a long interference procedure was not in their best interest. They therefore dropped the Buttolph claim and paid $180,000 to acquire the Meyer, et al. application, which at that point was owned by a firm known as Electrons, Inc. The patent was duly awarded in December 1939. This patent, along with the Hull patent, put GE on what seemed to be firm legal ground, although it faced years of legal challenges from Sylvania Electric Products, Inc., which claimed infringement on patents that it held. Even though the patent issue was not completely resolved for many years, General Electric's strength in manufacturing and marketing gave it a pre-eminent position in the emerging fluorescent light market. Sales of "fluorescent lumiline lamps" commenced in 1938 when four different sizes of tubes were put on the market. They were used in fixtures manufactured by three leading corporations: Lightolier, Artcraft Fluorescent Lighting Corporation, and Globe Lighting. The Slimline fluorescent ballast's public introduction in 1946 was by Westinghouse and General Electric and Showcase/Display Case fixtures were introduced by Artcraft Fluorescent Lighting Corporation in 1946. During the following year, GE and Westinghouse publicized the new lights through exhibitions at the New York World's Fair and the Golden Gate International Exposition in San Francisco. Fluorescent lighting systems spread rapidly during World War II as wartime manufacturing intensified lighting demand. By 1951 more light was produced in the United States by fluorescent lamps than by incandescent lamps. In the first years zinc orthosilicate with varying content of beryllium was used as greenish phosphor. Small additions of magnesium tungstate improved the blue portion of the spectrum, yielding acceptable white. After the discovery that beryllium was toxic, halophosphate-based phosphors dominated. 
Principles of operation

The fundamental mechanism for the conversion of electrical energy to light is the emission of a photon when an electron in a mercury atom falls from an excited state into a lower energy level. Electrons flowing in the arc collide with the mercury atoms. If the incident electron has enough kinetic energy, it transfers energy to the atom's outer electron, causing that electron to temporarily jump up to a higher energy level that is not stable. The atom will emit an ultraviolet photon as the atom's electron reverts to a lower, more stable, energy level. Most of the photons that are released from the mercury atoms have wavelengths in the ultraviolet (UV) region of the spectrum, predominantly at wavelengths of 253.7 and 185 nanometers (nm). These are not visible to the human eye, so ultraviolet energy is converted to visible light by the fluorescence of the inner phosphor coating. The difference in energy between the absorbed ultraviolet photon and the emitted visible light photon heats the phosphor coating. Electric current flows through the tube in a low-pressure arc discharge. Electrons collide with and ionize noble gas atoms inside the bulb surrounding the filament to form a plasma by the process of impact ionization. As a result of avalanche ionization, the conductivity of the ionized gas rapidly rises, allowing higher currents to flow through the lamp. The fill gas helps determine the electrical characteristics of the lamp but does not give off light itself. The fill gas effectively increases the distance that electrons travel through the tube, which gives an electron a greater chance of interacting with a mercury atom. Additionally, argon atoms, excited to a metastable state by the impact of an electron, can impart energy to a mercury atom and ionize it, described as the Penning effect. This lowers the breakdown and operating voltage of the lamp, compared to other possible fill gases such as krypton.
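The mercury emission wavelengths quoted above can be converted to photon energies with E = hc/λ; a quick sketch (using hc ≈ 1239.84 eV·nm):

```python
# Photon energy from wavelength: E = h*c / λ. Using hc ≈ 1239.84 eV·nm.
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm):
    """Photon energy in electron volts for a wavelength in nanometers."""
    return HC_EV_NM / wavelength_nm

# Mercury's dominant UV lines, plus a mid-visible wavelength for comparison
for nm in (253.7, 185.0, 550.0):
    print(f"{nm:6.1f} nm -> {photon_energy_ev(nm):.2f} eV")
```

This puts the 253.7 nm line at about 4.9 eV and the 185 nm line at about 6.7 eV, versus roughly 2.3 eV for mid-visible light, which is why a phosphor must down-convert the emission.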
Construction

A fluorescent lamp tube is filled with a mix of argon, xenon, neon, or krypton, and mercury vapor. The pressure inside the lamp is around 0.3% of atmospheric pressure. The partial pressure of the mercury vapor alone is about 0.8 Pa (8 millionths of atmospheric pressure), in a T12 40-watt lamp. The inner surface of the lamp is coated with a fluorescent coating made of varying blends of metallic and rare-earth phosphor salts. The lamp's electrodes are typically made of coiled tungsten and are coated with a mixture of barium, strontium and calcium oxides to improve thermionic emission. Fluorescent lamp tubes are often straight and range in length from about for miniature lamps, to for high-output lamps. Some lamps have a circular tube, used for table lamps or other places where a more compact light source is desired. Larger U-shaped lamps are used to provide the same amount of light in a more compact area, and are used for special architectural purposes. Compact fluorescent lamps have several small-diameter tubes joined in a bundle of two, four, or six, or a small-diameter tube coiled in a helix, to provide a high amount of light output in minimal volume. Light-emitting phosphors are applied as a paint-like coating to the inside of the tube. The organic solvents are allowed to evaporate, then the tube is heated to nearly the melting point of glass to drive off remaining organic compounds and fuse the coating to the lamp tube. Careful control of the grain size of the suspended phosphors is necessary; large grains lead to weak coatings, and small particles lead to poor light maintenance and efficiency. Most phosphors perform best with a particle size around 10 micrometers. The coating must be thick enough to capture all the ultraviolet light produced by the mercury arc, but not so thick that the phosphor coating absorbs too much visible light.
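As a quick check of the fill-pressure figures above, the mercury partial pressure and total fill pressure can be expressed against a standard atmosphere:

```python
# Fill-gas pressure figures from the text, expressed against 1 atm = 101325 Pa.
ATM_PA = 101325.0

p_total = 0.003 * ATM_PA   # total fill pressure, ~0.3% of atmospheric
p_hg = 0.8                 # mercury partial pressure in Pa (T12 40 W lamp)
hg_fraction = p_hg / ATM_PA

print(f"total fill pressure ≈ {p_total:.0f} Pa")
print(f"mercury partial pressure ≈ {hg_fraction:.1e} atm")  # ~8 millionths
```

The 0.8 Pa figure works out to about 7.9 × 10⁻⁶ atm, consistent with the "8 millionths" stated above.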
The first phosphors were synthetic versions of naturally occurring fluorescent minerals, with small amounts of metals added as activators. Later other compounds were discovered, allowing differing colors of lamps to be made. Fluorescent tubes can have an outer silicone coating applied by dipping the tube into a solution of water and silicone, and then drying the tube. This coating gives the tube a silky surface finish, and protects against moisture, guaranteeing a predictable surface resistance on the tube when starting it.

Ballasts

Fluorescent lamps are negative differential resistance devices, so as more current flows through them, the electrical resistance of the fluorescent lamp drops, allowing even more current to flow. Connected directly to a constant-voltage power supply, a fluorescent lamp would rapidly self-destruct because of the uncontrolled current flow. To prevent this, fluorescent lamps must use a ballast to regulate the current flow through the lamp. The terminal voltage across an operating lamp varies depending on the arc current, tube diameter, temperature, and fill gas. A general lighting service T12 lamp operates at 430 mA, with a 100-volt drop. High-output lamps operate at 800 mA, and some types operate up to 1.5 A. The power level varies from 33 to 82 watts per meter of tube length (10 to 25 W/ft) for T12 lamps. The simplest ballast for alternating current (AC) use is an inductor placed in series, consisting of a winding on a laminated magnetic core. The inductance of this winding limits the flow of AC current. This type of ballast is common in 220–240 V countries (and in North America for lamps up to 30 W). Ballasts are rated for the size of lamp and power frequency. In North America, the AC voltage is insufficient to start long fluorescent lamps, so the ballast is often a step-up autotransformer with substantial leakage inductance (to limit current flow). Either form of inductive ballast may also include a capacitor for power factor correction.
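To illustrate how a series inductor limits lamp current, here is a rough sizing sketch using the T12 figures above (430 mA, 100 V across the lamp). The 230 V/50 Hz supply and the simplifying assumption that the inductor voltage is in quadrature with the lamp voltage are assumptions for illustration, not from the text:

```python
import math

# Rough series-ballast sizing. Simplification: treat the 100 V lamp drop as
# resistive (in phase with current) and the inductor drop as quadrature.
# Supply values (230 V, 50 Hz) are assumed for illustration.
V_SUPPLY, FREQ = 230.0, 50.0
V_LAMP, I_LAMP = 100.0, 0.43   # T12 figures from the text

v_inductor = math.sqrt(V_SUPPLY**2 - V_LAMP**2)  # voltage across the choke
x_l = v_inductor / I_LAMP                        # required reactance, ohms
inductance = x_l / (2 * math.pi * FREQ)          # henries

print(f"X_L ≈ {x_l:.0f} Ω, L ≈ {inductance:.2f} H")
```

A choke on the order of 1.5 H results, which is why these ballasts are built as windings on laminated iron cores rather than air-cored coils.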
Fluorescent lamps can run directly from a direct current (DC) supply of sufficient voltage to strike an arc. The ballast must be resistive, and would consume about as much power as the lamp. When operated from DC, the starting switch is often arranged to reverse the polarity of the supply to the lamp each time it is started; otherwise, the mercury accumulates at one end of the tube. Fluorescent lamps are (almost) never operated directly from DC for those reasons. Instead, an inverter converts the DC into AC and provides the current-limiting function as described below for electronic ballasts.

Effect of temperature

The performance of fluorescent lamps is critically affected by the temperature of the bulb wall and its effect on the partial pressure of the mercury vapor within. Since mercury condenses at the coolest spot in the lamp, careful design is required to maintain that spot at the optimum temperature, around . Using an amalgam with some other metal reduces the vapor pressure and increases the optimum temperature range. The bulb wall "cold spot" temperature must still be controlled to prevent condensing. High-output fluorescent lamps have features such as a deformed tube or internal heat-sinks to control cold spot temperature and mercury distribution. Heavily loaded small lamps, such as compact fluorescent lamps, also include heat-sink areas in the tube to maintain mercury vapor pressure at the optimum value.

Losses

Only a fraction of the electrical energy input into a lamp is converted to useful light. The ballast dissipates some heat; electronic ballasts may be around 90% efficient. A fixed voltage drop occurs at the electrodes, which also produces heat. Some of the energy in the mercury vapor column is also dissipated, but about 85% is turned into visible and ultraviolet light. Not all the UV radiation striking the phosphor coating is converted to visible light; some energy is lost.
The largest single loss in modern lamps is due to the lower energy of each photon of visible light, compared to the energy of the UV photons that generated them (a phenomenon called Stokes shift). Incident photons have an energy of 5.5 electron volts but produce visible light photons with energy around 2.5 electron volts, so only 45% of the UV energy is used; the rest is dissipated as heat.

Cold-cathode fluorescent lamps

Most fluorescent lamps use electrodes that emit electrons into the tube by heat, known as hot cathodes. However, cold cathode tubes have cathodes that emit electrons only due to the large voltage between the electrodes. The cathodes will be warmed by current flowing through them, but are not hot enough for significant thermionic emission. Because cold cathode lamps have no thermionic emission coating to wear out, they can have much longer lives than hot cathode tubes. This makes them desirable for long-life applications (such as backlights in liquid crystal displays). Sputtering of the electrode may still occur, but electrodes can be shaped (e.g. into an internal cylinder) to capture most of the sputtered material so it is not lost from the electrode. Cold cathode lamps are generally less efficient than thermionic emission lamps because the cathode fall voltage is much higher. Power dissipated due to cathode fall voltage does not contribute to light output. However, this is less significant with longer tubes. The increased power dissipation at tube ends also usually means cold cathode tubes have to be run at a lower loading than their thermionic emission equivalents. Given the higher tube voltage required anyway, these tubes can easily be made long, and even run as series strings. They are better suited for bending into special shapes for lettering and signage, and can also be instantly switched on or off.

Starting

The gas used in the fluorescent tube must be ionized before the arc can "strike".
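The Stokes-shift loss quoted above (5.5 eV ultraviolet photons yielding roughly 2.5 eV visible photons) can be verified directly:

```python
# Stokes-shift loss: only the visible-photon fraction of each UV photon's
# energy becomes light; the remainder heats the phosphor.
E_UV_EV = 5.5    # incident UV photon energy (from the text)
E_VIS_EV = 2.5   # emitted visible photon energy (from the text)

used = E_VIS_EV / E_UV_EV
print(f"{used:.0%} of UV photon energy emerges as visible light")
print(f"{1 - used:.0%} is dissipated as heat in the phosphor")
```

The ratio 2.5/5.5 ≈ 0.45 matches the 45% figure stated above.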
For small lamps, it does not take much voltage to strike the arc and starting the lamp presents no problem, but larger tubes require a substantial voltage (in the range of a thousand volts). Many different starting circuits have been used. The choice of circuit is based on cost, AC voltage, tube length, instant versus non-instant starting, temperature ranges, and parts availability.

Preheating

Preheating, also called switchstart, uses a combination filament–cathode at each end of the lamp in conjunction with a mechanical or automatic (bi-metallic) switch that initially connects the filaments in series with the ballast to preheat them; after a short preheating time the starting switch opens. If timed correctly relative to the phase of the supply AC, this causes the ballast to induce a voltage over the tube high enough to initiate the starting arc. These systems are standard equipment in 200–240 V countries (and in the United States for lamps up to about 30 watts). Before the 1960s, four-pin thermal starters and manual switches were used. A glow switch starter automatically preheats the lamp cathodes. It consists of a normally open bi-metallic switch in a small sealed gas-discharge lamp containing inert gas (neon or argon). The glow switch will cyclically warm the filaments and initiate a pulse voltage to strike the arc; the process repeats until the lamp is lit. Once the tube strikes, the impinging main discharge keeps the cathodes hot, permitting continued electron emission. The starter switch does not close again because the voltage across the lit tube is insufficient to start a glow discharge in the starter. With glow switch starters, a failing tube will cycle repeatedly. Some starter systems used a thermal over-current trip to detect repeated starting attempts and disable the circuit until manually reset.
A power factor correction (PFC) capacitor draws leading current from the mains to compensate for the lagging current drawn by the lamp circuit. Electronic starters use a different method to preheat the cathodes. They may be plug-in interchangeable with glow starters. They use a semiconductor switch and "soft start" the lamp by preheating the cathodes before applying a starting pulse which strikes the lamp first time without flickering; this dislodges a minimal amount of material from the cathodes during starting, giving longer lamp life. This is claimed to prolong lamp life by a factor of typically 3 to 4 times for a lamp frequently switched on as in domestic use, and to reduce the blackening of the ends of the lamp typical of fluorescent tubes. While the circuit is complex, the complexity is built into an integrated circuit chip. Electronic starters may be optimized for fast starting (typical start time of 0.3 seconds), or for most reliable starting even at low temperatures and with low supply voltages, with a startup time of 2–4 seconds. The faster-start units may produce audible noise during start-up. Electronic starters only attempt to start a lamp for a short time when power is initially applied, and do not repeatedly attempt to restrike a lamp that is dead and unable to sustain an arc; some automatically stop trying to start a failed lamp. This eliminates the re-striking of a lamp and the continuous flashing of a failing lamp with a glow starter. Electronic starters are not subject to wear and do not need replacing periodically, although they may fail like any other electronic circuit. Manufacturers typically quote lives of 20 years, or as long as the light fitting.

Instant start

Instant start fluorescent tubes were invented in 1944. Instant start simply uses a high enough voltage to break down the gas column and thereby start arc conduction. Once the high-voltage spark "strikes" the arc, the current is boosted until a glow discharge forms.
As the lamp warms and pressure increases, the current continues to rise and both resistance and voltage fall, until the mains or line voltage takes over and the discharge becomes an arc. These tubes have no filaments and can be identified by a single pin at each end of the tube (for common lamps; compact cold-cathode lamps may also have a single pin, but operate from a transformer rather than a ballast). The lamp holders have a "disconnect" socket at the low-voltage end which disconnects the ballast when the tube is removed, to prevent electric shock. Instant-start lamps are slightly more energy efficient than rapid start, because they do not constantly send a heating current to the cathodes during operation, but the cold-cathode starting increases sputter, and they take much longer to transition from a glow discharge to an arc during warm-up, so their lifespan is typically about half that of comparable rapid-start lamps.

Rapid start

Because the formation of an arc requires the thermionic emission of large quantities of electrons from the cathode, rapid start ballast designs provide windings within the ballast that continuously warm the cathode filaments. These ballasts usually operate at a lower arc voltage than the instant start design; since no inductive voltage spike is produced for starting, the lamps must be mounted near a grounded (earthed) reflector to allow the glow discharge to propagate through the tube and initiate the arc discharge via capacitive coupling. In some lamps a grounded "starting aid" strip is attached to the outside of the lamp glass. This ballast type is incompatible with the European energy saver T8 fluorescent lamps because these lamps require a higher starting voltage than the open-circuit voltage of rapid start ballasts.

Quick-start

Quick-start ballasts use a small auto-transformer to heat the filaments when power is first applied. When an arc strikes, the filament heating power is reduced and the tube will start within half a second.
The auto-transformer is either combined with the ballast or may be a separate unit. Tubes need to be mounted near an earthed metal reflector in order for them to strike. Quick-start ballasts are more common in commercial installations because of lower maintenance costs: a quick-start ballast eliminates the need for a starter switch, a common source of lamp failures. Nonetheless, quick-start ballasts are also used in domestic (residential) installations because of the desirable feature that the light turns on nearly immediately after power is applied (when a switch is turned on). Quick-start ballasts are used only on 240 V circuits and are designed for use with the older, less efficient T12 tubes.

Semi-resonant start

The semi-resonant start circuit was invented by Thorn Lighting for use with T12 fluorescent tubes. This method uses a double wound transformer and a capacitor. With no arc current, the transformer and capacitor resonate at line frequency and generate about twice the supply voltage across the tube, and a small electrode heating current. This tube voltage is too low to strike the arc with cold electrodes, but as the electrodes heat up to thermionic emission temperature, the tube striking voltage falls below that of the ringing voltage, and the arc strikes. As the electrodes heat, the lamp slowly, over three to five seconds, reaches full brightness. As the arc current increases and tube voltage drops, the circuit provides current limiting. Semi-resonant start circuits are mainly restricted to use in commercial installations because of the higher initial cost of circuit components. However, there are no starter switches to be replaced, and cathode damage is reduced during starting, making lamps last longer and reducing maintenance costs. Because of the high open-circuit tube voltage, this starting method is particularly good for starting tubes in cold locations.
Additionally, the circuit power factor is almost 1.0, and no additional power factor correction is needed in the lighting installation. As the design requires that twice the supply voltage must be lower than the cold-cathode striking voltage (or the tubes would erroneously instant-start), this design cannot be used with AC power unless the tubes are at least length. Semi-resonant start fixtures are generally incompatible with energy saving T8 retrofit tubes, because such tubes have a higher starting voltage than T12 lamps and may not start reliably, especially in low temperatures. Recent proposals in some countries to phase out T12 tubes will reduce the application of this starting method.

Electronic ballasts

Electronic ballasts employ transistors to change the supply frequency into high-frequency AC while regulating the current flow in the lamp. These ballasts take advantage of the higher efficacy of lamps, which rises by almost 10% at , compared to efficacy at normal power frequency. When the AC period is shorter than the relaxation time to de-ionize mercury atoms in the discharge column, the discharge stays closer to optimum operating condition. Electronic ballasts convert supply frequency AC power to variable frequency AC. The conversion can reduce lamp brightness modulation at twice the power supply frequency. Low-cost ballasts contain only a simple oscillator and series resonant LC circuit. This principle is called the current resonant inverter circuit. After a short time the voltage across the lamp reaches about 1 kV and the lamp instant-starts in cold cathode mode. The cathode filaments are still used for protection of the ballast from overheating if the lamp does not ignite. A few manufacturers use positive temperature coefficient (PTC) thermistors to disable instant starting and give some time to preheat the filaments. More complex electronic ballasts use programmed start.
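The series resonant LC circuit mentioned above has resonant frequency f0 = 1/(2π√(LC)); a minimal sketch with purely hypothetical component values:

```python
import math

# Series LC resonance: f0 = 1 / (2π √(L·C)). Component values below are
# purely hypothetical, chosen only to land in the tens-of-kHz range
# typical of electronic ballasts.
def resonant_frequency(l_henry, c_farad):
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

L, C = 1.5e-3, 4.7e-9   # 1.5 mH choke, 4.7 nF capacitor (assumed)
f0 = resonant_frequency(L, C)
print(f"f0 ≈ {f0 / 1000:.0f} kHz")
```

Operating near this resonance is what lets the circuit develop the roughly 1 kV starting voltage across the lamp; once the arc strikes, the loaded circuit detunes and current is limited.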
The output frequency is started above the resonance frequency of the output circuit of the ballast; after the filaments are heated, the frequency is rapidly decreased. If the frequency approaches the resonant frequency of the ballast, the output voltage will increase so much that the lamp will ignite. If the lamp does not ignite, an electronic circuit stops the operation of the ballast. Many electronic ballasts are controlled by a microcontroller, and these are sometimes called digital ballasts. Digital ballasts can apply quite complex logic to lamp starting and operation. This enables functions such as testing for broken electrodes and missing tubes before attempting to start, detection of tube replacement, and detection of tube type, such that a single ballast can be used with several different tubes. Features such as dimming can be included in the embedded microcontroller software, and can be found in various manufacturers' products. Since their introduction in the 1990s, high-frequency ballasts have been used in general lighting fixtures with either rapid start or pre-heat lamps. These ballasts convert the incoming power to an output frequency in excess of . This increases lamp efficiency. These ballasts operate with voltages that can be almost 600 volts, requiring some consideration in housing design, and can cause a minor limitation in the length of the wire leads from the ballast to the lamp ends.

End of life

The life expectancy of a fluorescent lamp is primarily limited by the life of the cathode electrodes. To sustain an adequate current level, the electrodes are coated with an emission mixture of metal oxides. Every time the lamp is started, and during operation, a small amount of the cathode coating is sputtered off the electrodes by the impact of electrons and heavy ions within the tube. The sputtered material collects on the walls of the tube, darkening it. The starting method and frequency affect cathode sputtering.
A filament may also break, disabling the lamp. Low-mercury designs of lamps may fail when mercury is absorbed by the glass tube, phosphor, and internal components, and is no longer available to vaporize in the fill gas. Loss of mercury initially causes an extended warm-up time to full light output, and finally causes the lamp to glow a dim pink when the argon gas takes over as the primary discharge. Subjecting the tube to asymmetric current flow effectively operates it under a DC bias and causes asymmetric distribution of mercury ions along the tube. The localized depletion of mercury vapor pressure manifests itself as pink luminescence of the base gas in the vicinity of one of the electrodes, and the operating lifetime of the lamp may be dramatically shortened. This can be an issue with some poorly designed inverters. The phosphors lining the lamp degrade with time as well, until a lamp no longer produces an acceptable fraction of its initial light output. Failure of the integral electronic ballast of a compact fluorescent bulb will also end its usable life.

Phosphors and the spectrum of emitted light

The spectrum of light emitted from a fluorescent lamp is the combination of light directly emitted by the mercury vapor, and light emitted by the phosphorescent coating. The spectral lines from the mercury emission and the phosphorescence effect give a combined spectral distribution of light that is different from those produced by incandescent sources. The relative intensity of light emitted in each narrow band of wavelengths over the visible spectrum is in different proportions compared to that of an incandescent source. Colored objects are perceived differently under light sources with differing spectral distributions. For example, some people find the color rendition produced by some fluorescent lamps to be harsh and displeasing. A healthy person can sometimes appear to have an unhealthy skin tone under fluorescent lighting.
The extent to which this phenomenon occurs is related to the light's spectral composition, and may be gauged by its color rendering index (CRI).

Color temperature

Correlated color temperature (CCT) is a measure of the "shade" of whiteness of a light source compared with a blackbody. Typical incandescent lighting is 2700 K, which is yellowish-white. Halogen lighting is 3000 K. Fluorescent lamps are manufactured to a chosen CCT by altering the mixture of phosphors inside the tube. Warm-white fluorescents have a CCT of 2700 K and are popular for residential lighting. Neutral-white fluorescents have a CCT of 3000 K or 3500 K. Cool-white fluorescents have a CCT of 4100 K and are popular for office lighting. Daylight fluorescents have a CCT of 6500 K, which is bluish-white.

Color rendering index

Color rendering index (CRI) is an attempt to measure the ability of a light source to reveal the colors of various objects faithfully in comparison to a black body radiator. Colors can be perceived using light from a source, relative to light from a reference source such as daylight or a blackbody of the same color temperature. By definition, an incandescent lamp has a CRI of 100. Real-life fluorescent tubes achieve CRIs of anywhere from 50 to 98. Fluorescent lamps with low CRI have phosphors that emit too little red light. Skin appears less pink, and hence "unhealthy" compared with incandescent lighting. Colored objects appear muted. For example, a low CRI 6800 K halophosphate tube (an extreme example) will make reds appear dull red or even brown. Since the eye is relatively less efficient at detecting red light, an improvement in color rendering index, with increased energy in the red part of the spectrum, may reduce the overall luminous efficacy. Lighting arrangements use fluorescent tubes in an assortment of tints of white. Mixing tube types within fittings can improve the color reproduction of lower quality tubes.
Phosphor composition

Some of the least pleasant light comes from tubes containing the older, calcium halophosphate phosphors (chemical formula Ca5(PO4)3(F, Cl):Sb3+, Mn2+). This phosphor mainly emits yellow and blue light, and relatively little green and red. In the absence of a reference, this mixture appears white to the eye, but the light has an incomplete spectrum. The color rendering index (CRI) of such lamps is around 60. Since the 1990s, higher-quality fluorescent lamps use a rare-earth tri-phosphor mixture, based on europium and terbium ions, which has emission bands more evenly distributed over the spectrum of visible light, with peaks in the red, green and blue. Triphosphor tubes give a more natural color reproduction to the human eye. The CRI of such lamps is typically 85.

Applications

Fluorescent lamps come in many shapes and sizes. Many compact fluorescent lamps integrate the auxiliary electronics into the base of the lamp, allowing them to fit into a regular light bulb socket. In US residences, fluorescent lamps are mostly found in kitchens, basements, or garages. Schools and businesses find the cost savings of fluorescent lamps to be significant and rarely use incandescent lights. Electricity costs, tax incentives and building codes result in greater use in locales such as California. Fluorescent use is declining, supplanted by LED lighting, which is more energy efficient and does not contain mercury. In other countries, residential use of fluorescent lighting varies depending on the price of energy, financial and environmental concerns of the local population, and acceptability of the light output. In East and Southeast Asia incandescent bulbs are rare in buildings. Many countries are encouraging the phase-out of incandescent light bulbs and substitution with other types of energy-efficient lamps. In addition to general lighting, special fluorescent lights are often used in stage lighting for film and video production.
They are cooler than traditional halogen light sources, and use high-frequency ballasts to prevent video flickering and high color-rendition index lamps to approximate daylight color temperatures.

Comparison to incandescent lamps

Luminous efficacy

Fluorescent lamps convert more of the input power to visible light than incandescent lamps. A typical 100 watt tungsten filament incandescent lamp may convert only 5% of its power input to visible white light (400–700 nm wavelength), whereas typical fluorescent lamps convert about 22% of the power input to visible white light. The efficacy of fluorescent tubes ranges from about 16 lumens per watt for a 4 watt tube with an ordinary ballast to over 100 lumens per watt with a modern electronic ballast, commonly averaging 50 to 67 lm/W overall. Ballast loss can be about 25% of the lamp power with magnetic ballasts, and around 10% with electronic ballasts. Fluorescent lamp efficacy is dependent on lamp temperature at the coldest part of the lamp. In T8 lamps this is in the center of the tube. In T5 lamps this is at the end of the tube with the text stamped on it. The ideal temperature for a T8 lamp is while the T5 lamp is ideally at .

Life

Typically a fluorescent lamp will last 10 to 20 times as long as an equivalent incandescent lamp when operated several hours at a time. Under standard test conditions fluorescent lamps last 6,000 to 90,000 hours (2 to 31 years at 8 hours per day). The higher initial cost of a fluorescent lamp compared with an incandescent lamp is usually compensated for by lower energy consumption over its life.

Lower luminance

Compared with an incandescent lamp, a fluorescent tube is a more diffuse and physically larger light source. In suitably designed lamps, light can be more evenly distributed without a point source of glare such as seen from an undiffused incandescent filament; the lamp is large compared to the typical distance between lamp and illuminated surfaces.
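The efficacy and ballast-loss figures above can be combined into a rough side-by-side comparison. The specific ratings below are illustrative assumptions (a 32 W tube at 90 lm/W, with ballast loss expressed as a fraction of lamp power as in the text, versus incandescent at 15 lm/W):

```python
def system_efficacy(lamp_watts, lamp_lm_per_w, ballast_loss_frac):
    """Efficacy measured at the wall, counting ballast loss as a
    fraction of lamp power (as quoted in the text)."""
    lumens = lamp_watts * lamp_lm_per_w
    input_watts = lamp_watts * (1 + ballast_loss_frac)
    return lumens / input_watts

lumens = 32 * 90                          # 2880 lm from the tube itself
print(system_efficacy(32, 90, 0.25))      # magnetic ballast: 72 lm/W at the wall
print(system_efficacy(32, 90, 0.10))      # electronic ballast: ~81.8 lm/W
print(lumens / 15)                        # incandescent watts for the same light: 192 W
```

Even with the least efficient ballast, the fluorescent system here delivers the same light for roughly a fifth of the incandescent power, which is the source of the cost savings mentioned above.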
Lower heat

Fluorescent lamps give off about one-fifth the heat of equivalent incandescent lamps. This greatly reduces the size, cost and energy consumption by air conditioning for office buildings that typically have many lights and few windows.

Disadvantages

Frequent switching

Frequent switching (more than every 3 hours) will shorten the life of lamps. Each start cycle slightly erodes the electron-emitting surface of the cathodes; when all the emission material is gone, the lamp cannot start with the available ballast voltage. Fixtures for flashing lights (such as for advertising) use a ballast that maintains cathode temperature when the arc is off, preserving the life of the lamp. The extra energy used to start a fluorescent lamp is equivalent to a few seconds of normal operation; it is more energy-efficient to switch off lamps when not required for several minutes.

Mercury content

If a fluorescent lamp is broken, a very small amount of mercury can contaminate the surrounding environment. About 99% of the mercury is typically contained in the phosphor, especially on lamps that are near the end of their life. Broken lamps may release mercury if not cleaned with correct methods. Due to the mercury content, discarded fluorescent lamps must be treated as hazardous waste. For large users of fluorescent lamps, recycling services are available in some areas, and may be required by regulation. In some areas, recycling is also available to consumers.

Ultraviolet emission

Fluorescent lamps emit a small amount of ultraviolet (UV) light. A 1993 study in the US found that ultraviolet exposure from sitting under fluorescent lights for eight hours is equivalent to one minute of sun exposure. Ultraviolet radiation from compact fluorescent lamps may exacerbate symptoms in photosensitive individuals. Museum artifacts may need protection from UV light to prevent degradation of pigments or textiles.
Ballast

Fluorescent lamps require a ballast to stabilize the current through the lamp, and to provide the initial striking voltage required to start the arc discharge. Often one ballast is shared between two or more lamps. Electromagnetic ballasts can produce an audible humming or buzzing noise. In North America, magnetic ballasts are usually filled with a tar-like potting compound to reduce emitted noise. Hum is eliminated in lamps with a high-frequency electronic ballast. Energy lost in magnetic ballasts is around 10% of lamp input power according to GE literature from 1978. Electronic ballasts reduce this loss.

Power quality and radio interference

Simple inductive fluorescent lamp ballasts have a power factor of less than unity. Inductive ballasts can be connected to, or may include, power factor correction capacitors. Simple electronic ballasts may also have low power factor due to their rectifier input stage. Fluorescent lamps are a non-linear load and generate harmonic currents in the electrical power supply. The arc within the lamp may generate radio frequency noise, which can be conducted through power wiring. Suppression of radio interference is possible. Very good suppression is possible, but adds to the cost of the fluorescent fixtures. Fluorescent lamps near end of life can present a serious radio frequency interference hazard. Oscillations are generated from the negative differential resistance of the arc, and the current flow through the tube can form a tuned circuit whose frequency depends on path length.

Operating temperature

Fluorescent lamps operate best around room temperature. At lower or higher temperatures, efficacy decreases. At below-freezing temperatures standard lamps may not start. Special lamps may be used for reliable service outdoors in cold weather.

Lamp shape

Fluorescent tubes are long, low-luminance sources compared with high intensity discharge lamps, incandescent and halogen lamps and high power LEDs.
However, low luminous intensity of the emitting surface is useful because it reduces glare. Lamp fixture design must control light from a long tube instead of a compact globe. The compact fluorescent lamp (CFL) replaces regular incandescent bulbs in many light fixtures where space permits.

Flicker

Fluorescent lamps with magnetic ballasts flicker at a normally unnoticeable frequency of 100 or 120 Hz and this flickering can cause problems for some individuals with light sensitivity; they are listed as problematic for some individuals with autism, epilepsy, lupus, chronic fatigue syndrome, Lyme disease, and vertigo. A stroboscopic effect can be noticed, where something spinning at just the right speed may appear stationary if illuminated solely by a single fluorescent lamp. This effect is eliminated by paired lamps operating on a lead-lag ballast. Unlike a true strobe lamp, the light level drops in appreciable time and so substantial "blurring" of the moving part would be evident. Fluorescent lamps may produce flicker at the power supply frequency (50 or 60 Hz), which is noticeable by more people. This happens if a damaged or failed cathode results in slight rectification and uneven light output in positive and negative going AC cycles. Power frequency flicker can be emitted from the ends of the tubes, if each tube electrode produces a slightly different light output pattern on each half-cycle. Flicker at power frequency is more noticeable in the peripheral vision than it is when viewed directly. Near the end of life, fluorescent lamps can start flickering at a frequency lower than the power frequency. This is due to instability in the negative resistance of arc discharge, which can be from a bad lamp or ballast or poor connection. New fluorescent lamps may show a twisting spiral pattern of light in a part of the lamp. This effect is due to loose cathode material and usually disappears after a few hours of operation.
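The stroboscopic effect described above is easy to quantify: a rotating part looks frozen when it completes a whole number of turns between light pulses. A short sketch, assuming 100 Hz flicker from a magnetic ballast on 50 Hz mains (120 Hz on 60 Hz mains):

```python
def frozen_rpms(flicker_hz, max_rpm=30000):
    """Rotation speeds that complete an integer number of turns per
    light pulse, so a single flickering lamp makes them look stationary."""
    rpms = []
    k = 1
    while 60 * flicker_hz * k <= max_rpm:
        rpms.append(60 * flicker_hz * k)
        k += 1
    return rpms

print(frozen_rpms(100))  # 50 Hz mains -> [6000, 12000, 18000, 24000, 30000]
print(frozen_rpms(120))  # 60 Hz mains -> [7200, 14400, 21600, 28800]
```

Speeds near, but not exactly at, these values appear to creep slowly forwards or backwards, which is why the effect is a recognized hazard around rotating machinery.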
Electromagnetic ballasts may also cause problems for video recording as there can be a so-called beat effect between the video frame rate and the fluctuations in intensity of the fluorescent lamp. Fluorescent lamps with electronic ballasts do not flicker, since above about 5 kHz, the excited electron state half-life is longer than a half cycle, and light production becomes continuous. Operating frequencies of electronic ballasts are selected to avoid interference with infrared remote controls. Poor quality or faulty electronic ballasts may have considerable 100/120 Hz modulation of the light.

Dimming

Fluorescent light fixtures cannot be connected to dimmer switches intended for incandescent lamps. Two effects are responsible for this: the waveform of the voltage emitted by a standard phase-control dimmer interacts badly with many ballasts, and it becomes difficult to sustain an arc in the fluorescent tube at low power levels. Dimming installations require a compatible dimming ballast. Some models of compact fluorescent lamps can be dimmed; in the United States, such lamps are identified as complying with UL standard 1993.

Lamp sizes and designations

Systematic nomenclature identifies mass-market lamps as to general shape, power rating, length, color, and other electrical and illuminating characteristics. In the United States and Canada, lamps are typically identified by a code such as FxxTy, where F is for fluorescent, the first number (xx) indicates either the power in watts or length in inches, the T indicates that the shape of the bulb is tubular, and the last number (y) is the diameter in eighths of an inch (sometimes in millimeters, rounded up to the nearest millimeter). Typical diameters are T12 or T38 ( inches or 38 mm) for residential lamps, T8 or T26 (1 inch or 25 mm) for commercial energy-saving lamps.

Overdriving

Overdriving a fluorescent lamp is a method of getting more light from each tube than is obtained under rated conditions.
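The FxxTy scheme described above lends itself to a small parser. This is only a sketch of the convention as stated, ignoring the color and other suffixes that real part numbers append, and handling only the eighths-of-an-inch diameter form:

```python
import re

def parse_lamp_code(code):
    """Split an FxxTy designation into (watts or inches, diameter in
    eighths of an inch, diameter in millimetres)."""
    m = re.fullmatch(r"F(\d+)T(\d+)", code)
    if m is None:
        raise ValueError(f"not an FxxTy designation: {code}")
    power_or_length = int(m.group(1))
    eighths = int(m.group(2))
    return power_or_length, eighths, eighths / 8 * 25.4

print(parse_lamp_code("F32T8"))   # (32, 8, 25.4) -- 1-inch commercial tube
print(parse_lamp_code("F40T12"))  # diameter ~38 mm (1.5 in), residential tube
```

Note that the first field is ambiguous by design (watts or inches), so the parser simply passes it through; resolving it requires knowing the lamp family.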
ODNO (Overdriven Normal Output) fluorescent tubes are generally used when there is not enough room to put in more bulbs to increase the light. The method is effective, but generates some additional issues. This technique has become popular among aquatic gardeners as a cost-effective way to add more light to their aquariums. Overdriving is done by rewiring lamp fixtures to increase lamp current; however, lamp life is reduced.

Other fluorescent lamps

Black light

Blacklights are a subset of fluorescent lamps that are used to provide UVA light (at about 360 nm wavelength). They are built in the same fashion as conventional fluorescent lamps but the glass tube is coated with a phosphor that converts the short-wave UV within the tube to long-wave UV rather than to visible light. They are used to provoke fluorescence (to provide dramatic effects using blacklight paint and to detect materials such as urine and certain dyes that would be invisible in visible light) as well as to attract insects to bug zappers. So-called blacklite blue lamps are also made from more expensive deep purple glass known as Wood's glass rather than clear glass. The deep purple glass filters out most of the visible colors of light directly emitted by the mercury-vapor discharge, producing proportionally less visible light compared with UV light. This allows UV-induced fluorescence to be seen more easily (thereby allowing blacklight posters to seem much more dramatic). The blacklight lamps used in bug zappers do not require this refinement so it is usually omitted in the interest of cost; they are called simply blacklite (and not blacklite blue).

Tanning lamp

The lamps used in tanning beds contain a different phosphor blend (typically 3 to 5 or more phosphors) that emits both UVA and UVB, provoking a tanning response in most human skin. Typically, the output is rated as 3–10% UVB (5% most typical) with the remaining UV as UVA.
These are mainly high-output 100 W lamps, although 160 W very-high-output lamps are somewhat common. One common phosphor used in these lamps is lead-activated barium disilicate, but a europium-activated strontium fluoroborate is also used. Early lamps used thallium as an activator, but emissions of thallium during manufacture were toxic.

UVB medical lamps

The lamps used in phototherapy contain a phosphor that emits only UVB ultraviolet light. There are two types: broadband UVB that gives 290–320 nanometres with a peak wavelength of 306 nm, and narrowband UVB that gives 311–313 nanometres. Because of the longer wavelength, the narrowband UVB bulbs do not cause erythema in the skin like the broadband. The narrowband bulbs require a 10–20 times higher dose to the skin, and they require more bulbs and longer exposure time. The narrowband is good for psoriasis, eczema (atopic dermatitis), vitiligo, lichen planus, and some other skin diseases. The broadband is better for increasing Vitamin D3 in the body.

Grow lamp

Grow lamps contain phosphor blends that encourage photosynthesis, growth, or flowering in plants, algae, photosynthetic bacteria, and other light-dependent organisms. These often emit light primarily in the red and blue color range, which is absorbed by chlorophyll and used for photosynthesis in plants.

Infrared lamps

Lamps can be made with a lithium metaluminate phosphor activated with iron. This phosphor has peak emissions between 675 and 875 nanometers, with lesser emissions in the deep red part of the visible spectrum.

Bilirubin lamps

Deep blue light generated from a europium-activated phosphor is used in the light therapy treatment of jaundice; light of this color penetrates skin and helps in the breakup of excess bilirubin.

Germicidal lamp

Germicidal lamps contain no phosphor at all, making them mercury vapor gas discharge lamps rather than fluorescent. Their tubes are made of fused quartz transparent to the UVC light emitted by the mercury discharge.
The 254 nm UVC emitted by these tubes will kill germs and the 184.45 nm far UV will ionize oxygen to ozone. Lamps labeled OF block the 184.45 nm far UV and do not produce significant ozone. In addition the UVC can cause eye and skin damage. They are sometimes used by geologists to identify certain species of minerals by the color of their fluorescence when fitted with filters that pass the short-wave UV and block visible light produced by the mercury discharge. They are also used in some EPROM erasers. Germicidal lamps have designations beginning with G, for example G30T8 for a 30-watt, diameter, long germicidal lamp (as opposed to an F30T8, which would be the fluorescent lamp of the same size and rating).

Electrodeless lamp

Electrodeless induction lamps are fluorescent lamps without internal electrodes. They have been commercially available since 1990. A current is induced into the gas column using electromagnetic induction. Because the electrodes are usually the life-limiting element of fluorescent lamps, such electrodeless lamps can have a very long service life, although they also have a higher purchase price.

Cold-cathode fluorescent lamp

Cold-cathode fluorescent lamps were used as backlighting for LCDs in computer monitors and televisions before the use of LED-backlit LCDs. They were also popular with computer case modders.

Science demonstrations

Fluorescent lamps can be illuminated by means other than a proper electrical connection. These other methods, however, result in very dim or very short-lived illumination, and so are seen mostly in science demonstrations. Static electricity or a Van de Graaff generator will cause a lamp to flash momentarily as it discharges a high-voltage capacitance. A Tesla coil will pass high-frequency current through the tube, and since it has a high voltage as well, the gases within the tube will ionize and emit light. This also works with plasma globes.
Capacitive coupling with high-voltage power lines can light a lamp continuously at low intensity, depending on the intensity of the electric field.
Miscarriage
Miscarriage, also known in medical terms as a spontaneous abortion, is an end to pregnancy resulting in the loss and expulsion of an embryo or fetus from the womb before it can survive independently. Miscarriage before 6 weeks of gestation is defined as biochemical loss by ESHRE. Once ultrasound or histological evidence shows that a pregnancy has existed, the term used is clinical miscarriage, which can be "early" (before 12 weeks) or "late" (between 12 and 21 weeks). Spontaneous fetal termination after 20 weeks of gestation is known as a stillbirth. The term miscarriage is sometimes used to refer to all forms of pregnancy loss and pregnancy with abortive outcomes before 20 weeks of gestation. The most common symptom of a miscarriage is vaginal bleeding, with or without pain. Tissue and clot-like material may leave the uterus and pass through and out of the vagina. Risk factors for miscarriage include being an older parent, previous miscarriage, exposure to tobacco smoke, obesity, diabetes, thyroid problems, and drug or alcohol use. About 80% of miscarriages occur in the first 12 weeks of pregnancy (the first trimester). The underlying cause in about half of cases involves chromosomal abnormalities. Diagnosis of a miscarriage may involve checking to see if the cervix is open or sealed, testing blood levels of human chorionic gonadotropin (hCG), and an ultrasound. Other conditions that can produce similar symptoms include an ectopic pregnancy and implantation bleeding. Prevention is occasionally possible with good prenatal care. Avoiding drugs (including alcohol), infectious diseases, and radiation may decrease the risk of miscarriage. No specific treatment is usually needed during the first 7 to 14 days. Most miscarriages will be completed without additional interventions. Occasionally the medication misoprostol or a procedure such as vacuum aspiration is used to remove the remaining tissue. 
Women who have a blood type of rhesus negative (Rh negative) may require Rho(D) immune globulin. Pain medication may be beneficial. Emotionally, afterwards, sadness, anxiety or guilt may occur. Emotional support may help with processing the loss. Miscarriage is the most common complication of early pregnancy. Among women who know they are pregnant, the miscarriage rate is roughly 10% to 20%, while rates among all fertilised eggs are around 30% to 50%. In those under the age of 35, the risk is about 10% while in those over the age of 40, the risk is about 45%. Risk begins to increase around the age of 30. About 5% of women have two miscarriages in a row. Recurrent miscarriage (also referred to medically as Recurrent Spontaneous Abortion or RSA) may also be considered a form of infertility.

Terminology

Some recommend not using the term "abortion" in discussions with those experiencing a miscarriage to decrease distress. In Britain, the term "miscarriage" has replaced any use of the term "spontaneous abortion" for pregnancy loss and in response to complaints of insensitivity towards women who had suffered such loss. An additional benefit of this change is reducing confusion among medical laymen, who may not realize that the term "spontaneous abortion" refers to a naturally occurring medical phenomenon and not the intentional termination of pregnancy. The medical terminology applied to experiences during early pregnancy has changed over time. Before the 1980s, health professionals used the phrase spontaneous abortion for a miscarriage and induced abortion for a termination of the pregnancy. By the 1940s, the popular assumption that an abortion was an intentional and immoral or criminal action was sufficiently ingrained that pregnancy books had to explain that abortion was the then-popular technical jargon for miscarriages. In the 1960s, the use of the word miscarriage in Britain (instead of spontaneous abortion) occurred after changes in legislation.
In the late 1980s and 1990s, doctors became more conscious of their language about early pregnancy loss. Some medical authors advocated a change to the use of miscarriage instead of spontaneous abortion because they argued this would be more respectful and help ease a distressing experience. The change was being recommended in Britain in the late 1990s. In 2005 the European Society for Human Reproduction and Embryology (ESHRE) published a paper aiming to facilitate a revision of nomenclature used to describe early pregnancy events. Most affected women and family members refer to miscarriage as the loss of a baby, rather than an embryo or fetus, and healthcare providers are expected to respect and use the language that the person chooses. Clinical terms can suggest blame, increase distress, and even cause anger. Terms that are known to cause distress in those experiencing miscarriage include: abortion (including spontaneous abortion) rather than miscarriage, habitual aborter rather than a woman experiencing recurrent pregnancy loss, products of conception rather than baby, blighted ovum rather than early pregnancy loss or delayed miscarriage, cervical incompetence rather than cervical weakness, and evacuation of retained products of conception (ERPC) rather than surgical management of miscarriage. Using the word abortion for an involuntary miscarriage is generally considered confusing, "a dirty word", "stigmatized", and "an all-around hated term". Pregnancy loss is a broad term that is used for miscarriage, ectopic and molar pregnancies. The term foetal death applies variably in different countries and contexts, sometimes incorporating weight, and gestational age from 16 weeks in Norway, 20 weeks in the US and Australia, 24 weeks in the UK to 26 weeks in Italy and Spain. A foetus that died before birth after this gestational age may be referred to as a stillbirth. 
Signs and symptoms

Signs of a miscarriage include vaginal spotting, abdominal pain, cramping, fluid, blood clots, and tissue passing from the vagina. Bleeding can be a symptom of miscarriage, but many women also have bleeding in early pregnancy and do not miscarry. Bleeding during the first half of pregnancy may be referred to as a threatened miscarriage. Of those who seek treatment for bleeding during pregnancy, about half will miscarry. Miscarriage may be detected during an ultrasound exam or through serial human chorionic gonadotropin (HCG) testing.

Risk factors

Miscarriage may occur for many reasons, not all of which can be identified. Risk factors are those things that increase the likelihood of having a miscarriage but do not necessarily cause a miscarriage. Up to 70 conditions, infections, medical procedures, lifestyle factors, occupational exposures, chemical exposure, and shift work are associated with increased risk for miscarriage. Some of these risks include endocrine, genetic, uterine, or hormonal abnormalities, reproductive tract infections, and tissue rejection caused by an autoimmune disorder.

Trimesters

First trimester

Most clinically apparent miscarriages (two-thirds to three-quarters in various studies) occur during the first trimester. About 30% to 40% of all fertilised eggs miscarry, often before the pregnancy is known. The embryo typically dies before the pregnancy is expelled; bleeding into the decidua basalis and tissue necrosis cause uterine contractions to expel the pregnancy. Early miscarriages can be due to a developmental abnormality of the placenta or other embryonic tissues. In some instances, an embryo does not form but other tissues do. This has been called a "blighted ovum". Successful implantation of the zygote into the uterus is most likely eight to ten days after fertilization. If the zygote has not been implanted by day ten, implantation becomes increasingly unlikely in subsequent days.
A chemical pregnancy is a pregnancy that was detected by testing but ends in miscarriage before or around the time of the next expected period. Chromosomal abnormalities are found in more than half of embryos miscarried in the first 13 weeks. Half of embryonic miscarriages (25% of all miscarriages) have an aneuploidy (abnormal number of chromosomes). Common chromosome abnormalities found in miscarriages include an autosomal trisomy (22–32%), monosomy X (5–20%), triploidy (6–8%), tetraploidy (2–4%), or other structural chromosomal abnormalities (2%). Genetic problems are more likely to occur with older parents; this may account for the higher rates observed in older women. Luteal phase progesterone deficiency may or may not be a contributing factor to miscarriage.

Second and third trimesters

Second-trimester losses may be due to maternal factors such as uterine malformation, growths in the uterus (fibroids), or cervical problems. These conditions also may contribute to premature birth. Unlike first-trimester miscarriages, second-trimester miscarriages are less likely to be caused by a genetic abnormality; chromosomal aberrations are found in a third of cases. Infection during the third trimester can cause a miscarriage.

Age

Miscarriage is least common for mothers in their twenties, for whom around 12% of known pregnancies end in miscarriage. Risk rises with age: around 14% for women aged 30–34; 18% for those 35–39; 37% for those 40–44; and 65% for those over 45. Women younger than 20 have slightly increased miscarriage risk, with around 16% of known pregnancies ending in miscarriage. Miscarriage risk also rises with paternal age, although the effect is less pronounced than for maternal age. The risk is lowest for men under 40 years old. For men aged 40–44, the risk is around 23% higher. For men over 45, the risk is 43% higher.
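The maternal-age figures above can be collected into a simple lookup. The percentages are those quoted in this section; the exact boundaries between bands are chosen here for illustration only:

```python
def miscarriage_risk_percent(maternal_age):
    """Approximate share of known pregnancies ending in miscarriage,
    by maternal age, using the figures quoted in the text."""
    if maternal_age < 20:
        return 16
    if maternal_age < 30:
        return 12   # lowest-risk band (mothers in their twenties)
    if maternal_age < 35:
        return 14
    if maternal_age < 40:
        return 18
    if maternal_age < 45:
        return 37
    return 65

print(miscarriage_risk_percent(27))  # 12
print(miscarriage_risk_percent(42))  # 37
```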
Obesity, eating disorders and caffeine

Not only is obesity associated with miscarriage; it can result in sub-fertility and other adverse pregnancy outcomes. Recurrent miscarriage is also related to obesity. Women with bulimia nervosa and anorexia nervosa may have a greater risk for miscarriage. Nutrient deficiencies have not been found to impact miscarriage rates but hyperemesis gravidarum sometimes precedes a miscarriage. Caffeine consumption also has been correlated to miscarriage rates, at least at higher levels of intake. However, such higher rates are statistically significant only in certain circumstances. Vitamin supplementation has generally not been shown to be effective in preventing miscarriage. Chinese traditional medicine has not been found to prevent miscarriage.

Endocrine disorders

Disorders of the thyroid may affect pregnancy outcomes. Related to this, iodine deficiency is strongly associated with an increased risk of miscarriage. The risk of miscarriage is increased in those with poorly controlled insulin-dependent diabetes mellitus. Women with well-controlled diabetes have the same risk of miscarriage as those without diabetes.

Food poisoning

Ingesting food that has been contaminated with listeriosis, toxoplasmosis, and salmonella is associated with an increased risk of miscarriage.

Amniocentesis and chorionic villus sampling

Amniocentesis and chorionic villus sampling (CVS) are procedures conducted to assess the fetus. A sample of amniotic fluid is obtained by the insertion of a needle through the abdomen and into the uterus. Chorionic villus sampling is a similar procedure with a sample of tissue removed rather than fluid. These procedures are not associated with pregnancy loss during the second trimester but they are associated with miscarriages and birth defects in the first trimester. Miscarriage caused by invasive prenatal diagnosis (chorionic villus sampling (CVS) and amniocentesis) is rare (about 1%).
Surgery The effects of surgery on pregnancy, including those of bariatric surgery, are not well known. Abdominal and pelvic surgery are not risk factors for miscarriage. Ovarian tumours and cysts that are removed have not been found to increase the risk of miscarriage. The exception to this is the removal of the corpus luteum from the ovary, which can cause fluctuations in the hormones necessary to maintain the pregnancy. Medications There is no significant association between antidepressant medication exposure and miscarriage. The risk of miscarriage is not likely decreased by discontinuing SSRIs before pregnancy. Some available data suggest that there is a small increased risk of miscarriage for women taking any antidepressant, though this risk becomes less statistically significant when excluding studies of poor quality. Medicines that increase the risk of miscarriage include:
retinoids
nonsteroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen
misoprostol
methotrexate
statins
Immunisations Immunisations have not been found to cause miscarriage. Live vaccinations, like the MMR vaccine, can theoretically cause damage to the fetus, as the live virus can cross the placenta and potentially increase the risk of miscarriage. Therefore, the Centers for Disease Control and Prevention (CDC) recommends against pregnant women receiving live vaccinations. However, there is no clear evidence that live vaccinations increase the risk of miscarriage or fetal abnormalities. Live vaccinations include MMR, varicella, certain types of the influenza vaccine, and rotavirus. Treatments for cancer Ionising radiation levels given to a woman during cancer treatment cause miscarriage. Exposure can also impact fertility. The use of chemotherapeutic drugs to treat childhood cancer increases the risk of future miscarriage. 
Pre-existing diseases Several pre-existing diseases in pregnancy can potentially increase the risk of miscarriage, including diabetes, endometriosis, polycystic ovary syndrome (PCOS), hypothyroidism, certain infectious diseases, and autoimmune diseases. Women with endometriosis report a 76% to 298% increase in miscarriages versus their non-afflicted peers, with the range depending on the severity of the disease. PCOS may increase the risk of miscarriage. Two studies suggested that treatment with the drug metformin significantly lowers the rate of miscarriage in women with PCOS, but the quality of these studies has been questioned. Metformin treatment in pregnancy has not been shown to be safe. In 2007, the Royal College of Obstetricians and Gynaecologists also recommended against the use of the drug to prevent miscarriage. Thrombophilias, or defects in coagulation and bleeding, were once thought to be a risk factor for miscarriage, but this has subsequently been questioned. Severe cases of hypothyroidism increase the risk of miscarriage. The effect of milder cases of hypothyroidism on miscarriage rates has not been established. A condition called luteal phase defect (LPD) is a failure of the uterine lining to be fully prepared for pregnancy. This can keep a fertilised egg from implanting or result in miscarriage. Mycoplasma genitalium infection is associated with an increased risk of preterm birth and miscarriage. Infections that can increase the risk of a miscarriage include rubella (German measles), cytomegalovirus, bacterial vaginosis, HIV, chlamydia, gonorrhoea, syphilis, and malaria. Immune status Autoimmunity is a possible cause of recurrent or late-term miscarriages. In the case of an autoimmune-induced miscarriage, the woman's body attacks the growing fetus or prevents normal pregnancy progression. Autoimmune disease may cause abnormalities in embryos, which in turn may lead to miscarriage. As an example, coeliac disease increases the risk of miscarriage by an odds ratio of approximately 1.4. 
A disruption in normal immune function can lead to the formation of antiphospholipid antibody syndrome. This will affect the ability to continue the pregnancy, and if a woman has repeated miscarriages, she can be tested for it. Approximately 15% of recurrent miscarriages are related to immunologic factors. The presence of anti-thyroid autoantibodies is associated with an increased risk with an odds ratio of 3.73 and 95% confidence interval 1.8–7.6. Having lupus also increases the risk of miscarriage. Immunohistochemical studies on decidual basalis and chorionic villi found that the imbalance of the immunological environment could be associated with recurrent pregnancy loss. Anatomical defects and trauma Fifteen per cent of women who have experienced three or more recurring miscarriages have some anatomical defect that prevents the pregnancy from being carried for the entire term. The structure of the uterus affects the ability to carry a child to term. Anatomical differences are common and can be congenital. In some women, cervical incompetence or cervical insufficiency occurs with the inability of the cervix to stay closed during the entire pregnancy. It does not cause first-trimester miscarriages. In the second trimester, it is associated with an increased risk of miscarriage. It is identified after a premature birth has occurred at about 16–18 weeks into the pregnancy. During the second trimester, major trauma can result in a miscarriage. Smoking Tobacco (cigarette) smokers have an increased risk of miscarriage. There is an increased risk regardless of which parent smokes, though the risk is higher when the gestational mother smokes. Morning sickness Nausea and vomiting of pregnancy (NVP, or morning sickness) are associated with a decreased risk. Several possible causes have been suggested for morning sickness but there is still no agreement. 
NVP may represent a defence mechanism which discourages the mother's ingestion of foods that are harmful to the fetus; according to this model, a lower frequency of miscarriage would be an expected consequence of the different food choices made by women experiencing NVP. Chemicals and occupational exposure Chemical and occupational exposures may have some effect on pregnancy outcomes. A cause-and-effect relationship can almost never be established. Chemicals implicated in increasing the risk of miscarriage include DDT, lead, formaldehyde, arsenic, benzene, and ethylene oxide. Video display terminals and ultrasound have not been found to affect the rates of miscarriage. In dental offices where nitrous oxide is used in the absence of anaesthetic gas scavenging equipment, there is a greater risk of miscarriage. For women who work with cytotoxic antineoplastic chemotherapeutic agents, there is a small increased risk of miscarriage. No increased risk for cosmetologists has been found. Other Alcohol increases the risk of miscarriage. Cocaine use increases the rate of miscarriage. Some infections have been associated with miscarriage. These include Ureaplasma urealyticum, Mycoplasma hominis, group B streptococci, HIV-1, and syphilis. Chlamydia trachomatis may increase the risk of miscarriage. Toxoplasmosis can cause a miscarriage. Subclinical infections of the lining of the womb, commonly known as chronic endometritis, are also associated with poor pregnancy outcomes, compared to women with treated chronic endometritis or no chronic endometritis. Diagnosis In the case of blood loss, pain, or both, transvaginal ultrasound is performed. If a viable intrauterine pregnancy is not found with ultrasound, blood tests (serial βHCG tests) can be performed to rule out ectopic pregnancy, which is a life-threatening situation. If hypotension, tachycardia, and anaemia are discovered, the exclusion of an ectopic pregnancy is important. 
A miscarriage may be confirmed by an obstetric ultrasound and by the examination of the passed tissue. When looking for microscopic pathologic symptoms, one looks for the products of conception. Microscopically, these include villi, trophoblast, fetal parts, and background gestational changes in the endometrium. When chromosomal abnormalities are found in more than one miscarriage, genetic testing of both parents may be done. Ultrasound criteria A review article in The New England Journal of Medicine based on a consensus meeting of the Society of Radiologists in Ultrasound in America (SRU) has suggested that miscarriage should be diagnosed only if any of the following criteria are met upon ultrasonography visualisation: a crown–rump length of at least 7 mm with no heartbeat; a mean gestational sac diameter of at least 25 mm with no embryo; absence of an embryo with a heartbeat at least two weeks after a scan that showed a gestational sac without a yolk sac; or absence of an embryo with a heartbeat at least 11 days after a scan that showed a gestational sac with a yolk sac. Classification A threatened miscarriage is any bleeding during the first half of pregnancy. At investigation, it may be found that the fetus remains viable and the pregnancy continues without further problems. An anembryonic pregnancy (also called an "empty sac" or "blighted ovum") is a condition where the gestational sac develops normally, while the embryonic part of the pregnancy is either absent or stops growing very early. This accounts for approximately half of miscarriages. All other miscarriages are classified as embryonic miscarriages, meaning that there is an embryo present in the gestational sac. Half of embryonic miscarriages have aneuploidy (an abnormal number of chromosomes). An inevitable miscarriage occurs when the cervix has already dilated, but the foetus has yet to be expelled. This usually will progress to a complete miscarriage. The foetus may or may not have cardiac activity. A complete miscarriage is when all products of conception have been expelled; these may include the trophoblast, chorionic villi, gestational sac, yolk sac, and fetal pole (embryo); or later in pregnancy the foetus, umbilical cord, placenta, amniotic fluid, and amniotic membrane. 
The presence of a pregnancy test that is still positive, as well as an empty uterus upon transvaginal ultrasonography, does, however, fulfil the definition of pregnancy of unknown location. Therefore, there may be a need for follow-up pregnancy tests to ensure that there is no remaining pregnancy, including ectopic pregnancy. An incomplete miscarriage occurs when some products of conception have been passed, but some remain inside the uterus. However, an increased distance between the uterine walls on transvaginal ultrasonography may also simply be an increased endometrial thickness and/or a polyp. The use of a Doppler ultrasound may be better in confirming the presence of significant retained products of conception in the uterine cavity. In cases of uncertainty, ectopic pregnancy must be excluded using techniques like serial beta-hCG measurements. A missed miscarriage is when the embryo or fetus has died, but a miscarriage has not yet occurred. It is also referred to as delayed miscarriage, silent miscarriage, or missed abortion. A septic miscarriage occurs when the tissue from a missed or incomplete miscarriage becomes infected, which carries the risk of spreading infection (septicaemia) and can be fatal. Recurrent miscarriage ("recurrent pregnancy loss" (RPL), "recurrent spontaneous abortion" (RSA), or "habitual abortion") is the occurrence of multiple consecutive miscarriages; the exact number used to diagnose recurrent miscarriage varies, but two is the minimum threshold to meet the criteria. If the proportion of pregnancies ending in miscarriage is 15% and miscarriages are assumed to be independent events, then the probability of two consecutive miscarriages is 2.25% and the probability of three consecutive miscarriages is 0.34%. The occurrence of recurrent pregnancy loss is 1%. A large majority (85%) of those who have had two miscarriages will conceive and carry normally afterwards. 
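The independence calculation above can be sketched numerically. The 15% per-pregnancy rate and the independence assumption come from the text; the model is illustrative only, since real miscarriage risks are not independent (which is why the observed rate of recurrent loss, 1%, exceeds the 0.34% the independent model predicts):

```python
# Illustrative check of the consecutive-miscarriage probabilities quoted above,
# under the stated simplifying assumption that each pregnancy ends in
# miscarriage independently with probability 0.15.
p = 0.15

p_two = p ** 2    # probability of two consecutive miscarriages
p_three = p ** 3  # probability of three consecutive miscarriages

print(f"two consecutive:   {p_two:.2%}")    # 2.25%
print(f"three consecutive: {p_three:.2%}")  # 0.34%
```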
The physical symptoms of a miscarriage vary according to the length of pregnancy, though most miscarriages cause pain or cramping. The size of blood clots and pregnancy tissue that are passed becomes larger with longer gestations. After 13 weeks' gestation, there is a higher risk of placenta retention. Prevention Prevention of a miscarriage can sometimes be accomplished by decreasing risk factors. This may include good prenatal care, avoiding drugs and alcohol, preventing infectious diseases, and avoiding X-rays. Identifying the cause of the miscarriage may help prevent future pregnancy loss, especially in cases of recurrent miscarriage. Often there is little a person can do to prevent a miscarriage. Vitamin supplementation before or during pregnancy has not been found to affect the risk of miscarriage. Progesterone has been shown to prevent miscarriage in women with 1) vaginal bleeding early in their current pregnancy and 2) a previous history of miscarriage. Non-modifiable risk factors Preventing a miscarriage in subsequent pregnancies may be enhanced with assessments of:
Immune status
Chemical and occupational exposures
Anatomical defects
Pre-existing or acquired disease in pregnancy
Polycystic ovary syndrome
Previous exposure to chemotherapy and radiation
Medications
Surgical history
Endocrine disorders
Genetic abnormalities
Modifiable risk factors Maintaining a healthy weight and good prenatal care can reduce the risk of miscarriage. Some risk factors can be minimized by avoiding the following:
Smoking
Cocaine use
Alcohol
Poor nutrition
Occupational exposure to agents that can cause miscarriage
Medications associated with miscarriage
Drug abuse
Management Women who miscarry early in their pregnancy usually do not require any subsequent medical treatment, but they can benefit from support and counseling. 
Most early miscarriages will be completed on their own; in other cases, medication treatment or aspiration of the products of conception can be used to remove the remaining tissue. While bed rest has been advocated to prevent miscarriage, this is not of benefit. Those who are experiencing or who have experienced a miscarriage benefit from the use of careful medical language. Significant distress can often be managed by the ability of the clinician to clearly explain terms without suggesting that the woman or couple are somehow to blame. Evidence to support Rho(D) immune globulin after a spontaneous miscarriage is unclear. In the UK, Rho(D) immune globulin is recommended in Rh-negative women after 12 weeks gestational age and before 12 weeks gestational age in those who need surgery or medication to complete the miscarriage. Methods No treatment is necessary for a diagnosis of complete miscarriage (so long as ectopic pregnancy is ruled out). In cases of an incomplete miscarriage, empty sac, or missed abortion there are three treatment options: watchful waiting, medical management, and surgical treatment. With no treatment (watchful waiting), most miscarriages (65–80%) will pass naturally within two to six weeks. This treatment avoids the possible side effects and complications of medications and surgery, but increases the risk of mild bleeding, the need for unplanned surgical treatment, and incomplete miscarriage. Medical treatment usually consists of using misoprostol (a prostaglandin) alone or in combination with mifepristone pre-treatment. These medications help the uterus to contract and expel the remaining tissue out of the body. This works within a few days in 95% of cases. Vacuum aspiration or sharp curettage can be used, with vacuum aspiration being lower-risk and more common. Delayed and incomplete miscarriage In delayed or incomplete miscarriage, treatment depends on the amount of tissue remaining in the uterus. 
Treatment can include surgical removal of the tissue with vacuum aspiration or misoprostol. Studies looking at the methods of anaesthesia for surgical management of incomplete miscarriage have not shown that any adaptation from normal practice is beneficial. Induced miscarriage An induced abortion may be performed by a qualified healthcare provider for women who cannot continue the pregnancy. Self-induced abortion performed by a woman or non-medical personnel can be dangerous and is still a cause of maternal mortality in some countries. In some locales, it is illegal or carries a heavy social stigma. Sex Some organisations recommend delaying sex after a miscarriage until the bleeding has stopped to decrease the risk of infection. However, there is not sufficient evidence for the routine use of antibiotics to try to avoid infection in incomplete abortion. Others recommend delaying attempts at pregnancy until one period has occurred to make it easier to determine the dates of a subsequent pregnancy. There is no evidence that getting pregnant in that first cycle affects outcomes, and an early subsequent pregnancy may improve outcomes. Support Organisations exist that provide information and counselling to help those who have had a miscarriage. Family and friends often conduct a memorial or burial service. Hospitals also can provide support and help memorialise the event. Depending on the locale, others desire to have a private ceremony. Providing appropriate support with frequent discussions and sympathetic counselling is part of evaluation and treatment. Those who experience unexplained miscarriages can be treated with emotional support. Miscarriage leave Miscarriage leave is a leave of absence concerning miscarriage. The following countries offer paid or unpaid leave to women who have had a miscarriage. 
The Philippines – 60 days' fully paid leave for miscarriages (before 20 weeks of gestation) or emergency termination of the pregnancy (on the 20th week or after); the husband of the mother gets seven days' fully paid leave, up to the fourth pregnancy
India – six weeks' leave
New Zealand – three days' bereavement leave for both parents
Mauritius – two weeks' leave
Indonesia – six weeks' leave
Taiwan – five days, one week, or four weeks, depending on how advanced the pregnancy was
Outcomes Psychological and emotional effects Every woman's personal experience of miscarriage is different, and women who have more than one miscarriage may react differently to each event. In Western cultures since the 1980s, medical providers assume that experiencing a miscarriage "is a major loss for all pregnant women". A miscarriage can result in anxiety, depression or stress for those involved. It can affect the whole family. Many of those experiencing a miscarriage go through a grieving process. "Prenatal attachment" often exists that can be seen as parental sensitivity, love and preoccupation directed towards the unborn child. Serious emotional impact is usually experienced immediately after the miscarriage. Some may go through the same loss when an ectopic pregnancy is terminated. In some, the realisation of the loss can take weeks. Providing family support to those experiencing the loss can be challenging because some find comfort in talking about the miscarriage while others may find the event painful to discuss. The father can have the same sense of loss. Expressing feelings of grief and loss can sometimes be harder for men. Some women can begin planning their next pregnancy within a few weeks of the miscarriage. For others, planning another pregnancy can be difficult. Some facilities acknowledge the loss. Parents can name and hold their infant. They may be given mementoes such as photos and footprints. Some conduct a funeral or memorial service. 
They may express the loss by planting a tree. Some health organizations recommend that sexual activity be delayed after the miscarriage. The menstrual cycle should resume after about three to four months. Some women have reported dissatisfaction with the care they received from physicians and nurses. Subsequent pregnancies Some parents want to try to have a baby very soon after the miscarriage. The decision to try to become pregnant again can be difficult. Reasons exist that may prompt parents to consider another pregnancy. For older mothers, there may be some sense of urgency. Other parents are optimistic that future pregnancies are likely to be successful. Many are hesitant and want to know about the risk of having one or more further miscarriages. Some clinicians recommend that women have one menstrual cycle before attempting another pregnancy, because the date of conception may otherwise be hard to determine. Also, the first menstrual cycle after a miscarriage can be much longer or shorter than expected. Parents may be advised to wait even longer if they have experienced late miscarriage or molar pregnancy, or are undergoing tests. Some parents wait for six months based on recommendations from their healthcare provider. Research shows that depression after a miscarriage or stillbirth can continue for years, even after the birth of a subsequent child. Medical professionals are advised to take previous loss of a pregnancy into account when assessing risks for postnatal depression following the birth of a subsequent infant. It is believed that supportive interventions may improve the health outcomes of both the mother and the child. The risks of having another miscarriage vary according to the cause. The risk of having another miscarriage after a molar pregnancy is very low. The risk of another miscarriage is highest after the third miscarriage. Pre-conception care is available in some locales. 
Later cardiovascular disease There is a significant association between miscarriage and later development of coronary artery disease, but not cerebrovascular disease. Epidemiology Around 15% of known pregnancies end in miscarriage, totaling around 23 million miscarriages per year worldwide. Miscarriage rates among all fertilized zygotes are around 30% to 50%. A 2012 review found the risk of miscarriage between 5 and 20 weeks of gestation to be between 11% and 22%. Up to the 13th week of pregnancy, the risk of miscarriage each week was around 2%, dropping to 1% in week 14 and reducing slowly between 14 and 20 weeks. The precise rate is not known because a large number of miscarriages occur before pregnancies become established and before the woman is aware she is pregnant. Additionally, those with bleeding in early pregnancy may seek medical care more often than those not experiencing bleeding. Although some studies attempt to account for this by recruiting women who are planning pregnancies and testing for very early pregnancy, they still are not representative of the wider population. In 2010, 50,000 inpatient admissions for miscarriage occurred in the UK. Society and culture Society's reactions to miscarriage have changed over time. In the early 20th century, the focus was on the mother's physical health and the difficulties and disabilities that miscarriage could produce. Other reactions, such as the expense of medical treatments and relief at ending an unwanted pregnancy, were also heard. In the 1940s and 1950s, people were more likely to express relief, not because the miscarriage ended an unwanted or mistimed pregnancy, but because people believed that miscarriages were primarily caused by birth defects, and miscarrying meant that the family would not raise a child with disabilities. 
The dominant attitude in the mid-century was that a miscarriage, although temporarily distressing, was a blessing in disguise for the family and that another pregnancy and a healthier baby would soon follow, especially if women trusted physicians and reduced their anxieties. Media articles were illustrated with pictures of babies, and magazine articles about miscarriage ended by introducing the healthy baby—usually a boy—that shortly followed it. Beginning in the 1980s, miscarriage in the US was primarily framed in terms of the individual woman's emotional reaction, especially her grief over a tragic outcome. The subject was portrayed in the media with images of an empty crib or an isolated, grieving woman, and stories about miscarriage were published in general-interest media outlets, not just women's magazines or health magazines. Family members were encouraged to grieve, to memorialize their losses through funerals and other rituals, and to think of themselves as being parents. This shift to recognizing these emotional responses was partly due to medical and political successes, which created an expectation that pregnancies are typically planned and safe, and to women's demands that their emotional reactions no longer be dismissed by the medical establishments. It also reinforces the anti-abortion movement's belief that human life begins at conception or early in pregnancy, and that motherhood is a desirable life goal. The modern one-size-fits-all model of grief does not fit every woman's experience, and an expectation to perform grief creates unnecessary burdens for some women. 
The reframing of miscarriage as a private emotional experience brought less awareness of miscarriage and a sense of silence around the subject, especially compared to the public discussion of miscarriage during campaigns for access to birth control during the early 20th century, or the public campaigns to prevent miscarriages, stillbirths, and infant deaths by reducing industrial pollution during the 1970s. In places where induced abortion is illegal or carries a social stigma, suspicion may surround miscarriage, complicating an already sensitive issue. Developments in ultrasound technology in the early 1980s allowed practitioners to identify miscarriages earlier. Legal registration Miscarriages may be tracked for purposes of health statistics, but they are not usually recorded individually. For example, under UK law all stillbirths should be registered, although this does not apply to miscarriages. According to French statutes, an infant born before the age of viability, determined to be 28 weeks, is not registered as a 'child'. If birth occurs after this, the infant is granted a certificate that allows the parents to have a symbolic record of that child. This certificate can include a registered and given name to allow a funeral and acknowledgement of the event. Other animals Miscarriage occurs in all animals that experience pregnancy, though in such contexts it is more commonly referred to as a spontaneous abortion (the two terms are synonymous). There are a variety of known risk factors in non-human animals. For example, in sheep, miscarriage may be caused by crowding through doors or being chased by dogs. In cows, spontaneous abortion may be caused by contagious diseases, such as brucellosis or Campylobacter, but often can be controlled by vaccination. In many species of sharks and rays, stress-induced miscarriage occurs frequently on capture. Other diseases are also known to make animals susceptible to miscarriage. 
Spontaneous abortion occurs in pregnant prairie voles when their mate is removed and they are exposed to a new male, an example of the Bruce effect, although this effect is seen less in wild populations than in the laboratory. Female mice who had spontaneous abortions spent markedly more time with unfamiliar males preceding the abortion than those who did not.
https://en.wikipedia.org/wiki/Mussel
Mussel
Mussel is the common name used for members of several families of bivalve molluscs, from saltwater and freshwater habitats. These groups have in common a shell whose outline is elongated and asymmetrical compared with other edible clams, which are often more or less rounded or oval. The word "mussel" is frequently used to mean the bivalves of the marine family Mytilidae, most of which live on exposed shores in the intertidal zone, attached by means of their strong byssal threads ("beard") to a firm substrate. A few species (in the genus Bathymodiolus) have colonised hydrothermal vents associated with deep ocean ridges. In most marine mussels the shell is longer than it is wide, being wedge-shaped or asymmetrical. The external colour of the shell is often dark blue, blackish, or brown, while the interior is silvery and somewhat nacreous. The common name "mussel" is also used for many freshwater bivalves, including the freshwater pearl mussels. Freshwater mussel species inhabit lakes, ponds, rivers, creeks, and canals; they are classified in a different subclass of bivalves, despite some very superficial similarities in appearance. Freshwater zebra mussels and their relatives in the family Dreissenidae are not related to the previously mentioned groups, even though they resemble many Mytilus species in shape, and live attached to rocks and other hard surfaces in a similar manner, using a byssus. They are classified with the Heterodonta, the taxonomic group which includes most of the bivalves commonly referred to as "clams". General anatomy The mussel's external shell is composed of two hinged halves or "valves". The valves are joined on the outside by a ligament, and are closed when necessary by strong internal muscles (anterior and posterior adductor muscles). Mussel shells carry out a variety of functions, including support for soft tissues, protection from predators and protection against desiccation. The shell has three layers. 
In the pearly mussels there is an inner iridescent layer of nacre (mother-of-pearl) composed of calcium carbonate, which is continuously secreted by the mantle; the prismatic layer, a middle layer of chalky white crystals of calcium carbonate in a protein matrix; and the periostracum, an outer pigmented layer resembling a skin. The periostracum is composed of a protein called conchin, and its function is to protect the prismatic layer from abrasion and dissolution by acids (especially important in freshwater forms where the decay of leaf materials produces acids). Like most bivalves, mussels have a large organ called a foot. In freshwater mussels, the foot is large, muscular, and generally hatchet-shaped. It is used to pull the animal through the substrate (typically sand, gravel, or silt) in which it lies partially buried. It does this by repeatedly advancing the foot through the substrate, expanding the end so it serves as an anchor, and then pulling the rest of the animal with its shell forward. It also serves as a fleshy anchor when the animal is stationary. In marine mussels, the foot is smaller, tongue-like in shape, with a groove on the ventral surface which is continuous with the byssus pit. In this pit, a viscous secretion is exuded, entering the groove and hardening gradually upon contact with sea water. This forms extremely tough, strong, elastic, byssal threads that secure the mussel to its substrate allowing it to remain sessile in areas of high flow. The byssal thread is also sometimes used by mussels as a defensive measure, to tether predatory molluscs, such as dog whelks, that invade mussel beds, immobilising them and thus starving them to death. In cooking, the byssus of the mussel is known as the "beard" and is removed during preparation, often after cooking when the mussel has opened. 
Life habits Feeding Both marine and freshwater mussels are filter feeders; they feed on plankton and other microscopic organisms that are free-floating in the water. A mussel draws water in through its incurrent siphon. The water is then brought into the branchial chamber by the actions of the cilia located on the gills for ciliary-mucus feeding. The wastewater exits through the excurrent siphon. The labial palps finally funnel the food into the mouth, where digestion begins. Marine mussels are usually found clumping together on wave-washed rocks, each attached to the rock by its byssus. The clumping habit helps hold the mussels firm against the force of the waves. At low tide, mussels in the middle of a clump undergo less water loss because of water capture by the other mussels. Reproduction Both marine and freshwater mussels are gonochoristic, with separate male and female individuals. In marine mussels, fertilization occurs outside the body, with a larval stage that drifts for three weeks to six months before settling on a hard surface as a young mussel. There, it is capable of moving slowly by means of attaching and detaching byssal threads to attain a better life position. Freshwater mussels reproduce sexually. Sperm is released by the male directly into the water and enters the female via the incurrent siphon. After fertilization, the eggs develop into larvae called glochidia (singular glochidium), which temporarily parasitize fish, attaching themselves to the fish's fins or gills. Prior to their release, the glochidia grow in the gills of the host fish, where they are constantly flushed with oxygen-rich water. In some species, release occurs when a fish attempts to attack the mussel's mantle flaps, which are shaped like minnows or other prey, an example of aggressive mimicry. Glochidia are generally species-specific, and will only live if they find the correct fish host. 
Once the larval mussels attach to the fish, the fish's body reacts by covering them with cells, forming a cyst in which the glochidia remain for two to five weeks (depending on temperature). They grow, break free from the host, and drop to the bottom of the water to begin an independent life. Predators Marine mussels are eaten by humans, starfish, seabirds, and numerous species of predatory marine gastropods in the family Muricidae, such as the dog whelk, Nucella lapillus. Freshwater mussels are eaten by muskrats, otters, raccoons, ducks, baboons, humans, and geese. Distribution and habitat Marine mussels are abundant in the low and mid intertidal zone in temperate seas globally. Other species of marine mussel live in tropical intertidal areas, but not in the same huge numbers as in temperate zones. Certain species of marine mussels prefer salt marshes or quiet bays, while others thrive in pounding surf, completely covering wave-washed rocks. Some species have colonized abyssal depths near hydrothermal vents. The South African white mussel, exceptionally, does not bind itself to rocks but burrows into sandy beaches, extending two tubes above the sand surface to take in food and water and expel wastes. Freshwater mussels inhabit permanent lakes, rivers, canals and streams throughout the world except in the polar regions. They require a constant source of cool, clean water. They prefer water with a substantial mineral content, using calcium carbonate to build their shells. Aquaculture In 2005, China accounted for 40% of the global mussel catch according to an FAO study. Within Europe, where mussels have been cultivated for centuries, Spain remained the industry leader. Aquaculture of mussels in North America began in the 1970s. In the US, the northeast and northwest have significant mussel aquaculture operations, where Mytilus edulis (the blue mussel) is most commonly grown.
While the mussel industry in the US has grown, 80% of North American cultured mussels are produced in Prince Edward Island in Canada. In Washington state, an estimated 2.9 million pounds of mussels were harvested in 2010, valued at roughly $4.3 million. In New Zealand, the Perna canaliculus (New Zealand green-lipped mussel) industry produces over 140,000 metric tons (150,000 short tons) annually and in 2009 was valued in excess of NZ$250 million. Culture methods Freshwater mussels are used as host animals for the cultivation of freshwater pearls. Some species of marine mussel, including the blue mussel (Mytilus edulis) and the New Zealand green-lipped mussel (Perna canaliculus), are also cultivated as a source of food. In some areas of the world, mussel farmers collect naturally occurring marine mussel seed for transfer to more appropriate growing areas; however, most North American mussel farmers rely on hatchery-produced seed. Growers typically purchase seed after it has set (about 1 mm in size) or after it has been nursed in upwellers for 3–6 additional weeks and is 2–3 mm. The seed is then typically reared in a nursery environment, where it is transferred to a material with a suitable surface for later relocation to the growing area. After about three months in the nursery, mussel seed is "socked" (placed in a tube-like mesh material) and hung on longlines or rafts for grow-out. Within a few days, the mussels migrate to the outside of the sock for better access to food sources in the water column. Mussels grow quickly and are usually ready for harvest in less than two years. Unlike other cultured bivalves, mussels use byssal threads (the "beard") to attach themselves to any firm substrate, which makes them suitable for a number of culture methods. There are a variety of techniques for growing mussels.
Bouchot culture: Intertidal growth technique, or bouchot technique: pilings, known in French as bouchots, are planted at sea; ropes, on which the mussels grow, are tied in a spiral around the pilings; mesh netting prevents the mussels from falling away. This method requires an extended tidal zone. On-bottom culture: On-bottom culture is based on the principle of transferring mussel seed (spat) from areas where it has settled naturally to areas where it can be placed at lower densities to increase growth rates, facilitate harvest, and control predation (mussel farmers must remove predators and macroalgae during the growth cycle). Raft culture: Raft culture is a commonly used method throughout the world. Lines of rope mesh socks are seeded with young mussels and suspended vertically from a raft. The specific length of the socks depends on depth and food availability. Longline culture (rope culture): Mussels are cultivated extensively in New Zealand, where the most common method is to attach mussels to ropes hung from a rope backbone supported by large plastic floats. The most common species cultivated in New Zealand is the New Zealand green-lipped mussel. Longline culture is the most recent development in mussel culture and is often used as an alternative to raft culture in areas more exposed to high wave energy. A longline is suspended by a series of small anchored floats, and ropes or socks of mussels are then suspended vertically from the line. Harvest In roughly 12–15 months, mussels reach marketable size (40 mm) and are ready for harvest. Harvesting methods depend on the grow-out area and the rearing method being used. Dredges are currently used for on-bottom culture. Mussels grown on wooden poles can be harvested by hand or with a hydraulically powered system.
For raft and longline culture, a platform is typically lowered under the mussel lines, which are then cut from the system, brought to the surface, and dumped into containers on a nearby vessel. After harvest, mussels are typically placed in seawater tanks to rid them of impurities before marketing. Mussel-inspired materials Byssal threads, used to anchor mussels to substrates, are now recognized as superior bonding agents. A number of studies have investigated mussel "glues" for industrial and surgical applications. Further, mussel adhesive proteins have inspired the design of peptide mimics, which have been studied extensively for the surface bioengineering of medical implants. Self-assembling mussel-inspired peptides have also been shown to form functional nanostructures. In addition, a peptide derived from mussel foot protein-5, a key protein in mussel adhesion, displayed antibacterial properties and served as inspiration for the design of a new class of peptide-based antibacterial adhesive hydrogels, which are active against drug-resistant Gram-positive bacteria. Byssal threads have also provided insight into the construction of artificial tendons. Environmental applications Mussels are widely used as bio-indicators to monitor the health of aquatic environments, in both freshwater and marine settings. They are particularly useful since they are distributed worldwide and are sessile. These characteristics ensure that they are representative of the environment where they are sampled or placed. Their population status or structure, physiology, behaviour, or level of contamination with elements or compounds can indicate the status of the ecosystem. Transplanted caged mussels have been used, for example, to monitor heavy metal contamination in coastal waters. Mussels and nutrient mitigation Marine nutrient bioextraction is the practice of farming and harvesting marine organisms such as shellfish and seaweed for the purpose of reducing nutrient pollution.
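As a rough back-of-envelope illustration of how a harvest translates into nutrient removal (a sketch, not from the source; the function name is my own), using the average composition cited for live mussels of about 1.0% nitrogen and 0.1% phosphorus:

```python
# Illustrative first-order estimate of nutrient bioextraction by mussel harvest.
# Fractions are the averages cited for live mussels: ~1.0% N, ~0.1% P of live weight.
# Real nutrient budgets also account for biodeposits and tissue vs. shell composition.

N_FRACTION = 0.010  # nitrogen as a fraction of live weight
P_FRACTION = 0.001  # phosphorus as a fraction of live weight

def nutrients_removed(harvest_kg: float) -> dict:
    """Return kg of N and P removed from the water by harvesting
    `harvest_kg` of live mussels."""
    return {
        "N_kg": harvest_kg * N_FRACTION,
        "P_kg": harvest_kg * P_FRACTION,
    }

# Example: a 100-tonne (100,000 kg) harvest removes roughly 1,000 kg of
# nitrogen and 100 kg of phosphorus from the system.
print(nutrients_removed(100_000))  # {'N_kg': 1000.0, 'P_kg': 100.0}
```

Estimates of this kind underlie the cost-effectiveness comparisons between mussel farming and conventional wastewater treatment mentioned below.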
Mussels and other bivalve shellfish consume phytoplankton containing nutrients such as nitrogen (N) and phosphorus (P). On average, a live mussel is 1.0% N and 0.1% P by weight. When the mussels are harvested and removed, these nutrients are also removed from the system and recycled in the form of seafood or mussel biomass, which can be used as an organic fertilizer or animal feed additive. These ecosystem services provided by mussels are of particular interest to those hoping to mitigate excess anthropogenic marine nutrients, particularly in eutrophic marine systems. While mussel aquaculture is promoted in some countries, such as Sweden, as a water management strategy to address coastal eutrophication, mussel farming as a nutrient mitigation tool is still in its infancy in most parts of the world. Ongoing efforts in the Baltic Sea (Denmark, Sweden, Germany, Poland) and in Long Island Sound and Puget Sound in the U.S. are examining the nutrient uptake, cost-effectiveness, and potential environmental impacts of mussel farming as a means to mitigate excess nutrients and complement traditional wastewater treatment programs. Conservation Freshwater mussels Of the 511 freshwater mussel species assessed globally on the IUCN Red List, 44% are classified as threatened to some degree. There are 297 known freshwater mussel taxa in the United States and Canada, which are home to the most diverse freshwater mussel fauna in the world, especially in the southeastern United States. Of the 297 known species, 213 (71.7%) taxa are listed as endangered, threatened, or of special concern. Approximately 37 North American species were considered extinct as of 2004. Of the 16 recognized freshwater mussel species in Europe, 12 are considered threatened, with statuses ranging from Near Threatened to Critically Endangered. Eight species are protected by the European Union Habitats Directive across all annexes.
There are approximately 85 known species in Africa, 102 in Central America, 74 in South America, 228 in Asia (with the highest species diversity in Southeast Asia), and 33 in Australasia. The species in these areas are not as well researched as those in North America and Europe. Approximately 61% of freshwater mussels in Asia have not been assessed, and conservation efforts there are almost non-existent; no Asian mussels are protected internationally under legislation such as CITES. The main factors contributing to the decline of freshwater mussels include habitat destruction by dams, increased siltation, channel alteration, and the introduction of invasive species such as the zebra mussel. As food Humans have used mussels as food for thousands of years. About 17 species are edible, of which the most commonly eaten are Mytilus edulis, M. galloprovincialis, M. trossulus and Perna canaliculus. Although freshwater mussels are edible, today they are widely considered unpalatable and are rarely consumed. Freshwater mussels were once eaten extensively by the native peoples of North America, and some still eat them today. In the United States during the Second World War, mussels were commonly served in diners and restaurants across the country. This was due to wartime rationing and shortages of red meat, such as beef and pork. Mussels became a popular substitute for most meats (with the exception of poultry). In Belgium, the Netherlands, and France, mussels are consumed with French fries (mosselen met friet or moules-frites) or bread. In Belgium, mussels are sometimes served with fresh herbs and flavorful vegetables in a stock of butter and white wine, with fries and Belgian beer as occasional accompaniments. A similar style of preparation is commonly found in the Rhineland, where mussels are customarily served in restaurants with a side of dark bread in "months containing an R", that is, between September and April.
In the Netherlands, mussels are sometimes served fried in batter or breadcrumbs, particularly at take-out food outlets or in informal settings. In France, the Éclade des Moules, or, locally, Terré de Moules, is a mussel bake that can be found along the beaches of the Bay of Biscay. In Italy, mussels are often mixed with other seafood; they are most commonly eaten steamed, sometimes with white wine and herbs, and served with the remaining broth and some lemon. In Spain, they are consumed mostly steamed, sometimes boiled with white wine, onion, and herbs, and served with the remaining broth and some lemon. They can also be eaten as tigres, a sort of croquette combining the mussel meat, shrimp, and other pieces of fish in a thick béchamel, breaded and fried in the cleaned mussel shell. They are also used in other sorts of dishes, such as rice dishes and soups, and are commonly eaten canned in a pickling brine made of oil, vinegar, peppercorns, bay leaves and paprika. In Turkey, mussels are either covered with flour and fried on skewers (midye tava) or filled with rice and served cold (midye dolma), and are usually consumed after alcohol (mostly raki or beer). In Ireland, they are eaten boiled and seasoned with vinegar, with the "bray", or boiling water, as a supplementary hot drink. In Cantonese cuisine, mussels are cooked in a broth of garlic and fermented black bean. In New Zealand, they are served in a chilli or garlic-based vinaigrette, processed into fritters and fried, or used as the base for a chowder. In Brazil, mussels are commonly cooked and served with olive oil, usually accompanied by onion, garlic, and other herbs; the dish is very popular among tourists and lower-income groups, probably because the hot climate favours mussel reproduction. In India, mussels are popular in Kerala, Maharashtra, Karnataka-Bhatkal, and Goa. They are either prepared with drumsticks, breadfruit or other vegetables, or filled with rice and coconut paste with spices and served hot.
Fried mussels ('Kadukka' കടുക്ക in Malayalam) of north Kerala, especially in Thalassery, are a favored spicy delicacy. In coastal Karnataka, the Bearys prepare special rice balls stuffed with spicy fried mussels and steamed, locally known as "pachilede pindi". Preparation Mussels can be smoked, boiled, steamed, roasted, barbecued or fried in butter or vegetable oil. They can be used in soups, salads and sauces. As with all shellfish except shrimp, mussels should be checked to ensure they are still alive just before they are cooked; once a mussel dies, enzymes quickly break down the meat and make it unpalatable or poisonous. Some mussels might contain toxins. A simple criterion is that live mussels, when in the air, will shut tightly when disturbed. Open, unresponsive mussels are dead and must be discarded. Unusually heavy, wild-caught, closed mussels may be discarded as they may contain only mud or sand. (They can be tested by slightly opening the shell halves.) A thorough rinse in water and removal of "the beard" is suggested. Mussel shells usually open when cooked, revealing the cooked soft parts. Historically, it was believed that after cooking all the mussels should have opened, and that those that have not are not safe to eat and should be discarded. However, according to marine biologist Nick Ruello, this advice may have arisen from an old, poorly researched cookbook, whose advice has since become an assumed truism for all shellfish. Ruello found that 11.5% of mussels failed to open during cooking but, when forced open, 100% were "both adequately cooked and safe to eat." Although mussels are valued as food, mussel poisoning due to toxic planktonic organisms can be a danger along some coastlines. For instance, mussels should be avoided during the warmer months along the west coast of the United States. This poisoning is usually due to a bloom of dinoflagellates (red tides), which contain toxins.
The dinoflagellates and their toxin are harmless to mussels, even when concentrated by the mussel's filter feeding, but if such mussels are consumed by humans, the concentrated toxins cause serious illness, including paralytic shellfish poisoning. Nutrition highlights
Excellent source of: selenium (44.8 μg) and vitamin B12 (12 μg)
Good source of: zinc (1.6 mg) and folate (42 μg)
Foods that are an "excellent source" of a particular nutrient provide 20% or more of the recommended daily value. Foods that are a "good source" of a particular nutrient provide between 10% and 20% of the recommended daily value.
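The "excellent source"/"good source" thresholds above amount to a simple classification rule. A minimal sketch (the function name is my own, and the 55 μg selenium daily value is the U.S. figure, assumed here for the example):

```python
from typing import Optional

def source_label(amount: float, daily_value: float) -> Optional[str]:
    """Classify a per-serving nutrient amount against its recommended daily value:
    >= 20% of the DV -> "excellent source", 10-20% -> "good source"."""
    pct = 100.0 * amount / daily_value
    if pct >= 20.0:
        return "excellent source"
    if pct >= 10.0:
        return "good source"
    return None  # below 10% of the daily value: no source claim

# 44.8 μg of selenium against a 55 μg daily value is ~81% of the DV.
print(source_label(44.8, 55.0))  # excellent source
```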
Biology and health sciences
Mollusks
https://en.wikipedia.org/wiki/Koko%20%28gorilla%29
Koko (gorilla)
Hanabiko, nicknamed "Koko" (July 4, 1971 – June 19, 2018), was a female western lowland gorilla born at the San Francisco Zoo and cross-fostered by Francine Patterson for use in ape language experiments. Koko gained public attention as the subject of two National Geographic cover stories and, in 1985, the best-selling children's picture book Koko's Kitten. Koko became the world's most famous representative of her critically endangered species. Koko's communication skills were hotly debated. Koko used many signs adapted from American Sign Language, but the scientific consensus to date remains that she did not demonstrate the syntax or grammar required of true language. Patterson was widely criticized for misrepresenting Koko's skills and, in the 1990s, for her care of Koko and of Gorilla Foundation staff. Despite such controversies, Koko's story changed the public image of gorillas, previously assumed to be brainless and violent. Science noted in its obituary that Koko "helped transform how the human world viewed animal emotion—and intelligence." Early life and popularity Koko was born on July 4, 1971, at the San Francisco Zoo to her mother Jacqueline and father Bwana. (The name Hanabiko, meaning "fireworks child", is of Japanese origin and is a reference to her date of birth, the Fourth of July.) Koko remained with her mother until December, when she was hospitalized due to malnutrition, then hand-tended in the zookeeper's home. Patterson originally cared for Koko at the San Francisco Zoo as part of her doctoral research at Stanford University. Up through June 1973, she conducted sign language lessons with Koko at the Children's Zoo exhibit. The environment was noisy and distracting, so Patterson and her life partner Ron Cohn purchased a trailer in which they could conduct Koko's signing sessions. Around this time, Patterson realized that conflict with the zoo was "inevitable." She had started the project on the condition that Koko would be reunited with her gorilla colony after a few years.
Gorillas are social animals and suffer when isolated from their species. And, as gorillas are endangered, the zoo expected to breed Koko. But Patterson felt that she had become Koko's "mother" and convinced the zoo to let her move the gorilla to Stanford. Once at Stanford, Patterson worked to wrest custody of Koko from the San Francisco Zoo. Patterson found an exotic-species dealer who sold her two infant gorillas that she suspected were illegally "harvested" (a process that involves killing the mother and any surrounding adults). Her plan was to give the female to the zoo as a replacement for Koko and keep the male as a playmate. But the female died within a month. Only the male, Michael, survived. Stuck without a viable trade for the zoo, Patterson launched a "Save Koko" press campaign, telling reporters that if Koko had to go back to the zoo, she might sink into depression, refuse to eat, and possibly die. The Save Koko campaign generated $3,000 in donations and, with additional funds from a wealthy benefactor, allowed Patterson to maintain custody of Koko. Around this time, Patterson founded (with Ron Cohn and lawyer Edward Fitzsimmons) the nonprofit Gorilla Foundation. Koko's Kitten In 1978, Koko gained worldwide attention when she was pictured on the cover of National Geographic magazine. The cover was a photo of Koko taking her own picture in the mirror. Koko was featured on the cover of National Geographic again in 1985, with a picture of her and her kitten, All Ball. That year, Scholastic Inc. published Koko's Kitten, a children's picture book based on the National Geographic story. The book was favorably reviewed and became one of Scholastic's best sellers. Written by Patterson, it describes Koko's yearning for a cat, her adoption of All Ball, and Koko's sadness after the kitten is hit by a car and killed. The story is peppered throughout with Koko's signs such as "cry," "sleep" and "cat." Koko's Kitten is still in print.
Characteristics Use of language and controversy Francine Patterson published a few peer-reviewed studies on her work with Koko in the late 1970s. She demonstrated that Koko was able to communicate using a number of signs adapted from American Sign Language. Gorillas have thick, stubby fingers and hands that move differently from those of humans, so Koko was unable to make some ASL signs. Patterson used the term "Gorilla Sign Language" to refer to Koko's adaptations. Patterson reported that Koko invented new signs to communicate novel thoughts. For example, she said that nobody taught Koko the word for "ring," so Koko combined the words "finger" and "bracelet", hence "finger-bracelet". This type of claim illustrated a persistent problem with Patterson's methodology: it relied on a human interpreter of Koko's intentions. In 1979, Herbert S. Terrace published the negative results of his Nim Chimpsky study, which presented evidence that Koko was mimicking her trainers. Terrace's article ignited intense debate over the ape language experiments (see "Scientific criticism" below), culminating in a 1980 "Clever Hans" conference that mocked the researchers involved. Funding for the ape language experiments disappeared seemingly overnight. Though other scientists severed ties with their apes after funding dried up, Patterson maintained responsibility for Koko. Most of the chimps who had worked with Terrace and with Allen and Beatrix Gardner were sold to medical labs for testing. Though Patterson had initially defended her scientific work, she turned her focus away from science and toward securing revenue for the upkeep of Koko and Michael. Her work involved fundraising, PR campaigns, and managing Gorilla Foundation caregiving staff. Since 1978, Patterson and Koko have had no affiliation with any university or government funding. Scientific criticism Francine Patterson's published research received a variety of criticisms from the scientific community. Herbert S.
Terrace and Laura-Ann Petitto, researchers who worked with Nim Chimpsky, issued critical evaluations of Patterson's reports and suggested that Koko was simply being prompted by her trainers' unconscious cues to display specific signs. Terrace and Petitto questioned Patterson's interpretations of Koko's signing and her claims of grammatical competency, asking for more rigorous testing. (Terrace and Petitto reported negative results in their Nim study, which was itself criticized on methodological grounds.) Other researchers argued that Koko did not understand the meaning behind what she was doing and learned to complete the signs simply because the researchers rewarded her for doing so (indicating that her actions were the product of operant conditioning). Another concern was that interpretation of the gorilla's conversation was left to the handler, who may have seen improbable concatenations of signs as meaningful; for example, when Koko signed "sad" there was no way to tell whether she meant it with the connotation of "How sad." Patterson defended her research, stating that blind and double-blind experiments had been administered to evaluate the gorillas' comprehension, that the gorillas were able to sign spontaneously to each other and to strangers without the prompting of a trainer, and that they signed meaningfully the majority of the time. Later critics noted that Patterson used Koko in deceptive ways in popular media. These concerns were echoed privately by staff at the Gorilla Foundation, where turnover was high. Some, like research assistant Anne Southcombe, expressed concerns that Patterson's exaggerated claims and "over-interpretation" undermined and devalued their work. (Southcombe left to work with the orangutan Chantek on a research project she preferred.) Sign language expert Sherman Wilcox, for example, characterized the Foundation's edited clips of Koko making a "climate speech" as deceptive and "disrespectful of ASL."
Wilcox expressed concern that the bit would reinforce the perception that ASL is "only words and no syntax." Eugene Linden, a journalist who spent years studying apes involved in language experiments and co-wrote (with Patterson) The Education of Koko, also expressed concerns about Patterson's practices. Linden reported that Koko's signing was more fluid and precise than that of Washoe and the other Oklahoma chimpanzees. She was also by nature less impulsive, though, like the chimps, she frequently refused to participate in language drills. When not pushed to perform or stressed by strangers, "the amount of signing by Koko seemed to me to overwhelm Penny's capacity to digest and analyze it," Linden wrote. But in Linden's view, Patterson's exaggerated claims, "bunker mentality," refusal to provide researchers access to Koko, and unwillingness to open up the data she had collected minimized Koko's impact. Ultimately, critics of Patterson's claims acknowledged that Koko had learned a number of signs and used them to communicate her wants. But this did not mean that Koko "spoke" sign language, which requires a grasp of syntax and grammatical sentences. Experts generally agreed that Koko's use of sentences was unsupported by evidence. Care practices criticism Former employees of The Gorilla Foundation criticized the methods used to care for Koko and her male companion Ndume. In 2012, nine staff members, including caregivers and researchers, out of "roughly a dozen" resigned, and several submitted a letter to the board to explain their concerns. Former caregiver John Safkow stated that all members of the board left after the walkout, except for Betty White. A pseudonymous source, "Sarah," told Slate that Koko's diet included an excess of processed meat and candy, and that Koko was given a traditional Thanksgiving dinner yearly.
The source stated that the official diet they were told to give Koko was appropriate, but that Patterson would visit and feed her "chocolates and meats." Koko's weight was higher than would be normal for a female gorilla in the wild; the foundation stated that Koko "is, like her mother, a larger frame Gorilla." Multiple employees corroborated the claim that both Koko and Ndume were given "massive" numbers of supplements on the recommendation of a naturopath; Safkow recalled that the number was between 70 and 100 pills per day, and "Sarah" claimed that various inappropriate foods like smoked turkey, pea soup, non-alcoholic beer, and candy were used as treats to coax Koko to take the pills. The Gorilla Foundation stated that Koko took "between 5 to 15 types of nutritional supplements" and acknowledged its use of homeopathic remedies. Several former caregivers at The Gorilla Foundation also raised concerns that Koko's companion Ndume was being neglected. In 2012, a group of former employees reached out to a blogger who focused on the ape caregiver community, who in turn asked the USDA Animal and Plant Health Inspection Service (APHIS) to follow up on the claims. After an investigation, APHIS reported that Ndume had been neglected in some respects; for instance, he had not been tested for tuberculosis in 20 years, despite the recommendation that gorillas be tested yearly. In the 2010s, as Koko neared the end of her life, anthropologist and primatologist Barbara J. King questioned the ethics of Patterson's caretaking decisions and criticized the foundation for excessively anthropomorphizing Koko. Nipple fixation and lawsuit Like other apes raised by humans (Lucy, Washoe), Koko did not develop the sexual instincts of an ape raised in the wild. According to Patterson, she developed several crushes on human men.
For example, Koko "maintained a near-constant vigil by the trailer window" when a favorite workman was expected to show up, and blew him kisses after he arrived. Though Patterson secured the male gorillas Michael and Ndume for Koko to mate with, Koko was not sexually interested in them. (As a result, Ndume was caged separately, in isolation.) Koko was reported to have a preoccupation with human nipples, likely a result of her disconnect from other gorillas. In 2005, three female staff members at The Gorilla Foundation, where Koko resided, filed lawsuits against the organization, alleging that they were pressured to reveal their nipples to Koko by the organization's executive director, Francine Patterson (Penny), among other violations of labor law. The lawsuit alleged that in response to signing from Koko, Patterson pressured Keller and Alperin (two of the female staff) to flash the ape. "Oh, yes, Koko, Nancy has nipples. Nancy can show you her nipples," Patterson reportedly said on one occasion. And on another: "Koko, you see my nipples all the time. You are probably bored with my nipples. You need to see new nipples. I will turn my back so Kendra can show you her nipples." Shortly thereafter, a third woman filed suit, alleging that upon being first introduced to Koko, Patterson told her that Koko was communicating that she wanted to see the woman's nipples, pressuring her to submit to Koko's demands and informing her that "everyone does it for her around here." When the woman briefly lifted her t-shirt, flashing her undergarments, Patterson admonished the woman and reiterated that Koko wanted to see her nipples. When the woman relented and showed her breasts to Koko, Patterson commented, "Oh look, Koko, she has big nipples." On another occasion, one of the gorilla's handlers told the woman that Koko wanted to be alone with her. When the woman went to Koko's enclosure, Koko began to squat and breathe heavily. The lawsuits were settled out of court.
When asked to comment on the matter, gorilla expert Kristen Lukas said that other gorillas are not known to have had a similar nipple fixation. A former caregiver stated that Patterson would interpret the sign for "nipple" as a sound-alike, "people," when notable donors were present. Later life and death After Patterson's research with Koko was completed, the gorilla was moved to a preserve in Woodside, California. There, Koko lived with another gorilla, Michael, until he died in 2000. She then lived with another male gorilla, Ndume, until her own death. At the preserve, Koko also met and interacted with a variety of celebrities including Robin Williams, Fred Rogers, Betty White, William Shatner, Flea, Leonardo DiCaprio, Peter Gabriel, and Sting. Koko died in her sleep during the morning of June 19, 2018, at the Gorilla Foundation's preserve in Woodside, California, at the age of 46. The Gorilla Foundation released a statement that "The impact has been profound and what she has taught us about the emotional capacity of gorillas and their cognitive abilities will continue to shape the world." Despite her comparatively old age, her death took staff members of the Gorilla Foundation by surprise. Ndume was transferred to the Cincinnati Zoo after a lengthy legal battle.
In popular culture Books and documentaries
1978 Koko: A Talking Gorilla, a documentary film by Barbet Schroeder
1978 cover of National Geographic magazine, photographed by Koko herself, as well as a feature article
1981 The Education of Koko, a book by Patterson and naturalist Eugene Linden
1985 Koko's Kitten, a picture book by Patterson and photographer Ronald Cohn
1986 Silent Partners: The Legacy of the Ape Language Experiments, a book by Eugene Linden
1987 Koko's Story, a children's book by Patterson for Scholastic Corporation
1990 Koko's Kitten, a 15-minute re-enactment of the story of the gorilla's adoption of a kitten, featured in the PBS children's show Reading Rainbow
1999 A Conversation with Koko, a PBS documentary for Nature, narrated by Martin Sheen
1999 The Parrot's Lament, by Eugene Linden
2000 Koko-Love!, a picture book by Patterson and photographer Ronald Cohn
2001 Koko and Robin Williams, a short featurette on Robin Williams meeting Koko
2008 Little Beauty, a picture book by Anthony Browne inspired by Koko's adoption of a pet kitten
2016 Koko: The Gorilla Who Talks to People, a BBC documentary also shown on PBS
2019 A Wish for Koko, a children's book in honor of Koko's life
2019 Koko the Gorilla, a commentary on Koko's life by The Musers
Movies and television shows
1998 Seinfeld, Season 9, Episode 19 ("The Maid"); George is nicknamed "Koko the monkey" after co-workers witness him yelling and flailing his arms with a banana in his hand
1998 Mister Rogers' Neighborhood, Episode 1727 ("You and I Together"); Mister Rogers visits Koko, who has learned how to communicate in sign language
2009 The Big Bang Theory, Season 3, Episode 10 ("The Gorilla Experiment"); Sheldon attempts to teach physics to Penny, likening it to Koko learning sign language
Biology and health sciences
Individual animals
Animals
144219
https://en.wikipedia.org/wiki/Wildlife
Wildlife
Wildlife refers to undomesticated animals and uncultivated plant species which can exist in their natural habitat, but has come to include all organisms that grow or live wild in an area without being introduced by humans. Historically, wildlife was also synonymous with game: those birds and mammals that were hunted for sport. Wildlife can be found in all ecosystems. Deserts, plains, grasslands, woodlands, forests, and other areas, including the most developed urban areas, all have distinct forms of wildlife. While the term in popular culture usually refers to animals that are untouched by human factors, most scientists agree that much wildlife is affected by human activities. Some wildlife threaten human safety, health, property and quality of life. However, many wild animals, even the dangerous ones, have value to human beings. This value might be economic, educational, or emotional in nature. Humans have historically tended to separate civilization from wildlife in a number of ways, including the legal, social and moral senses. Some animals, however, have adapted to suburban environments. This includes urban wildlife such as feral cats, dogs, mice, and rats. Some religions declare certain animals to be sacred, and in modern times, concern for the natural environment has provoked activists to protest against the exploitation of wildlife for human benefit or entertainment. Global wildlife populations have decreased by 68% since 1970 as a result of human activity, particularly overconsumption, population growth, and intensive farming, according to the World Wildlife Fund's 2020 Living Planet Report and the Zoological Society of London's Living Planet Index, further evidence that humans have unleashed a sixth mass extinction event. According to CITES, it has been estimated that annually the international wildlife trade amounts to billions of dollars and affects hundreds of millions of animal and plant specimens. 
Interactions with humans Trade For food Stone Age people and hunter-gatherers relied on wildlife, both plants and animals, for their food. In fact, some species may have been hunted to extinction by early human hunters. Today, hunting, fishing, and gathering wildlife is still a significant food source in some parts of the world. In other areas, hunting and non-commercial fishing are mainly seen as a sport or recreation. Meat sourced from wildlife that is not traditionally regarded as game is known as bushmeat. The increasing demand for wildlife as a source of traditional food in East Asia is decimating populations of sharks, primates, pangolins and other animals, which are believed to have aphrodisiac properties. A November 2008 report from biologist and author Sally Kneidel, PhD, documented numerous wildlife species for sale in informal markets along the Amazon River, including wild-caught marmosets sold for as little as $1.60 (5 Peruvian soles). Many Amazon species, including peccaries, agoutis, turtles, turtle eggs, anacondas, and armadillos, are sold primarily as food. Media Wildlife has long been a common subject for educational television shows. National Geographic Society specials have appeared on CBS since 1965, later moving to the American Broadcasting Company and then the Public Broadcasting Service. In 1963, NBC debuted Wild Kingdom, a popular program featuring zoologist Marlin Perkins as host. The BBC Natural History Unit in the United Kingdom was a similar pioneer; its first wildlife series, LOOK, presented by Sir Peter Scott, was a studio-based show with filmed inserts. David Attenborough first made his appearance in this series, which was followed by the series Zoo Quest, during which he and cameraman Charles Lagus went to many exotic places looking for and filming elusive wildlife—notably the Komodo dragon in Indonesia and lemurs in Madagascar. 
Since 1984, the Discovery Channel and its spinoff Animal Planet in the US have dominated the market for shows about wildlife on cable television, while on the Public Broadcasting Service the NATURE strand made by WNET-13 in New York and NOVA by WGBH in Boston are notable. Wildlife television is now a multimillion-dollar industry with specialist documentary film-makers in many countries, including the UK, the US, New Zealand, Australia, Austria, Germany, Japan, and Canada. There are many magazines and websites which cover wildlife, including National Wildlife, Birds & Blooms, Birding, wildlife.net, and Ranger Rick for children. Religion Many animal species have spiritual significance in different cultures around the world, and they and their products may be used as sacred objects in religious rituals. For example, eagles, hawks and their feathers have great cultural and spiritual value to Native Americans as religious objects. In Hinduism the cow is regarded as sacred. Muslims conduct sacrifices on Eid al-Adha to commemorate the sacrificial spirit of Ibrāhīm (Arabic for Abraham) in love of God. Camels, sheep, and goats may be offered as sacrifice during the three days of Eid. In Christianity the Bible has a variety of animal symbols; the Lamb is a famous title of Jesus. In the New Testament the Gospels of Mark, Luke and John have animal symbols: "Mark is a lion, Luke is a bull and John is an eagle." Tourism Suffering Loss and extinction This subsection focuses on anthropogenic forms of wildlife destruction. The loss of animals from ecological communities is also known as defaunation. Exploitation of wild populations has been a characteristic of modern humans since their exodus from Africa 130,000 – 70,000 years ago. The rate of extinctions of entire species of plants and animals across the planet has been so high in the last few hundred years that it is widely believed that a sixth great extinction event ("the Holocene Mass Extinction") is currently ongoing. 
The 2019 Global Assessment Report on Biodiversity and Ecosystem Services, published by the United Nations' Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, says that roughly one million species of plants and animals face extinction within decades as the result of human actions. Subsequent studies have found that the destruction of wildlife is "significantly more alarming" than previously believed, with some 48% of 70,000 monitored animal species experiencing population declines as the result of human industrialization. According to a 2023 study published in PNAS, "immediate political, economic, and social efforts of an unprecedented scale are essential if we are to prevent these extinctions and their societal impacts." The four most general causes of wildlife destruction are overkill, habitat destruction and fragmentation, the impact of introduced species, and chains of extinction. Overkill Overkill happens whenever hunting occurs at rates greater than the reproductive capacity of the population being exploited. The effects of this are often noticed much more dramatically in slow-growing populations such as many larger species of fish. Initially, when a portion of a wild population is hunted, the increased availability of resources (food, etc.) raises growth and reproduction as density-dependent inhibition is lowered: hunting, fishing and so on lower the competition between members of a population. However, if this hunting continues at a rate greater than the rate at which new members of the population can reach breeding age and produce more young, the population will begin to decrease in numbers. Populations that are confined to islands, whether literal islands or just areas of habitat that are effectively an "island" for the species concerned, have also been observed to be at greater risk of dramatic population declines following unsustainable hunting. 
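The overkill dynamic described above can be illustrated with a simple harvest model. This is a hypothetical sketch, not anything from the article: the logistic growth assumption, the function name `simulate`, and all parameter values are illustrative choices. A population grows logistically while a fixed number of animals is removed each year; removals below the maximum sustainable yield leave the population at a stable equilibrium, while removals above it drive the population toward zero.

```python
# Illustrative model only: logistic growth with a constant yearly harvest,
# showing how a population collapses once removals exceed its capacity to
# replace them. All parameter values below are hypothetical.

def simulate(n0, r, K, harvest, years):
    """Logistic growth minus a fixed annual harvest; population floors at zero."""
    n = n0
    history = [n]
    for _ in range(years):
        growth = r * n * (1 - n / K)      # density-dependent recruitment
        n = max(n + growth - harvest, 0.0)
        history.append(n)
    return history

# For logistic growth, the maximum sustainable yield is r*K/4.
r, K = 0.2, 1000.0                        # a slow-growing population
msy = r * K / 4                           # 50 animals per year

sustainable = simulate(500.0, r, K, harvest=40.0, years=50)  # below MSY
overkill    = simulate(500.0, r, K, harvest=60.0, years=50)  # above MSY

print(round(sustainable[-1]))  # persists near a stable equilibrium
print(round(overkill[-1]))     # collapses to zero within the horizon
```

The same structure explains why slow-growing populations (low r) are hit hardest: their maximum sustainable yield is small, so even modest hunting pressure can exceed it.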
Habitat destruction and fragmentation The habitat of any given species is considered its preferred area or territory. Many processes associated with human habitation of an area cause loss of this area and decrease the carrying capacity of the land for that species. In many cases these changes in land use cause a patchy break-up of the wild landscape. Agricultural land frequently displays this type of extremely fragmented, or relictual, habitat. Farms sprawl across the landscape with patches of uncleared woodland or forest dotted in between occasional paddocks. Examples of habitat destruction include grazing of bushland by farmed animals, changes to natural fire regimes, forest clearing for timber production and wetland draining for city expansion. This is particularly challenging because wild animals cannot make use of piped water supplies, so they cannot survive in habitats without access to surface water. Impact of introduced species Mice, cats, rabbits, dandelions and poison ivy are all examples of species that have become invasive threats to wild species in various parts of the world. Frequently, species that are uncommon in their home range become out-of-control invasions in distant but similar climates. The reasons for this have not always been clear, and Charles Darwin felt it was unlikely that exotic species would ever be able to grow abundantly in a place in which they had not evolved. The reality is that the vast majority of species exposed to a new habitat do not reproduce successfully. Occasionally, however, some populations do take hold and, after a period of acclimation, can increase in numbers significantly, having destructive effects on many elements of the native environment of which they have become part. Chains of extinction This final group is one of secondary effects. All wild populations of living things have many complex intertwining links with other living things around them. 
Large herbivorous animals such as the hippopotamus have populations of insectivorous birds that feed off the many parasitic insects that grow on the hippo. Should the hippo die out, so too will these groups of birds, leading to further destruction as other species dependent on the birds are affected. Also referred to as a domino effect, this series of chain reactions is by far the most destructive process that can occur in any ecological community. Another example is the black drongos and the cattle egrets found in India. These birds feed on insects on the back of cattle, which helps to keep them disease-free. Destroying the nesting habitats of these birds would cause a decrease in the cattle population because of the spread of insect-borne diseases.
Biology and health sciences
Ecology
Biology
144355
https://en.wikipedia.org/wiki/Galliformes
Galliformes
Galliformes is an order of heavy-bodied ground-feeding birds that includes turkeys, chickens, quail, and other landfowl. Gallinaceous birds, as they are called, are important in their ecosystems as seed dispersers and predators, and are often reared by humans for their meat and eggs, or hunted as game birds. The order contains about 290 species, inhabiting every continent except Antarctica, and divided into five families: Phasianidae (including chickens, quail, partridges, pheasants, turkeys, peafowl (peacocks) and grouse), Odontophoridae (New World quail), Numididae (guinea fowl), Cracidae (including chachalacas and curassows), and Megapodiidae (incubator birds like malleefowl and brush-turkeys). They adapt to most environments except for innermost deserts and perpetual ice. Many gallinaceous species are skilled runners and escape predators by running rather than flying. Males of most species are more colorful than the females, with often elaborate courtship behaviors that include strutting, fluffing of tail or head feathers, and vocal sounds. They are mainly nonmigratory. Several species have been domesticated during their long and extensive relationships with humans. The name Galliformes derives from gallus, Latin for "rooster". Common names are gamefowl or gamebirds, landfowl, gallinaceous birds, or galliforms. Galliforms and waterfowl (order Anseriformes) are collectively called fowl. Systematics and evolution The living Galliformes were once divided into seven or more families. Despite their distinctive appearance, grouse and turkeys probably do not warrant separation as families due to their recent origin from partridge- or pheasant-like birds. The turkeys became larger after their ancestors colonized temperate and subtropical North America, where pheasant-sized competitors were absent. The ancestors of grouse, though, adapted to harsh climates and could thereby colonize subarctic regions. 
Consequently, the Phasianidae are expanded in current taxonomy to include the former Tetraonidae and Meleagrididae as subfamilies. The Anseriformes (waterfowl) and the Galliformes together make up the Galloanserae. They are basal among the living neognathous birds, and normally follow the Paleognathae (ratites and tinamous) in modern bird classification systems. This was first proposed in the Sibley-Ahlquist taxonomy and has been the one major change of that proposed scheme that was almost universally adopted. However, the Galliformes as they were traditionally delimited are called Gallomorphae in the Sibley-Ahlquist taxonomy, which splits off the Cracidae and Megapodiidae as an order "Craciformes". This is not a natural group, however, but rather an erroneous result of the now-obsolete phenetic methodology employed in the Sibley-Ahlquist taxonomy. Phenetic studies do not distinguish between plesiomorphic and apomorphic characters, which leads to basal lineages appearing as monophyletic groups. Historically, the buttonquails (Turnicidae), mesites (Mesitornithidae) and the hoatzin (Opisthocomus hoazin) were placed in the Galliformes, too. The former are now known to be shorebirds adapted to an inland lifestyle, whereas the mesites are probably closely related to pigeons and doves. The relationships of the hoatzin are entirely obscure, and it is usually treated as a monotypic order Opisthocomiformes to signify this. The fossil record for the Galliformes is incomplete. Evolution Galloanserae-like birds were among the main survivors of the K–T extinction event, which killed off the non-avian dinosaurs. The dominant birds of the dinosaur era were the Enantiornithes, toothed birds that dominated the trees and skies. Unlike those enantiornithes, the ancestors of the galliformes were a niche group that was toothless and ground-dwelling. 
When the asteroid impact killed off all non-avian dinosaurs and the dominant birds, it destroyed all creatures that lived in trees and on open ground. The enantiornithes were wiped out, but the ancestors of galliformes were small and lived on the ground (while the ancestors of the Anseriformes lived around water), which protected them from the blast and destruction. Fossils of these galliform-like birds originate in the Late Cretaceous, most notably those of Austinornis lentus. Its partial left tarsometatarsus was found in the Austin Chalk near Fort McKinney, Texas, dating to about 85 million years ago (Mya). This bird was quite certainly closely related to Galliformes, but whether it was a part of these or belongs elsewhere in the little-known galliform branch of Galloanserae is not clear. However, in 2004, Clarke classified it as a member of the larger group Pangalliformes, more closely related to chickens than to ducks, but not a member of the crown group that includes all modern galliformes. Another specimen, PVPH 237, from the Late Cretaceous Portezuelo Formation (Turonian-Coniacian, about 90 Mya) in the Sierra de Portezuelo (Argentina) has also been suggested to be an early galliform relative. This is a partial coracoid of a neornithine bird, which in its general shape and particularly the wide and deep attachment for the muscle joining the coracoid and the humerus bone resembles the more basal lineages of galliforms. Additional galliform-like pangalliformes are represented by extinct families from the Paleogene, namely the Gallinuloididae, Paraortygidae and Quercymegapodiidae. In the early Cenozoic, some additional birds may or may not be early Galliformes, though even if they are, they are unlikely to belong to extant families: †Argillipes (London Clay Early Eocene of England) †Coturnipes (Early Eocene of England, and Virginia, USA?) 
†Palaeophasianus (Willwood Early Eocene of Bighorn County, USA) †Percolinus (London Clay Early Eocene of England) †Amitabha (Bridger middle Eocene of Forbidden City, USA) – phasianid? †"Palaeorallus" alienus (middle Oligocene of Tatal-Gol, Mongolia) †Anisolornis (Santa Cruz Middle Miocene of Karaihen, Argentina) From the mid-Eocene onwards – about 45 Mya or so, true galliforms are known, and these completely replace their older relatives in the early Neogene. Since the earliest representatives of living galliform families apparently belong to the Phasianidae – the youngest family of galliforms, the other families of Galliformes must be at least of Early Eocene origin but might even be as old as the Late Cretaceous. The ichnotaxon Tristraguloolithus cracioides is based on fossil eggshell fragments from the Late Cretaceous Oldman Formation of southern Alberta, Canada, which are similar to chachalaca eggs, but in the absence of bone material, their relationships cannot be determined except that they are apparently avian in origin. Modern genera of phasianids start appearing around the Oligocene-Miocene boundary, roughly 25–20 Mya. It is not well known whether the living genera of the other, older, galliform families originated around the same time or earlier, though at least in the New World quail, pre-Neogene forms seem to belong to genera that became entirely extinct later on. A number of Paleogene to mid-Neogene fossils are quite certainly Galliformes, but their exact relationships in the order cannot be determined: †Galliformes gen. et sp. indet. (Oligocene) – formerly in Gallinuloides; phasianid? †Palaealectoris (Agate Fossil Beds Early Miocene of Sioux County, USA) – tetraonine? 
List of major taxa For a long time, the pheasants, partridges, and relatives were indiscriminately lumped in the Phasianidae, variously including or excluding turkeys, grouse, New World quail, and guineafowl, and divided into two subfamilies – the Phasianinae (pheasant-like forms) and the Perdicinae (partridge-like forms). This crude arrangement was long considered to be in serious need of revision, but even with modern DNA sequence analyses and cladistic methods, the phylogeny of the Phasianidae has resisted complete resolution. A tentative list of the higher-level galliform taxa, listed in evolutionary sequence, is: †Archaeophasianus Lambrecht 1933 (Oligocene? – Late Miocene) †Argillipes Harrison & Walker 1977 †Austinornis Clarke 2004 [Pedioecetes Baird 1858] (Austin Chalk Late Cretaceous of Fort McKinney, USA) †Chambiortyx Mourer-Chauviré et al. 2013 †Coturnipes Harrison & Walker 1977 †Cyrtonyx tedfordi (Barstow Late Miocene of Barstow, USA) †Linquornis Yeh 1980 (middle Miocene) †Namaortyx Mourer-Chauviré, Pickford & 2011 †Palaeorallus alienus Kuročkin 1968 nomen dubium †Sobniogallus Tomek et al. 2014 †Tristraguloolithus Zelenitsky, Hills & Curri 1996 [ootaxa- cracid?] †Procrax Tordoff & Macdonald 1957 (middle Eocene? – Early Oligocene) †Paleophasianus Wetmore 1940 †Taoperdix Milne-Edwards 1869 (Late Oligocene) Family †Gastornithidae? Fürbringer, 1888 Gastornis Hébert, 1855 (vide Prévost, 1855) [Diatryma Cope, 1876] (Paleocene-Eocene) Family †Sylviornithidae? 
Mourer-Chauviré & Balouet, 2005 †Sylviornis Poplin, 1980 (Holocene) †Megavitiornis Worthy, 2000 (Holocene) Family †Paraortygidae Mourer-Chauviré 1992 †Pirortyx Brodkorb 1964 †Scopelortyx Mourer-Chauviré, Pickford & Senut 2015 †Paraortyx Gaillard 1908 sensu Brodkorb 1964 †Xorazmortyx Zelenkov & Panteleyev 2019 Family †Quercymegapodiidae Mourer-Chauviré 1992 †Taubacrex Alvarenga 1988 †Ameripodius Alvarenga 1995 †Quercymegapodius Mourer-Chauviré 1992 Family Megapodiidae – mound-builders and scrubfowl, or megapodes †Mwalau Worthy et al. 2015 (Lini's megapode) †Ngawupodius & Ivison 1999 Brushturkey group Talegalla Lesson 1828 Leipoa Gould 1840 [Progura de Vis 1889; Chosornis de Vis 1889; Palaeopelargus de Vis 1892] (Malleefowl) Alectura Gray 1831 [Catheturus Swainson 1837] (Australian Brushturkeys) Aepypodius Oustalet 1880 Scrubfowl group Macrocephalon Müller 1846 [Megacephalon Gray 1846; Megacephalon Gray 1844 nomen nudum; Galeocephala Mathews 1926] (Maleos) Eulipoa Ogilvie-Grant 1893 (Moluccan Megapodes) Megapodius Gaimard 1823 non (sic) Mathews 1913 [Megathelia Mathews 1914; Amelous Gloger 1841] Family Cracidae – chachalacas, guans and curassows †Archaealectrornis Crowe & Short 1992 (Oligocene) †Boreortalis Brodkorb 1954 †Palaeonossax Wetmore 1956 (Brule Late Oligocene of South Dakota, USA) Penelopinae Bonaparte 1851 (Guans) Chamaepetes Wagler 1832 (black & sickle-winged guan) Penelopina Reichenbach 1861 (Highland Guans) Aburria Reichenbach 1853 [Opetioptila Sundevall 1873; Pipile Bonaparte 1856 non Pipilo Vieillot 1816; Cumana Coues 1900] Penelope Merrem 1786 [Penelopsis Bonaparte 1856] Cracinae Rafinesque 1815 Ortalis Merrem 1786 [Ganix Rafinesque 1815] {Ortalidini Donegan 2012} (Chachalacas) Oreophasis Gray 1844 {Oreophasini Bonaparte 1853} (Horned Guans) Cracini Rafinesque 1815 (Curassows) Nothocrax Burmeister 1856 (Nocturnal Curassows) Pauxi Temminck 1813 [Ourax Cuvier 1817; Lophocerus Swainson 1837 non Hemprich & Ehrenberg 1833; Urax Reichenbach 1850] Mitu 
Lesson 1831 (razor-billed curassows) Crax Linnaeus 1758 Suborder Phasiani Family †Gallinuloididae – tentatively placed here †Gallinuloides Eastman 1900 [Palaeobonasa Shufeldt 1915] †Paraortygoides Mayr 2000 Family Numididae – guineafowl Guttera Wagler 1832 Numida Linnaeus 1764 [Querelea Reichenbach 1852] (Helmeted Guineafowl) Acryllium Gray 1840 (Vulturine Guineafowl) Agelastes Bonaparte 1850 Family Odontophoridae – New World quail †Miortyx Miller 1944 †Nanortyx Weigel 1963 †Neortyx Holman 1961 Ptilopachinae Bowie, Coehn & Crowe 2013 Ptilopachus Swainson 1837 Odontophorinae Gould 1844 Rhynchortyx Ogilvie-Grant 1893 (Tawny-faced Quail) Oreortyx Baird 1858 [Orortyx Coues 1882] (Mountain Quail) Dendrortyx Gould 1844 (Wood Partridges) Philortyx Gould 1846 non Des Murs 1854 (Banded Quail) Colinus Goldfuss 1820 [Eupsychortyx Gould 1844; Gnathodon 1842; Ortygia Boie 1826; Philortyx Des Murs 1854 non Gould 1846] (Bobwhites) Callipepla Wagler 1832 [Lophortyx Bonaparte 1838] () Cyrtonyx Gould 1844 () Dactylortyx Ogilvie-Grant 1893 (Singing Quail) Odontophorus Vieillot 1816 [Dentophorus Boie 1828] (Wood Quail) Family Phasianidae – pheasants, partridges and relatives †Alectoris” pliocaena Tugarinov 1940b †Bantamyx Kuročkin 1982 †Centuriavis lioae Ksepka et al., 2022 †Diangallus Hou 1985 †"Gallus" beremendensis Jánossy 1976b †"Gallus" europaeus Harrison 1978 †Lophogallus Zelenkov & Kuročkin 2010 †Megalocoturnix Sánchez Marco 2009 †Miophasianus Brodkorb 1952 [Miophasianus Lambrecht 1933 nomen nudum ; Miogallus Lambrecht 1933 ] †Palaeocryptonyx Depéret 1892 [Chauvireria Boev 1997; Pliogallus Tugarinov 1940b non Gaillard 1939; Lambrechtia Janossy 1974 ] †Palaeortyx Milne-Edwards 1869 [Palaeoperdix Milne-Edwards 1869] †Plioperdix Kretzoi 1955 [Pliogallus Tugarinov 1940 nec Gaillard 1939] †Rustaviornis Burchak-Abramovich & Meladze 1972 †Schaubortyx Brodkorb 1964 †Shandongornis Yeh 1997 †Shanxiornis Wang et al. 
2006 †Tologuica Zelenkov & Kuročkin 2009 Subfamily Rollulinae Bonaparte, 1850 Subfamily Phasianinae Tribe Lerwini von Boetticher, 1939 – snow partridge Tribe Ithaginini Wolters 197 – blood pheasant Tribe Lophophorini Gray, 1841 – monals, monal-partridges, and tragopans Tribe Pucrasiini Wolters 1976 – koklass pheasant Tribe Meleagridini – turkey Tribe Tetraonini Leach 1820 – grouse Tribe Rhizotherini – long-billed partridges Tribe Phasianini Horsfield 1821 – true pheasants and partridges Subfamily Pavoninae Tribe Pavonini Rafinesque 1815 – peafowl, arguses, and Tropicoperdix partridges Tribe Polyplectronini Blyth 1852 – peacock-pheasants, Asian spurfowl, and crimson-headed partridge Tribe Gallini Brehm 1831 – junglefowl, bamboo partridges, and true francolins Tribe Coturnicini Reichenbach, 1848 - Old World quail, snowcocks, and allies The relationships of many pheasants and partridges were formerly very badly resolved and much confounded by adaptive radiation (in the former) and convergent evolution (in the latter). Thus, the bulk of the Phasianidae was alternatively treated as a single subfamily Phasianinae. The grouse, turkeys, true pheasants, etc., would then become tribes of this subfamily, similar to how the Coturnicinae are commonly split into a quail and a spurfowl tribe. In 2021, Kimball et al. found the family to comprise three distinct subfamilies, with two containing multiple genera; these results were followed by the International Ornithological Congress. The partridge of Europe is not closely related to other partridge-like Galliformes, as already indicated by its sexually dimorphic coloration and possession of more than 14 rectrices, traits it shares with the other advanced phasianids. However, among these its relationships are obscure; it is unclear whether it is closer to the turkeys or to certain short-tailed pheasants like Ithaginis, Lophophorus, Pucrasia, and Tragopan. In 2021, Kimball et al. 
found it to belong to the tribe Phasianini, alongside the true pheasants. Phylogeny Living Galliformes based on the work of John Boyd. Description As their name suggests, they are chicken-like in appearance, with rounded bodies and blunt wings, and range in size from small, at 15 cm (6 in), to large, at 120 cm (4 ft). They are mainly terrestrial birds, and their wings are short and rounded for short-distance flight. Galliforms are anisodactyl like passerines, but some of the adult males grow spurs that point backwards. Gallinaceous birds are arboreal or terrestrial animals; many prefer not to fly, but instead walk and run for locomotion. They live 5–8 years in the wild and up to 30 years in captivity. They can be found worldwide and in a variety of habitats, including forests, deserts, and grasslands. They use visual displays and vocalizations for communication, courtship, fighting, territoriality, and brooding. They have diverse mating strategies: some are monogamous, while others are polygamous or polygynandrous. Male courtship behavior includes elaborate visual displays of plumage. They breed seasonally in accordance with the climate and lay three to 16 eggs per year in nests built on the ground or in trees. Gallinaceous birds feed on a variety of plant and animal material, which may include fruits, seeds, leaves, shoots, flowers, tubers, roots, insects, snails, worms, lizards, snakes, small rodents, and eggs. These birds vary in size from the diminutive king quail (Coturnix chinensis), about 13 cm (5 in) long and weighing 28–40 g (1–1.4 oz), to the largest extant galliform species, the North American wild turkey (Meleagris gallopavo), which may weigh as much as 14 kg (30.5 lb) and may exceed 120 cm (47 in). 
The galliform bird species with the largest wingspan and largest overall length (including a train of over 6 feet) is most likely the green peafowl (Pavo muticus). Most galliform genera are plump-bodied with thick necks and moderately long legs, with rounded and rather short wings. Grouse, pheasants, francolins, and partridges are typical in their outwardly corpulent silhouettes. Adult males of many galliform birds have one to several sharp horny spurs on the back of each leg, which they use for fighting. In several lineages, pronounced sexual dimorphism occurs, and among each galliform clade, the more apomorphic ("advanced") lineages tend to be more sexually dimorphic. Flightlessness While most galliformes are rather reluctant flyers, truly flightless forms are unknown among the extant members of the order. Though they are often mischaracterised as weak-flying, Galliformes are actually highly specialised for their particular flight style, bearing extremely powerful flight muscles, and some species are even migratory. Adult snowcocks are, however, flightless, requiring gravity to launch, although juveniles can still fly relatively well. Nonetheless, a few birds outside the galliform crown group did evolve flightlessness. The genus Sylviornis, a huge prehistorically extinct bird of New Caledonia, was flightless, but as opposed to most other flightless birds like ratites or island rails, which become flightless due to arrested development of their flight apparatus and subsequently evolve to larger size, Sylviornis seems to have become flightless simply due to its bulk, with the wing reduction following as a consequence of, not being the reason for, its flightlessness. The gigantic Australian mihirungs, which may be closer to Galliformes than to Anseriformes as traditionally thought, achieved flightlessness more traditionally, strongly reducing their wings and keel. They were massive herbivorous birds, among the largest avian dinosaurs of all time. 
By contrast, the stem-galliform Scopelortyx appears to have been more aerial than modern fowl, with a flight style more suited for gliding and soaring. Behaviour and ecology Most of the galliform birds are more or less resident, but some of the smaller temperate species (such as quail) do migrate over considerable distances. Altitudinal migration is evidently quite common amongst montane species, and a few species of subtropical and subarctic regions must reach their watering and/or foraging areas through sustained flight. Species known to make extensive flights include the ptarmigans, sage-grouse (Centrocercus), crested partridge, green peafowl, crested argus, mountain peacock-pheasant (Polyplectron inopinatum), koklass pheasant (Pucrasia macrolopha), and Reeves's pheasant (Syrmaticus reevesii). Other species — most of the New World quail (also known as the ‘toothed quail’), the enigmatic stone partridge (Ptilopachus petrosus) of Africa, guineafowl, and eared pheasants (Crossoptilon) — are all notable for their daily excursions on foot, which may take them many miles in a given day. Some Galliformes are adapted to grassland habitat, and these genera are remarkable for their long, thin necks, long legs, and large, wide wings. Fairly unrelated species like the crested fireback (Lophura ignita), vulturine guineafowl (Acryllium vulturinum), and malleefowl (Leipoa ocellata) are outwardly similar in their body types (see also convergent evolution). Most species that show only limited sexual dimorphism are notable for the great amount of locomotion required to find food throughout the majority of the year. Those species that are highly sedentary but undergo marked ecological transformations over the seasons exhibit marked differences between the sexes in size and/or appearance. Eared pheasants, guineafowl, toothed quail, and the snow partridge (Lerwa lerwa) are examples of limited sexual differences and requirements for traveling over wide terrain to forage. 
Winter ecology Gallinaceous birds are well adapted to regions with cold winters. Their larger size, increased plumage, and lower activity levels help them to withstand the cold and conserve energy. Under such conditions, they are able to change their feeding strategy to that of a ruminant. This allows them to feed on and extract energy and nutrients from coarse, fibrous plant material, such as buds, twigs, and conifer needles. This provides a virtually unlimited source of accessible food and requires little energy to harvest. Food and feeding Herbivorous to slightly omnivorous galliforms, forming the majority of the group, are typically stoutly built and have short, thick bills primarily adapted for foraging on the ground for rootlets or the consumption of other plant material such as heather shoots. The young birds will also take insects. Peafowl, junglefowl and most of the subtropical pheasant genera have very different nutritional requirements from typical Palearctic genera. The Himalayan monal (Lophophorus impejanus) has been observed digging in the rotting wood of deadfall in a similar manner to woodpeckers to extract invertebrates, even bracing itself with aid of its squared tail. The cheer pheasant (Catreus wallichi), crested argus (Rheinardia ocellata), the crested partridge (Rollulus roulroul) and the crested guineafowl (Guttera pucherani) are similar ecologically to the Himalayan monal in that they too forage in rotting wood for termites, ant and beetle larvae, molluscs, crustaceans and young rodents. Typical peafowl (Pavo), most of the peacock-pheasants (Polyplectron), the Bulwer's pheasant (Lophura bulweri), the ruffed pheasants (Chrysolophus) and the hill partridges (Arborophila) have narrow, relatively delicate bills, poorly suited for digging. These galliform genera prefer instead to capture live invertebrates in leaf litter, in sand, or shallow pools or along stream banks. 
These genera are also outwardly similar in that they each have exceptionally long, delicate legs and toes and the tendency to frequent seasonally wet habitats to forage, especially during chick-rearing. The blue peafowl (Pavo cristatus) is famed in its native India for its appetite for snakes – even poisonous cobras – which it dispatches with its strong feet and sharp bill. The Lady Amherst's pheasant (Chrysolophus amherstiae), green peafowl (Pavo muticus), Bulwer's pheasant and the crestless fireback (Lophura erythrophthalma) are notable for their aptitude to forage for crustaceans such as crayfish and other aquatic small animals in shallow streams and amongst rushes in much the same manner as some members of the rail family (Rallidae). Similarly, although wild turkeys (Meleagris gallopavo) have a diet primarily of vegetation, they will eat insects, mice, lizards, and amphibians, wading in water to hunt for the latter. Domestic hens (Gallus domesticus) share this opportunistic behaviour and will eat insects, mice, worms, and amphibians. The tragopans (Tragopan), mikado pheasant (Syrmaticus mikado), and several species of grouse and ptarmigan are exceptional in their largely vegetarian and arboreal foraging habits; grouse are especially notable for being able to feed on plants rich in terpenes and quinones—such as sagebrush or conifers—which are often avoided by other herbivores. Many species of moderate altitudes—for example the long-tailed pheasants of the genus Syrmaticus—also find a great deal of their daily nutritional requirements in the tree canopies, especially during the snowy and rainy periods when foraging on the ground is dangerous and less than fruitful for a variety of reasons. Although members of the genus Syrmaticus are capable of subsisting almost entirely on vegetarian materials for months at a time, this is not true for many of the subtropical genera. 
For example, the great argus (Argusianus argus) and crested argus may do most of their foraging during rainy months in the canopy of the jungle, as well. There they are known to forage on slugs, snails, ants, and amphibians to the exclusion of plant material. Exactly how they forage in the forest canopy during the rainy months is unknown. Reproduction Most galliforms are very prolific, with clutches regularly exceeding 10 eggs in many species. In contrast to most birds which are – at least for a particular breeding season – monogamous, galliforms are often polygynous or polygamous. Such species can be recognized by their pronounced sexual dimorphism. Galliform young are very precocious and roam with their mothers – or both parents in monogamous species – mere hours after hatching. The most extreme case is the Megapodiidae, where the adults do not brood, but leave incubation to mounds of rotting vegetation, volcanic ash, or hot sand. The young must dig out of the nest mounds after hatching, but they emerge from the eggs fully feathered, and upon leaving the mound, they are able to fly considerable distances. Common species Grouse and ptarmigans - Family Tetraonidae Grouse, ptarmigans, and prairie chickens are all chicken-like birds with short, curved, strong bills, part of the family Tetraonidae. This group includes 25 species residing mostly in North America. They are mainly ground-dwellers and have short, rounded wings for brief flights. They are well adapted to winter by growing feather "snowshoes" on their feet and roosting beneath the snow. They range in size from the white-tailed ptarmigan to the sage grouse. Their plumage is dense and soft and is most commonly found in shades of red, brown, and gray to camouflage them against the ground. They are polygamous and male courtship behavior includes strutting and dancing and aggressive fighting for possession of females. The typical clutch size is between seven and 12 eggs. 
Turkeys - Family Meleagrididae Turkeys are large, long-legged birds that can grow up to in height and weigh up to in the wild. They have a long, broad, rounded tail with 14–19 blunt feathers. They have a naked, wrinkled head and feathered body. The North American wild turkey – Meleagris gallopavo – has five distinct subspecies (Eastern, Rio Grande, Florida [Osceola], Merriam's, and Gould's). Hybrids also exist where the ranges of these subspecies overlap. All are native only to North America, though transplanted populations exist elsewhere. Their plumage differs slightly by subspecies, but is generally dark to black for males, with buff to cream highlights, and generally drab brown for females. The feathers are quite iridescent and can take on distinct reddish/copper hues in sunlight. Their feathers are well defined with broad, square ends, giving the bird the appearance of being covered in scales. Males have a "beard" of coarse black bristles hanging from the center of their upper breasts and tend to have more vibrantly colored plumage than do females. They breed in the spring and their typical clutch size is between 10 and 12 eggs. The ocellated turkey (Meleagris ocellata), a different species of turkey, currently exists only in a portion of the Yucatán peninsula. During the 19th and early 20th centuries, wild turkey populations dropped significantly because of hunting and habitat loss. However, populations now flourish again due to hunting management and transplanting. The ocellated turkey, not commonly hunted, is currently threatened due to ongoing habitat loss in the Yucatán. Pheasants, quail, and partridges - Family Phasianidae The family is divided into four groups: 30 species of New World quail, residing between Paraguay and Canada, 11 species of Old World quail in Africa, Australia, and Asia, 94 species of partridges, and 48 species of pheasants. This family includes a wide range of bird sizes from a quail to pheasants up to almost . 
Pheasants and quail have heavy, round bodies and rounded wings. Though they have short legs, they are very fast runners when escaping predators. Chachalacas - Family Cracidae Chachalacas are found in the chaparral ecosystems from southern Texas through Mexico and Costa Rica. They are mainly arboreal and make their nests in trees above the ground. They are large, long-legged birds that can grow up to long. They have long tails and are chicken-like in appearance. Their frail-looking yet sturdy nests are made of sticks and leaves. Their clutch size is three or four eggs. The males make a unique, loud mating call that gives them their name. Chachalacas feed mainly on berries, but also eat insects. They are a popular game bird, as their flesh is good to eat. They are also commonly domesticated as pets.
Biology and health sciences
Galliformes
null
144386
https://en.wikipedia.org/wiki/Voyager%20Golden%20Record
Voyager Golden Record
The Voyager Golden Records are two identical phonograph records which were included aboard the two Voyager spacecraft launched in 1977. The records contain sounds and data to reconstruct raster scan images selected to portray the diversity of life and culture on Earth, and are intended for any intelligent extraterrestrial life form who may find them. The records are a time capsule. Although neither Voyager spacecraft is heading toward any particular star, Voyager 1 will pass within 1.6 light-years' distance of the star Gliese 445, currently in the constellation Camelopardalis, in about 40,000 years. Carl Sagan noted that "The spacecraft will be encountered and the record played only if there are advanced space-faring civilizations in interstellar space, but the launching of this 'bottle' into the cosmic 'ocean' says something very hopeful about life on this planet." Background The Voyager 1 probe is currently the farthest human-made object from Earth. Both Voyager 1 and Voyager 2 have reached interstellar space, the region between stars where the galactic plasma is present. Like their predecessors Pioneer 10 and 11, which featured a simple plaque, both Voyager 1 and Voyager 2 were launched by NASA with a message aboard—a kind of time capsule, intended to communicate to extraterrestrials a story of the world of humans on Earth. Contents The contents of the record were selected for NASA by a committee chaired by Carl Sagan of Cornell University. The selection of content for the record took almost a year. Sagan and his associates assembled 116 images (one used for calibration) and a variety of natural sounds, such as those made by surf, wind, thunder and animals (including the songs of birds and whales). To this they added audio content to represent humanity: spoken greetings in 55 ancient and modern languages, including a spoken greeting in English by U.N. 
Secretary-General Kurt Waldheim and a greeting by Sagan's six-year-old son, Nick; other human sounds, like footsteps and laughter (Sagan's); the inspirational message Per aspera ad astra in Morse code; and musical selections from different cultures and eras. The record also includes a printed message from U.S. president Jimmy Carter. The collection of images includes many photographs and diagrams both in black and white, and color. The first images are of scientific interest, showing mathematical and physical quantities, the Solar System and its planets, DNA, and human anatomy and reproduction. Care was taken to include not only pictures of humanity, but also some of animals, insects, plants and landscapes. Images of humanity depict a broad range of cultures. These images show food, architecture, and humans in portraits as well as going about their day-to-day lives. Many pictures are annotated with one or more indications of scales of time, size, or mass. Some images contain indications of chemical composition. All measures used on the pictures are defined in the first few images using physical references that are likely to be consistent anywhere in the universe. The musical selection is also varied, featuring works by composers such as J. S. Bach (interpreted by Glenn Gould), Mozart, Beethoven (played by the Budapest String Quartet), and Stravinsky. The disc also includes music by Guan Pinghu, Blind Willie Johnson, Chuck Berry, Kesarbai Kerkar, Valya Balkanska, and electronic composer Laurie Spiegel, as well as Azerbaijani folk music (Mugham) by oboe player Kamil Jalilov. The inclusion of Berry's "Johnny B. Goode" was controversial, with some claiming that rock music was "adolescent", to which Sagan replied, "There are a lot of adolescents on the planet." 
The selection of music for the record was completed by a team composed of Carl Sagan as project director, Linda Salzman Sagan, Frank Drake, Alan Lomax, Ann Druyan as creative director, artist Jon Lomberg, ethnomusicologist Robert E. Brown, Timothy Ferris as producer, and Jimmy Iovine as sound engineer. It also included the sounds of humpbacked whales from the 1970 album by Roger Payne, Songs of the Humpback Whale. The Golden Record also carries an hour-long recording of the brainwaves of Ann Druyan. During the recording of the brainwaves, Druyan thought of many topics, including Earth's history, civilizations and the problems they face, and what it was like to fall in love. After NASA had received criticism over the nudity on the Pioneer plaque (line drawings of a naked man and woman), the agency chose not to allow Sagan and his colleagues to include a photograph of a nude man and woman on the record. Instead, only a silhouette of the couple was included. However, the record does contain "Diagram of vertebrate evolution", by Jon Lomberg, with drawings of an anatomically correct naked male and naked female, showing external organs. The person waving on the diagram was also changed: on the Pioneer plaque, the man is waving, while on the "Vertebrate evolution" image, the woman is waving. The pulsar map and hydrogen molecule diagram are shared in common with the Pioneer plaque. The 116 images (one used for calibration) are encoded in analogue form and composed of 512 vertical lines. The remainder of the record is audio, designed to be played at revolutions per minute. Jimmy Iovine, who was still early in his career as a music producer, served as sound engineer for the project at the recommendation of John Lennon, who was contacted to contribute but was unable to take part. Sagan's team wanted to include the Beatles 1969 song "Here Comes the Sun" on the record, but the record company EMI, which held the copyrights, declined. 
In the 1978 book Murmurs of Earth, the failure to secure permission for the song is cited as one of the legal challenges faced by the team compiling the Voyager Golden Record. In the book, Sagan said that the Beatles favoured the idea, but "[they] did not own the copyright, and the legal status of the piece seemed too murky to risk." When asked about the obstacle presented by EMI with regard to "Here Comes the Sun", despite the artists' wishes, Ann Druyan said in 2015: "Yeah, that was one of those cases of having to see the tragedy of our planet. Here's a chance to send a piece of music into the distant future and distant time, and to give it this kind of immortality, and they're worried about money ... we got this telegram [from EMI] saying that it will be $50,000 per record for two records, and the entire Voyager record cost $18,000 to produce." However, this was denied in 2017 by Timothy Ferris; in his recollection, "Here Comes the Sun" was not seriously considered for inclusion. In July 2015, NASA uploaded the audio contents of the record to the audio streaming service SoundCloud. Images Playback In the upper left-hand corner of the record cover is a drawing of the phonograph record and the stylus carried with it. The stylus is in the correct position to play the record from the beginning. Written around it in binary notation is the correct time of one rotation of the record, 3.6 seconds, expressed in time units of 0.70 billionths of a second, the time period associated with a fundamental transition of the hydrogen atom. The drawing indicates that the record should be played from the outside in. Below this drawing is a side view of the record and stylus, with a binary number giving the time to play one side of the record—about an hour (more precisely, between 53 and 54 minutes). The information in the upper right-hand portion of the cover is designed to show how pictures are to be constructed from the recorded signals. 
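The cover's timing scheme lends itself to a quick sanity check. The sketch below uses the rounded values quoted in the text (a 0.70 ns hydrogen-transition unit, a 3.6 s rotation time, and 512 picture lines of about 8 ms each), so the binary number it prints is illustrative rather than the exact figure engraved on the cover:

```python
# Sketch of the record-cover timing scheme. Values are the rounded figures
# from the text; the true hydrogen hyperfine period is slightly longer than
# 0.70 ns, so the exact engraved binary number differs from this result.

H_UNIT = 0.70e-9     # seconds per hydrogen-transition time unit (approximate)
ROTATION = 3.6       # seconds per rotation of the record

rotation_units = round(ROTATION / H_UNIT)   # the number written in binary
print(f"rotation time in hydrogen units: {rotation_units:,}")
print(f"in binary notation: {bin(rotation_units)}")

lines_per_picture = 2 ** 9    # 512 vertical scan lines per image
line_duration = 8e-3          # ~8 ms per picture line, per the cover diagram
picture_duration = lines_per_picture * line_duration
print(f"one picture occupies about {picture_duration:.3f} s of signal")
```

This also shows why the image count had to stay small: at roughly four seconds of signal per frame, the 116 images consume a substantial share of one side's playing time.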
The top drawing shows the typical signal that occurs at the start of a picture. The picture is made from this signal, which traces the picture as a series of vertical lines, similar to analog television (in which the picture is a series of horizontal lines). Picture lines 1, 2 and 3 are noted in binary numbers, and the duration of one of the "picture lines", about 8 milliseconds, is noted. The drawing immediately below shows how these lines are to be drawn vertically, with staggered "interlace" to give the correct picture rendition. Immediately below this is a drawing of an entire picture raster, showing that there are 512 (29) vertical lines in a complete picture. Immediately below this is a replica of the first picture on the record to permit the recipients to verify that they are decoding the signals correctly. A circle was used in this picture to ensure that the recipients use the correct ratio of horizontal to vertical height in picture reconstruction. Color images were represented by three images in sequence, one each for red, green, and blue components of the image. A color image of the spectrum of the sun was included for calibration purposes. The drawing in the lower left-hand corner of the cover is the pulsar map previously sent as part of the plaques on Pioneers 10 and 11. It shows the location of the Solar System with respect to 14 pulsars, whose precise periods are given. The drawing containing two circles in the lower right-hand corner is a drawing of the hydrogen atom in its two lowest states, with a connecting line and digit 1 to indicate that the time interval associated with the transition from one state to the other is to be used as the fundamental time scale, both for the time given on the cover and in the decoded pictures. Manufacturing Blank records were provided by the Pyral S.A. of Créteil, France. CBS Records contracted the JVC Cutting Center in Boulder, Colorado to cut the lacquer masters which were then sent to the James G. 
Lee record-processing center in Gardena, California to cut and gold-plate eight Voyager records. After the records were plated they were mounted in aluminum containers and delivered to JPL. The record is a copper disk in diameter plated first with nickel and then gold. The record's cover is aluminum and electroplated upon it is an ultra-pure sample of the isotope uranium-238. Uranium-238 has a half-life of 4.468 billion years. It is possible (e.g., via mass spectrometry) that a civilization that encounters the record will be able to use the ratio of remaining uranium to the other elements to determine the age of the record. The records also had the inscription "To the makers of music – all worlds, all times" hand-etched on its surface. The inscription was located in the "takeout grooves", an area of the record between the label and playable surface. Since this was not in the original specifications, the record was initially rejected, to be replaced with a blank disc. Sagan later convinced the administrator to include the record as is. Journey Voyager 1 was launched in 1977, passed the orbit of Pluto in 1990, and left the Solar System (in the sense of passing the termination shock) in November 2004. It is now in the Kuiper belt. In about 40,000 years, it and Voyager 2 will each come to within about 1.8 light-years of two separate stars: Voyager 1 will have approached star Gliese 445, located in the constellation Camelopardalis, and Voyager 2 will have approached star Ross 248, located in the constellation of Andromeda. In May 2005, it was reported that Voyager 1 had entered the heliosheath, the region beyond the termination shock. The termination shock is where the solar wind, a thin stream of electrically charged gas blowing continuously outward from the Sun, is slowed by pressure from gas between the stars. At the termination shock, the solar wind slows abruptly from its average speed of and becomes denser and hotter. 
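The uranium-238 age-dating idea mentioned above reduces to simple decay arithmetic. A minimal sketch, in which the measured remaining fraction is an invented example rather than a real measurement:

```python
import math

# Illustrative sketch of dating the record from the fraction of U-238 left.
# Radioactive decay: N/N0 = (1/2)**(t / t_half)  =>  t = t_half * log2(N0/N)

T_HALF = 4.468e9  # years, half-life of uranium-238 (from the text)

def age_from_fraction(remaining_fraction: float) -> float:
    """Elapsed time in years, given the fraction of U-238 still present."""
    return T_HALF * math.log2(1.0 / remaining_fraction)

# After exactly one half-life, half of the uranium remains:
assert abs(age_from_fraction(0.5) - T_HALF) < 1.0

# A hypothetical finder measuring 90% remaining would infer ~680 Myr:
print(f"{age_from_fraction(0.9):.3e} years")
```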
In March 2012, Voyager 1 was over 17.9 billion km from the Sun and traveling at a speed of 3.6 AU per year (approximately ), while Voyager 2 was over 14.7 billion km away and moving at about 3.3 AU per year (approximately ). On September 12, 2013, NASA announced that Voyager 1 had left the heliosheath and entered interstellar space, although it still remains within the Sun's gravitational sphere of influence. Of the eleven instruments carried on Voyager 1, four are still operational and continue to send back data. It is expected that at least one science instrument will remain operational through 2025 and that engineering data could be transmitted for several more years afterward. Publications Most of the images used on the record (reproduced in black and white), together with information about its compilation, can be found in the 1978 book Murmurs of Earth: The Voyager Interstellar Record by Carl Sagan, F. D. Drake, Ann Druyan, Timothy Ferris, Jon Lomberg, and Linda Salzman. A CD-ROM version was issued by Warner New Media in 1992. Author Ann Druyan, who later married Carl Sagan, wrote about the Voyager Record in the epilogue of Sagan's final book Billions and Billions (1997). To celebrate the 40th anniversary of the record, Ozma Records launched a Kickstarter project to release the record contents in LP format as part of a box set also containing a hardcover book, turntable slipmat, and art print. The Kickstarter was successfully funded with over $1.4 million raised. Ozma Records then produced another edition of the three-disc LP vinyl record box set that also includes the audio content of the Golden Record, softcover book containing the images encoded on the record, images sent back by Voyager, commentary from Ferris, art print, turntable slipmat, and a collector's box. This edition was released in February 2018 along with a 2xCD-Book edition. 
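The speeds quoted in AU per year convert to more familiar units with straightforward arithmetic; the constants below are the standard astronomical unit and Julian year:

```python
# Converting the quoted Voyager speeds from AU per year to km/s.
# 1 AU = 1.495979e8 km; 1 Julian year = 3.15576e7 s (standard values).

AU_KM = 1.495979e8
YEAR_S = 3.15576e7

def au_per_year_to_km_s(v_au_per_year: float) -> float:
    return v_au_per_year * AU_KM / YEAR_S

print(f"Voyager 1 (3.6 AU/yr): {au_per_year_to_km_s(3.6):.1f} km/s")
print(f"Voyager 2 (3.3 AU/yr): {au_per_year_to_km_s(3.3):.1f} km/s")
```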
In January 2018, Ozma Records' "Voyager Golden Record; 40th Anniversary Edition" won a Grammy Award for best boxed or limited-edition package. Track listing The track listing is as it appears on the 2017 edition released by Ozma Records. Disc one Disc two
Technology
Unmanned spacecraft
null
144417
https://en.wikipedia.org/wiki/Comoving%20and%20proper%20distances
Comoving and proper distances
In standard cosmology, comoving distance and proper distance (or physical distance) are two closely related distance measures used by cosmologists to define distances between objects. Comoving distance factors out the expansion of the universe, giving a distance that does not change in time except due to local factors, such as the motion of a galaxy within a cluster. Proper distance roughly corresponds to where a distant object would be at a specific moment of cosmological time, which can change over time due to the expansion of the universe. Comoving distance and proper distance are defined to be equal at the present time. At other times, the Universe's expansion results in the proper distance changing, while the comoving distance remains constant. Comoving coordinates Although general relativity allows the formulation of the laws of physics using arbitrary coordinates, some coordinate choices are more natural or easier to work with. Comoving coordinates are an example of such a natural coordinate choice. They assign constant spatial coordinate values to observers who perceive the universe as isotropic. Such observers are called "comoving" observers because they move along with the Hubble flow. A comoving observer is the only observer who will perceive the universe, including the cosmic microwave background radiation, to be isotropic. Non-comoving observers will see regions of the sky systematically blue-shifted or red-shifted. Thus isotropy, particularly isotropy of the cosmic microwave background radiation, defines a special local frame of reference called the comoving frame. The velocity of an observer relative to the local comoving frame is called the peculiar velocity of the observer. Most large lumps of matter, such as galaxies, are nearly comoving, so that their peculiar velocities (owing to gravitational attraction) are small compared to their Hubble-flow velocity seen by observers in moderately nearby galaxies, (i.e. 
as seen from galaxies just outside the group local to the observed "lump of matter"). The comoving time coordinate is the elapsed time since the Big Bang according to a clock of a comoving observer and is a measure of cosmological time. The comoving spatial coordinates tell where an event occurs while cosmological time tells when an event occurs. Together, they form a complete coordinate system, giving both the location and time of an event. Space in comoving coordinates is usually referred to as being "static", as most bodies on the scale of galaxies or larger are approximately comoving, and comoving bodies have static, unchanging comoving coordinates. So for a given pair of comoving galaxies, while the proper distance between them would have been smaller in the past and will become larger in the future due to the expansion of the universe, the comoving distance between them remains constant at all times. The expanding Universe has an increasing scale factor which explains how constant comoving distances are reconciled with proper distances that increase with time. Comoving distance and proper distance Comoving distance is the distance between two points measured along a path defined at the present cosmological time. For objects moving with the Hubble flow, it is deemed to remain constant in time. The comoving distance χ from an observer to a distant object (e.g. galaxy) can be computed by the following formula (derived using the Friedmann–Lemaître–Robertson–Walker metric): χ = c ∫_{te}^{t} dt′/a(t′), where a(t′) is the scale factor, te is the time of emission of the photons detected by the observer, t is the present time, and c is the speed of light in vacuum. Despite being an integral over time, this expression gives the correct distance that would be measured by a set of comoving local rulers at fixed time t, i.e. the "proper distance" (as defined below) after accounting for the time-dependent comoving speed of light via the inverse scale factor term in the integrand. 
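The comoving-distance integral c ∫ dt′/a(t′) can be evaluated numerically for any scale factor. The sketch below assumes a toy matter-dominated universe with a(t) = (t/t0)^(2/3), chosen only because it has a closed-form answer to check against:

```python
# Numerical sketch of the comoving-distance integral
#     chi = c * ∫_{t_e}^{t_0} dt' / a(t')
# for an assumed matter-dominated toy universe with a(t) = (t/T0)**(2/3);
# the integral formula itself holds for any a(t).

C = 299_792.458   # speed of light, km/s
T0 = 4.35e17      # assumed present age of the universe, s (~13.8 Gyr)

def a(t: float) -> float:
    """Toy matter-dominated scale factor, normalised to a(T0) = 1."""
    return (t / T0) ** (2.0 / 3.0)

def comoving_distance(t_e: float, steps: int = 100_000) -> float:
    """Midpoint-rule integration of c * dt / a(t) from t_e to the present."""
    dt = (T0 - t_e) / steps
    return C * sum(dt / a(t_e + (i + 0.5) * dt) for i in range(steps))

t_e = T0 / 8   # light emitted when the universe was 1/8 its present age
numeric = comoving_distance(t_e)
# Closed form for this a(t): chi = 3*c*T0*(1 - (t_e/T0)**(1/3))
analytic = 3 * C * T0 * (1 - (t_e / T0) ** (1.0 / 3.0))
assert abs(numeric - analytic) / analytic < 1e-6
```

For this emission time, (te/t0)^(1/3) = 1/2, so the comoving distance comes out to 1.5 c t0, a distance larger than c times the light-travel time, as expected for an expanding universe.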
By "comoving speed of light", we mean the velocity of light through comoving coordinates [c/a(t)], which is time-dependent even though locally, at any point along the null geodesic of the light particles, an observer in an inertial frame always measures the speed of light as c in accordance with special relativity. For a derivation see "Appendix A: Standard general relativistic definitions of expansion and horizons" from Davis & Lineweaver 2004. In particular, see eqs. 16–22 in the referenced 2004 paper [note: in that paper the scale factor is defined as a quantity with the dimension of distance while the radial coordinate is dimensionless.] Definitions Many textbooks use the symbol χ for the comoving distance. However, this must be distinguished from the coordinate distance r in the commonly used comoving coordinate system for a FLRW universe where the metric takes the form (in reduced-circumference polar coordinates, which only works half-way around a spherical universe): ds² = −c²dt² + a(t)²(dr²/(1 − κr²) + r²(dθ² + sin²θ dφ²)), with curvature parameter κ. In this case the comoving coordinate distance r is related to χ by: χ = ∫₀^r dr′/√(1 − κr′²). Most textbooks and research papers define the comoving distance between comoving observers to be a fixed unchanging quantity independent of time, while calling the dynamic, changing distance between them "proper distance". On this usage, comoving and proper distances are numerically equal at the current age of the universe, but will differ in the past and in the future; if the comoving distance to a galaxy is denoted χ, the proper distance at an arbitrary time t is simply given by d(t) = a(t)χ, where a(t) is the scale factor (e.g. Davis & Lineweaver 2004). The proper distance between two galaxies at time t is just the distance that would be measured by rulers between them at that time. Uses of the proper distance Cosmological time is identical to locally measured time for an observer at a fixed comoving spatial position, that is, in the local comoving frame. Proper distance is also equal to the locally measured distance in the comoving frame for nearby objects. 
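The proper/comoving relationship described above (proper distance equals the scale factor times the fixed comoving distance) reduces to a one-line function; the separation and redshift values below are invented for illustration:

```python
# The text's relation between proper and comoving distance: d(t) = a(t) * chi.
# The comoving separation chi of two comoving galaxies is constant in time;
# the proper distance scales with the universe. Numbers are illustrative.

chi = 100.0   # assumed comoving distance, Mpc (fixed for comoving objects)

def proper_distance(a_t: float) -> float:
    """Proper distance when the scale factor is a_t (normalised so a = 1 now)."""
    return a_t * chi

def scale_factor_at_redshift(z: float) -> float:
    """Light received with redshift z was emitted when a = 1/(1+z)."""
    return 1.0 / (1.0 + z)

assert proper_distance(1.0) == chi       # proper == comoving at the present time
assert proper_distance(0.5) == chi / 2   # the pair was half as far apart then
a_emit = scale_factor_at_redshift(1.0)   # z = 1 corresponds to a = 0.5
assert proper_distance(a_emit) == 50.0
```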
To measure the proper distance between two distant objects, one imagines that one has many comoving observers in a straight line between the two objects, so that all of the observers are close to each other, and form a chain between the two distant objects. All of these observers must have the same cosmological time. Each observer measures their distance to the nearest observer in the chain, and the length of the chain, the sum of distances between nearby observers, is the total proper distance. It is important to the definition of both comoving distance and proper distance in the cosmological sense (as opposed to proper length in special relativity) that all observers have the same cosmological age. For instance, if one measured the distance along a straight line or spacelike geodesic between the two points, observers situated between the two points would have different cosmological ages when the geodesic path crossed their own world lines, so in calculating the distance along this geodesic one would not be correctly measuring comoving distance or cosmological proper distance. Comoving and proper distances are not the same concept of distance as the concept of distance in special relativity. This can be seen by considering the hypothetical case of a universe empty of mass, where both sorts of distance can be measured. When the density of mass in the FLRW metric is set to zero (an empty 'Milne universe'), then the cosmological coordinate system used to write this metric becomes a non-inertial coordinate system in the Minkowski spacetime of special relativity where surfaces of constant Minkowski proper-time τ appear as hyperbolas in the Minkowski diagram from the perspective of an inertial frame of reference. 
In this case, for two events which are simultaneous according to the cosmological time coordinate, the value of the cosmological proper distance is not equal to the value of the proper length between these same events, which would just be the distance along a straight line between the events in a Minkowski diagram (and a straight line is a geodesic in flat Minkowski spacetime), or the coordinate distance between the events in the inertial frame where they are simultaneous. If one divides a change in proper distance by the interval of cosmological time where the change was measured (or takes the derivative of proper distance with respect to cosmological time) and calls this a "velocity", then the resulting "velocities" of galaxies or quasars can be above the speed of light, c. Such superluminal expansion is not in conflict with special or general relativity nor the definitions used in physical cosmology. Even light itself does not have a "velocity" of c in this sense; the total velocity of any object can be expressed as the sum vtot = vrec + vpec, where vrec is the recession velocity due to the expansion of the universe (the velocity given by Hubble's law) and vpec is the "peculiar velocity" measured by local observers (with vrec = ȧχ and vpec = a(t)χ̇, the dots indicating a first derivative with respect to cosmological time), so for light vpec is equal to c (−c if the light is emitted towards our position at the origin and +c if emitted away from us) but the total velocity vtot is generally different from c. Even in special relativity the coordinate speed of light is only guaranteed to be c in an inertial frame; in a non-inertial frame the coordinate speed may be different from c. In general relativity no coordinate system on a large region of curved spacetime is "inertial", but in the local neighborhood of any point in curved spacetime we can define a "local inertial frame" in which the local speed of light is c and in which massive objects such as stars and galaxies always have a local speed smaller than c. 
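The recession velocities given by Hubble's law can be tabulated directly, showing that nothing in the definition caps them at c; the Hubble constant below is an assumed round value for illustration:

```python
# Sketch of recession velocities from Hubble's law, v_rec = H0 * d.
# H0 = 70 km/s/Mpc is an assumed round value, not a precise measurement.

C = 299_792.458   # speed of light, km/s
H0 = 70.0         # assumed Hubble constant, km/s per Mpc

def recession_velocity(d_mpc: float) -> float:
    """Hubble-law recession velocity (km/s) at proper distance d_mpc (Mpc)."""
    return H0 * d_mpc

hubble_radius = C / H0   # distance at which v_rec equals c
print(f"Hubble radius: {hubble_radius:.0f} Mpc")

# Beyond the Hubble radius the recession "velocity" exceeds c, with no
# conflict with relativity, since it is not a velocity in a local frame:
assert recession_velocity(5000.0) > C

# A photon emitted toward us from there (v_pec = -c) still has a positive
# total velocity, i.e. it is initially carried away from us:
assert recession_velocity(5000.0) - C > 0
```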
The cosmological definitions used to define the velocities of distant objects are coordinate-dependent – there is no general coordinate-independent definition of velocity between distant objects in general relativity. How best to describe and popularize the fact that the expansion of the universe is (or at least was) very likely proceeding – at the greatest scale – faster than the speed of light has caused a minor amount of controversy. One viewpoint is presented in Davis and Lineweaver, 2004. Short distances vs. long distances Within small distances and short trips, the expansion of the universe during the trip can be ignored. This is because the travel time between any two points for a non-relativistic moving particle will just be the proper distance (that is, the comoving distance measured using the scale factor of the universe at the time of the trip rather than the scale factor "now") between those points divided by the velocity of the particle. If the particle is moving at a relativistic velocity, the usual relativistic corrections for time dilation must be made.
Physical sciences
Physical cosmology
Astronomy
144428
https://en.wikipedia.org/wiki/Degenerate%20matter
Degenerate matter
Degenerate matter occurs when the Pauli exclusion principle significantly alters a state of matter at low temperature. The term is used in astrophysics to refer to dense stellar objects such as white dwarfs and neutron stars, where thermal pressure alone is not enough to prevent gravitational collapse. The term also applies to metals in the Fermi gas approximation. Degenerate matter is usually modelled as an ideal Fermi gas, an ensemble of non-interacting fermions. In a quantum mechanical description, particles limited to a finite volume may take only a discrete set of energies, called quantum states. The Pauli exclusion principle prevents identical fermions from occupying the same quantum state. At lowest total energy (when the thermal energy of the particles is negligible), all the lowest energy quantum states are filled. This state is referred to as full degeneracy. This degeneracy pressure remains non-zero even at absolute zero temperature. Adding particles or reducing the volume forces the particles into higher-energy quantum states. In this situation, a compression force is required, and is made manifest as a resisting pressure. The key feature is that this degeneracy pressure does not depend on the temperature but only on the density of the fermions. Degeneracy pressure keeps dense stars in equilibrium, independent of the thermal structure of the star. A degenerate mass whose fermions have velocities close to the speed of light (particle kinetic energy larger than its rest mass energy) is called relativistic degenerate matter. The concept of degenerate stars, stellar objects composed of degenerate matter, was originally developed in a joint effort between Arthur Eddington, Ralph Fowler and Arthur Milne. Eddington had suggested that the atoms in Sirius B were almost completely ionised and closely packed. 
Fowler described white dwarfs as composed of a gas of particles that became degenerate at low temperature; he also pointed out that ordinary atoms are broadly similar with regard to the filling of energy levels by fermions. Milne proposed that degenerate matter is found in most of the nuclei of stars, not only in compact stars. Concept Degenerate matter exhibits quantum mechanical properties when a fermion system's temperature approaches absolute zero. These properties result from a combination of the Pauli exclusion principle and quantum confinement. The Pauli principle allows only one fermion in each quantum state and the confinement ensures that the energy of these states increases as they are filled. The lowest states fill up and fermions are forced to occupy high-energy states even at low temperature. While the Pauli principle and the Fermi-Dirac distribution apply to all matter, the interesting cases for degenerate matter involve systems of many fermions. These cases can be understood with the help of the Fermi gas model. Examples include electrons in metals and in white dwarf stars, and neutrons in neutron stars. The electrons are confined by Coulomb attraction to positive ion cores; the neutrons are confined by gravitational attraction. The fermions, forced into higher levels by the Pauli principle, exert pressure preventing further compression. The allocation or distribution of fermions into quantum states ranked by energy is called the Fermi-Dirac distribution. Degenerate matter exhibits the results of the Fermi-Dirac distribution. Degeneracy pressure Unlike a classical ideal gas, whose pressure is proportional to its temperature, P = kB NT / V, where P is pressure, kB is the Boltzmann constant, N is the number of particles (typically atoms or molecules), T is temperature, and V is the volume, the pressure exerted by degenerate matter depends only weakly on its temperature. In particular, the pressure remains nonzero even at absolute zero temperature.
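For contrast with degeneracy pressure, the classical ideal-gas law P = N kB T / V can be evaluated directly. A minimal sketch; the number density and temperature are illustrative values for air at room conditions, not figures from the text:

```python
k_B = 1.380_649e-23   # Boltzmann constant, J/K

def ideal_gas_pressure(n_per_m3, T):
    """Classical ideal-gas law in the form P = N k_B T / V = n k_B T."""
    return n_per_m3 * k_B * T

# Air at room temperature: n ~ 2.5e25 molecules/m^3, T = 293 K.
# The result is close to one atmosphere (~1.0e5 Pa), and the
# pressure is directly proportional to T -- unlike a degenerate gas.
print(f"P ~ {ideal_gas_pressure(2.5e25, 293.0):.3e} Pa")
```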
At relatively low densities, the pressure of a fully degenerate gas can be derived by treating the system as an ideal Fermi gas, in this way: P = (3π²)^(2/3) ħ² n^(5/3) / (5m), where ħ is the reduced Planck constant, n is the number density of fermions, and m is the mass of the individual particles making up the gas. At very high densities, where most of the particles are forced into quantum states with relativistic energies, the pressure is given by P = K n^(4/3), where K is another proportionality constant depending on the properties of the particles making up the gas. All matter experiences both normal thermal pressure and degeneracy pressure, but in commonly encountered gases, thermal pressure dominates so much that degeneracy pressure can be ignored. Likewise, degenerate matter still has normal thermal pressure; the degeneracy pressure dominates to the point that temperature has a negligible effect on the total pressure. The adjacent figure shows the thermal pressure (red line) and total pressure (blue line) in a Fermi gas, with the difference between the two being the degeneracy pressure. As the temperature falls, the density and the degeneracy pressure increase, until the degeneracy pressure contributes most of the total pressure. While degeneracy pressure usually dominates at extremely high densities, it is the ratio between degenerate pressure and thermal pressure which determines degeneracy. Given a sufficiently drastic increase in temperature (such as during a red giant star's helium flash), matter can become non-degenerate without reducing its density. Degeneracy pressure contributes to the pressure of conventional solids, but these are not usually considered to be degenerate matter because a significant contribution to their pressure is provided by electrical repulsion of atomic nuclei and the screening of nuclei from each other by electrons. The free electron model of metals derives their physical properties by considering the conduction electrons alone as a degenerate gas, while the majority of the electrons are regarded as occupying bound quantum states.
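The non-relativistic formula can be evaluated numerically. A sketch assuming an electron number density of about 1e36 m⁻³, an order-of-magnitude value for a white dwarf interior rather than a figure from the text:

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
m_e  = 9.109_383_7e-31     # electron mass, kg

def degeneracy_pressure_nonrel(n, m):
    """Pressure of a fully degenerate non-relativistic Fermi gas:
    P = (3*pi^2)^(2/3) * hbar^2 * n^(5/3) / (5*m)."""
    return (3 * math.pi**2) ** (2 / 3) * hbar**2 * n ** (5 / 3) / (5 * m)

# Illustrative electron number density for a white-dwarf interior
# (order-of-magnitude assumption, ~1e36 electrons per cubic metre).
n_e = 1e36
print(f"P ~ {degeneracy_pressure_nonrel(n_e, m_e):.2e} Pa")
```

Even at absolute zero this pressure (here on the order of 1e22 Pa) is present, which is what allows a white dwarf to resist gravitational collapse without generating heat.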
This solid state contrasts with degenerate matter that forms the body of a white dwarf, where most of the electrons would be treated as occupying free particle momentum states. Exotic examples of degenerate matter include neutron degenerate matter, strange matter, metallic hydrogen and white dwarf matter. Degenerate gases Degenerate gases are gases composed of fermions such as electrons, protons, and neutrons rather than molecules of ordinary matter. The electron gas in ordinary metals and in the interior of white dwarfs are two examples. Following the Pauli exclusion principle, there can be only one fermion occupying each quantum state. In a degenerate gas, all quantum states are filled up to the Fermi energy. Most stars are supported against their own gravitation by normal thermal gas pressure, while in white dwarf stars the supporting force comes from the degeneracy pressure of the electron gas in their interior. In neutron stars, the degenerate particles are neutrons. A fermion gas in which all quantum states below a given energy level are filled is called a fully degenerate fermion gas. The difference between this energy level and the lowest energy level is known as the Fermi energy. Electron degeneracy In an ordinary fermion gas in which thermal effects dominate, most of the available electron energy levels are unfilled and the electrons are free to move to these states. As particle density is increased, electrons progressively fill the lower energy states and additional electrons are forced to occupy states of higher energy even at low temperatures. Degenerate gases strongly resist further compression because the electrons cannot move to already filled lower energy levels due to the Pauli exclusion principle. Since electrons cannot give up energy by moving to lower energy states, no thermal energy can be extracted. The momentum of the fermions in the fermion gas nevertheless generates pressure, termed "degeneracy pressure". 
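The Fermi energy defined above can be computed for a concrete case using the standard free-electron-gas formula E_F = ħ²(3π²n)^(2/3)/(2m). A sketch; the conduction-electron density of copper is an assumed textbook value, not a figure from the text:

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
m_e  = 9.109_383_7e-31     # electron mass, kg
eV   = 1.602_176_634e-19   # joules per electronvolt

def fermi_energy(n):
    """Fermi energy of a free-electron gas:
    E_F = hbar^2 * (3 * pi^2 * n)^(2/3) / (2 * m_e)."""
    return hbar**2 * (3 * math.pi**2 * n) ** (2 / 3) / (2 * m_e)

# Conduction-electron density of copper, ~8.5e28 m^-3 (textbook value).
E_F = fermi_energy(8.5e28)
print(f"E_F(copper) ~ {E_F / eV:.1f} eV")
```

The result, around 7 eV, is enormous compared with the thermal energy at room temperature (~0.025 eV), which is why the conduction electrons in an ordinary metal are already highly degenerate.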
Under high densities, matter becomes a degenerate gas when all electrons are stripped from their parent atoms. The core of a star, once hydrogen-burning nuclear fusion reactions stop, becomes a collection of positively charged ions, largely helium and carbon nuclei, floating in a sea of electrons that have been stripped from the nuclei. Degenerate gas is an almost perfect conductor of heat and does not obey ordinary gas laws. White dwarfs are luminous not because they are generating energy but rather because they have trapped a large amount of heat which is gradually radiated away. Normal gas exerts higher pressure when it is heated and expands, but the pressure in a degenerate gas does not depend on the temperature. When gas becomes super-compressed, particles press right up against each other to produce a degenerate gas that behaves more like a solid. In degenerate gases the kinetic energies of electrons are quite high and the rate of collision between electrons and other particles is quite low, so degenerate electrons can travel great distances at velocities that approach the speed of light. Instead of temperature, the pressure in a degenerate gas depends only on the speed of the degenerate particles; however, adding heat does not increase the speed of most of the electrons, because they are stuck in fully occupied quantum states. Pressure is increased only by the mass of the particles, which increases the gravitational force pulling the particles closer together. Therefore, the phenomenon is the opposite of that normally found in matter, where if the mass of the matter is increased, the object becomes bigger. In degenerate gas, when the mass is increased, the particles become spaced closer together due to gravity (and the pressure is increased), so the object becomes smaller. Degenerate gas can be compressed to very high densities, typical values being in the range of 10,000 kilograms per cubic centimeter.
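The inverse mass-size behaviour described above can be made quantitative: for a non-relativistic degenerate star, hydrostatic equilibrium with P ∝ ρ^(5/3) yields the standard scaling R ∝ M^(-1/3). A minimal sketch of that scaling (the scaling law is standard white-dwarf theory, not derived in the text):

```python
def radius_ratio(mass_ratio):
    """For a non-relativistic degenerate star, R scales as M^(-1/3):
    increasing the mass *shrinks* the star, the opposite of ordinary matter."""
    return mass_ratio ** (-1 / 3)

# Doubling the mass of a white dwarf shrinks its radius by about 21%.
print(f"R2/R1 for doubled mass: {radius_ratio(2.0):.3f}")
```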
There is an upper limit to the mass of an electron-degenerate object, the Chandrasekhar limit, beyond which electron degeneracy pressure cannot support the object against collapse. The limit is approximately 1.44 solar masses for objects with typical compositions expected for white dwarf stars (carbon and oxygen with two baryons per electron). This mass cut-off is appropriate only for a star supported by ideal electron degeneracy pressure under Newtonian gravity; in general relativity and with realistic Coulomb corrections, the corresponding mass limit is around 1.38 solar masses. The limit may also change with the chemical composition of the object, as it affects the ratio of mass to number of electrons present. The object's rotation, which counteracts the gravitational force, also changes the limit for any particular object. Celestial objects below this limit are white dwarf stars, formed by the gradual shrinking of the cores of stars that run out of fuel. During this shrinking, an electron-degenerate gas forms in the core, providing sufficient degeneracy pressure as it is compressed to resist further collapse. Above this mass limit, a neutron star (primarily supported by neutron degeneracy pressure) or a black hole may be formed instead. Neutron degeneracy Neutron degeneracy is analogous to electron degeneracy and exists in neutron stars, which are partially supported by the pressure from a degenerate neutron gas. Neutron stars are formed either directly from the supernova of stars with masses between 10 and 25 M☉ (solar masses), or by white dwarfs acquiring a mass in excess of the Chandrasekhar limit of 1.44 M☉, usually either as a result of a merger or by feeding off of a close binary partner. Above the Chandrasekhar limit, the gravitational pressure at the core exceeds the electron degeneracy pressure, and electrons begin to combine with protons to produce neutrons (via inverse beta decay, also termed electron capture). 
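The composition dependence of the limit mentioned above is often written as M_Ch ≈ 1.44 (2/μe)² solar masses, where μe is the mean number of baryons per electron. A sketch; the 1.44 normalization follows the text, while the μe values are illustrative:

```python
def chandrasekhar_mass(mu_e):
    """Approximate Chandrasekhar limit in solar masses:
    M_Ch ~ 1.44 * (2 / mu_e)**2, with mu_e the number of
    baryons per electron in the star's composition."""
    return 1.44 * (2.0 / mu_e) ** 2

# Carbon-oxygen white dwarf: two baryons per electron, limit = 1.44 M_sun.
print(f"C/O composition:   {chandrasekhar_mass(2.0):.2f} M_sun")
# Iron-rich composition (mu_e ~ 56/26 ~ 2.15): a lower limit.
print(f"Iron composition:  {chandrasekhar_mass(2.15):.2f} M_sun")
```

This shows concretely why the limit "may change with the chemical composition of the object": heavier, more neutron-rich nuclei supply fewer electrons per unit mass, lowering the mass that electron degeneracy pressure can support.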
The result is an extremely compact star composed of "nuclear matter", which is predominantly a degenerate neutron gas with a small admixture of degenerate proton and electron gases. Neutrons in a degenerate neutron gas are spaced much more closely than electrons in an electron-degenerate gas because the more massive neutron has a much shorter wavelength at a given energy. This phenomenon is compounded by the fact that the pressures within neutron stars are much higher than those in white dwarfs. The pressure increase is caused by the fact that the compactness of a neutron star causes gravitational forces to be much higher than in a less compact body with similar mass. The result is a star with a diameter on the order of a thousandth that of a white dwarf. The properties of neutron matter set an upper limit to the mass of a neutron star, the Tolman–Oppenheimer–Volkoff limit, which is analogous to the Chandrasekhar limit for white dwarf stars. Proton degeneracy Sufficiently dense matter containing protons experiences proton degeneracy pressure, in a manner similar to the electron degeneracy pressure in electron-degenerate matter: protons confined to a sufficiently small volume have a large uncertainty in their momentum due to the Heisenberg uncertainty principle. However, because protons are much more massive than electrons, the same momentum represents a much smaller velocity for protons than for electrons. As a result, in matter with approximately equal numbers of protons and electrons, proton degeneracy pressure is much smaller than electron degeneracy pressure, and proton degeneracy is usually modelled as a correction to the equations of state of electron-degenerate matter. Quark degeneracy At densities greater than those supported by neutron degeneracy, quark matter is expected to occur. Several variations of this hypothesis have been proposed that represent quark-degenerate states. 
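The wavelength comparison above follows from the de Broglie relation λ = h/√(2mE): at equal kinetic energy, the heavier neutron has a wavelength shorter by √(m_n/m_e) ≈ 43. A minimal sketch (the 100 eV comparison energy is arbitrary):

```python
import math

h   = 6.626_070_15e-34   # Planck constant, J*s
m_e = 9.109_383_7e-31    # electron mass, kg
m_n = 1.674_927_5e-27    # neutron mass, kg
eV  = 1.602_176_634e-19  # joules per electronvolt

def de_broglie(mass, energy_j):
    """Non-relativistic de Broglie wavelength: lambda = h / sqrt(2*m*E)."""
    return h / math.sqrt(2 * mass * energy_j)

# At the same (arbitrary) kinetic energy, the neutron's wavelength is
# shorter by sqrt(m_n / m_e) ~ 43, so neutrons pack far more closely.
E = 100.0 * eV
ratio = de_broglie(m_e, E) / de_broglie(m_n, E)
print(f"lambda_electron / lambda_neutron = {ratio:.1f}")
```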
Strange matter is a degenerate gas of quarks that is often assumed to contain strange quarks in addition to the usual up and down quarks. Color superconductor materials are degenerate gases of quarks in which quarks pair up in a manner similar to Cooper pairing in electrical superconductors. The equations of state for the various proposed forms of quark-degenerate matter vary widely, and are usually also poorly defined, due to the difficulty of modelling strong force interactions. Quark-degenerate matter may occur in the cores of neutron stars, depending on the equations of state of neutron-degenerate matter. It may also occur in hypothetical quark stars, formed by the collapse of objects above the Tolman–Oppenheimer–Volkoff mass limit for neutron-degenerate objects. Whether quark-degenerate matter forms at all in these situations depends on the equations of state of both neutron-degenerate matter and quark-degenerate matter, both of which are poorly known. Quark stars are considered to be an intermediate category between neutron stars and black holes. History Quantum mechanics uses the word 'degenerate' in two ways: for degenerate energy levels and for the low-temperature ground-state limit for states of matter. Electron degeneracy pressure occurs in ground-state systems which are non-degenerate in energy levels. The term "degeneracy" derives from work on the specific heat of gases that pre-dates the use of the term in quantum mechanics. In 1914 Walther Nernst described the reduction of the specific heat of gases at very low temperature as "degeneration"; he attributed this to quantum effects. In subsequent work in various papers on quantum thermodynamics by Albert Einstein, by Max Planck, and by Erwin Schrödinger, the effect at low temperatures came to be called "gas degeneracy". In a fully degenerate gas, the pressure becomes independent of temperature as the temperature approaches absolute zero.
Early in 1927 Enrico Fermi and, separately, Llewellyn Thomas developed a semi-classical model for electrons in a metal. The model treated the electrons as a gas. Later in 1927, Arnold Sommerfeld applied the Pauli principle via Fermi-Dirac statistics to this electron gas model, computing the specific heat of metals; the result became the Fermi gas model for metals. Sommerfeld called the low-temperature region with quantum effects a "wholly degenerate gas". Also in 1927, Ralph H. Fowler applied Fermi's model to the puzzle of the stability of white dwarf stars. This approach was extended to relativistic models by later studies, and with the work of Subrahmanyan Chandrasekhar it became the accepted model for stellar stability.
Physical sciences
States of matter
null
144551
https://en.wikipedia.org/wiki/Knapping
Knapping
Knapping is the shaping of flint, chert, obsidian, or other conchoidal fracturing stone through the process of lithic reduction to manufacture stone tools, strikers for flintlock firearms, or to produce flat-faced stones for building or facing walls, and flushwork decoration. The original Germanic term knopp meant to strike, shape, or work, so it could theoretically have referred equally well to making statues or dice. Modern usage is more specific, referring almost exclusively to the free-hand percussion process pictured. It is distinguished from the more general verb "chip" (to break up into small pieces, or unintentionally break off a piece of something) and is different from "carve" (removing only part of a face) and "cleave" (breaking along a natural plane). Method Flintknapping or knapping is done in a variety of ways depending on the purpose of the final product. For stone tools and flintlock strikers, chert is worked using a fabricator such as a hammerstone to remove lithic flakes from a nucleus or core of tool stone. Stone tools can then be further refined using wood, bone, and antler tools to perform pressure flaking. For building work a hammer or pick is used to split chert nodules supported on the lap. Often the chert nodule will be split in half to create two pieces, each with a flat circular face, for use in walls constructed with lime. More sophisticated knapping is employed to produce near-perfect cubes which are used as bricks. Tools There are many different methods of shaping stone into useful tools. Early knappers could have used simple hammers made of wood or antler to shape stone tools. The factors that contribute to the knapping results are varied, but the exterior platform angle (EPA) influences many attributes of the resulting flakes, such as their length, thickness, and termination. Hard hammer techniques are used to remove large flakes of stone. Early knappers and hobbyists replicating their methods often use cobbles of very hard stone, such as quartzite.
This technique can be used by flintknappers to remove broad flakes that can be made into smaller tools. This method of manufacture is believed to have been used to make some of the earliest stone tools ever found, some of which date from over 2 million years ago. Soft hammer techniques are more precise than hard hammer methods of shaping stone. Soft hammer techniques allow a knapper to shape a stone into many different kinds of cutting, scraping, and projectile tools. These "soft hammer" techniques also produce longer, thinner flakes, potentially allowing for material conservation or a lighter lithic tool kit to be carried by mobile societies. Pressure flaking involves removing narrow flakes along the edge of a stone tool. This technique is often used to do detailed thinning and shaping of a stone tool. Pressure flaking involves putting a large amount of force across a region on the edge of the tool and (when successful) causing a narrow flake to come off of the stone. Modern hobbyists often use pressure flaking tools with a copper or brass tip, but early knappers could have used antler tines or a pointed wooden punch; traditionalist knappers still use antler tines and copper-tipped tools. The major advantage of using soft metals rather than wood or bone is that the metal punches wear down less and are less likely to break under pressure. Uses In cultures that have not adopted metalworking technologies, the production of stone tools by knappers is common, but in modern cultures the making of such tools is the domain of experimental archaeologists and hobbyists. Archaeologists usually undertake the task so that they can better understand how prehistoric stone tools were made. Knapping is often learned by outdoor enthusiasts. Knapping gun flints, used in flintlock firearms, was formerly a major industry in flint-bearing locations, such as Brandon in Suffolk, England, and the small towns of Meusnes and Couffy in France.
Meusnes has a small museum dedicated to the industry. In 1804, during the Napoleonic Wars, Brandon was supplying over 400,000 flints a month for use by the British Army and Navy. Brandon knappers made gun flints for export to Africa as late as the 1960s. Knapping for building purposes is still a skill that is practiced in the flint-bearing regions of southern England, such as Sussex, Suffolk, and Norfolk, and in northern France, especially Brittany and Normandy, where there is a resurgence of the craft due to government funding. Health hazards The sustained inhalation of flint dust produced by knapping can cause silicosis. This has been called "the world's first industrial disease". However, it is unclear how severe the issue may actually have been in prehistoric working conditions, as silicosis is aggravated by a lack of ventilation and the use of metal tools which produce more dust. Ancient knappers, working in the open air and with stone and bone tools, would have had less prolonged exposure to dust than in more modern workshops. When gun flint knapping was a large-scale industry in Brandon, Suffolk, silicosis was widely known as knappers' rot. It has been claimed silicosis was responsible for the early death of three-quarters of Brandon gun flint makers. In one workshop, seven of the eight workers died of the condition before the age of fifty. The average age of death for knappers was 44 years, compared to 66 for other employed men in the same area. Modern knappers are advised to work in the open air to reduce the dust hazard, and to wear eye and hand protection. Some modern knappers wear a respirator to guard against dust. A 2020 survey of 173 knappers found that 86% used eye protection, 57% wore gloves, and only 5% used a respirator, mask, or fan to control dust (although 68% preferred to knap outdoors). 
About half of respondents reported being injured at least "often" when knapping, and 23% admitted having to seek professional medical attention at least once. The most common injuries were cuts and bruises, typically on the fingers and hands, while flakes in the eye were also frequent. Contemporary study Modern American interest in knapping can be traced back to the study of a California Native American called Ishi who lived in the early twentieth century. Ishi taught scholars and academics traditional methods of making stone tools and how to use them for survival in the wild. Early European explorers to the New World were also exposed to flint knapping techniques. Additionally, several pioneering nineteenth-century European experimental knappers are also known, and in the late 1960s and early 1970s the experimental archaeologist Don Crabtree published texts such as Experiments in Flintworking. François Bordes was an early writer on Old World knapping; he experimented with ways to replicate stone tools found across Western Europe. These authors helped to ignite a small craze in knapping among archaeologists and prehistorians. English archaeologist Phil Harding is another contemporary expert, whose exposure on the television series Time Team has led to him being a familiar figure in the UK and beyond. Many groups, with members from all walks of life, can now be found across the United States and Europe. These organizations continue to demonstrate and teach various ways of shaping stone tools.
Technology
Materials
null