| id | url | title | text | topic | section | sublist |
|---|---|---|---|---|---|---|
4462484 | https://en.wikipedia.org/wiki/Ohm | Ohm | The ohm (symbol: Ω, the uppercase Greek letter omega) is the unit of electrical resistance in the International System of Units (SI). It is named after German physicist Georg Ohm. Various empirically derived standard units for electrical resistance were developed in connection with early telegraphy practice, and the British Association for the Advancement of Science proposed a unit derived from existing units of mass, length and time, and of a convenient scale for practical work as early as 1861.
Following the 2019 revision of the SI, in which the ampere and the kilogram were redefined in terms of fundamental constants, the ohm is now also defined as an exact value in terms of these constants.
Definition
The ohm is defined as an electrical resistance between two points of a conductor when a constant potential difference of one volt (V), applied to these points, produces in the conductor a current of one ampere (A), the conductor not being the seat of any electromotive force.
In SI units this is expressed as Ω = V/A = 1/S = W/A² = V²/W = s/F = H/s = Wb/C = J⋅s/C² = kg⋅m²/(s⋅C²) = kg⋅m²/(s³⋅A²), in which the following additional units appear: siemens (S), watt (W), second (s), farad (F), henry (H), weber (Wb), joule (J), coulomb (C), kilogram (kg), and meter (m).
In many cases the resistance of a conductor is approximately constant within a certain range of voltages, temperatures, and other parameters. These are called linear resistors. In other cases resistance varies, such as in the case of the thermistor, which exhibits a strong dependence of its resistance with temperature.
In the US, a double vowel in the prefixed units "kiloohm" and "megaohm" is commonly simplified, producing "kilohm" and "megohm".
In alternating current circuits, electrical impedance is also measured in ohms.
Relation to conductance
The siemens (S) is the SI derived unit of electric conductance and admittance, historically known as the "mho" (ohm spelled backwards, symbol is ℧); it is one reciprocal ohm: 1 S = 1 Ω⁻¹ = 1 A/V.
Power as a function of resistance
The power dissipated by a resistor may be calculated from its resistance, and the voltage or current involved. The formula is a combination of Ohm's law and Joule's law:
P = V⋅I = V²/R = I²⋅R,

where P is the power, R is the resistance, V is the voltage across the resistor, and I is the current through the resistor.
A linear resistor has a constant resistance value over all applied voltages or currents; many practical resistors are linear over a useful range of currents. Non-linear resistors have a value that may vary depending on the applied voltage (or current). Where alternating current is applied to the circuit (or where the resistance value is a function of time), the relation above is true at any instant, but calculation of average power over an interval of time requires integration of "instantaneous" power over that interval.
Since the ohm belongs to a coherent system of units, when each of these quantities has its corresponding SI unit (watt for P, ohm for R, volt for V and ampere for I, which are related as in P = V²/R = I²R), this formula remains valid numerically when these units are used (and thought of as being cancelled or omitted).
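To make this coherence concrete, here is a minimal Python sketch (the function and the example values are our own illustration, not part of the SI definition) that evaluates the dissipated power directly in SI units:

```python
def power_dissipated(resistance_ohm, voltage_v=None, current_a=None):
    """Power (W) dissipated by a resistor, from Ohm's and Joule's laws.

    Supply the resistance plus either the voltage across it (P = V^2 / R)
    or the current through it (P = I^2 * R).
    """
    if voltage_v is not None:
        return voltage_v ** 2 / resistance_ohm
    if current_a is not None:
        return current_a ** 2 * resistance_ohm
    raise ValueError("supply either voltage_v or current_a")

# 230 V across a 529-ohm heating element dissipates 100 W
print(power_dissipated(529, voltage_v=230))    # 100.0
# roughly the same current through the same element gives a similar figure
print(power_dissipated(529, current_a=0.435))  # ~100.1
```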
History
The rapid rise of electrotechnology in the last half of the 19th century created a demand for a rational, coherent, consistent, and international system of units for electrical quantities. Telegraphers and other early users of electricity in the 19th century needed a practical standard unit of measurement for resistance. Resistance was often expressed as a multiple of the resistance of a standard length of telegraph wires; different agencies used different bases for a standard, so units were not readily interchangeable. Electrical units so defined were not a coherent system with the units for energy, mass, length, and time, requiring conversion factors to be used in calculations relating energy or power to resistance.
Two different methods of establishing a system of electrical units can be chosen. Various artifacts, such as a length of wire or a standard electrochemical cell, could be specified as producing defined quantities for resistance, voltage, and so on. Alternatively, the electrical units can be related to the mechanical units by defining, for example, a unit of current that gives a specified force between two wires, or a unit of charge that gives a unit of force between two unit charges. This latter method ensures coherence with the units of energy. Defining a unit for resistance that is coherent with units of energy and time in effect also requires defining units for potential and current. It is desirable that one unit of electrical potential will force one unit of electric current through one unit of electrical resistance, doing one unit of work in one unit of time, otherwise, all electrical calculations will require conversion factors.
Since so-called "absolute" units of charge and current are expressed as combinations of units of mass, length, and time, dimensional analysis of the relations between potential, current, and resistance show that resistance is expressed in units of length per time – a velocity. Some early definitions of a unit of resistance, for example, defined a unit resistance as one quadrant of the Earth per second.
The absolute-unit system related magnetic and electrostatic quantities to metric base units of mass, time, and length. These units had the great advantage of simplifying the equations used in the solution of electromagnetic problems, and eliminated conversion factors in calculations about electrical quantities. However, the centimeter–gram–second, CGS, units turned out to have impractical sizes for practical measurements.
Various artifact standards were proposed as the definition of the unit of resistance. In 1860 Werner Siemens (1816–1892) published a suggestion for a reproducible resistance standard in Poggendorff's Annalen der Physik und Chemie. He proposed a column of pure mercury, of one square millimeter cross section, one meter long: Siemens mercury unit. However, this unit was not coherent with other units. One proposal was to devise a unit based on a mercury column that would be coherent – in effect, adjusting the length to make the resistance one ohm. Not all users of units had the resources to carry out metrology experiments to the required precision, so working standards notionally based on the physical definition were required.
In 1861, Latimer Clark (1822–1898) and Sir Charles Bright (1832–1888) presented a paper at the British Association for the Advancement of Science meeting suggesting that standards for electrical units be established and suggesting names for these units derived from eminent philosophers, 'Ohma', 'Farad' and 'Volt'. The BAAS in 1861 appointed a committee including Maxwell and Thomson to report upon standards of electrical resistance. Their objectives were to devise a unit that was of convenient size, part of a complete system for electrical measurements, coherent with the units for energy, stable, reproducible and based on the French metrical system. In the third report of the committee, 1864, the resistance unit is referred to as "B.A. unit, or Ohmad". By 1867 the unit is referred to as simply ohm.
The B.A. ohm was intended to be 10⁹ CGS units but owing to an error in calculations the definition was 1.3% too small. The error was significant for preparation of working standards.
On 21 September 1881 the International Electrical Congress defined a practical unit of resistance, the ohm, based on CGS units, using a mercury column 1 mm² in cross-section, approximately 104.9 cm in length at 0 °C, similar to the apparatus suggested by Siemens.
A legal ohm, a reproducible standard, was defined by the international conference of electricians at Paris in 1884 as the resistance of a mercury column of specified weight and 106 cm long; this was a compromise value between the B. A. unit (equivalent to 104.7 cm), the Siemens unit (100 cm by definition), and the CGS unit. Although called "legal", this standard was not adopted by any national legislation. The "international" ohm was recommended by unanimous resolution at the International Electrical Congress 1893 in Chicago. The unit was based upon the ohm equal to 10⁹ units of resistance of the C.G.S. system of electromagnetic units. The international ohm is represented by the resistance offered to an unvarying electric current in a mercury column of constant cross-sectional area, 106.3 cm long, of mass 14.4521 grams, at 0 °C. This definition became the basis for the legal definition of the ohm in several countries. In 1908, this definition was adopted by scientific representatives from several countries at the International Conference on Electric Units and Standards in London. The mercury column standard was maintained until the 1948 General Conference on Weights and Measures, at which the ohm was redefined in absolute terms instead of as an artifact standard.
By the end of the 19th century, units were well understood and consistent. Definitions would change with little effect on commercial uses of the units. Advances in metrology allowed definitions to be formulated with a high degree of precision and repeatability.
Historical units of resistance
Realization of standards
The mercury column method of realizing a physical standard ohm turned out to be difficult to reproduce, owing to the effects of non-constant cross section of the glass tubing. Various resistance coils were constructed by the British Association and others, to serve as physical artifact standards for the unit of resistance. The long-term stability and reproducibility of these artifacts was an ongoing field of research, as the effects of temperature, air pressure, humidity, and time on the standards were detected and analyzed.
Artifact standards are still used, but metrology experiments relating accurately dimensioned inductors and capacitors provided a more fundamental basis for the definition of the ohm. Since 1990 the quantum Hall effect has been used to define the ohm with high precision and repeatability. The quantum Hall experiments are used to check the stability of working standards that have convenient values for comparison.
Following the 2019 revision of the SI, in which the ampere and the kilogram were redefined in terms of fundamental constants, the ohm is now also defined in terms of these constants.
Symbol
The symbol Ω was suggested, because of the similar sound of ohm and omega, by William Henry Preece in 1867. In documents printed before the Second World War the unit symbol often consisted of the raised lowercase omega (ω), such that 56 Ω was written as 56ω.
Historically, some document editing software applications have used the Symbol typeface to render the character Ω. Where the font is not supported, the same document may be displayed with a "W" ("10 W" instead of "10 Ω", for instance). As W represents the watt, the SI unit of power, this can lead to confusion, making the use of the correct Unicode code point preferable.
Where the character set is limited to ASCII, the IEEE 260.1 standard recommends using the unit name "ohm" as a symbol instead of Ω.
In the electronics industry it is common to use the character R instead of the Ω symbol, thus, a 10 Ω resistor may be represented as 10R. This is part of the RKM code. It is used in many instances where the value contains a decimal point. For example, 5.6 Ω is listed as 5R6, and 2200 Ω is listed as 2K2. This method avoids overlooking the decimal point, which may not be rendered reliably on components or when duplicating documents.
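The following short Python sketch (our own illustration, not taken from any standard) converts an RKM-coded value back to ohms:

```python
def rkm_to_ohms(code: str) -> float:
    """Convert an RKM-style resistance code (e.g. '10R', '5R6', '2K2', '1M5') to ohms.

    The letter marks the decimal point and the multiplier:
    R = 1, K = 1e3, M = 1e6, G = 1e9.
    """
    multipliers = {"R": 1.0, "K": 1e3, "M": 1e6, "G": 1e9}
    for letter, factor in multipliers.items():
        if letter in code.upper():
            whole, _, frac = code.upper().partition(letter)
            return float((whole or "0") + "." + (frac or "0")) * factor
    raise ValueError(f"no RKM multiplier letter in {code!r}")

print(rkm_to_ohms("5R6"))   # 5.6
print(rkm_to_ohms("2K2"))   # 2200.0
print(rkm_to_ohms("10R"))   # 10.0
```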
Unicode encodes the symbol as U+2126 Ω OHM SIGN, distinct from the Greek letter omega, among the letterlike symbols, but it is only included for backward compatibility and the Greek uppercase omega character (U+03A9) is preferred. In MS-DOS and Microsoft Windows, the alt code ALT 234 may produce the Ω symbol. In Mac OS, ⌥ Opt+Z does the same.
| Physical sciences | Electromagnetism | null |
85746 | https://en.wikipedia.org/wiki/Stoma | Stoma | In botany, a stoma (plural: stomata, from Greek στόμα, "mouth"), also called a stomate (plural: stomates), is a pore found in the epidermis of leaves, stems, and other organs that controls the rate of gas exchange between the internal air spaces of the leaf and the atmosphere. The pore is bordered by a pair of specialized parenchyma cells known as guard cells that regulate the size of the stomatal opening.
The term is usually used collectively to refer to the entire stomatal complex, consisting of the paired guard cells and the pore itself, which is referred to as the stomatal aperture. Air, containing oxygen, which is used in respiration, and carbon dioxide, which is used in photosynthesis, passes through stomata by gaseous diffusion. Water vapour diffuses through the stomata into the atmosphere as part of a process called transpiration.
Stomata are present in the sporophyte generation of the vast majority of land plants, with the exception of liverworts, as well as some mosses and hornworts. In vascular plants the number, size and distribution of stomata varies widely. Dicotyledons usually have more stomata on the lower surface of the leaves than the upper surface. Monocotyledons such as onion, oat and maize may have about the same number of stomata on both leaf surfaces. In plants with floating leaves, stomata may be found only on the upper epidermis and submerged leaves may lack stomata entirely. Most tree species have stomata only on the lower leaf surface. Leaves with stomata on both the upper and lower leaf surfaces are called amphistomatous leaves; leaves with stomata only on the lower surface are hypostomatous, and leaves with stomata only on the upper surface are epistomatous or hyperstomatous. Size varies across species, with end-to-end lengths ranging from 10 to 80 μm and width ranging from a few to 50 μm.
Function
CO2 gain and water loss
Carbon dioxide, a key reactant in photosynthesis, is present in the atmosphere at a concentration of about 400 ppm. Most plants require the stomata to be open during daytime. The air spaces in the leaf are saturated with water vapour, which exits the leaf through the stomata in a process known as transpiration. Therefore, plants cannot gain carbon dioxide without simultaneously losing water vapour.
Alternative approaches
Ordinarily, carbon dioxide is fixed to ribulose 1,5-bisphosphate (RuBP) by the enzyme RuBisCO in mesophyll cells exposed directly to the air spaces inside the leaf. This exacerbates the transpiration problem for two reasons: first, RuBisCo has a relatively low affinity for carbon dioxide, and second, it fixes oxygen to RuBP, wasting energy and carbon in a process called photorespiration. For both of these reasons, RuBisCo needs high carbon dioxide concentrations, which means wide stomatal apertures and, as a consequence, high water loss.
Narrower stomatal apertures can be used in conjunction with an intermediary molecule with a high carbon dioxide affinity, phosphoenolpyruvate carboxylase (PEPcase). Retrieving the products of carbon fixation from PEPCase is an energy-intensive process, however. As a result, the PEPCase alternative is preferable only where water is limiting but light is plentiful, or where high temperatures increase the solubility of oxygen relative to that of carbon dioxide, magnifying RuBisCo's oxygenation problem.
C.A.M. plants
A group of mostly desert plants called "C.A.M." plants (crassulacean acid metabolism, after the family Crassulaceae, which includes the species in which the CAM process was first discovered) open their stomata at night (when water evaporates more slowly from leaves for a given degree of stomatal opening), use PEPcase to fix carbon dioxide and store the products in large vacuoles. The following day, they close their stomata and release the carbon dioxide fixed the previous night into the presence of RuBisCO. This saturates RuBisCO with carbon dioxide, allowing minimal photorespiration. This approach, however, is severely limited by the capacity to store fixed carbon in the vacuoles, so it is preferable only when water is severely limited.
Opening and closing
However, most plants do not have CAM and must therefore open and close their stomata during the daytime, in response to changing conditions such as light intensity, humidity, and carbon dioxide concentration. When conditions are conducive to stomatal opening (e.g., high light intensity and high humidity), a proton pump drives protons (H+) out of the guard cells, so the cells' electrical potential becomes increasingly negative. The negative potential opens potassium voltage-gated channels, and an uptake of potassium ions (K+) occurs. To maintain this internal negative voltage so that the entry of potassium ions does not stop, negative ions balance the influx of potassium: in some cases chloride ions enter, while in other plants the organic ion malate is produced in the guard cells. This increase in solute concentration lowers the water potential inside the cell, which results in the diffusion of water into the cell through osmosis and so increases the cell's volume and turgor pressure. Rings of cellulose microfibrils prevent the guard cells from swelling in width, so the extra turgor pressure can only elongate them; because the ends of the guard cells are held firmly in place by surrounding epidermal cells, the two guard cells lengthen by bowing apart from one another, creating an open pore through which gas can diffuse.
When the roots begin to sense a water shortage in the soil, abscisic acid (ABA) is released. ABA binds to receptor proteins in the guard cells' plasma membrane and cytosol, which first raises the pH of the cytosol and causes the concentration of free Ca2+ in the cytosol to increase, due to influx from outside the cell and release of Ca2+ from internal stores such as the endoplasmic reticulum and vacuoles. This causes the chloride (Cl−) and organic ions to exit the cells. Second, this stops the uptake of any further K+ into the cells and, subsequently, causes the loss of K+. The loss of these solutes raises the water potential, which results in the diffusion of water back out of the cell by osmosis. This leaves the cells plasmolysed, which results in the closing of the stomatal pores.
Guard cells have more chloroplasts than the other epidermal cells from which guard cells are derived; the function of these guard-cell chloroplasts is controversial.
Inferring stomatal behavior from gas exchange
The degree of stomatal resistance can be determined by measuring leaf gas exchange of a leaf. The transpiration rate is dependent on the diffusion resistance provided by the stomatal pores and also on the humidity gradient between the leaf's internal air spaces and the outside air. Stomatal resistance (or its inverse, stomatal conductance) can therefore be calculated from the transpiration rate and humidity gradient. This allows scientists to investigate how stomata respond to changes in environmental conditions, such as light intensity and concentrations of gases such as water vapor, carbon dioxide, and ozone. Evaporation (E) can be calculated as
E = (ei − ea) / (P r),

where ei and ea are the partial pressures of water in the leaf and in the ambient air respectively, P is atmospheric pressure, and r is stomatal resistance.
The inverse of r is conductance to water vapor (g), so the equation can be rearranged to
E = (ei − ea) g / P

and solved for g:

g = E P / (ei − ea).
Photosynthetic CO2 assimilation (A) can be calculated from
A = (Ca − Ci) g / (1.6 P),

where Ca and Ci are the atmospheric and sub-stomatal partial pressures of CO2 respectively, and the factor 1.6 accounts for the lower diffusivity of CO2 relative to water vapour. The rate of evaporation from a leaf can be determined using a photosynthesis system. These scientific instruments measure the amount of water vapour leaving the leaf and the vapor pressure of the ambient air. Photosynthetic systems may calculate water use efficiency (A/E), g, intrinsic water use efficiency (A/g), and Ci. These scientific instruments are commonly used by plant physiologists to measure CO2 uptake and thus measure photosynthetic rate.
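A minimal Python sketch of these calculations (the function names and example numbers are illustrative, not taken from any particular instrument) might look like this:

```python
def stomatal_conductance(E, e_i, e_a, P=101.325):
    """Water-vapour conductance g = E * P / (e_i - e_a).

    E        : transpiration rate (mol m^-2 s^-1)
    e_i, e_a : water-vapour partial pressure in the leaf / in ambient air (kPa)
    P        : atmospheric pressure (kPa)
    """
    return E * P / (e_i - e_a)

def co2_assimilation(g, C_a, C_i, P=101.325):
    """CO2 assimilation A = (C_a - C_i) * g / (1.6 * P).

    The factor 1.6 converts water-vapour conductance to CO2 conductance;
    C_a and C_i are atmospheric and sub-stomatal CO2 partial pressures (kPa).
    """
    return (C_a - C_i) * g / (1.6 * P)

# Illustrative measurements (not real data)
E, e_i, e_a = 4.0e-3, 2.3, 1.2              # mol m^-2 s^-1, kPa, kPa
g = stomatal_conductance(E, e_i, e_a)       # ~0.37 mol m^-2 s^-1
A = co2_assimilation(g, C_a=0.040, C_i=0.028)   # ~2.7e-5 mol m^-2 s^-1
print(f"g = {g:.3f} mol/m2/s, A = {A * 1e6:.1f} umol/m2/s, A/g = {A / g * 1e6:.0f} umol/mol")
```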
Evolution
There is little evidence of the evolution of stomata in the fossil record, but they had appeared in land plants by the middle of the Silurian period. They may have evolved by the modification of conceptacles from plants' alga-like ancestors.
However, the evolution of stomata must have happened at the same time as the waxy cuticle was evolving – these two traits together constituted a major advantage for early terrestrial plants.
Development
There are three major epidermal cell types which all ultimately derive from the outermost (L1) tissue layer of the shoot apical meristem, called protodermal cells: trichomes, pavement cells and guard cells, all of which are arranged in a non-random fashion.
An asymmetrical cell division occurs in protodermal cells resulting in one large cell that is fated to become a pavement cell and a smaller cell called a meristemoid that will eventually differentiate into the guard cells that surround a stoma. This meristemoid then divides asymmetrically one to three times before differentiating into a guard mother cell. The guard mother cell then makes one symmetrical division, which forms a pair of guard cells. Cell division is inhibited in some cells so there is always at least one cell between stomata.
Stomatal patterning is controlled by the interaction of many signal transduction components such as EPF (Epidermal Patterning Factor), ERL (ERecta Like) and YODA (a putative MAP kinase kinase kinase). Mutations in any one of the genes which encode these factors may alter the development of stomata in the epidermis. For example, a mutation in one gene causes more stomata that are clustered together, hence the gene is called Too Many Mouths (TMM), whereas disruption of the SPCH (SPEECHLESS) gene prevents stomatal development altogether. Inhibition of stomatal production can occur by the activation of EPF1, which activates TMM/ERL, which together activate YODA. YODA inhibits SPCH, causing SPCH activity to decrease and preventing the asymmetrical cell division that initiates stomata formation. Stomatal development is also coordinated by the cellular peptide signal called stomagen, which signals the activation of SPCH, resulting in an increased number of stomata.
Environmental and hormonal factors can affect stomatal development. Light increases stomatal development in plants, while plants grown in the dark have fewer stomata. Auxin represses stomatal development by acting at the receptor level, for example through the ERL and TMM receptors. However, a low concentration of auxin allows for equal division of a guard mother cell and increases the chance of producing guard cells.
Most angiosperm trees have stomata only on their lower leaf surface. Poplars and willows have them on both surfaces. When leaves develop stomata on both leaf surfaces, the stomata on the lower surface tend to be larger and more numerous, but there can be a great degree of variation in size and frequency across species and genotypes. White ash and white birch leaves have fewer but larger stomata, whereas sugar maple and silver maple have smaller, more numerous stomata.
Types
Different classifications of stoma types exist. One that is widely used is based on the types that Julien Joseph Vesque introduced in 1889, was further developed by Metcalfe and Chalk, and later complemented by other authors. It is based on the size, shape and arrangement of the subsidiary cells that surround the two guard cells.
They distinguish for dicots:
Actinocytic (meaning star-celled) stomata have guard cells that are surrounded by at least five radiating cells forming a star-like circle. This is a rare type that can for instance be found in the family Ebenaceae.
Anisocytic (meaning unequal-celled) stomata have guard cells between two larger subsidiary cells and one distinctly smaller one. This type of stomata can be found in more than thirty dicot families, including Brassicaceae, Solanaceae, and Crassulaceae. It is sometimes called the cruciferous type.
Anomocytic (meaning irregular-celled) stomata have guard cells that are surrounded by cells that have the same size, shape and arrangement as the rest of the epidermis cells. This type of stomata can be found in more than a hundred dicot families such as Apocynaceae, Boraginaceae, Chenopodiaceae, and Cucurbitaceae. It is sometimes called the ranunculaceous type.
Diacytic (meaning cross-celled) stomata have guard cells surrounded by two subsidiary cells, each of which encircles one end of the opening and contacts the other opposite the middle of the opening. This type of stomata can be found in more than ten dicot families such as Caryophyllaceae and Acanthaceae. It is sometimes called the caryophyllaceous type.
Hemiparacytic stomata are bordered by just one subsidiary cell that differs from the surrounding epidermis cells, its length parallel to the stoma opening. This type occurs for instance in the Molluginaceae and Aizoaceae.
Paracytic (meaning parallel-celled) stomata have one or more subsidiary cells parallel to the opening between the guard cells. These subsidiary cells may or may not reach beyond the guard cells. This type of stomata can be found in more than a hundred dicot families such as Rubiaceae, Convolvulaceae and Fabaceae. It is sometimes called the rubiaceous type.
In monocots, several different types of stomata occur such as:
Gramineous or graminoid (meaning grass-like) stomata have two guard cells surrounded by two lens-shaped subsidiary cells. The guard cells are narrower in the middle and bulbous on each end, and this middle section is strongly thickened. The axes of the subsidiary cells are parallel to the stoma opening. This type can be found in monocot families including Poaceae and Cyperaceae.
Hexacytic (meaning six-celled) stomata have six subsidiary cells around both guard cells: one at either end of the opening of the stoma, one adjoining each guard cell, and one between that last subsidiary cell and the standard epidermis cells. This type can be found in some monocot families.
Tetracytic (meaning four-celled) stomata have four subsidiary cells: one on either end of the opening, and one next to each guard cell. This type occurs in many monocot families, but can also be found in some dicots, such as Tilia and several Asclepiadaceae.
In ferns, four different types are distinguished:
stomata have two guard cells in one layer with only ordinary epidermis cells, but with two subsidiary cells on the outer surface of the epidermis, arranged parallel to the guard cells, with a pore between them, overlying the stoma opening.
stomata have two guard cells that are entirely encircled by one continuous subsidiary cell (like a donut).
stomata have two guard cells that are entirely encircled by one subsidiary cell that has not merged its ends (like a sausage).
stomata have two guard cells that are largely encircled by one subsidiary cell, but also contact ordinary epidermis cells (like a U or horseshoe).
A catalogue of leaf epidermis prints showing stomata from a wide range of species can be found in Wikimedia commons https://commons.wikimedia.org/wiki/Category:Leaf_epidermis_and_stomata_prints
Stomatal crypts
Stomatal crypts are sunken areas of the leaf epidermis which form a chamber-like structure that contains one or more stomata and sometimes trichomes or accumulations of wax. Stomatal crypts can be an adaption to drought and dry climate conditions when the stomatal crypts are very pronounced. However, dry climates are not the only places where they can be found. The following plants are examples of species with stomatal crypts or antechambers: Nerium oleander, conifers, Hakea and Drimys winteri which is a species of plant found in the cloud forest.
Stomata as pathogenic pathways
Stomata are holes in the leaf through which pathogens can potentially enter unchallenged. However, stomata can sense the presence of some, if not all, pathogens and close in response. Pathogenic bacteria applied to Arabidopsis plant leaves can nonetheless release the chemical coronatine, which induces the stomata to reopen.
Stomata and climate change
Response of stomata to environmental factors
Stomatal function regulates photosynthesis, plant water transport through the xylem, and gas exchange, and is therefore central to the functioning of plants.
Stomata are responsive to light with blue light being almost 10 times as effective as red light in causing stomatal response. Research suggests this is because the light response of stomata to blue light is independent of other leaf components like chlorophyll. Guard cell protoplasts swell under blue light provided there is sufficient availability of potassium. Multiple studies have found support that increasing potassium concentrations may increase stomatal opening in the mornings, before the photosynthesis process starts, but that later in the day sucrose plays a larger role in regulating stomatal opening. Zeaxanthin in guard cells acts as a blue light photoreceptor which mediates the stomatal opening. The effect of blue light on guard cells is reversed by green light, which isomerizes zeaxanthin.
Stomatal density and aperture (length of stomata) varies under a number of environmental factors such as atmospheric CO2 concentration, light intensity, air temperature and photoperiod (daytime duration).
Decreasing stomatal density is one way plants have responded to the increase in the concentration of atmospheric CO2 ([CO2]atm). Although the response to changing [CO2]atm is the least understood mechanistically, this stomatal response has begun to plateau and is soon expected to affect transpiration and photosynthesis in plants.
Drought inhibits stomatal opening, but research on soybeans suggests moderate drought does not have a significant effect on stomatal closure of its leaves. There are different mechanisms of stomatal closure. Low humidity stresses guard cells, causing turgor loss, termed hydropassive closure. Hydroactive closure, by contrast, involves the whole leaf affected by drought stress and is believed to be triggered most likely by abscisic acid.
Future adaptations during climate change
It is expected that [CO2]atm will reach 500–1000 ppm by 2100. For 96% of the past 400,000 years, CO2 concentrations were below 280 ppm. From this figure, it is highly probable that the genotypes of today's plants have diverged from their pre-industrial relatives.
The gene HIC (high carbon dioxide) encodes a negative regulator for the development of stomata in plants. Research into the HIC gene using Arabidopsis thaliana found no increase in stomatal development in plants carrying the dominant allele, but the 'wild type' recessive allele showed a large increase, both in response to rising CO2 levels in the atmosphere. These studies imply that the plant's response to changing CO2 levels is largely controlled by genetics.
Agricultural implications
The CO2 fertiliser effect has been greatly overestimated during Free-Air Carbon dioxide Enrichment (FACE) experiments, where results show that increased CO2 levels in the atmosphere enhance photosynthesis, reduce transpiration, and increase water use efficiency (WUE). Increased biomass is one of the effects, with simulations from experiments predicting a 5–20% increase in crop yields at 550 ppm of CO2. Rates of leaf photosynthesis were shown to increase by 30–50% in C3 plants, and by 10–25% in C4 plants, under doubled CO2 levels. The existence of a feedback mechanism results in phenotypic plasticity in response to [CO2]atm that may have been an adaptive trait in the evolution of plant respiration and function.
Predicting how stomata perform during adaptation is useful for understanding the productivity of plant systems in both natural and agricultural settings. Plant breeders and farmers are beginning to work together, using evolutionary and participatory plant breeding, to find the best-suited varieties, such as heat- and drought-resistant crops, that could naturally adapt to the change in the face of food security challenges.
| Biology and health sciences | Plant: General | null |
85747 | https://en.wikipedia.org/wiki/Reduced%20mass | Reduced mass | In physics, reduced mass is a measure of the effective inertial mass of a system with two or more particles when the particles are interacting with each other. Reduced mass allows the two-body problem to be solved as if it were a one-body problem. Note, however, that the mass determining the gravitational force is not reduced. In the computation, one mass can be replaced with the reduced mass, if this is compensated by replacing the other mass with the sum of both masses. The reduced mass is frequently denoted by μ (mu), although the standard gravitational parameter is also denoted by μ (as are a number of other physical quantities). It has the dimensions of mass, and its SI unit is the kg.
Reduced mass is particularly useful in classical mechanics.
Equation
Given two bodies, one with mass m1 and the other with mass m2, the equivalent one-body problem, with the position of one body with respect to the other as the unknown, is that of a single body of mass
μ = m1 m2 / (m1 + m2) = 1 / (1/m1 + 1/m2),

where the force on this mass is given by the force between the two bodies.
Properties
The reduced mass is always less than or equal to the mass of each body:

μ ≤ m1,  μ ≤ m2,

and has the reciprocal additive property:

1/μ = 1/m1 + 1/m2,

which by re-arrangement shows that the reduced mass is equal to half of the harmonic mean of the two masses.

In the special case that m1 = m2 = m:

μ = m/2.

If m2 ≫ m1, then μ ≈ m1.
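A brief Python sketch (the function name is ours) makes these properties concrete:

```python
def reduced_mass(m1: float, m2: float) -> float:
    """Reduced mass mu = m1 * m2 / (m1 + m2)."""
    return m1 * m2 / (m1 + m2)

print(reduced_mass(2.0, 2.0))      # 1.0  -> equal masses give mu = m / 2
print(reduced_mass(1.0, 1.0e6))    # ~0.999999  -> mu approaches the smaller mass
print(1 / reduced_mass(3.0, 6.0))  # 0.5 = 1/3 + 1/6  -> reciprocal additivity
```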
Derivation
The equation can be derived as follows.
Newtonian mechanics
Using Newton's second law, the force exerted by a body (particle 2) on another body (particle 1) is:

F12 = m1 a1.

The force exerted by particle 1 on particle 2 is:

F21 = m2 a2.

According to Newton's third law, the force that particle 2 exerts on particle 1 is equal and opposite to the force that particle 1 exerts on particle 2:

F12 = −F21.

Therefore:

m1 a1 = −m2 a2,  and so  a2 = −(m1/m2) a1.

The relative acceleration arel between the two bodies is given by:

arel = a1 − a2 = (1 + m1/m2) a1 = (m1 + m2)/(m1 m2) · m1 a1 = F12/μ.
Note that (since the derivative is a linear operator) the relative acceleration is equal to the acceleration of the separation between the two particles.
This simplifies the description of the system to one force (since F12 = −F21), one coordinate (the relative position r = r1 − r2), and one mass (the reduced mass μ). Thus we have reduced our problem to a single degree of freedom, and we can conclude that particle 1 moves with respect to the position of particle 2 as a single particle of mass equal to the reduced mass, μ.
Lagrangian mechanics
Alternatively, a Lagrangian description of the two-body problem gives a Lagrangian of

L = ½ m1 |ṙ1|² + ½ m2 |ṙ2|² − V(|r1 − r2|),

where ri is the position vector of mass mi (of particle i). The potential energy V is a function only of the absolute distance between the particles. If we define

r = r1 − r2

and let the centre of mass coincide with our origin in this reference frame, i.e.

m1 r1 + m2 r2 = 0,

then

r1 = m2 r / (m1 + m2)  and  r2 = −m1 r / (m1 + m2).

Then substituting above gives a new Lagrangian

L = ½ μ |ṙ|² − V(r),

where

μ = m1 m2 / (m1 + m2)

is the reduced mass. Thus we have reduced the two-body problem to that of one body.
Applications
Reduced mass can be used in a multitude of two-body problems, where classical mechanics is applicable.
Moment of inertia of two point masses in a line
In a system with two point masses m1 and m2 such that they are co-linear, the two distances r1 and r2 to the rotation axis may be found with

r1 = R m2 / (m1 + m2)  and  r2 = R m1 / (m1 + m2),

where R = r1 + r2 is the sum of both distances.
This holds for a rotation around the center of mass.
The moment of inertia around this axis can then be simplified to

I = m1 r1² + m2 r2² = μ R².
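A small Python check of this simplification (the values are illustrative only):

```python
def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

def moment_of_inertia_two_masses(m1, m2, R):
    """Moment of inertia about the centre of mass of two point masses a distance R apart."""
    r1 = R * m2 / (m1 + m2)      # distance of m1 from the rotation axis
    r2 = R * m1 / (m1 + m2)      # distance of m2 from the rotation axis
    return m1 * r1**2 + m2 * r2**2

m1, m2, R = 3.0, 5.0, 2.0
print(moment_of_inertia_two_masses(m1, m2, R))   # 7.5
print(reduced_mass(m1, m2) * R**2)               # 7.5  (mu * R^2 gives the same result)
```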
Collisions of particles
In a collision with a coefficient of restitution e, the change in kinetic energy can be written as
ΔK = ½ μ vrel² (e² − 1),

where vrel is the relative velocity of the bodies before collision.
For typical applications in nuclear physics, where one particle's mass is much larger than the other, the reduced mass can be approximated by the smaller mass of the system. The limit of the reduced-mass formula as one mass goes to infinity is the smaller mass, so this approximation is used to ease calculations, especially when the larger particle's exact mass is not known.
Motion of two massive bodies under their gravitational attraction
In the case of the gravitational potential energy

V(|r1 − r2|) = − G m1 m2 / |r1 − r2|,

we find that the position of the first body with respect to the second is governed by the same differential equation as the position of a body with the reduced mass μ orbiting a body with a mass M equal to the sum of the two masses, because

G m1 m2 = G μ M,

while any other pair of masses whose sum is M would have the wrong product of their masses.
Non-relativistic quantum mechanics
Consider the electron (mass me) and proton (mass mp) in the hydrogen atom. They orbit each other about a common centre of mass, a two-body problem. To analyze the motion of the electron, a one-body problem, the reduced mass replaces the electron mass:

μ = me mp / (me + mp),

which is slightly less than me.
This idea is used to set up the Schrödinger equation for the hydrogen atom.
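For a rough sense of the size of this correction, a short Python sketch (the constants are rounded approximations, used here only for illustration):

```python
# Approximate particle masses in kg (rounded; illustration only)
m_e = 9.10938e-31    # electron
m_p = 1.67262e-27    # proton

mu = m_e * m_p / (m_e + m_p)

# The reduced mass is about 0.05% smaller than the electron mass,
# which shifts the hydrogen energy levels by the same relative amount.
print(mu / m_e)          # ~0.999456
print(1 - mu / m_e)      # ~5.4e-4
```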
| Physical sciences | Classical mechanics | Physics |
85752 | https://en.wikipedia.org/wiki/Lah%20number | Lah number | In mathematics, the (signed and unsigned) Lah numbers are coefficients expressing rising factorials in terms of falling factorials and vice versa. They were discovered by Ivo Lah in 1954. Explicitly, the unsigned Lah numbers are given by the formula involving the binomial coefficient
L(n, k) = C(n − 1, k − 1) n!/k!  for n ≥ k ≥ 1, where C(n − 1, k − 1) is the binomial coefficient.
Unsigned Lah numbers have an interesting meaning in combinatorics: they count the number of ways a set of elements can be partitioned into nonempty linearly ordered subsets. Lah numbers are related to Stirling numbers.
For example, L(3, 1) = 6 = 3!: in the interpretation above, the only partition of {1, 2, 3} into one set can have that set linearly ordered in 6 ways. L(3, 2) is equal to 6, because there are six partitions of {1, 2, 3} into two ordered parts: {(1,2), (3)}, {(2,1), (3)}, {(1,3), (2)}, {(3,1), (2)}, {(2,3), (1)}, {(3,2), (1)}. L(n, n) is always 1 because the only way to partition {1, …, n} into n non-empty subsets results in subsets of size 1, each of which can be ordered in only one way.
In the more recent literature, Karamata–Knuth style notation has taken over. Lah numbers are now often written as ⌊n k⌋ (n above k between floor-style brackets), by analogy with the bracket and brace notations for the Stirling numbers; in the formulas below the equivalent functional notation L(n, k) is used.
Table of values
Below is a table of values for the unsigned Lah numbers L(n, k) for small n and k:

| n \ k | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|---|
| 0 | 1 | | | | | | |
| 1 | 0 | 1 | | | | | |
| 2 | 0 | 2 | 1 | | | | |
| 3 | 0 | 6 | 6 | 1 | | | |
| 4 | 0 | 24 | 36 | 12 | 1 | | |
| 5 | 0 | 120 | 240 | 120 | 20 | 1 | |
| 6 | 0 | 720 | 1800 | 1200 | 300 | 30 | 1 |

The row sums are 1, 1, 3, 13, 73, 501, 4051, … .
Rising and falling factorials
Let x^(n) = x(x + 1)(x + 2) ⋯ (x + n − 1) represent the rising factorial and let x_(n) = x(x − 1)(x − 2) ⋯ (x − n + 1) represent the falling factorial. The Lah numbers are the coefficients that express each of these families of polynomials in terms of the other. Explicitly,

x^(n) = Σ_{k=1..n} L(n, k) x_(k)

and

x_(n) = Σ_{k=1..n} (−1)^(n−k) L(n, k) x^(k).

For example,

x(x + 1)(x + 2) = 6x + 6x(x − 1) + x(x − 1)(x − 2)

and

x(x − 1)(x − 2) = 6x − 6x(x + 1) + x(x + 1)(x + 2),

where the coefficients 6, 6, and 1 are exactly the Lah numbers L(3, 1), L(3, 2), and L(3, 3).
Identities and relations
The Lah numbers satisfy a variety of identities and relations.
In Karamata–Knuth notation for Stirling numbers,

L(n, k) = Σ_j [n j] {j k},

where [n j] are the unsigned Stirling numbers of the first kind and {j k} are the Stirling numbers of the second kind.
L(n, 1) = n! and L(n, n) = 1, for n ≥ 1.
Recurrence relations
The Lah numbers satisfy the recurrence relation

L(n + 1, k) = (n + k) L(n, k) + L(n, k − 1),

where L(0, k) = δ_{k,0}, the Kronecker delta, and L(n, k) = 0 for all k > n.
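A short Python sketch (the helper names are ours) computes the unsigned Lah numbers both from the closed formula and from the recurrence, which is a convenient consistency check:

```python
from math import comb, factorial

def lah_closed(n: int, k: int) -> int:
    """Unsigned Lah number from the closed formula C(n-1, k-1) * n! / k!."""
    if k == 0:
        return 1 if n == 0 else 0
    return comb(n - 1, k - 1) * factorial(n) // factorial(k)

def lah_recurrence(n: int, k: int) -> int:
    """Unsigned Lah number via L(n+1, k) = (n + k) L(n, k) + L(n, k-1)."""
    if n == 0 or k == 0:
        return 1 if n == k == 0 else 0
    if k > n:
        return 0
    return (n - 1 + k) * lah_recurrence(n - 1, k) + lah_recurrence(n - 1, k - 1)

for n in range(7):
    row = [lah_closed(n, k) for k in range(n + 1)]
    assert row == [lah_recurrence(n, k) for k in range(n + 1)]
    print(n, row, "row sum =", sum(row))
# Row sums: 1, 1, 3, 13, 73, 501, 4051, ...
```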
Exponential generating function

The unsigned Lah numbers have the exponential generating function

Σ_{n ≥ k} L(n, k) xⁿ/n! = (1/k!) (x/(1 − x))ᵏ.
Derivative of exp(1/x)
The n-th derivative of the function e^(1/x) can be expressed with the Lah numbers as follows:

dⁿ/dxⁿ e^(1/x) = (−1)ⁿ e^(1/x) Σ_{k=1..n} L(n, k) x^(−(n+k)).

For example,

d³/dx³ e^(1/x) = −e^(1/x) (6x⁻⁴ + 6x⁻⁵ + x⁻⁶).
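This identity is easy to spot-check symbolically; a minimal sketch using SymPy (the use of SymPy here is our own choice, not part of the article):

```python
import sympy as sp
from math import comb, factorial

def lah(n, k):
    return comb(n - 1, k - 1) * factorial(n) // factorial(k)

x = sp.symbols('x', positive=True)
for n in range(1, 6):
    lhs = sp.diff(sp.exp(1 / x), x, n)
    rhs = (-1)**n * sp.exp(1 / x) * sum(lah(n, k) * x**(-(n + k)) for k in range(1, n + 1))
    assert sp.simplify(lhs - rhs) == 0
print("Lah-number formula for d^n/dx^n exp(1/x) verified for n = 1..5")
```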
Link to Laguerre polynomials
Generalized Laguerre polynomials L_n^(α)(x) are linked to Lah numbers upon setting α = −1:

n! L_n^(−1)(x) = Σ_{k=0..n} L(n, k) (−x)ᵏ.

This formula is the default Laguerre polynomial in Umbral calculus convention.
Practical application
In recent years, Lah numbers have been used in steganography for hiding data in images. Compared to alternatives such as DCT, DFT and DWT, the Lah-transform approach has lower computational complexity for the calculation of its integer coefficients.
The Lah and Laguerre transforms naturally arise in the perturbative description of chromatic dispersion.
In Lah–Laguerre optics, such an approach tremendously speeds up optimization problems.
| Mathematics | Combinatorics | null |
85754 | https://en.wikipedia.org/wiki/Phonon | Phonon | A phonon is a collective excitation in a periodic, elastic arrangement of atoms or molecules in condensed matter, specifically in solids and some liquids. A type of quasiparticle in physics, a phonon is an excited state in the quantum mechanical quantization of the modes of vibrations for elastic structures of interacting particles. Phonons can be thought of as quantized sound waves, similar to photons as quantized light waves.
The study of phonons is an important part of condensed matter physics. They play a major role in many of the physical properties of condensed matter systems, such as thermal conductivity and electrical conductivity, as well as in models of neutron scattering and related effects.
The concept of phonons was introduced in 1930 by Soviet physicist Igor Tamm. The name phonon was suggested by Yakov Frenkel. It comes from the Greek word φωνή (phonē), which translates to sound or voice, because long-wavelength phonons give rise to sound. The name emphasizes the analogy to the word photon, in that phonons represent wave-particle duality for sound waves in the same way that photons represent wave-particle duality for light waves. Solids with more than one atom in the smallest unit cell exhibit both acoustic and optical phonons.
Definition
A phonon is the quantum mechanical description of an elementary vibrational motion in which a lattice of atoms or molecules uniformly oscillates at a single frequency. In classical mechanics this designates a normal mode of vibration. Normal modes are important because any arbitrary lattice vibration can be considered to be a superposition of these elementary vibration modes (cf. Fourier analysis). While normal modes are wave-like phenomena in classical mechanics, phonons have particle-like properties too, in a way related to the wave–particle duality of quantum mechanics.
Lattice dynamics
The equations in this section do not use axioms of quantum mechanics but instead use relations for which there exists a direct correspondence in classical mechanics.
For example: a rigid regular, crystalline (not amorphous) lattice is composed of N particles. These particles may be atoms or molecules. N is a large number, say of the order of 1023, or on the order of the Avogadro number for a typical sample of a solid. Since the lattice is rigid, the atoms must be exerting forces on one another to keep each atom near its equilibrium position. These forces may be Van der Waals forces, covalent bonds, electrostatic attractions, and others, all of which are ultimately due to the electric force. Magnetic and gravitational forces are generally negligible. The forces between each pair of atoms may be characterized by a potential energy function V that depends on the distance of separation of the atoms. The potential energy of the entire lattice is the sum of all pairwise potential energies multiplied by a factor of 1/2 to compensate for double counting:
E = (1/2) Σ_{i ≠ j} V(r_i − r_j),

where r_i is the position of the ith atom, and V is the potential energy between two atoms.
It is difficult to solve this many-body problem explicitly in either classical or quantum mechanics. In order to simplify the task, two important approximations are usually imposed. First, the sum is only performed over neighboring atoms. Although the electric forces in real solids extend to infinity, this approximation is still valid because the fields produced by distant atoms are effectively screened. Secondly, the potentials V are treated as harmonic potentials. This is permissible as long as the atoms remain close to their equilibrium positions. Formally, this is accomplished by Taylor expanding V about its equilibrium value to quadratic order, giving V proportional to the displacement x2 and the elastic force simply proportional to x. The error in ignoring higher order terms remains small if x remains close to the equilibrium position.
The resulting lattice may be visualized as a system of balls connected by springs. The following figure shows a cubic lattice, which is a good model for many types of crystalline solid. Other lattices include a linear chain, which is a very simple lattice which we will shortly use for modeling phonons. (For other common lattices, see crystal structure.)
The potential energy of the lattice may now be written as
E = (1/2) m ω² Σ_{(nn) ij} (R_i − R_j)².

Here, ω is the natural frequency of the harmonic potentials, which are assumed to be the same since the lattice is regular. R_i is the position coordinate of the ith atom, which we now measure from its equilibrium position. The sum over nearest neighbors is denoted (nn).
It is important to mention that the mathematical treatment given here is highly simplified in order to make it accessible to non-experts. The simplification has been achieved by making two basic assumptions in the expression for the total potential energy of the crystal. These assumptions are that (i) the total potential energy can be written as a sum of pairwise interactions, and (ii) each atom interacts with only its nearest neighbors. These are used only sparingly in modern lattice dynamics. A more general approach is to express the potential energy in terms of force constants. See, for example, the Wiki article on multiscale Green's functions.
Lattice waves
Due to the connections between atoms, the displacement of one or more atoms from their equilibrium positions gives rise to a set of vibration waves propagating through the lattice. One such wave is shown in the figure to the right. The amplitude of the wave is given by the displacements of the atoms from their equilibrium positions. The wavelength λ is marked.
There is a minimum possible wavelength, given by twice the equilibrium separation a between atoms. Any wavelength shorter than this can be mapped onto a wavelength longer than 2a, due to the periodicity of the lattice. This can be thought of as a consequence of the Nyquist–Shannon sampling theorem, the lattice points being viewed as the "sampling points" of a continuous wave.
Not every possible lattice vibration has a well-defined wavelength and frequency. However, the normal modes do possess well-defined wavelengths and frequencies.
One-dimensional lattice
In order to simplify the analysis needed for a 3-dimensional lattice of atoms, it is convenient to model a 1-dimensional lattice or linear chain. This model is complex enough to display the salient features of phonons.
Classical treatment
The forces between the atoms are assumed to be linear and nearest-neighbour, and they are represented by an elastic spring. Each atom is assumed to be a point particle and the nucleus and electrons move in step (adiabatic theorem):
  n − 1          n          n + 1          ← a →
···o++++++o++++++o++++++o++++++o++++++o++++++o++++++o++++++o++++++o···
  u_{n−1} →      u_n →      u_{n+1} →
where n labels the nth atom out of a total of N, a is the distance between atoms when the chain is in equilibrium, and u_n is the displacement of the nth atom from its equilibrium position.
If C is the elastic constant of the spring and m the mass of the atom, then the equation of motion of the nth atom is

m d²u_n/dt² = −2C u_n + C(u_{n+1} + u_{n−1}).
This is a set of coupled equations.
Since the solutions are expected to be oscillatory, new coordinates are defined by a discrete Fourier transform, in order to decouple them.
Put
u_n = (1/√N) Σ_k Q_k e^{ikna}.

Here, na corresponds to and devolves into the continuous variable x of scalar field theory. The Q_k are known as the normal coordinates for the continuum field modes, with k = 2πj/(Na) for j = 0, ±1, ±2, …, ±N/2.
Substitution into the equation of motion produces the following decoupled equations (this requires a significant manipulation using the orthonormality and completeness relations of the discrete Fourier transform),
d²Q_k/dt² = −ω_k² Q_k,   with   ω_k² = (2C/m)(1 − cos ka).

These are the equations for decoupled harmonic oscillators which have the solution

Q_k(t) = A_k e^{iω_k t} + B_k e^{−iω_k t}.

Each normal coordinate Q_k represents an independent vibrational mode of the lattice with wavenumber k, which is known as a normal mode.

The second equation, for ω_k = √((2C/m)(1 − cos ka)) = 2√(C/m) |sin(ka/2)|, is known as the dispersion relation between the angular frequency and the wavenumber.
In the continuum limit, a → 0 and N → ∞, with Na held fixed, u_n → φ(x), a scalar field, and ω(k) becomes proportional to k. This amounts to classical free scalar field theory, an assembly of independent oscillators.
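As a numerical cross-check of this classical result (all parameter values arbitrary), diagonalizing the dynamical matrix of a finite periodic chain reproduces the dispersion relation ω_k = 2√(C/m)|sin(ka/2)|; a Python sketch:

```python
import numpy as np

N, C, m, a = 32, 1.0, 1.0, 1.0   # atoms, spring constant, mass, spacing (arbitrary units)

# Equations of motion m*u''_n = -2C*u_n + C*(u_{n+1} + u_{n-1}) in matrix form (periodic chain)
D = np.zeros((N, N))
for n in range(N):
    D[n, n] = 2 * C / m
    D[n, (n + 1) % N] = -C / m
    D[n, (n - 1) % N] = -C / m

# Eigenfrequencies of the chain (clamp tiny negative rounding errors before the square root)
omega_numeric = np.sort(np.sqrt(np.maximum(np.linalg.eigvalsh(D), 0.0)))

# Analytic dispersion evaluated at the allowed wavenumbers k = 2*pi*j/(N*a)
k = 2 * np.pi * np.arange(N) / (N * a)
omega_analytic = np.sort(2 * np.sqrt(C / m) * np.abs(np.sin(k * a / 2)))

print(np.allclose(omega_numeric, omega_analytic, atol=1e-7))   # True
```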
Quantum treatment
A one-dimensional quantum mechanical harmonic chain consists of N identical atoms. This is the simplest quantum mechanical model of a lattice that allows phonons to arise from it. The formalism for this model is readily generalizable to two and three dimensions.
In contrast to the previous section, the positions of the masses are not denoted by u_i, but instead by x_1, x_2, …, as measured from their equilibrium positions (i.e. x_i = 0 if particle i is at its equilibrium position). In two or more dimensions, the x_i are vector quantities. The Hamiltonian for this system is
H = Σ_{i=1..N} p_i²/(2m) + (1/2) m ω² Σ_{(nn) ij} (x_i − x_j)²,

where m is the mass of each atom (assuming it is equal for all), and x_i and p_i are the position and momentum operators, respectively, for the ith atom and the sum is made over the nearest neighbors (nn). However one expects that in a lattice there could also appear waves that behave like particles. It is customary to deal with waves in Fourier space which uses normal modes of the wavevector as variables instead of coordinates of particles. The number of normal modes is the same as the number of particles. Still, the Fourier space is very useful given the periodicity of the system.
A set of N "normal coordinates" Q_k may be introduced, defined as the discrete Fourier transforms of the x_l, and N "conjugate momenta" Π_k defined as the Fourier transforms of the p_l:

Q_k = (1/√N) Σ_l x_l e^{−ikal},
Π_k = (1/√N) Σ_l p_l e^{ikal}.
The quantity k turns out to be the wavenumber of the phonon, i.e. 2π divided by the wavelength.
This choice retains the desired commutation relations in either real space or wavevector space
From the general result
The potential energy term is
(1/2) m ω² Σ_l (x_l − x_{l+1})² = (1/2) m Σ_k ω_k² Q_k Q_{−k},

where

ω_k = √(2 ω² (1 − cos ka)).
The Hamiltonian may be written in wavevector space as

H = (1/(2m)) Σ_k Π_k Π_{−k} + (1/2) m Σ_k ω_k² Q_k Q_{−k}.
The couplings between the position variables have been transformed away; if the Q and Π were Hermitian (which they are not), the transformed Hamiltonian would describe N uncoupled harmonic oscillators.
The form of the quantization depends on the choice of boundary conditions; for simplicity, periodic boundary conditions are imposed, defining the (N + 1)th atom as equivalent to the first atom. Physically, this corresponds to joining the chain at its ends. The resulting quantization is
k = k_n = 2πn/(Na)   for   n = 0, ±1, ±2, …, ±N/2.

The upper bound to n comes from the minimum wavelength, which is twice the lattice spacing a, as discussed above.
The harmonic oscillator eigenvalues or energy levels for the mode ωk are:
E_n = (1/2 + n) ħω_k   for   n = 0, 1, 2, 3, …

The levels are evenly spaced at intervals of ħω, where (1/2)ħω is the zero-point energy of a quantum harmonic oscillator.
An exact amount of energy ħω must be supplied to the harmonic oscillator lattice to push it to the next energy level. By analogy to the photon case when the electromagnetic field is quantized, the quantum of vibrational energy is called a phonon.
All quantum systems show wavelike and particlelike properties simultaneously. The particle-like properties of the phonon are best understood using the methods of second quantization and operator techniques described later.
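As a small numerical illustration of these formulas (all parameter values are arbitrary, chosen only to give physically plausible magnitudes), the allowed wavenumbers of a finite periodic chain and the corresponding phonon energy quanta ħω_k can be tabulated directly:

```python
import numpy as np

hbar = 1.054571817e-34        # J*s
N, a = 8, 3.0e-10             # number of atoms, lattice spacing (m)
C, m = 10.0, 4.0e-26          # spring constant (N/m) and atomic mass (kg), illustrative only

n = np.arange(-N // 2, N // 2 + 1)
k = 2 * np.pi * n / (N * a)   # quantized wavenumbers (periodic boundary conditions)
omega_k = 2 * np.sqrt(C / m) * np.abs(np.sin(k * a / 2))   # dispersion, with omega^2 = C/m for the springs

for kn, wn in zip(k, omega_k):
    # adding one phonon to mode k costs hbar*omega_k; the mode keeps hbar*omega_k/2 as zero-point energy
    print(f"k = {kn:+.3e} m^-1   hbar*omega_k = {hbar * wn:.3e} J")
```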
Three-dimensional lattice
This may be generalized to a three-dimensional lattice. The wavenumber k is replaced by a three-dimensional wavevector k. Furthermore, each k is now associated with three normal coordinates.
The new indices s = 1, 2, 3 label the polarization of the phonons. In the one-dimensional model, the atoms were restricted to moving along the line, so the phonons corresponded to longitudinal waves. In three dimensions, vibration is not restricted to the direction of propagation, and can also occur in the perpendicular planes, like transverse waves. This gives rise to the additional normal coordinates, which, as the form of the Hamiltonian indicates, we may view as independent species of phonons.
Dispersion relation
For a one-dimensional alternating array of two types of ion or atom of mass m1, m2 repeated periodically at a distance a, connected by springs of spring constant K, two modes of vibration result:
ω² = K(1/m1 + 1/m2) ± K √[(1/m1 + 1/m2)² − 4 sin²(ka/2)/(m1 m2)],

where k is the wavevector of the vibration related to its wavelength λ by

k = 2π/λ.
The connection between frequency and wavevector, ω = ω(k), is known as a dispersion relation. The plus sign results in the so-called optical mode, and the minus sign to the acoustic mode. In the optical mode two adjacent different atoms move against each other, while in the acoustic mode they move together.
The speed of propagation of an acoustic phonon, which is also the speed of sound in the lattice, is given by the slope of the acoustic dispersion relation, dω/dk (see group velocity). At low values of k (i.e. long wavelengths), the dispersion relation is almost linear, so the speed of sound is approximately ω/k, independent of the phonon frequency. As a result, packets of phonons with different (but long) wavelengths can propagate for large distances across the lattice without breaking apart. This is the reason that sound propagates through solids without significant distortion. This behavior fails at large values of k, i.e. short wavelengths, due to the microscopic details of the lattice.
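A brief Python sketch of the two branches (masses and spring constant are arbitrary illustrative values; the variable names are ours):

```python
import numpy as np

K, m1, m2, a = 1.0, 1.0, 2.0, 1.0      # spring constant, the two masses, repeat distance

k = np.linspace(-np.pi / a, np.pi / a, 201)       # wavevectors across the first Brillouin zone
s = 1 / m1 + 1 / m2
root = np.sqrt(s**2 - 4 * np.sin(k * a / 2)**2 / (m1 * m2))

omega_optical = np.sqrt(K * (s + root))    # plus sign: optical branch (adjacent atoms move oppositely)
omega_acoustic = np.sqrt(K * (s - root))   # minus sign: acoustic branch (adjacent atoms move together)

i0 = len(k) // 2                           # index of k = 0
print(omega_acoustic[i0], omega_optical[i0])   # ~0.0 and ~1.73: the acoustic branch vanishes at k = 0
```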
For a crystal that has at least two atoms in its primitive cell, the dispersion relations exhibit two types of phonons, namely, optical and acoustic modes corresponding to the upper blue and lower red curve in the diagram, respectively. The vertical axis is the energy or frequency of the phonon, while the horizontal axis is the wavevector. The boundaries at −π/a and π/a are those of the first Brillouin zone. A crystal with N ≥ 2 different atoms in the primitive cell exhibits three acoustic modes: one longitudinal acoustic mode and two transverse acoustic modes. The number of optical modes is 3N − 3. The lower figure shows the dispersion relations for several phonon modes in GaAs as a function of wavevector k in the principal directions of its Brillouin zone.
The modes are also referred to as the branches of phonon dispersion. In general, if there are p atoms (denoted by N earlier) in the primitive unit cell, there will be 3p branches of phonon dispersion in a 3-dimensional crystal. Out of these, 3 branches correspond to acoustic modes and the remaining 3p-3 branches will correspond to optical modes. In some special directions, some branches coincide due to symmetry. These branches are called degenerate. In acoustic modes, all the p atoms vibrate in phase. So there is no change in the relative displacements of these atoms during the wave propagation.
Study of phonon dispersion is useful for modeling propagation of sound waves in solids, which is characterized by phonons. The energy of each phonon, as given earlier, is ħω. The velocity of the wave is also given in terms of ω and k. The direction of the wave vector is the direction of the wave propagation, and the phonon polarization vector gives the direction in which the atoms vibrate. Actually, in general, the wave velocity in a crystal is different for different directions of k. In other words, most crystals are anisotropic for phonon propagation.
A wave is longitudinal if the atoms vibrate in the same direction as the wave propagation. In a transverse wave, the atoms vibrate perpendicular to the wave propagation. However, except for isotropic crystals, waves in a crystal are not exactly longitudinal or transverse. For general anisotropic crystals, the phonon waves are longitudinal or transverse only in certain special symmetry directions. In other directions, they can be nearly longitudinal or nearly transverse. It is only for labeling convenience, that they are often called longitudinal or transverse but are actually quasi-longitudinal or quasi-transverse. Note that in the three-dimensional case, there are two directions perpendicular to a straight line at each point on the line. Hence, there are always two (quasi) transverse waves for each (quasi) longitudinal wave.
Many phonon dispersion curves have been measured by inelastic neutron scattering.
The physics of sound in fluids differs from the physics of sound in solids, although both are density waves: sound waves in fluids only have longitudinal components, whereas sound waves in solids have longitudinal and transverse components. This is because fluids cannot support shear stresses (but see viscoelastic fluids, which only apply to high frequencies).
Interpretation of phonons using second quantization techniques
The above-derived Hamiltonian may look like a classical Hamiltonian function, but if it is interpreted as an operator, then it describes a quantum field theory of non-interacting bosons.
The second quantization technique, similar to the ladder operator method used for quantum harmonic oscillators, is a means of extracting energy eigenvalues without directly solving the differential equations. Given the Hamiltonian, H, as well as the conjugate position, Q_k, and conjugate momentum, Π_k, defined in the quantum treatment section above, we can define creation and annihilation operators:

b_k = √(m ω_k / (2ħ)) (Q_k + (i/(m ω_k)) Π_{−k})

and

b_k† = √(m ω_k / (2ħ)) (Q_{−k} − (i/(m ω_k)) Π_k).

The following commutators can be easily obtained by substituting in the canonical commutation relation:

[b_k, b_{k′}†] = δ_{k,k′},   [b_k†, b_{k′}†] = [b_k, b_{k′}] = 0.

Using this, the operators b_k† and b_k can be inverted to redefine the conjugate position and momentum as:

Q_k = √(ħ / (2 m ω_k)) (b_k + b_{−k}†)

and

Π_k = i √(ħ m ω_k / 2) (b_k† − b_{−k}).
Directly substituting these definitions for Q_k and Π_k into the wavevector space Hamiltonian, as it is defined above, and simplifying then results in the Hamiltonian taking the form:

H = Σ_k ħω_k (b_k† b_k + 1/2).
This is known as the second quantization technique, also known as the occupation number formulation, where nk = bk†bk is the occupation number. This can be seen to be a sum of N independent oscillator Hamiltonians, each with a unique wave vector, and compatible with the methods used for the quantum harmonic oscillator (note that nk is hermitian). When a Hamiltonian can be written as a sum of commuting sub-Hamiltonians, the energy eigenstates will be given by the products of eigenstates of each of the separate sub-Hamiltonians. The corresponding energy spectrum is then given by the sum of the individual eigenvalues of the sub-Hamiltonians.
As with the quantum harmonic oscillator, one can show that bk† and bk respectively create and destroy a single field excitation, a phonon, with an energy of ħωk.
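For reference, a commonly used convention for these operators is sketched below (sign and index conventions vary between texts, so this should be read as an illustration rather than as the exact expressions omitted from the text above):

b_k = \sqrt{\frac{m\omega_k}{2\hbar}}\left(Q_k + \frac{i}{m\omega_k}\,\Pi_{-k}\right),\qquad
b_k^{\dagger} = \sqrt{\frac{m\omega_k}{2\hbar}}\left(Q_{-k} - \frac{i}{m\omega_k}\,\Pi_{k}\right),\qquad
\left[\,b_k,\, b_{k'}^{\dagger}\,\right] = \delta_{k,k'}

Repeatedly applying bk† to the vacuum state then builds up the multi-phonon states whose energies make up the spectrum described above.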
Three important properties of phonons may be deduced from this technique. First, phonons are bosons, since any number of identical excitations can be created by repeated application of the creation operator bk†. Second, each phonon is a "collective mode" caused by the motion of every atom in the lattice. This may be seen from the fact that the creation and annihilation operators, defined here in momentum space, contain sums over the position and momentum operators of every atom when written in position space. (See position and momentum space.) Finally, using the position–position correlation function, it can be shown that phonons act as waves of lattice displacement.
This technique is readily generalized to three dimensions, where the Hamiltonian takes the form

H = Σk Σs ħωk,s (bk,s† bk,s + 1/2)

with s running over the three polarizations.
This can be interpreted as the sum of 3N independent oscillator Hamiltonians, one for each wave vector and polarization.
Acoustic and optical phonons
Solids with more than one atom in the smallest unit cell exhibit two types of phonons: acoustic phonons and optical phonons.
Acoustic phonons are coherent movements of atoms of the lattice out of their equilibrium positions. If the displacement is in the direction of propagation, then in some areas the atoms will be closer, in others farther apart, as in a sound wave in air (hence the name acoustic). Displacement perpendicular to the propagation direction is comparable to waves on a string. If the wavelength of acoustic phonons goes to infinity, this corresponds to a simple displacement of the whole crystal, and this costs zero deformation energy. Acoustic phonons exhibit a linear relationship between frequency and phonon wave-vector for long wavelengths. The frequencies of acoustic phonons tend to zero with longer wavelength. Longitudinal and transverse acoustic phonons are often abbreviated as LA and TA phonons, respectively.
Optical phonons are out-of-phase movements of the atoms in the lattice, one atom moving to the left, and its neighbor to the right. This occurs if the lattice basis consists of two or more atoms. They are called optical because in ionic crystals, such as sodium chloride, fluctuations in displacement create an electrical polarization that couples to the electromagnetic field. Hence, they can be excited by infrared radiation: the electric field of the light will move every positive sodium ion in the direction of the field, and every negative chloride ion in the other direction, causing the crystal to vibrate.
Optical phonons have a non-zero frequency at the Brillouin zone center and show no dispersion near that long wavelength limit. This is because they correspond to a mode of vibration where positive and negative ions at adjacent lattice sites swing against each other, creating a time-varying electrical dipole moment. Optical phonons that interact in this way with light are called infrared active. Optical phonons that are Raman active can also interact indirectly with light, through Raman scattering. Optical phonons are often abbreviated as LO and TO phonons, for the longitudinal and transverse modes respectively; the splitting between LO and TO frequencies is often described accurately by the Lyddane–Sachs–Teller relation.
When measuring optical phonon energy experimentally, optical phonon frequencies are sometimes given in spectroscopic wavenumber notation, where the symbol ω represents ordinary frequency (not angular frequency), and is expressed in units of cm−1. The value is obtained by dividing the frequency by the speed of light in vacuum. In other words, the wave-number in cm−1 units corresponds to the inverse of the wavelength of a photon in vacuum that has the same frequency as the measured phonon.
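As a worked example (the frequency below is an arbitrary illustrative value, not taken from the text): a phonon with ordinary frequency ν = 10 THz corresponds to a spectroscopic wavenumber of

\tilde{\nu} \;=\; \frac{\nu}{c} \;=\; \frac{10^{13}\ \mathrm{Hz}}{2.998\times 10^{10}\ \mathrm{cm/s}} \;\approx\; 334\ \mathrm{cm^{-1}}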
Crystal momentum
By analogy to photons and matter waves, phonons have been treated with wavevector k as though they have a momentum ħk; however, this is not strictly correct, because ħk is not actually a physical momentum; it is called the crystal momentum or pseudomomentum. This is because k is only determined up to addition of constant vectors (the reciprocal lattice vectors and integer multiples thereof). For example, in the one-dimensional model, the normal coordinates Q and Π are defined so that

Qk = Qk+K and Πk = Πk+K

where

K = 2πn/a

for any integer n. A phonon with wavenumber k is thus equivalent to an infinite family of phonons with wavenumbers k ± 2π/a, k ± 4π/a, and so forth. Physically, the reciprocal lattice vectors act as additional chunks of momentum which the lattice can impart to the phonon. Bloch electrons obey a similar set of restrictions.
It is usually convenient to consider phonon wavevectors k which have the smallest magnitude |k| in their "family". The set of all such wavevectors defines the first Brillouin zone. Additional Brillouin zones may be defined as copies of the first zone, shifted by some reciprocal lattice vector.
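For the one-dimensional chain with lattice constant a (a standard result, restated here for illustration), the first Brillouin zone is the interval

-\frac{\pi}{a} \;<\; k \;\le\; \frac{\pi}{a}

and any wavevector outside this interval can be folded back into it by subtracting a suitable reciprocal lattice vector 2πn/a.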
Thermodynamics
The thermodynamic properties of a solid are directly related to its phonon structure. The entire set of all possible phonons that are described by the phonon dispersion relations combine in what is known as the phonon density of states which determines the heat capacity of a crystal. By the nature of this distribution, the heat capacity is dominated by the high-frequency part of the distribution, while thermal conductivity is primarily the result of the low-frequency region.
At absolute zero temperature, a crystal lattice lies in its ground state, and contains no phonons. A lattice at a nonzero temperature has an energy that is not constant, but fluctuates randomly about some mean value. These energy fluctuations are caused by random lattice vibrations, which can be viewed as a gas of phonons. Because these phonons are generated by the temperature of the lattice, they are sometimes designated thermal phonons.
Thermal phonons can be created and destroyed by random energy fluctuations. In the language of statistical mechanics this means that the chemical potential for adding a phonon is zero. This behavior is an extension of the harmonic potential into the anharmonic regime. The behavior of thermal phonons is similar to the photon gas produced by an electromagnetic cavity, wherein photons may be emitted or absorbed by the cavity walls. This similarity is not coincidental, for it turns out that the electromagnetic field behaves like a set of harmonic oscillators, giving rise to black-body radiation. Both gases obey the Bose–Einstein statistics: in thermal equilibrium and within the harmonic regime, the probability of finding phonons or photons in a given state with a given angular frequency is

n(ωk,s) = 1 / (exp(ħωk,s/(kBT)) − 1)

where ωk,s is the frequency of the phonons (or photons) in the state, kB is the Boltzmann constant, and T is the temperature.
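A minimal numerical sketch of this distribution (the mode frequency and temperature below are arbitrary example values, and the function name is illustrative):

import math

HBAR = 1.054571817e-34  # reduced Planck constant, J·s
KB = 1.380649e-23       # Boltzmann constant, J/K

def mean_phonon_occupation(omega, temperature):
    """Bose-Einstein mean occupation of a mode with angular frequency omega (rad/s) at temperature T (K)."""
    return 1.0 / math.expm1(HBAR * omega / (KB * temperature))

# Example: a 5 THz phonon mode at room temperature
omega = 2 * math.pi * 5e12
print(mean_phonon_occupation(omega, 300.0))  # roughly 0.8 phonons on average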
Phonon tunneling
Phonons have been shown to exhibit quantum tunneling behavior (or phonon tunneling) where, across gaps up to a nanometer wide, heat can flow via phonons that "tunnel" between two materials. This type of heat transfer works between distances too large for conduction to occur but too small for radiation to occur and therefore cannot be explained by classical heat transfer models.
Operator formalism
The phonon Hamiltonian is given by

H = (1/2) Σα (pα² + ωα² qα²)

In terms of the creation and annihilation operators, this becomes

H = Σα ħωα aα† aα

Here, in expressing the Hamiltonian in operator formalism, we have not taken into account the ½ħωα zero-point term: given a continuum or infinite lattice, the ½ħωα terms would add up to an infinite constant. Because it is differences in energy that are measured, and not absolute values, the constant term ½ħωα can be ignored without changing the equations of motion. Hence, the ½ħωα factor is absent in the operator formalized expression for the Hamiltonian.
The ground state, also called the "vacuum state", is the state composed of no phonons. Hence, the energy of the ground state is 0. When a system is in the state |n1 n2 n3 …⟩, we say there are nα phonons of type α, where nα is the occupation number of the phonons. The energy of a single phonon of type α is given by ħωα and the total energy of a general phonon system is given by n1ħω1 + n2ħω2 + …. As there are no cross terms (e.g. n1ħω2), the phonons are said to be non-interacting. The action of the creation and annihilation operators is given by

aα† |n1 … nα …⟩ = √(nα + 1) |n1 … (nα + 1) …⟩

and

aα |n1 … nα …⟩ = √nα |n1 … (nα − 1) …⟩

The creation operator aα† creates a phonon of type α while aα annihilates one. Hence, they are respectively the creation and annihilation operators for phonons. Analogous to the quantum harmonic oscillator case, we can define the particle number operator as

N = Σα aα† aα
The number operator commutes with a string of products of the creation and annihilation operators if and only if the number of creation operators is equal to number of annihilation operators.
It can be shown that phonons are symmetric under exchange (i.e. |α, β⟩ = |β, α⟩), so they are considered bosons.
Nonlinearity
Like photons, phonons can interact via parametric down conversion and form squeezed coherent states.
Predicted properties
Recent research has shown that phonons and rotons may have a non-negligible mass and be affected by gravity just as standard particles are. In particular, phonons are predicted to have a kind of negative mass and negative gravity. This can be explained by how phonons are known to travel faster in denser materials. Because the part of a material pointing towards a gravitational source is closer to the object, it becomes denser on that end. From this, it is predicted that phonons would deflect away as they detect the difference in densities, exhibiting the qualities of a negative gravitational field. Although the effect would be too small to measure, it is possible that future equipment could lead to successful results.
Superconductivity
Superconductivity is a state of electronic matter in which electrical resistance vanishes and magnetic fields are expelled from the material. In a superconductor, electrons are bound together into Cooper pairs by a weak attractive force. In a conventional superconductor, this attraction is caused by an exchange of phonons between the electrons. The evidence that phonons, the vibrations of the ionic lattice, are relevant for superconductivity is provided by the isotope effect, the dependence of the superconducting critical temperature on the mass of the ions.
Other research
In 2019, researchers were able to isolate individual phonons without destroying them for the first time.
Phonons have also been shown to form "phonon winds", where an electric current in a graphene surface is generated by a liquid flow above it, due to the viscous forces at the liquid–solid interface.
| Physical sciences | States of matter | Physics |
85816 | https://en.wikipedia.org/wiki/Complete%20graph | Complete graph | In the mathematical field of graph theory, a complete graph is a simple undirected graph in which every pair of distinct vertices is connected by a unique edge. A complete digraph is a directed graph in which every pair of distinct vertices is connected by a pair of unique edges (one in each direction).
Graph theory itself is typically dated as beginning with Leonhard Euler's 1736 work on the Seven Bridges of Königsberg. However, drawings of complete graphs, with their vertices placed on the points of a regular polygon, had already appeared in the 13th century, in the work of Ramon Llull. Such a drawing is sometimes referred to as a mystic rose.
Properties
The complete graph on n vertices is denoted by Kn. Some sources claim that the letter K in this notation stands for the German word komplett, but the German name for a complete graph, vollständiger Graph, does not contain the letter K, and other sources state that the notation honors the contributions of Kazimierz Kuratowski to graph theory.
Kn has n(n − 1)/2 edges (a triangular number), and is a regular graph of degree n − 1. All complete graphs are their own maximal cliques. They are maximally connected as the only vertex cut which disconnects the graph is the complete set of vertices. The complement graph of a complete graph is an empty graph.
If the edges of a complete graph are each given an orientation, the resulting directed graph is called a tournament.
Kn can be decomposed into n trees Ti such that Ti has i vertices. Ringel's conjecture asks if the complete graph K2n+1 can be decomposed into copies of any tree with n edges. This is known to be true for sufficiently large n.
The number of all distinct paths between a specific pair of vertices in Kn+2 is given by

wn+2 = n! · en = ⌊e · n!⌋

where e refers to Euler's number, and

en = Σk=0..n 1/k!
The number of matchings of the complete graph Kn is given by the telephone numbers
1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496, 35696, 140152, 568504, 2390480, 10349536, 46206736, ... .
These numbers give the largest possible value of the Hosoya index for an n-vertex graph. The number of perfect matchings of the complete graph Kn (with n even) is given by the double factorial (n − 1)!!.
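A short sketch of how these counts can be computed (the function names are illustrative): the telephone numbers satisfy the recurrence T(n) = T(n − 1) + (n − 1)·T(n − 2), and the perfect-matching count of Kn for even n is the double factorial (n − 1)!!:

def telephone_number(n):
    """Number of matchings (including the empty matching) of the complete graph on n vertices."""
    a, b = 1, 1  # T(0), T(1)
    for i in range(2, n + 1):
        a, b = b, b + (i - 1) * a
    return a if n == 0 else b

def perfect_matchings(n):
    """Number of perfect matchings of the complete graph on n vertices (n even): (n - 1)!!."""
    result = 1
    for factor in range(n - 1, 0, -2):
        result *= factor
    return result

print([telephone_number(n) for n in range(9)])  # [1, 1, 2, 4, 10, 26, 76, 232, 764]
print(perfect_matchings(6))                     # 15, i.e. 5!!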
The crossing numbers up to K27 are known, with K28 requiring either 7233 or 7234 crossings. Further values are collected by the Rectilinear Crossing Number project. Rectilinear crossing numbers for Kn are
0, 0, 0, 0, 1, 3, 9, 19, 36, 62, 102, 153, 229, 324, 447, 603, 798, 1029, 1318, 1657, 2055, 2528, 3077, 3699, 4430, 5250, 6180, ... .
Geometry and topology
A complete graph with n nodes represents the edges of an (n − 1)-simplex. Geometrically K3 forms the edge set of a triangle, K4 a tetrahedron, etc. The Császár polyhedron, a nonconvex polyhedron with the topology of a torus, has the complete graph K7 as its skeleton. Every neighborly polytope in four or more dimensions also has a complete skeleton.
K1 through K4 are all planar graphs. However, every planar drawing of a complete graph with five or more vertices must contain a crossing, and the nonplanar complete graph K5 plays a key role in the characterizations of planar graphs: by Kuratowski's theorem, a graph is planar if and only if it contains neither K5 nor the complete bipartite graph K3,3 as a subdivision, and by Wagner's theorem the same result holds for graph minors in place of subdivisions. As part of the Petersen family, K6 plays a similar role as one of the forbidden minors for linkless embedding. In other words, and as Conway and Gordon proved, every embedding of K6 into three-dimensional space is intrinsically linked, with at least one pair of linked triangles. Conway and Gordon also showed that any three-dimensional embedding of K7 contains a Hamiltonian cycle that is embedded in space as a nontrivial knot.
Examples
Complete graphs on n vertices, for n between 1 and 12, are shown below along with the numbers of edges:
| Mathematics | Graph theory | null |
85821 | https://en.wikipedia.org/wiki/Regular%20graph | Regular graph | In graph theory, a regular graph is a graph where each vertex has the same number of neighbors; i.e. every vertex has the same degree or valency. A regular directed graph must also satisfy the stronger condition that the indegree and outdegree of each internal vertex are equal to each other. A regular graph with vertices of degree k is called a k-regular graph or regular graph of degree k.
Special cases
Regular graphs of degree at most 2 are easy to classify: a 0-regular graph consists of disconnected vertices, a 1-regular graph consists of disconnected edges, and a 2-regular graph consists of a disjoint union of cycles and infinite chains.
A 3-regular graph is known as a cubic graph.
A strongly regular graph is a regular graph where every adjacent pair of vertices has the same number of neighbors in common, and every non-adjacent pair of vertices has the same number of neighbors in common. The smallest graphs that are regular but not strongly regular are the cycle graph and the circulant graph on 6 vertices.
The complete graph Km is strongly regular for any m.
Existence
The necessary and sufficient conditions for a k-regular graph of order n to exist are that n ≥ k + 1 and that nk is even.
Proof: A complete graph has every pair of distinct vertices connected to each other by a unique edge, so the maximum number of edges occurs in the complete graph, which has n(n − 1)/2 edges and degree n − 1. So k = n − 1, i.e. n = k + 1, which is the minimum order n for a particular k. Also note that if any k-regular graph has order n, then the number of edges is nk/2, so nk has to be even.
In such cases it is easy to construct regular graphs by considering appropriate parameters for circulant graphs, as sketched below.
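A plain-Python sketch of this construction (the helper name is illustrative): joining each vertex i to i ± s (mod n) for a chosen set of nonzero offsets produces a graph in which every vertex has the same degree.

def circulant_graph(n, offsets):
    """Adjacency sets of a circulant graph on n vertices with the given offsets."""
    adjacency = {v: set() for v in range(n)}
    for v in range(n):
        for s in offsets:
            adjacency[v].add((v + s) % n)
            adjacency[v].add((v - s) % n)
    return adjacency

# Example: a 4-regular graph on 7 vertices using offsets {1, 2}
graph = circulant_graph(7, {1, 2})
print({v: len(neighbors) for v, neighbors in graph.items()})  # every vertex has degree 4

Note that the order times the degree, 7 × 4 = 28, is even, consistent with the existence condition above.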
Properties
From the handshaking lemma, a k-regular graph with odd k has an even number of vertices.
A theorem by Nash-Williams says that every k-regular graph on 2k + 1 vertices has a Hamiltonian cycle.
Let A be the adjacency matrix of a graph. Then the graph is regular if and only if the all-ones vector j = (1, …, 1) is an eigenvector of A. Its eigenvalue will be the constant degree of the graph. Eigenvectors corresponding to other eigenvalues are orthogonal to j, so for such eigenvectors v = (v1, …, vn), we have Σi vi = 0.
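A small numerical check of this statement (using numpy; the 6-cycle below is just an example graph):

import numpy as np

# Adjacency matrix of the cycle graph on 6 vertices, which is 2-regular
n, k = 6, 2
A = np.zeros((n, n), dtype=int)
for v in range(n):
    A[v, (v + 1) % n] = 1
    A[v, (v - 1) % n] = 1

ones = np.ones(n)
print(np.allclose(A @ ones, k * ones))  # True: the all-ones vector is an eigenvector with eigenvalue k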
A regular graph of degree k is connected if and only if the eigenvalue k has multiplicity one. The "only if" direction is a consequence of the Perron–Frobenius theorem.
There is also a criterion for regular and connected graphs:
a graph is connected and regular if and only if the matrix of ones J, with Jij = 1 for all i and j, is in the adjacency algebra of the graph (meaning it is a linear combination of powers of A).
Let G be a k-regular graph with diameter D and eigenvalues of adjacency matrix . If G is not bipartite, then
Generation
Fast algorithms exist to generate, up to isomorphism, all regular graphs with a given degree and number of vertices.
| Mathematics | Graph theory | null |
85846 | https://en.wikipedia.org/wiki/Longsword | Longsword | A longsword (also spelled as long sword or long-sword) is a type of European sword characterized as having a cruciform hilt with a grip for primarily two-handed use (around ), a straight double-edged blade of around , and weighing approximately .
The "longsword" type exists in a morphological continuum with the medieval knightly sword and the Renaissance-era Zweihänder. It was prevalent during the late medieval and Renaissance periods (approximately 1350 to 1550), with early and late use reaching into the 12th and 17th centuries.
Names
English
The longsword has many names in the English language, which, aside from variant spellings, include terms such as "bastard sword" and "hand-and-a-half sword." Of these, "bastard sword" is the oldest, its use being contemporaneous with the weapon's heyday.
The French épée bâtarde and the English "bastard sword" originate in the 15th or 16th century, originally in the general sense of "irregular sword, sword of uncertain origin", but by the mid-16th century could refer to exceptionally large swords.
The "Masters of Defence" competition organised by Henry VIII in July 1540 listed two handed sword and bastard sword as two separate items.
It is uncertain whether the same term could still be used to other types of smaller swords, but antiquarian usage in the 19th century established the use of "bastard sword" as referring unambiguously to these large swords.
The term "hand-and-a-half sword" is relatively modern (from the late 19th century);
this name was given because the balance of the sword made it usable in one hand, as well as two. During the first half of the 20th century, the term "bastard sword" was also used regularly to refer to this type of sword, while "long sword" (or "long-sword"), if used at all, referred to the rapier (in the context of Renaissance or Early Modern fencing).
Another name originating from the 19th century is "broadsword," which grew out of comparisons of the blade to more slender swords. This name is common in non-expert literature, where it often refers generically to any medieval sword. However, it more properly—and historically—refers to the basket-hilted swords of the 18th century.
Contemporary use of "long-sword" or "longsword" only resurfaced in the 2000s in the context of reconstruction of the German school of fencing, translating the German . Prior to this the term "long sword" merely referred to any sword with a long blade; 'long' being simply an adjective rather than a classification.
Other languages
Historical (15th to 16th century) terms for this type of sword included Portuguese , or for the version with longer handle used exclusively with both hands; Spanish , , or , Italian or , and Middle French . The Scottish Gaelic means "great sword"; anglicised as claymore, it came to refer to a large Scottish type of longsword with a V–shaped crossguard.
Historical terminology overlaps with that applied to the Zweihänder sword in the 16th century: French , Spanish , or Portuguese may also be used more narrowly to refer to these large swords. The French may also refer to a medieval single-handed sword optimized for thrusting.
The German ("long sword") in 15th and 16th-century manuals does not denote a type of weapon, but the technique of fencing with both hands at the hilt, contrasting with ("short sword") used of fencing with the same weapon, but with one hand gripping the blade (also known as a half-sword).
Evolution
The longsword is characterized not so much by a longer blade, but by a longer grip, which indicates a weapon designed for two-handed use. Swords with exceptionally long hilts are found throughout the High Middle Ages. For example, there is a longsword in The Glasgow Art and History Museum, Labelled XIIIa. 5, which scholars have dated back to between 1100 and 1200 due to the hilt style and specific taper, but swords like this remain incredibly rare, and are not representative of an identifiable trend before the late 13th or early 14th century.
The longsword as a late medieval type of sword emerges in the 14th century, as a military steel weapon of the earlier phase of the Hundred Years' War. It remains identifiable as a type during the period of about 1350 to 1550.
It remained in use as a weapon of war intended for wielders wearing full plate armour either on foot or on horseback, throughout the late medieval period.
From the late 15th century, however, it is also attested as being worn and used by unarmoured soldiers or mercenaries.
Use of the two-handed Great Sword or Schlachtschwert by infantry (as opposed to their use as a weapon of mounted and fully armoured knights) seems to have originated with the Swiss in the 14th century.
By the 16th century, its military use was mostly obsolete, culminating in the brief period where the oversized Zweihänder were wielded by the German Landsknechte during the early to mid 16th century. By the second half of the 16th century, it persisted mostly as a weapon for sportive competition (Schulfechten), and possibly in knightly duels.
Distinct "bastard sword" hilt types developed during the first half of the 16th century. Ewart Oakeshott distinguishes twelve different types. These all seem to have originated in Bavaria and in Switzerland. By the late 16th century, early forms of the developed-hilt appear on this type of sword.
Beginning about 1520, the Swiss sabre (schnepf) in Switzerland began to replace the straight longsword, inheriting its hilt types, and the longsword had fallen out of use in Switzerland by 1550.
In southern Germany, it persisted into the 1560s, but its use also declined during the second half of the 16th century.
There are two late examples of longswords kept in the Swiss National Museum, both with vertically grooved pommels and elaborately decorated with silver inlay, and both belonging to Swiss noblemen in French service during the late 16th and early 17th century, Gugelberg von Moos and Rudolf von Schauenstein.
The longsword, greatsword and bastard-sword were also made in Spain, appearing relatively late, known as the , the and or respectively.
Morphology
The swords grouped as "longswords" for the purposes of this article are united by their being intended for two-handed use. In terms of blade typology, they do not form a single category. In the Oakeshott typology of blade morphology, "longswords" figure as a range of sub-types of the corresponding single-handed sword types.
Types XIIa and XIIIa represent the Great Sword or War Sword type used in the later 13th and in the 14th century. They represent larger versions of type XII and type XIII swords which were the standard knightly swords during the Crusades. They are primarily intended for cutting, with grips for either "hand-and-half" or two-handed use. Type XIIa blades are broad, flat and evenly tapering, with a lenticular cross-section and a fuller running along about two thirds of the blade's length. Type XIIIa blades are broad, with a flat lenticular cross-section, parallel edges and a fuller running along half the blade's length.
Type XVa is the classical two-handed sword of the 14th and 15th centuries (with early examples appearing from the later 13th century). These blades are strongly tapered, more narrow and slender even than the single-handed type XV variant, with a flattened diamond cross-section.
Type XVIa is the classical "longsword" of the 14th and 15th centuries. These blades are long and slowly tapering, with a flat hexagonal blade cross-section and a fuller running along one third of the blade. They represent an optimised compromise between thrusting capability and retaining good cutting characteristics.
Type XVII is a shorter-lived type, popular during the mid-14th to early 15th century. These blades are long, slender and acutely tapering, approaching the outline of type XVa, while still retaining a narrow hexagonal cross-section and a shallow fuller running along about one quarter of the blade.
Types XVIIIb and XVIIIc represent the later longswords of the mid-15th to early 16th centuries. They have a flattened diamond cross-section, often with pronounced mid-rib, some being hollow-ground. Type XVIIIb blades are slender, comparable to XVa blades but longer, measuring between 90 and 107 cm, with a correspondingly longer grip, often waisted for comfortable two-handed use. Type XVIIIc blades are somewhat broader and shorter (about 85 cm), and sometimes have a short and narrow fuller.
Type XX blades are broad, with lenticular or octagonal cross-sections. Their defining characteristics is that they have three fullers, a shallow central fuller running along half the blade's length, with two shallow parallel fullers along the first quarter. They were in use during the 14th and 15th centuries. Sub-type XXa has a more acutely tapering blade and a more acute point.
Fighting with the longsword
The expression ("fencing with the long sword") in the German school of fencing denotes the style of fencing which uses both hands at the hilt; ("fencing with the short sword") is used in half-sword fighting, with one hand gripping the blade.
The two terms are largely equivalent to "unarmoured fighting" (Blossfechten) and "armoured fencing" (Harnischfechten).
History
Codified systems of fighting with the longsword existed from the later 14th century, with a variety of styles and teachers each providing a slightly different take on the art. Hans Talhoffer, a mid-15th-century German fightmaster, is probably the most prominent, using a wide variety of moves, most resulting in wrestling. The longsword was a quick, effective, and versatile weapon capable of deadly thrusts, slices, and cuts. The blade was generally used with both hands on the hilt, one resting close to or on the pommel. The weapon may be held with one hand during disarmament or grappling techniques.
In a depiction of a duel, individuals may be seen wielding sharply pointed longswords in one hand, leaving the other hand open to manipulate the large dueling shield.
Another variation of use comes from the use of armour. Half-swording was a manner of using both hands, one on the hilt and one on the blade, to better control the weapon in thrusts and jabs. This versatility was unique, as multiple works hold that the longsword provided the foundations for learning a variety of other weapons including spears, staves, and polearms. Use of the longsword in attack was not limited only to use of the blade, however, as several Fechtbücher explain and depict use of the pommel and cross as offensive weapons. The cross has been shown to be used as a hook for tripping or knocking an opponent off balance. Some manuals even depict the cross as a hammer.
What is known of combat with the longsword comes from artistic depictions of battle from manuscripts and the Fechtbücher of Medieval and Renaissance Masters. Therein the basics of combat were described and, in some cases, depicted. The German school of swordsmanship includes the earliest known longsword Fechtbuch, a manual from approximately 1389, known as GNM 3227a. This manual, unfortunately for modern scholars, was written in obscure verse. It was through students of Johannes Liechtenauer, such as Sigmund Ringeck, who transcribed the work into more understandable prose, that the system became notably more codified and understandable. Others provided similar work, some with a wide array of images to accompany the text.
The Italian school of swordsmanship was the other primary school of longsword use. The 1410 manuscript by Fiore dei Liberi presents a variety of uses for the longsword. Like the German manuals, the weapon is most commonly depicted and taught with both hands on the hilt. However, a section on one-handed use is among the volume and demonstrates the techniques and advantages, such as sudden additional reach, of single-handed longsword play. The manual also presents half-sword techniques as an integral part of armoured combat.
Both schools declined in the late 16th century, with the later Italian masters forgoing the longsword and focusing primarily on rapier fencing. The last known German manual to include longsword teaching was that of Jakob Sutor, published in 1612. In Italy, spadone, or longsword, instruction lingered on despite the popularity of the rapier, at least into the mid-17th century (Alfieri's Lo Spadone of 1653), with a late treatise of the "two handed sword" by one Giuseppe Colombani, a dentist in Venice, dating to 1711. A tradition of teaching based on this has survived in contemporary French and Italian stick fighting.
German school of fencing
Blossfechten (blosz fechten) or "bare fighting" is the technique of fighting without significant protective armour such as plate or mail.
The lack of significant torso and limb protection leads to the use of a large amount of cutting and slicing techniques in addition to thrusts. These techniques could be nearly instantly fatal or incapacitating, as a thrust to the skull, heart, or major blood vessel would cause massive trauma. Similarly, strong strikes could cut through skin and bone, effectively amputating limbs. The hands and forearms are a frequent target of some cuts and slices in a defensive or offensive manoeuvre, serving both to disable an opponent and align the swordsman and his weapon for the next attack.
Harnischfechten
, or "armoured fighting" (German , or , literally "fighting in armour on foot"), depicts fighting in full plate armour.
The increased defensive capability of a man clad in full plate armour caused the use of the sword to be drastically changed. While slashing attacks were still moderately effective against infantry wearing half-plate armour, cutting and slicing attacks against an opponent wearing plate armour were almost entirely ineffective in providing any sort of slashing wound as the sword simply could not cut through the steel, although a combatant could aim for the chinks in a suit of armour, sometimes to great effect. Instead, the energy of the cut becomes essentially pure concussive energy. The later hardened plate armours, complete with ridges and roping, posed a threat against the careless attacker. It is considered possible for strong blows of the sword against plate armour to damage the blade of the sword, potentially rendering it much less effective at cutting and producing only a concussive effect against the armoured opponent.
To overcome this problem, swords began to be used primarily for thrusting. The weapon was used in the half-sword, with one or both hands on the blade. This increased the accuracy and strength of thrusts and provided more leverage for Ringen am Schwert or "wrestling at/with the sword". Also, the hand on the blade increases its rigidity, which is advantageous when thrusting. This technique combines the use of the sword with wrestling, providing opportunities to trip, disarm, break, or throw an opponent and place them in a less offensively and defensively capable position. During half-swording, the entirety of the sword works as a weapon, including the pommel and crossguard. One example of how a sword can be used this way is to thrust the tip of the crossguard at the opponent's head right after parrying a stroke. Another technique would be the Mordstreich (lit. "murder stroke"), where the weapon is held by the blade (hilt, pommel and crossguard serving as an improvised hammer head) and swung, taking advantage of the balance being close to the hilt to increase the concussive effect (see the fighter on the right of the Codex Wallerstein picture).
| Technology | Swords | null |
86058 | https://en.wikipedia.org/wiki/Binoculars | Binoculars | Binoculars or field glasses are two refracting telescopes mounted side-by-side and aligned to point in the same direction, allowing the viewer to use both eyes (binocular vision) when viewing distant objects. Most binoculars are sized to be held using both hands, although sizes vary widely from opera glasses to large pedestal-mounted military models.
Unlike a (monocular) telescope, binoculars give users a three-dimensional image: each eyepiece presents a slightly different image to each of the viewer's eyes and the parallax allows the visual cortex to generate an impression of depth.
Optical design evolution
Galilean
Almost from the invention of the telescope in the 17th century, the advantages of mounting two of them side by side for binocular vision seem to have been explored. Most early binoculars used Galilean optics; that is, they used a convex objective and a concave eyepiece lens. The Galilean design has the advantage of presenting an erect image but has a narrow field of view and is not capable of very high magnification. This type of construction is still used in very cheap models and in opera glasses or theater glasses. The Galilean design is also used in low magnification binocular surgical and jewelers' loupes because they can be very short and produce an upright image without extra or unusual erecting optics, reducing expense and overall weight. They also have large exit pupils, making centering less critical, and the narrow field of view works well in those applications. These are typically mounted on an eyeglass frame or custom-fit onto eyeglasses.
Keplerian
An improved image and higher magnification are achieved in binoculars employing Keplerian optics, where the image formed by the objective lens is viewed through a positive eyepiece lens (ocular).
Since the Keplerian configuration produces an inverted image, different methods are used to turn the image the right way up.
Erecting lenses
In aprismatic binoculars with Keplerian optics (which were sometimes called "twin telescopes"), each tube has one or two additional lenses (relay lenses) between the objective and the eyepiece. These lenses are used to erect the image. Binoculars with erecting lenses had a serious disadvantage: they were too long. Such binoculars were popular in the 1800s (for example, G. & S. Merz models). The Keplerian "twin telescope" binoculars were optically and mechanically hard to manufacture, but it took until the 1890s to supersede them with better prism-based technology.
Prism
Optical prisms added to the design enabled the display of the image the right way up without needing as many lenses, and decreasing the overall length of the instrument, typically using Porro prism or roof prism systems. The Italian inventor of optical instruments Ignazio Porro worked during the 1860s with Hofmann in Paris to produce monoculars using the same prism configuration used in modern Porro prism binoculars. At the 1873 Vienna Trade Fair German optical designer and scientist Ernst Abbe displayed a prism telescope with two cemented Porro prisms. The optical solutions of Porro and Abbe were theoretically sound, but the employed prism systems failed in practice primarily due to insufficient glass quality.
Porro
Porro prism binoculars are named after Ignazio Porro, who patented this image erecting system in 1854. The later refinement by Ernst Abbe and his cooperation with glass scientist Otto Schott, who managed to produce a better type of crown glass in 1888, and instrument maker Carl Zeiss resulted in 1894 in the commercial introduction of improved 'modern' Porro prism binoculars by the Carl Zeiss company. Binoculars of this type use a pair of Porro prisms in a Z-shaped configuration to erect the image. This results in wide binoculars, with objective lenses that are well separated and offset from the eyepieces, giving a better sensation of depth. Porro prism designs have the added benefit of folding the optical path so that the physical length of the binoculars is less than the focal length of the objective. Porro prism binoculars were made in such a way as to erect an image in a relatively small space; this is how prism-based binoculars began.
Porro prisms typically require tolerances within 10 arcminutes (1/6 of 1 degree) for the alignment of their optical elements (collimation) at the factory. Sometimes Porro prism binoculars need their prism sets to be re-aligned to bring them into collimation. Good-quality Porro prism design binoculars often feature deep grooves or notches ground across the width of the hypotenuse face center of the prisms, to eliminate image-quality-reducing abaxial non-image-forming reflections. Porro prism binoculars can offer good optical performance with relatively little manufacturing effort, and as human eyes are ergonomically limited by their interpupillary distance, the offset and separation of big (60+ mm wide) diameter objective lenses and the eyepieces becomes a practical advantage in a stereoscopic optical product.
In the early 2020s, the commercial market share of Porro prism-type binoculars had become the second most numerous compared to other prism-type optical designs.
There are alternative Porro prism-based systems available that find application in binoculars on a small scale, like the Perger prism that offers a significantly reduced axial offset compared to traditional Porro prism designs.
Roof
Roof prism binoculars may have appeared as early as the 1870s in a design by Achille Victor Emile Daubresse. In 1897 Moritz Hensoldt began marketing pentaprism based roof prism binoculars.
Most roof prism binoculars use either the Schmidt–Pechan prism (invented in 1899) or the Abbe–Koenig prism (named after Ernst Karl Abbe and Albert König and patented by Carl Zeiss in 1905) designs to erect the image and fold the optical path. They have objective lenses that are approximately in a line with the eyepieces.
Binoculars with roof prisms have been in use to a large extent since the second half of the 20th century. Roof prism designs result in objective lenses that are almost or totally in line with the eyepieces, creating an instrument that is narrower and more compact than Porro prisms and lighter. There is also a difference in image brightness. Porro prism and Abbe–Koenig roof-prism binoculars will inherently produce a brighter image than Schmidt–Pechan roof prism binoculars of the same magnification, objective size, and optical quality, because the Schmidt-Pechan roof-prism design employs mirror-coated surfaces that reduce light transmission.
In roof prism designs, optically relevant prism angles must be correct within 2 arcseconds ( of 1 degree) to avoid seeing an obstructive double image. Maintaining such tight production tolerances for the alignment of their optical elements by laser or interference (collimation) at an affordable price point is challenging. To avoid the need for later re-collimation, the prisms are generally aligned at the factory and then permanently fixed to a metal plate. These complicating production requirements make high-quality roof prism binoculars more costly to produce than Porro prism binoculars of equivalent optical quality and until phase correction coatings were invented in 1988 Porro prism binoculars optically offered superior resolution and contrast to non-phase corrected roof prism binoculars.
In the early 2020s, the commercial offering of Schmidt-Pechan designs exceeds the Abbe-Koenig design offerings and had become the dominant optical design compared to other prism-type designs.
Alternative roof prism-based designs like the Uppendahl prism system composed of three prisms cemented together were and are commercially offered on a small scale.
Optical systems and their practical effect on binoculars housing shapes
The optical system of modern binoculars consists of three main optical assemblies:
Objective lens assembly. This is the lens assembly at the front of the binoculars. It gathers light from the object and forms an image at the image plane.
Image orientation correction assembly. This is usually a prism assembly that shortens the optical path. Without this, the image would be inverted and laterally reversed, which is inconvenient for the user.
Eyepiece lens assembly. This is the lens assembly near the user's eyes. Its function is to magnify the image.
Although different prism systems have optical design-induced advantages and disadvantages when compared, due to technological progress in fields like optical coatings and optical glass manufacturing, by the early 2020s these differences had become practically irrelevant in high-quality binoculars. At high-quality price points, similar optical performance can be achieved with every commonly applied optical system. This was not possible 20–30 years earlier, as the optical disadvantages and problems that occurred could not then be technically mitigated to practical irrelevance. Relevant differences in optical performance in the sub-high-quality price categories can still be observed with roof prism-type binoculars today, because well-executed technical problem mitigation measures and narrow manufacturing tolerances remain difficult and cost-intensive.
Optical parameters
Binoculars are usually designed for specific applications. These different designs require certain optical parameters which may be listed on the prism cover plate of the binoculars. Those parameters are:
Magnification
Given as the first number in a binocular description (e.g., 7×35, 10×50), magnification is the ratio of the focal length of the objective divided by the focal length of the eyepiece. This gives the magnifying power of binoculars (sometimes expressed as "diameters"). A magnification factor of 7, for example, produces an image 7 times larger than the original seen from that distance. The desirable amount of magnification depends upon the intended application, and in most binoculars is a permanent, non-adjustable feature of the device (zoom binoculars are the exception). Hand-held binoculars typically have magnifications ranging from 7× to 10×, so they will be less susceptible to the effects of shaking hands. A larger magnification leads to a smaller field of view and may require a tripod for image stability. Some specialized binoculars for astronomy or military use have magnifications ranging from 15× to 25×.
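Stated as a formula (a restatement of the definition above, with the 7×35 figures used as an example):

m \;=\; \frac{f_{\text{objective}}}{f_{\text{eyepiece}}},\qquad \text{e.g. a } 7\times35 \text{ binocular has } m = 7 \text{ and a 35 mm objective.}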
Objective diameter
Given as the second number in a binocular description (e.g., 7×35, 10×50), the diameter of the objective lens determines the resolution (sharpness) and how much light can be gathered to form an image. When two different binoculars have equal magnification, equal quality, and produce a sufficiently matched exit pupil (see below), the larger objective diameter produces a "brighter" and sharper image. An 8×40, then, will produce a "brighter" and sharper image than an 8×25, even though both enlarge the image an identical eight times. The larger front lenses in the 8×40 also produce wider beams of light (exit pupil) that leave the eyepieces. This makes it more comfortable to view with an 8×40 than an 8×25. A pair of 10×50 binoculars is better than a pair of 8×40 binoculars for magnification, sharpness and luminous flux. Objective diameter is usually expressed in millimeters. It is customary to categorize binoculars by the magnification × the objective diameter; e.g., 7×50. Smaller binoculars may have a diameter of as low as 22 mm; 35 mm and 50 mm are common diameters for field binoculars; astronomical binoculars have diameters ranging from 70 mm to 150 mm.
Field of view
The field of view of a pair of binoculars depends on its optical design and in general is inversely proportional to the magnifying power. It is usually notated in a linear value, such as how many feet (meters) in width will be seen at 1,000 yards (or 1,000 m), or in an angular value of how many degrees can be viewed.
Exit pupil
Binoculars concentrate the light gathered by the objective into a beam, of which the diameter, the exit pupil, is the objective diameter divided by the magnifying power. For maximum effective light-gathering and brightest image, and to maximize the sharpness, the exit pupil should at least equal the diameter of the pupil of the human eye: about 7 mm at night and about 3 mm in the daytime, decreasing with age. If the cone of light streaming out of the binoculars is larger than the pupil it is going into, any light larger than the pupil is wasted. In daytime use, the human pupil is typically dilated about 3 mm, which is about the exit pupil of a 7×21 binocular. Much larger 7×50 binoculars will produce a (7.14 mm) cone of light bigger than the pupil it is entering, and this light will, in the daytime, be wasted. An exit pupil that is too small also will present an observer with a dimmer view, since only a small portion of the light-gathering surface of the retina is used. For applications where equipment must be carried (birdwatching, hunting), users opt for much smaller (lighter) binoculars with an exit pupil that matches their expected iris diameter so they will have maximum resolution but are not carrying the weight of wasted aperture.
A larger exit pupil makes it easier to put the eye where it can receive the light; anywhere in the large exit pupil cone of light will do. This ease of placement helps avoid, especially in large field of view binoculars, vignetting, which brings to the viewer an image with its borders darkened because the light from them is partially blocked, and it means that the image can be quickly found, which is important when looking at birds or game animals that move rapidly, or for a seafarer on the deck of a pitching vessel or observing from a moving vehicle. Narrow exit pupil binoculars also may be fatiguing because the instrument must be held exactly in place in front of the eyes to provide a useful image. Finally, many people use their binoculars at dawn, at dusk, in overcast conditions, or at night, when their pupils are larger. Thus, the daytime exit pupil is not a universally desirable standard. For comfort, ease of use, and flexibility in applications, larger binoculars with larger exit pupils are satisfactory choices even if their capability is not fully used by day.
Twilight factor and relative brightness
Before innovations like anti-reflective coatings were commonly used in binoculars, their performance was often expressed mathematically. Nowadays, the practically achievable, instrumentally measurable brightness of binoculars relies on a complex mix of factors, such as the quality of the optical glass used and the various applied optical coatings, and not just the magnification and the size of the objective lenses.
The twilight factor for binoculars can be calculated by first multiplying the magnification by the objective lens diameter and then finding the square root of the result. For instance, the twilight factor of 7×50 binoculars is therefore the square root of 7 × 50: the square root of 350 = 18.71. The higher the twilight factor, mathematically, the better the resolution of the binoculars when observing under dim light conditions. Mathematically, 7×50 binoculars have exactly the same twilight factor as 70×5 ones, but 70×5 binoculars are useless during twilight and also in well-lit conditions as they would offer only a 0.14 mm exit pupil. The twilight factor without knowing the accompanying more decisive exit pupil does not permit a practical determination of the low light capability of binoculars. Ideally, the exit pupil should be at least as large as the pupil diameter of the user's dark-adapted eyes in circumstances with no extraneous light.
A primarily historic, more meaningful mathematical approach to indicate the level of clarity and brightness in binoculars was relative brightness. It is calculated by squaring the diameter of the exit pupil. In the above 7×50 binoculars example, this means that their relative brightness index is 51 (7.14 × 7.14 = 51). The higher the relative brightness index number, mathematically, the better the binoculars are suited for low light use.
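A brief sketch pulling the three figures of merit above together (plain Python; the 7×50 and 8×25 configurations are the kinds of examples already used in this section):

import math

def binocular_figures(magnification, objective_mm):
    """Exit pupil (mm), twilight factor, and relative brightness for a given magnification and objective diameter."""
    exit_pupil = objective_mm / magnification
    twilight_factor = math.sqrt(magnification * objective_mm)
    relative_brightness = exit_pupil ** 2
    return exit_pupil, twilight_factor, relative_brightness

print(binocular_figures(7, 50))  # about (7.14, 18.71, 51.0)
print(binocular_figures(8, 25))  # about (3.13, 14.14, 9.77)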
Eye relief
Eye relief is the distance from the rear eyepiece lens to the exit pupil or eye point. It is the distance the observer must position his or her eye behind the eyepiece in order to see an unvignetted image. The longer the focal length of the eyepiece, the greater the potential eye relief. Binoculars may have eye relief ranging from a few millimeters to 25 mm or more. Eye relief can be particularly important for eyeglasses wearers. The eye of an eyeglasses wearer is typically farther from the eye piece which necessitates a longer eye relief in order to avoid vignetting and, in the extreme cases, to conserve the entire field of view. Binoculars with short eye relief can also be hard to use in instances where it is difficult to hold them steady.
Eyeglasses wearers who intend to wear their glasses when using binoculars should look for binoculars with an eye relief that is long enough so that their eyes are not behind the point of focus (also called the eyepoint). Else, their glasses will occupy the space where their eyes should be. Generally, an eye relief over 16 mm should be adequate for any eyeglass wearer. However, if glasses frames are thicker and so significantly protrude from the face, an eye relief over 17 mm should be considered. Eyeglasses wearers should also look for binoculars with twist-up eye cups that ideally have multiple settings, so they can be partially or fully retracted to adjust eye relief to individual ergonomic preferences.
Close focus distance
Close focus distance is the closest point that the binocular can focus on. This distance varies from about , depending upon the design of the binoculars. If the close focus distance is short with respect to the magnification, the binocular can be used also to see particulars not visible to the naked eye.
Eyepieces
Binocular eyepieces usually consist of three or more lens elements in two or more groups. The lens furthest from the viewer's eye is called the field lens or objective lens and that closest to the eye the eye lens or ocular lens. The most common Kellner configuration is that invented in 1849 by Carl Kellner. In this arrangement, the eye lens is a plano-concave/ double convex achromatic doublet (the flat part of the former facing the eye) and the field lens is a double-convex singlet. A reversed Kellner eyepiece was developed in 1975 and in it the field lens is a double concave/ double convex achromatic doublet and the eye lens is a double convex singlet. The reverse Kellner provides 50% more eye relief and works better with small focal ratios as well as having a slightly wider field.
Wide field binoculars typically utilize some kind of Erfle configuration, patented in 1921. These have five or six elements in three groups. The groups may be two achromatic doublets with a double convex singlet between them or may all be achromatic doublets. These eyepieces tend not to perform as well as Kellner eyepieces at high power because they suffer from astigmatism and ghost images. However they have large eye lenses, excellent eye relief, and are comfortable to use at lower powers.
Field flattener lens
High-end binoculars often incorporate a field flattener lens in the eyepiece behind their prism configuration, designed to improve image sharpness and reduce image distortion at the outer regions of the field of view.
Mechanical design
Focus and adjustment
Binoculars have a focusing arrangement which changes the distance between eyepiece and objective lenses or internally mounted lens elements. Normally there are two different arrangements used to provide focus, "independent focus" and "central focusing":
Independent focusing is an arrangement where the two telescope tubes are focused independently by adjusting each eyepiece. Binoculars designed for harsh environmental conditions and heavy field use, such as military or marine applications, traditionally have used independent focusing.
Central focusing is an arrangement which involves rotation of a central focusing wheel to adjust both telescope tubes together. In addition, one of the two eyepieces can be further adjusted to compensate for differences between the viewer's eyes (usually by rotating the eyepiece in its mount). Because the focal change effected by the adjustable eyepiece can be measured in the customary unit of refractive power, the dioptre, the adjustable eyepiece itself is often called a dioptre. Once this adjustment has been made for a given viewer, the binoculars can be refocused on an object at a different distance by using the focusing wheel to adjust both tubes together without eyepiece readjustment. Central focusing binoculars can be further subdivided into:
External focusing, which focuses binoculars by moving the eyepieces, where the volume of the binoculars always changes. During this process, external air and also small dust particles and moisture can be drawn into or pressed out of the binoculars. It is hard to seal or waterproof such systems, and in case the eyepieces are moved by a central focuser shaft and external eyepiece arms bridge construction, this construction can (accidentally) be bent or deformed, which can result in disabling misalignment.
Internal focusing, which focuses binoculars by moving internal mounted optical lenses located between the objective lens group and the prism assembly – or rarely located between the prism assembly and eyepiece lens assembly – within the housing without changing the volume of the binoculars. The addition of a focusing lens reduces the light transmission of the optical system contained in the telescope tube somewhat. Internal focusing is generally considered the mechanically more robust central focusing solution and with the help of an appropriate seal like O-rings air and moisture ingress can be prevented, to make binoculars fully waterproof.
With increasing magnification, the depth of field – the distance between the nearest and the farthest objects that are in acceptably sharp focus in an image – decreases. The depth of field decreases quadratically with the magnification, so compared to 7× binoculars, 10× binoculars offer about half (7² ÷ 10² = 0.49) the depth of field. However, not related to the binoculars' optical system, the practical depth of field or depth of acceptable view perceived by the user also depends on the accommodation ability of the user's eyes (accommodation ability varies from person to person and decreases significantly with age) and on the light-condition-dependent effective pupil size or diameter of the user's eyes.
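Expressed as a ratio (this simply restates the relation described above):

\frac{\text{depth of field at }10\times}{\text{depth of field at }7\times} \;\approx\; \frac{7^{2}}{10^{2}} \;=\; 0.49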
There are "focus-free" or "fixed-focus" binoculars that have no focusing mechanism other than the eyepiece adjustments that are meant to be set for the user's eyes and left fixed. These are considered to be compromise designs, suited for convenience, but not well suited for work that falls outside their designed hyperfocal distance range (for hand held binoculars generally from about to infinity without performing eyepiece adjustments for a given viewer).
Binoculars can be generally used without eyeglasses by myopic (near-sighted) or hyperopic (far-sighted) users simply by adjusting the focus a little farther. Most manufacturers leave a little extra available focal-range beyond the infinity-stop/setting to account for this when focusing for infinity. People with severe astigmatism, however, will still need to use their glasses while using binoculars.
Some binoculars have adjustable magnification, zoom binoculars, such as 7-21×50 intended to give the user the flexibility of having a single pair of binoculars with a wide range of magnifications, usually by moving a "zoom" lever. This is accomplished by a complex series of adjusting lenses similar to a zoom camera lens. These designs are noted to be a compromise and even a gimmick since they add bulk, complexity and fragility to the binocular. The complex optical path also leads to a narrow field of view and a large drop in brightness at high zoom. Models also have to match the magnification for both eyes throughout the zoom range and hold collimation to avoid eye strain and fatigue. These almost always perform much better at the low power setting than they do at the higher settings. This is natural, since the front objective cannot enlarge to let in more light as the power is increased, so the view gets dimmer. At 7×, the 50mm front objective provides a 7.14 mm exit pupil, but at 21×, the same front objective provides only a 2.38 mm exit pupil. Also, the optical quality of a zoom binocular at any given power is inferior to that of a fixed power binocular of that power.
Interpupillary distance
Most modern binoculars are also adjustable via a hinged construction that enables the distance between the two telescope halves to be adjusted to accommodate viewers with different eye separation or "interpupillary distance (IPD)" (the distance measured in millimeters between the centers of the pupils of the eyes). Most are optimized for the interpupillary distance (typically about 63 mm) for adults. Interpupillary distance varies with respect to age, gender and race. The binoculars industry has to take IPD variance (most adults have IPDs in the 50–75 mm range) and its extrema into account, because stereoscopic optical products need to be able to cope with many possible users, including those with the smallest and largest IPDs.
Children and adults with narrow IPDs can find that the adjustment range of the binocular barrels cannot be set narrow enough to match the distance between the centers of their pupils, impairing the use of some binoculars. Adults with average or wide IPDs generally experience no eye-separation adjustment problems, but straight-barreled roof prism binoculars with objectives over 60 mm in diameter can be dimensionally difficult to adjust correctly for adults with relatively narrow IPDs. Anatomical conditions like hypertelorism and hypotelorism affect IPD and, at extreme IPDs, can make stereoscopic optical products like binoculars impractical to use.
Alignment
The two telescopes in binoculars are aligned in parallel (collimated), to produce a single circular, apparently three-dimensional, image. Misalignment will cause the binoculars to produce a double image. Even slight misalignment will cause vague discomfort and visual fatigue as the brain tries to combine the skewed images.
Alignment is performed by small movements to the prisms, by adjusting an internal support cell or by turning external set screws, or by adjusting the position of the objective via eccentric rings built into the objective cell.
Unconditional alignment (3-axis collimation, meaning both optical axes are aligned parallel with the axis of the hinge used to select various interpupillary distance settings) requires specialized equipment. It is usually done by a professional, although the externally mounted adjustment features can usually be accessed by the end user.
Conditional alignment ignores the third axis (the hinge) in the alignment process. Such conditional alignment amounts to a 2-axis pseudo-collimation and will only be serviceable within a small range of interpupillary distance settings, as conditionally aligned binoculars are not collimated across the full interpupillary distance setting range.
Image stability
Some binoculars use image-stabilization technology to reduce shake at higher magnifications. This is done by having a gyroscope move part of the instrument, or by powered mechanisms driven by gyroscopic or inertial detectors, or via a mount designed to oppose and damp the effect of shaking movements. Stabilization may be enabled or disabled by the user as required. These techniques allow binoculars up to 20× to be hand-held, and much improve the image stability of lower-power instruments. There are some disadvantages: the image may not be quite as good as that of the best unstabilized binoculars when tripod-mounted, and stabilized binoculars also tend to be more expensive and heavier than similarly specified non-stabilized binoculars.
Housing
Binocular housings can be made of various structural materials. Older binocular barrels and hinge bridges were often made of brass. Later, steel and relatively light metals like aluminum and magnesium alloys were used, as well as polymers like (fibre-reinforced) polycarbonate and acrylonitrile butadiene styrene. The housing can be externally rubber armored as an outer covering to provide a non-slip gripping surface, absorption of undesired sounds and additional cushioning/protection against dents, scrapes, bumps and minor impacts.
Optical coatings
Because a typical binocular has 6 to 10 optical elements with special characteristics and up to 20 atmosphere-to-glass surfaces, binocular manufacturers use different types of optical coatings for technical reasons and to improve the image they produce.
Lens and prism optical coatings on binoculars can increase light transmission, minimize detrimental reflections and interference effects, optimize beneficial reflections, repel water and grease and even protect the lens from scratches. Modern optical coatings are composed of a combination of very thin layers of materials such as oxides, metals, or rare earth materials. The performance of an optical coating is dependent on the number of layers, manipulating their exact thickness and composition, and the refractive index difference between them. These coatings have become a key technology in the field of optics and manufacturers often have their own designations for their optical coatings. The various lens and prism optical coatings used in high-quality 21st century binoculars, when added together, can total about 200 (often superimposed) coating layers.
Anti-reflective
Anti-reflective interference coatings reduce the light lost by reflection at every optical surface. Reducing reflection via anti-reflective coatings also reduces the amount of "lost" light bouncing around inside the binocular, which would otherwise make the image appear hazy (low contrast). A pair of binoculars with good optical coatings may yield a brighter image than uncoated binoculars with a larger objective lens, on account of superior light transmission through the assembly. The first transparent interference-based coating, Transparentbelag (T), used by Zeiss, was invented in 1935 by Olexander Smakula. A classic lens-coating material is magnesium fluoride, which reduces reflected light from about 4% to 1.5% per surface. With 16 air-to-glass surface passes, a 4% reflection loss per surface theoretically means a 52% light transmission (0.96¹⁶ ≈ 0.52), while a 1.5% loss gives a much better 78.5% light transmission (0.985¹⁶ ≈ 0.785).

Reflection can be further reduced over a wider range of wavelengths and angles by using several superimposed layers with different refractive indices. The anti-reflective multi-coating Transparentbelag* (T*) used by Zeiss in the late 1970s consisted of six superimposed layers. In general, the outer coating layers have slightly lower index of refraction values, and the layer thickness is adapted to the range of wavelengths in the visible spectrum to promote optimal destructive interference in the beams reflected from the interfaces and constructive interference in the corresponding transmitted beams. There is no simple formula for the optimal layer thickness for a given choice of materials, so these parameters are determined with the help of simulation programs.

Depending on the optical properties of the lenses used and the intended primary use of the binoculars, different coatings are preferred, to optimize light transmission for the relevant part of the human eye's luminous efficiency function. Maximal light transmission around wavelengths of 555 nm (green) is important for obtaining optimal photopic vision, using the eye's cone cells, for observation in well-lit conditions. Maximal light transmission around wavelengths of 498 nm (cyan) is important for obtaining optimal scotopic vision, using the eye's rod cells, for observation in low light conditions. As a result, effective modern anti-reflective lens coatings consist of complex multi-layers and reflect only 0.25% or less of the light, to yield an image with maximum brightness and natural colors. These allow high-quality 21st century binoculars to achieve light transmission values, measured at the ocular (eye) lens, of over 90% in low light conditions. Depending on the coating, the character of the image seen in the binoculars under normal daylight can look either "warmer" or "colder" and appear with either higher or lower contrast. Subject to the application, the coating is also optimized for maximum color fidelity through the visible spectrum, for example in the case of lenses specially designed for bird watching.
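To make the surface-count arithmetic above explicit, here is a minimal sketch that simply multiplies out per-surface losses. It ignores absorption and internal prism losses, which real instruments also have; the 0.25% figure is the multi-coating value quoted above.

```python
def transmission(per_surface_loss: float, surfaces: int) -> float:
    """Fraction of light transmitted after N air-to-glass passes, assuming the
    same reflection loss at every surface and no absorption."""
    return (1.0 - per_surface_loss) ** surfaces

print(f"uncoated (4.0% loss), 16 surfaces:      {transmission(0.040, 16):.1%}")   # ~52%
print(f"MgF2 coated (1.5% loss), 16 surfaces:   {transmission(0.015, 16):.1%}")   # ~79%
print(f"multi-coated (0.25% loss), 16 surfaces: {transmission(0.0025, 16):.1%}")  # ~96%
```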
A common application technique is physical vapor deposition, which includes evaporative deposition, of one or more superimposed anti-reflective coating layers, making it a complex production process.
Phase correction
In binoculars with roof prisms the light path is split into two paths that reflect on either side of the roof prism ridge. One half of the light reflects from roof surface 1 to roof surface 2. The other half of the light reflects from roof surface 2 to roof surface 1. If the roof faces are uncoated, the mechanism of reflection is Total Internal Reflection (TIR). In TIR, light polarized in the plane of incidence (p-polarized) and light polarized orthogonal to the plane of incidence (s-polarized) experience different phase shifts. As a consequence, linearly polarized light emerges from a roof prism elliptically polarized. Furthermore, the state of elliptical polarization of the two paths through the prism is different. When the two paths recombine on the retina (or a detector) there is interference between light from the two paths causing a distortion of the Point Spread Function and a deterioration of the image. Resolution and contrast significantly suffer. These unwanted interference effects can be suppressed by vapor depositing a special dielectric coating known as a phase-correction coating or P-coating on the roof surfaces of the roof prism. To approximately correct a roof prism for polychromatic light several phase-correction coating layers are superimposed, since every layer is wavelength and angle of incidence specific.
The P-coating was developed in 1988 by Adolf Weyrauch at Carl Zeiss.
Other manufacturers soon followed, and since then phase-correction coatings have been used across the board in medium and high-quality roof prism binoculars. This coating suppresses the difference in phase shift between s- and p-polarization so that both paths have the same polarization and no interference degrades the image. In this way, since the 1990s, roof prism binoculars have also achieved resolution values that were previously only achievable with Porro prisms. The presence of a phase-correction coating can be checked on unopened binoculars using two polarization filters. Dielectric phase-correction prism coatings are applied in a vacuum chamber with perhaps thirty or more superimposed vapor-deposited coating layers, making it a complex production process.
Binoculars using either a Schmidt–Pechan roof prism, Abbe–Koenig roof prism or an Uppendahl roof prism benefit from phase coatings that compensate for a loss of resolution and contrast caused by the interference effects that occur in untreated roof prisms. Porro prism and Perger prism binoculars do not split beams and therefore they do not require any phase coatings.
Metallic mirror
In binoculars with Schmidt–Pechan or Uppendahl roof prisms, mirror coatings are added to some surfaces of the roof prism because the light is incident at one of the prism's glass-air boundaries at an angle less than the critical angle so total internal reflection does not occur. Without a mirror coating most of that light would be lost. Roof prism aluminum mirror coating (reflectivity of 87% to 93%) or silver mirror coating (reflectivity of 95% to 98%) is used.
In older designs silver mirror coatings were used but these coatings oxidized and lost reflectivity over time in unsealed binoculars. Aluminum mirror coatings were used in later unsealed designs because they did not tarnish even though they have a lower reflectivity than silver. Using vacuum-vaporization technology, modern designs use either aluminum, enhanced aluminum (consisting of aluminum overcoated with a multilayer dielectric film) or silver. Silver is used in modern high-quality designs which are sealed and filled with nitrogen or argon to provide an inert atmosphere so that the silver mirror coating does not tarnish.
Porro prism and Perger prism binoculars and roof prism binoculars using the Abbe–Koenig roof prism configuration do not use mirror coatings because these prisms reflect with 100% reflectivity using total internal reflection in the prism rather than requiring a (metallic) mirror coating.
Dielectric mirror
Dielectric coatings are used in Schmidt–Pechan and Uppendahl roof prisms to cause the prism surfaces to act as a dielectric mirror. This coating was introduced in 2004 in Zeiss Victory FL binoculars featuring Schmidt–Pechan prisms. Other manufacturers soon followed, and since then dielectric coatings have been used across the board in medium and high-quality Schmidt–Pechan and Uppendahl roof prism binoculars. The non-metallic dielectric reflective coating is formed from several layers of alternating high and low refractive index materials deposited on the prism's reflective surfaces. The manufacturing techniques for dielectric mirrors are based on thin-film deposition methods. A common application technique is physical vapor deposition, which includes evaporative deposition, with perhaps seventy or more superimposed vapor-deposited coating layers, making it a complex production process. This multilayer coating increases reflectivity from the prism surfaces by acting as a distributed Bragg reflector. A well-designed multilayer dielectric coating can provide a reflectivity of over 99% across the visible light spectrum, an improvement over either an aluminium or a silver mirror coating.
Porro prism and Perger prism binoculars and roof prism binoculars using the Abbe–Koenig roof prism do not use dielectric coatings because these prisms reflect with 100% reflectivity using total internal reflection in the prism rather than requiring a (dielectric) mirror coating.
Terms
All binoculars
The presence of any coatings is typically denoted on binoculars by the following terms:
coated optics: one or more surfaces are anti-reflective coated with a single-layer coating.
fully coated: all air-to-glass surfaces are anti-reflective coated with a single-layer coating. Plastic lenses, however, if used, may not be coated.
multi-coated: one or more surfaces have anti-reflective multi-layer coatings.
fully multi-coated: all air-to-glass surfaces are anti-reflective multi-layer coated.
The presence of high-transmittance optical crown glass, offering a relatively low refractive index (≈1.52) and low dispersion (Abbe numbers around 60), is typically denoted on binoculars by the following terms:
BK7 (Schott designates it as 517642. The first three digits designate its refractive index [1.517] and the last three designate its Abbe number [64.2]. Its critical angle is 41.2°.)
BaK4 (Schott designates it as 569560. The first three digits designate its refractive index [1.569] and the last three designate its Abbe number [56.0]. Its critical angle is 39.6°.)
Roof prisms only
phase-coated or P-coating: the roof prism has a phase-correcting coating
aluminium-coated: the roof prism mirrors are coated with an aluminium coating (the default if a mirror coating isn't mentioned).
silver-coated: the roof prism mirrors are coated with a silver coating
dielectric-coated: the roof prism mirrors are coated with a dielectric coating
Accessories
Common accessories for binoculars are:
neck and shoulder straps for carrying
binocular harnesses (sometimes combined with an integrated field case) to distribute weight evenly for prolonged carrying
field carrying cases/side bags
binoculars storage/travel cases
rainguards for protecting the outer lenses of the eyepieces
(tethered) lens caps for protecting the outer lenses of the objectives
cleaning kits to carefully remove dirt from lenses and other surfaces
tripod adapters
Applications
General use
Hand-held binoculars range from small 3 × 10 Galilean opera glasses, used in theaters, to glasses with 7 to 12 times magnification and 30 to 50 mm diameter objectives for typical outdoor use.
Compact or pocket binoculars are small light binoculars suitable for daytime use. Most compact binoculars feature magnifications of 7× to 10×, and objective diameter sizes of a relatively modest 20 mm to 25 mm, resulting in small exit pupil sizes limiting low light suitability. Roof prism designs tend to be narrower and more compact than equivalent Porro prism designs. Thus, compact binoculars are mostly roof prism designs. The telescope tubes of compact binoculars can often be folded closely to each other to radically reduce the binocular's volume when not in use, for easy carriage and storage.
Many tourist attractions have installed pedestal-mounted, coin-operated binocular tower viewers to allow visitors to obtain a closer view of the attraction.
Land surveys and geographic data collection
Although technology has largely superseded binoculars for geographic data collection, they were historically advanced tools used by geographers and other geoscientists. Field glasses can still provide a visual aid when surveying large areas.
Bird watching
Birdwatching is a very popular hobby among nature and animal lovers; binoculars are their most basic tool because most human eyes cannot resolve sufficient detail to fully appreciate and/or study small birds. Viewing birds in flight well requires the ability to acquire rapidly moving objects quickly, together with good depth of field. Typically, binoculars with a magnification of 8× to 10× are used, though many manufacturers produce models with 7× magnification for a wider field of view and increased depth of field. The other main consideration for birdwatching binoculars is the size of the objective that collects light. A larger (e.g. 40–45 mm) objective works better in low light and for seeing into foliage, but also makes for a heavier binocular than a 30–35 mm objective. Weight may not seem a primary consideration when first hefting a pair of binoculars, but birdwatching involves a lot of holding up the binoculars while standing in one place. Careful shopping is advised by the birdwatching community.
Hunting
Hunters commonly use binoculars in the field as a way to observe distant game animals. Hunters most commonly use binoculars of about 8× magnification with 40–45 mm objectives, to be able to find and observe game in low light conditions. European manufacturers have produced, and still produce, 7×42 binoculars offering good low light performance without being too bulky for mobile use such as extended carrying and stalking, as well as much bigger and bulkier 8×56 and 9×63 low-light binoculars optically optimized for excellent low light performance during more stationary hunting at dusk and night. For hunting binoculars optimized for observation in twilight, coatings are preferred that maximize light transmission in the wavelength range around 460–540 nm.
Range finding
Some binoculars have a range finding reticle (scale) superimposed upon the view. This scale allows the distance to the object to be estimated if the object's height is known (or estimable). The common mariner 7×50 binoculars have these scales with the angle between marks equal to 5 mil. One mil is equivalent to the angle between the top and bottom of an object one meter in height at a distance of 1000 meters.
Therefore, to estimate the distance to an object of known height, the formula is:
D = OH × 1000 / Mil
where:
D is the distance to the object in meters,
OH is the known object height in meters,
Mil is the angular height of the object in mils.
With the typical 5 mil scale (each mark is 5 mil), a lighthouse that is 3 marks high and known to be 120 meters tall is 8000 meters distant.
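The same calculation can be written as a small helper function. This is a minimal sketch; the 5 mil-per-mark scale of the typical marine 7×50 reticle is assumed, and the lighthouse example above is reused as a check.

```python
def distance_m(object_height_m: float, mils: float) -> float:
    """Distance in meters from a known height (m) and a measured angular height
    (mils), using the approximation 1 mil ~ 1 m of height at 1000 m."""
    return object_height_m * 1000.0 / mils

# A 120 m lighthouse spanning 3 reticle marks of 5 mil each (15 mil total):
print(distance_m(120, 3 * 5))  # -> 8000.0 meters
```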
Military
Binoculars have a long history of military use. Galilean designs were widely used up to the end of the 19th century, when they gave way to Porro prism types. Binoculars constructed for general military use tend to be more rugged than their civilian counterparts. They generally avoid fragile center focus arrangements in favor of independent focus, which also makes for easier, more effective weatherproofing. Prism sets in military binoculars may have redundant aluminized coatings to guarantee they do not lose their reflective qualities if they get wet.
One variant form was called "trench binoculars", a combination of binoculars and periscope, often used for artillery spotting purposes. It projected only a few inches above the parapet, thus keeping the viewer's head safely in the trench.
Military binoculars can be, and have been, used as measuring and aiming devices, and can feature filters and (illuminated) reticles.
Military binoculars of the Cold War era were sometimes fitted with passive sensors that detected active IR emissions, while modern ones usually are fitted with filters blocking laser beams used as weapons. Further, binoculars designed for military usage may include a stadiametric reticle in one eyepiece in order to facilitate range estimation.
Modern binoculars designed for military usage can also feature laser rangefinders, compasses, and data exchange interfaces to send measurements to other peripheral devices.
Very large binocular naval rangefinders (up to 15 meters separation of the two objective lenses, weight 10 tons, for ranging World War II naval gun targets 25 km away) have been used, although late-20th century radar and laser range finding technology made this application mostly redundant.
Marine
There are binoculars designed specifically for civilian and military use under harsh environmental conditions at sea. Hand held models will be 5× to 8× magnification, but with very large prism sets combined with eyepieces designed to give generous eye relief. This optical combination prevents the image vignetting or going dark when the binoculars are pitching and vibrating relative to the viewer's eyes due to a vessel's motion.
Marine binoculars often contain one or more features to aid in navigation on ships and boats.
Hand held marine binoculars typically feature:
Sealed interior: O-rings or other seals prevent air and moisture ingress.
Nitrogen or argon filled interior: the interior is filled with 'dry' gas to prevent internal fogging/tarnishing of the optical surfaces. As fungi cannot grow in an inert or noble gas atmosphere, it also prevents lens fungus formation.
Independent focusing: this method aids in providing a durable, sealed interior.
Reticle scale: a navigational aid which uses a horizon line and a vertical scale for measuring the distance of objects of known width or height – sometimes an important navigational aid.
Compass: a compass bearing projected into the image. Damping helps with reading the compass bearing on a moving ship or boat.
Floating strap: some marine binoculars float on water, to prevent sinking. Marine binoculars that do not float are sometimes supplied with, or fitted by the user with, an aftermarket strap that functions as a flotation device.
Mariners also often deem an adequate low light performance of the optical combination important, explaining the many 7×50 hand held marine binoculars offerings featuring a large 7.14 mm exit pupil, which corresponds to the average pupil size of a youthful dark-adapted human eye in circumstances with no extraneous light.
Civilian and military ships can also use large, high-magnification binocular models with large objectives in fixed mountings.
Astronomical
Binoculars are widely used by amateur astronomers; their wide field of view makes them useful for comet and supernova seeking (giant binoculars) and general observation (portable binoculars). Binoculars specifically geared towards astronomical viewing will have larger aperture objectives (in the 70 mm or 80 mm range) because the diameter of the objective lens increases the total amount of light captured, and therefore determines the faintest star that can be observed. Binoculars designed specifically for astronomical viewing (often 80 mm and larger) are sometimes designed without prisms in order to allow maximum light transmission. Such binoculars also usually have changeable eyepieces to vary magnification. Binoculars with high magnification and heavy weight usually require some sort of mount to stabilize the image. A magnification of 10x is generally considered the practical limit for observation with handheld binoculars. Binoculars more powerful than 15×70 require support of some type. Much larger binoculars have been made by amateur telescope makers, essentially using two refracting or reflecting astronomical telescopes.
Of particular relevance for low-light and astronomical viewing is the ratio between magnifying power and objective lens diameter. A lower magnification facilitates a larger field of view, which is useful for viewing the Milky Way and large nebulous objects (referred to as deep sky objects) such as nebulae and galaxies. The large exit pupil [objective diameter (mm) ÷ power; typically 7.14 mm for 7×50] of these devices means that a portion of the gathered light cannot be used by individuals whose pupils do not sufficiently dilate. For example, the pupils of those over 50 rarely dilate over 5 mm wide. The large exit pupil also collects more light from the background sky, effectively decreasing contrast, making the detection of faint objects more difficult except perhaps in remote locations with negligible light pollution. Many astronomical objects of 8th magnitude or brighter, such as the star clusters, nebulae and galaxies listed in the Messier Catalog, are readily viewed in hand-held binoculars in the 35 to 40 mm range, such as are found in many households for birding, hunting, and viewing sports events. For observing smaller star clusters, nebulae, and galaxies, binocular magnification is an important factor for visibility because these objects appear tiny at typical binocular magnifications.
Some open clusters, such as the bright double cluster (NGC 869 and NGC 884) in the constellation Perseus, and globular clusters, such as M13 in Hercules, are easy to spot. Among nebulae, M17 in Sagittarius and the North America Nebula (NGC 7000) in Cygnus are also readily viewed. Binoculars can show a few of the wider-split binary stars such as Albireo in the constellation Cygnus.
A number of Solar System objects that are mostly to completely invisible to the human eye are reasonably detectable with medium-size binoculars, including larger craters on the Moon; the dim outer planets Uranus and Neptune; the inner "minor planets" Ceres, Vesta and Pallas; Saturn's largest moon Titan; and the Galilean moons of Jupiter. Although visible unaided in pollution-free skies, Uranus and Vesta require binoculars for easy detection. 10×50 binoculars are limited to an apparent magnitude of +9.5 to +11 depending on sky conditions and observer experience. Asteroids like Interamnia, Davida, Europa and, unless under exceptional conditions, Hygiea, are too faint to be seen with commonly sold binoculars. Likewise too faint to be seen with most binoculars are the planetary moons, except the Galileans and Titan, and the dwarf planets Pluto and Eris. Other difficult binocular targets include the phases of Venus and the rings of Saturn. Only binoculars with very high magnification, 20x or higher, are capable of discerning Saturn's rings to a recognizable extent. High-power binoculars can sometimes show one or two cloud belts on the disk of Jupiter, if optics and observing conditions are sufficiently good.
Binoculars can also aid in observation of human-made space objects, such as spotting satellites in the sky as they pass.
List of binocular manufacturers
There are many companies that manufacture binoculars, both past and present. They include:
Barr and Stroud (UK) – sold binoculars commercially and was a primary supplier to the Royal Navy in WWII. The new range of Barr & Stroud binoculars is currently made in China (as of Nov. 2011) and distributed by Optical Vision Ltd.
Bausch & Lomb (US) – has not made binoculars since 1976, when they licensed their name to Bushnell, Inc., who made binoculars under the Bausch & Lomb name until the license expired, and was not renewed, in 2005.
BELOMO (Belarus) – both porro prism and roof prism models manufactured.
Bresser (Germany)
Bushnell Corporation (US)
Blaser (Germany) – premium binoculars
Canon Inc (Japan) – I.S. series: porro variants
Celestron (US).
Docter Optics (Germany) – Nobilem series: porro prisms
Fujinon (Japan) – FMTSX, FMTSX-2, MTSX series: porro
I.O.R. (Romania)
Kazan Optical-Mechanical Plant (KOMZ) (Russia) – manufactures a variety of porro prism models, sold under the trade name Baigish
Kowa (Japan)
Krasnogorsky Zavod (Russia) – both porro prism and roof prism models, models with optical stabilizers. The factory is part of the Shvabe Holding Group
Leica Camera (Germany) – Noctivid, Ultravid, Duovid, Geovid, Trinovid: most are roof prism, with a few high end porro prism examples
Leupold & Stevens, Inc (US)
Meade Instruments (US) – Glacier (roof prism), TravelView (porro), CaptureView (folding roof prism) and Astro Series (roof prism). Also sells under the name Coronado.
Meopta (Czech Republic) – Meostar B1 (roof prism)
Minox (Germany)
Nikon (Japan) – EDG, High Grade, Monarch, RAII, and Spotter series: roof prism; Prostar, Superior E, E, and Action EX series: porro; Prostaff series, Aculon series
Olympus Corporation (Japan)
Pentax (Japan) – DCFED/SP/XP series: roof prism; UCF series: inverted porro; PCFV/WP/XCF series: porro
(Germany) – both porro prism and roof prism models
Steiner-Optik (Germany)
PRAKTICA (UK) for birdwatching, sightseeing, hiking, camping
Swarovski Optik (Austria)
Takahashi Seisakusho (Japan)
Tasco (US)
Vixen (telescopes) (Japan) – Apex/Apex Pro: roof prism; Ultima: porro
Vivitar (US)
Vortex Optics (US)
Zeiss (Germany) – FL, Victory, Conquest: roof prism; 7×50 BGAT/T: porro, 15×60 BGA/T: porro, discontinued
| Technology | Optical instruments | null |
86061 | https://en.wikipedia.org/wiki/Cygnus%20X-1 | Cygnus X-1 | Cygnus X-1 (abbreviated Cyg X-1) is a galactic X-ray source in the constellation Cygnus and was the first such source widely accepted to be a black hole. It was discovered in 1964 during a rocket flight and is one of the strongest X-ray sources detectable from Earth, producing a peak X-ray flux density of (). It remains among the most studied astronomical objects in its class. The compact object is now estimated to have a mass about 21.2 times the mass of the Sun and has been shown to be too small to be any known kind of normal star or other likely object besides a black hole. If so, the radius of its event horizon has "as upper bound to the linear dimension of the source region" of occasional X-ray bursts lasting only for about 1 ms.
Cygnus X-1 belongs to a high-mass X-ray binary system, located about 2.22 kiloparsecs from the Sun, that includes a blue supergiant variable star designated HDE 226868, which it orbits at about 0.2 AU, or 20% of the distance from Earth to the Sun. A stellar wind from the star provides material for an accretion disk around the X-ray source. Matter in the inner disk is heated to millions of degrees, generating the observed X-rays. A pair of relativistic jets, arranged perpendicularly to the disk, are carrying part of the energy of the infalling material away into interstellar space.
This system may belong to a stellar association called Cygnus OB3, which would mean that Cygnus X-1 is about 5 million years old and formed from a progenitor star that had more than . The majority of the star's mass was shed, most likely as a stellar wind. If this star had then exploded as a supernova, the resulting force would most likely have ejected the remnant from the system. Hence the star may have instead collapsed directly into a black hole.
Cygnus X-1 was the subject of a friendly scientific wager between physicists Stephen Hawking and Kip Thorne in 1975, with Hawking—betting that it was not a black hole—hoping to lose. Hawking conceded the bet in 1990 after observational data had strengthened the case that there was indeed a black hole in the system. , this hypothesis lacked direct empirical evidence but was generally accepted based on indirect evidence.
Discovery and observation
Observation of X-ray emissions allows astronomers to study celestial phenomena involving gas with temperatures in the millions of degrees. However, because X-ray emissions are blocked by Earth's atmosphere, observation of celestial X-ray sources is not possible without lifting instruments to altitudes where the X-rays can penetrate. Cygnus X-1 was discovered using X-ray instruments that were carried aloft by a sounding rocket launched from White Sands Missile Range in New Mexico. As part of an ongoing effort to map these sources, a survey was conducted in 1964 using two Aerobee suborbital rockets. The rockets carried Geiger counters to measure X-ray emission in wavelength range 1– across an 8.4° section of the sky. These instruments swept across the sky as the rockets rotated, producing a map of closely spaced scans.
As a result of these surveys, eight new sources of cosmic X-rays were discovered, including Cyg XR-1 (later Cyg X-1) in the constellation Cygnus. The celestial coordinates of this source were estimated as right ascension 19h53m and declination 34.6°. It was not associated with any especially prominent radio or optical source at that position.
Seeing a need for longer-duration studies, in 1963 Riccardo Giacconi and Herb Gursky proposed the first orbital satellite to study X-ray sources. NASA launched their Uhuru satellite in 1970, which led to the discovery of 300 new X-ray sources. Extended Uhuru observations of Cygnus X-1 showed fluctuations in the X-ray intensity that occur several times a second. This rapid variation meant that the X-ray generation must occur over a compact region no larger than ~ (roughly the size of Jupiter), as the speed of light restricts communication between more distant regions.
In April–May 1971, Luc Braes and George K. Miley from Leiden Observatory, and independently Robert M. Hjellming and Campbell Wade at the National Radio Astronomy Observatory, detected radio emission from Cygnus X-1, and their accurate radio position pinpointed the X-ray source to the star AGK2 +35 1910 = HDE 226868. On the celestial sphere, this star lies about half a degree from the 4th-magnitude star Eta Cygni. It is a supergiant star that is by itself incapable of emitting the observed quantities of X-rays. Hence, the star must have a companion that could heat gas to the millions of degrees needed to produce the radiation source for Cygnus X-1.
Louise Webster and Paul Murdin, at the Royal Greenwich Observatory, and Charles Thomas Bolton, working independently at the University of Toronto's David Dunlap Observatory, announced the discovery of a massive hidden companion to HDE 226868 in 1972. Measurements of the Doppler shift of the star's spectrum demonstrated the companion's presence and allowed its mass to be estimated from the orbital parameters. Based on the high predicted mass of the object, they surmised that it may be a black hole, as the largest possible neutron star cannot exceed three times the mass of the Sun.
With further observations strengthening the evidence, by the end of 1973 the astronomical community generally conceded that Cygnus X-1 was most likely a black hole. More precise measurements of Cygnus X-1 demonstrated variability down to a single millisecond. This interval is consistent with turbulence in a disk of accreted matter surrounding a black hole—the accretion disk. X-ray bursts that last for about a third of a second match the expected time frame of matter falling toward a black hole.
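The size argument used here, and in the Uhuru discussion above, rests on light-travel time: a source cannot vary coherently on timescales shorter than the time light takes to cross it. The sketch below is only a back-of-the-envelope illustration of that estimate, using the variability timescales quoted in the text.

```python
C = 299_792_458.0  # speed of light, m/s

def max_source_size_km(variability_s: float) -> float:
    """Upper bound on the emitting region's size: changes cannot be coordinated
    across a region faster than light can cross it."""
    return C * variability_s / 1000.0

print(f"{max_source_size_km(0.3):,.0f} km")    # ~90,000 km, of order Jupiter's size
print(f"{max_source_size_km(0.001):,.0f} km")  # ~300 km for millisecond bursts
```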
Cygnus X-1 has since been studied extensively using observations by orbiting and ground-based instruments. The similarities between the emissions of X-ray binaries such as HDE 226868/Cygnus X-1 and active galactic nuclei suggests a common mechanism of energy generation involving a black hole, an orbiting accretion disk and associated jets. For this reason, Cygnus X-1 is identified among a class of objects called microquasars; an analog of the quasars, or quasi-stellar radio sources, now known to be distant active galactic nuclei. Scientific studies of binary systems such as HDE 226868/Cygnus X-1 may lead to further insights into the mechanics of active galaxies.
Binary system
The compact object and blue supergiant star form a binary system in which they orbit around their center of mass every 5.599829 days. From the perspective of Earth, the compact object never goes behind the other star; in other words, the system does not eclipse. However, the inclination of the orbital plane to the line of sight from Earth remains uncertain, with predictions ranging from 27° to 65°. A 2007 study estimated the inclination as , which would mean that the semi-major axis is about , or 20% of the distance from Earth to the Sun. The orbital eccentricity is thought to be only , meaning a nearly circular orbit. Earth's distance to this system is calculated by trigonometric parallax as , and by radio astrometry as .
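The orbital separation implied by the 5.6-day period can be sanity-checked with Kepler's third law. This is a rough sketch, not the cited study's own calculation: it assumes a compact-object mass of about 21 solar masses (see below) and a companion mass of roughly 40 solar masses, both of which carry sizable uncertainties.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # kg
AU = 1.496e11     # m

def semi_major_axis_au(m1_msun: float, m2_msun: float, period_days: float) -> float:
    """Kepler's third law: a^3 = G (m1 + m2) P^2 / (4 pi^2)."""
    m_total = (m1_msun + m2_msun) * M_SUN
    period_s = period_days * 86400.0
    a = (G * m_total * period_s**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    return a / AU

# Assumed masses ~21 and ~40 solar masses; orbital period from the text.
print(f"{semi_major_axis_au(21.2, 40.0, 5.599829):.2f} AU")
# -> ~0.24 AU, broadly consistent with the ~20% of an Earth-Sun distance quoted above.
```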
The HDE 226868/Cygnus X-1 system shares a common motion through space with an association of massive stars named Cygnus OB3, which is located at roughly 2000 parsecs from the Sun. This implies that HDE 226868, Cygnus X-1 and this OB association may have formed at the same time and location. If so, then the age of the system is about . The motion of HDE 226868 with respect to Cygnus OB3 is , a typical value for random motion within a stellar association. HDE 226868 is about from the center of the association and could have reached that separation in about – which roughly agrees with the estimated age of the association.
With a galactic latitude of 4° and galactic longitude 71°, this system lies inward along the same Orion Spur, in which the Sun is located within the Milky Way, near where the spur approaches the Sagittarius Arm. Cygnus X-1 has been described as belonging to the Sagittarius Arm, though the structure of the Milky Way is not well established.
Compact object
From various techniques, the mass of the compact object appears to be greater than the maximum mass for a neutron star. Stellar evolutionary models suggest a mass of , while other techniques resulted in 10 solar masses. Measuring periodicities in the X-ray emission near the object yielded a more precise value of . In all cases, the object is most likely a black hole—a region of space with a gravitational field that is strong enough to prevent the escape of electromagnetic radiation from the interior. The boundary of this region is called the event horizon and has an effective radius called the Schwarzschild radius, which is about for Cygnus X-1. Anything (including matter and photons) that passes through this boundary is unable to escape. New measurements published in 2021 yielded an estimated mass of .
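The Schwarzschild radius follows directly from the mass estimate. A minimal sketch using the 21.2 solar-mass figure given above; the result is approximate and scales linearly with the adopted mass.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0  # speed of light, m/s
M_SUN = 1.989e30   # kg

def schwarzschild_radius_km(mass_msun: float) -> float:
    """Schwarzschild radius r_s = 2 G M / c^2, converted to kilometers."""
    return 2.0 * G * mass_msun * M_SUN / C**2 / 1000.0

print(f"{schwarzschild_radius_km(21.2):.0f} km")  # ~63 km for ~21 solar masses
```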
Evidence of just such an event horizon may have been detected in 1992 using ultraviolet (UV) observations with the High Speed Photometer on the Hubble Space Telescope. As self-luminous clumps of matter spiral into a black hole, their radiation is emitted in a series of pulses that are subject to gravitational redshift as the material approaches the horizon. That is, the wavelengths of the radiation steadily increase, as predicted by general relativity. Matter hitting a solid, compact object would emit a final burst of energy, whereas material passing through an event horizon would not. Two such "dying pulse trains" were observed, which is consistent with the existence of a black hole.
The spin of the compact object is not yet well determined. Past analysis of data from the space-based Chandra X-ray Observatory suggested that Cygnus X-1 was not rotating to any significant degree. However, evidence announced in 2011 suggests that it is rotating extremely rapidly, approximately 790 times per second.
Formation
The largest star in the Cygnus OB3 association has a mass 40 times that of the Sun. As more massive stars evolve more rapidly, this implies that the progenitor star for Cygnus X-1 had more than 40 solar masses. Given the current estimated mass of the black hole, the progenitor star must have lost over 30 solar masses of material. Part of this mass may have been lost to HDE 226868, while the remainder was most likely expelled by a strong stellar wind. The helium enrichment of HDE 226868's outer atmosphere may be evidence for this mass transfer. The progenitor may have evolved into a Wolf–Rayet star, which ejects a substantial proportion of its atmosphere through just such a powerful stellar wind.
If the progenitor star had exploded as a supernova, then observations of similar objects show that the remnant would most likely have been ejected from the system at a relatively high velocity. As the object remained in orbit, this indicates that the progenitor may have collapsed directly into a black hole without exploding (or at most produced only a relatively modest explosion).
Accretion disk
The compact object is thought to be orbited by a thin, flat disk of accreting matter known as an accretion disk. This disk is intensely heated by friction between ionized gas in faster-moving inner orbits and that in slower outer ones. It is divided into a hot inner region with a relatively high level of ionization—forming a plasma—and a cooler, less ionized outer region that extends to an estimated 500 times the Schwarzschild radius, or about 15,000 km.
Though highly and erratically variable, Cygnus X-1 is typically the brightest persistent source of hard X-rays—those with energies from about 30 up to several hundred kiloelectronvolts—in the sky. The X-rays are produced as lower-energy photons in the thin inner accretion disk, then given more energy through Compton scattering with very high-temperature electrons in a geometrically thicker, but nearly transparent corona enveloping it, as well as by some further reflection from the surface of the thin disk. An alternative possibility is that the X-rays may be Compton-scattered by the base of a jet instead of a disk corona.
The X-ray emission from Cygnus X-1 can vary in a somewhat repetitive pattern called quasi-periodic oscillations (QPO). The mass of the compact object appears to determine the distance at which the surrounding plasma begins to emit these QPOs, with the emission radius decreasing as the mass decreases. This technique has been used to estimate the mass of Cygnus X-1, providing a cross-check with other mass derivations.
Pulsations with a stable period, similar to those resulting from the spin of a neutron star, have never been seen from Cygnus X-1. The pulsations from neutron stars are caused by the neutron star's rotating magnetic field, but the no-hair theorem guarantees that the magnetic field of a black hole is exactly aligned with its rotation axis and thus is static. For example, the X-ray binary V 0332+53 was thought to be a possible black hole until pulsations were found. Cygnus X-1 has also never displayed X-ray bursts similar to those seen from neutron stars. Cygnus X-1 unpredictably changes between two X-ray states, although the X-rays may vary continuously between those states as well. In the most common state, the X-rays are "hard", which means that more of the X-rays have high energy. In the less common state, the X-rays are "soft", with more of the X-rays having lower energy. The soft state also shows greater variability. The hard state is believed to originate in a corona surrounding the inner part of the more opaque accretion disk. The soft state occurs when the disk draws closer to the compact object (possibly as close as ), accompanied by cooling or ejection of the corona. When a new corona is generated, Cygnus X-1 transitions back to the hard state.
The spectral transition of Cygnus X-1 can be explained using a two-component advective flow solution, as proposed by Chakrabarti and Titarchuk. A hard state is generated by the inverse Comptonisation of seed photons from the Keplerian disk, as well as synchrotron photons produced by the hot electrons in the centrifugal-pressure–supported boundary layer (CENBOL).
The X-ray flux from Cygnus X-1 varies periodically every 5.6 days, especially during superior conjunction when the orbiting objects are most closely aligned with Earth and the compact source is the more distant. This indicates that the emissions are being partially blocked by circumstellar matter, which may be the stellar wind from the star HDE 226868. There is a roughly 300-day periodicity in the emission, which could be caused by the precession of the accretion disk.
Jets
As accreted matter falls toward the compact object, it loses gravitational potential energy. Part of this released energy is dissipated by jets of particles, aligned perpendicular to the accretion disk, that flow outward with relativistic velocities (that is, the particles are moving at a significant fraction of the speed of light). This pair of jets provide a means for an accretion disk to shed excess energy and angular momentum. They may be created by magnetic fields within the gas that surrounds the compact object.
The Cygnus X-1 jets are inefficient radiators and so release only a small proportion of their energy in the electromagnetic spectrum. That is, they appear "dark". The estimated angle of the jets to the line of sight is 30°, and they may be precessing. One of the jets is colliding with a relatively dense part of the interstellar medium (ISM), forming an energized ring that can be detected by its radio emission. This collision appears to be forming a nebula that has been observed in the optical wavelengths. To produce this nebula, the jet must have an estimated average power of 4–, or . This is more than 1,000 times the power emitted by the Sun. There is no corresponding ring in the opposite direction because that jet is facing a lower-density region of the ISM.
In 2006, Cygnus X-1 became the first stellar-mass black hole found to display evidence of gamma-ray emission in the very high-energy band, above . The signal was observed at the same time as a flare of hard X-rays, suggesting a link between the events. The X-ray flare may have been produced at the base of the jet, while the gamma rays could have been generated where the jet interacts with the stellar wind of HDE 226868.
HDE 226868
HDE 226868 is a supergiant star with a spectral class of O9.7 Iab, which is on the borderline between class-O and class-B stars. It has an estimated surface temperature of 31,000 K and mass approximately 20–40 times the mass of the Sun. Based on a stellar evolutionary model, at the estimated distance of 2,000 parsecs, this star may have a radius equal to about 15–17 times the solar radius and has approximately 300,000–400,000 times the luminosity of the Sun. For comparison, the compact object is estimated to be orbiting HDE 226868 at a distance of about 40 solar radii, or twice the radius of this star.
The surface of HDE 226868 is being tidally distorted by the gravity of the massive companion, forming a tear-drop shape that is further distorted by rotation. This causes the optical brightness of the star to vary by 0.06 magnitudes during each 5.6-day binary orbit, with the minimum magnitude occurring when the system is aligned with the line of sight. The "ellipsoidal" pattern of light variation results from the limb darkening and gravity darkening of the star's surface.
When the spectrum of HDE 226868 is compared to the similar star Alnilam, the former shows an overabundance of helium and an underabundance of carbon in its atmosphere. The ultraviolet and hydrogen-alpha spectral lines of HDE 226868 show profiles similar to the star P Cygni, which indicates that the star is surrounded by a gaseous envelope that is being accelerated away from the star at speeds of about 1,500 km/s.
Like other stars of its spectral type, HDE 226868 is thought to be shedding mass in a stellar wind at an estimated rate of solar masses per year; or one solar mass every 400,000 years. The gravitational influence of the compact object appears to be reshaping this stellar wind, producing a focused wind geometry rather than a spherically symmetrical wind. X-rays from the region surrounding the compact object heat and ionize this stellar wind. As the object moves through different regions of the stellar wind during its 5.6-day orbit, the UV lines, the radio emission, and the X-rays themselves all vary.
The Roche lobe of HDE 226868 defines the region of space around the star where orbiting material remains gravitationally bound. Material that passes beyond this lobe may fall toward the orbiting companion. This Roche lobe is believed to be close to the surface of HDE 226868 but not overflowing, so the material at the stellar surface is not being stripped away by its companion. However, a significant proportion of the stellar wind emitted by the star is being drawn onto the compact object's accretion disk after passing beyond this lobe.
The gas and dust between the Sun and HDE 226868 results in a reduction in the apparent magnitude of the star, as well as a reddening of the hue—red light can more effectively penetrate the dust in the interstellar medium. The estimated value of the interstellar extinction (AV) is 3.3 magnitudes. Without the intervening matter, HDE 226868 would be a fifth-magnitude star, and thus visible to the unaided eye.
Stephen Hawking and Kip Thorne
Cygnus X-1 was the subject of a bet between physicists Stephen Hawking and Kip Thorne, in which Hawking bet against the existence of black holes in the region. Hawking later described this as an "insurance policy" of sorts. In his book A Brief History of Time he wrote:
According to the updated tenth-anniversary edition of A Brief History of Time, Hawking has conceded the bet due to subsequent observational data in favor of black holes. In his own book Black Holes and Time Warps, Thorne reports that Hawking conceded the bet by breaking into Thorne's office while he was in Russia, finding the framed bet, and signing it. While Hawking referred to the bet as taking place in 1975, the written bet itself (in Thorne's handwriting, with his and Hawking's signatures) bears additional witness signatures under a legend stating "Witnessed this tenth day of December 1974". This date was confirmed by Kip Thorne on the January 10, 2018 episode of Nova on PBS.
In popular culture
Cygnus X-1 is the subject of a two-part song series by Canadian progressive rock band Rush. The first part, "Book I: The Voyage", is the last song on the 1977 album A Farewell to Kings. The second part, "Book II: Hemispheres", is the first song on the following 1978 album, Hemispheres. The lyrics describe an explorer aboard the spaceship Rocinante, who travels to the black hole, believing that there may be something beyond it. As he moves closer, it becomes increasingly difficult to control the ship, and he is eventually drawn in by the pull of gravity.
In the 1979 Disney live-action science fiction film The Black Hole, the scientific survey ship captained by Dr. Hans Reinhardt to study the black hole of the film's title is the Cygnus, presumably (although never stated as such) named for the first-identified black hole, Cygnus X-1.
| Physical sciences | Other notable objects | null |
86092 | https://en.wikipedia.org/wiki/Bile | Bile | Bile (from Latin bilis), or gall, is a yellow-green/misty green fluid produced by the liver of most vertebrates that aids the digestion of lipids in the small intestine. In humans, bile is primarily composed of water, is produced continuously by the liver, and is stored and concentrated in the gallbladder. After a human eats, this stored bile is discharged into the first section of the small intestine.
Composition
In the human liver, bile is composed of 97–98% water, 0.7% bile salts, 0.2% bilirubin, 0.51% fats (cholesterol, fatty acids, and lecithin), and 200 meq/L inorganic salts. The two main pigments of bile are bilirubin, which is orange-yellow, and its oxidised form biliverdin, which is green. When mixed, they are responsible for the brown color of feces. About of bile is produced per day in adult human beings.
Function
Bile or gall acts to some extent as a surfactant, helping to emulsify the lipids in food. Bile salt anions are hydrophilic on one side and hydrophobic on the other side; consequently, they tend to aggregate around droplets of lipids (triglycerides and phospholipids) to form micelles, with the hydrophobic sides towards the fat and hydrophilic sides facing outwards. The hydrophilic sides are negatively charged, and this charge prevents fat droplets coated with bile from re-aggregating into larger fat particles. Ordinarily, the micelles in the duodenum have a diameter around 1–50 μm in humans.
The dispersion of food fat into micelles provides a greatly increased surface area for the action of the enzyme pancreatic lipase, which digests the triglycerides, and is able to reach the fatty core through gaps between the bile salts. A triglyceride is broken down into two fatty acids and a monoglyceride, which are absorbed by the villi on the intestine walls. After being transferred across the intestinal membrane, the fatty acids reform into triglycerides (), before being absorbed into the lymphatic system through lacteals. Without bile salts, most of the lipids in food would be excreted in feces, undigested.
Since bile increases the absorption of fats, it is an important part of the absorption of the fat-soluble substances, such as the vitamins A, D, E, and K.
Besides its digestive function, bile serves also as the route of excretion for bilirubin, a byproduct of red blood cells recycled by the liver. Bilirubin is derived from the breakdown of hemoglobin and is conjugated by glucuronidation in the liver before being excreted in bile.
Bile tends to be alkaline on average. The pH of common duct bile (7.50 to 8.05) is higher than that of the corresponding gallbladder bile (6.80 to 7.65). Bile in the gallbladder becomes more acidic the longer a person goes without eating, though resting slows this fall in pH. As an alkali, it also has the function of neutralizing excess stomach acid before it enters the duodenum, the first section of the small intestine. Bile salts also act as bactericides, destroying many of the microbes that may be present in the food.
Clinical significance
In the absence of bile, fats become indigestible and are instead excreted in feces, a condition called steatorrhea. Feces lack their characteristic brown color and instead are white or gray, and greasy. Steatorrhea can lead to deficiencies in essential fatty acids and fat-soluble vitamins. In addition, past the small intestine (which is normally responsible for absorbing fat from food) the gastrointestinal tract and gut flora are not adapted to processing fats, leading to problems in the large intestine.
The cholesterol contained in bile will occasionally accrete into lumps in the gallbladder, forming gallstones. Cholesterol gallstones are generally treated through surgical removal of the gallbladder. However, they can sometimes be dissolved by increasing the concentration of certain naturally occurring bile acids, such as chenodeoxycholic acid and ursodeoxycholic acid.
On an empty stomach – after repeated vomiting, for example – a person's vomit may be green or dark yellow, and very bitter. The bitter and greenish component may be bile or normal digestive juices originating in the stomach. Bile may be forced into the stomach secondary to a weakened valve (pylorus), the presence of certain drugs including alcohol, or powerful muscular contractions and duodenal spasms. This is known as biliary reflux.
Obstruction
Biliary obstruction refers to a condition when bile ducts which deliver bile from the gallbladder or liver to the duodenum become obstructed. The blockage of bile might cause a buildup of bilirubin in the bloodstream which can result in jaundice. There are several potential causes for biliary obstruction including gallstones, cancer, trauma, choledochal cysts, or other benign causes of bile duct narrowing. The most common cause of bile duct obstruction is when gallstone(s) are dislodged from the gallbladder into the cystic duct or common bile duct resulting in a blockage. A blockage of the gallbladder or cystic duct may cause cholecystitis. If the blockage is beyond the confluence of the pancreatic duct, this may cause gallstone pancreatitis. In some instances of biliary obstruction, the bile may become infected by bacteria resulting in ascending cholangitis.
Society and culture
In medical theories prevalent in the West from classical antiquity to the Middle Ages, the body's health depended on the equilibrium of four "humors", or vital fluids, two of which related to bile: blood, phlegm, "yellow bile" (choler), and "black bile". These "humors" are believed to have their roots in the appearance of a blood sedimentation test made in open air, which exhibits a dark clot at the bottom ("black bile"), a layer of unclotted erythrocytes ("blood"), a layer of white blood cells ("phlegm") and a layer of clear yellow serum ("yellow bile").
Excesses of black bile and yellow bile were thought to produce depression and aggression, respectively, and the Greek names for them gave rise to the English words cholera (from Greek χολή kholē, "bile") and melancholia. In the former of those senses, the same theories explain the derivation of the English word bilious from bile, the meaning of gall in English as "exasperation" or "impudence", and the Latin word cholera, derived from the Greek kholé, which was passed along into some Romance languages as words connoting anger, such as colère (French) and cólera (Spanish).
Soap
Soap can be mixed with bile from mammals, such as ox gall. This mixture, called bile soap or gall soap, can be applied to textiles a few hours before washing as a traditional and effective method for removing various kinds of tough stains.
Food
Pinapaitan is a dish in Philippine cuisine that uses bile as flavoring. Other areas where bile is commonly used as a cooking ingredient include Laos and northern parts of Thailand.
During the Boshin War, Satsuma soldiers of the early Imperial Japanese Army reportedly ate human livers boiled in bile. The practice of eating a slain enemy's liver, known as , was a tradition of the Satsuma people.
Bears
In regions where bile products are a popular ingredient in traditional medicine, the use of bears in bile-farming has been widespread. This practice has been condemned by activists, and some pharmaceutical companies have developed synthetic (non-ursine) alternatives.
Principal acids
| Biology and health sciences | Gastrointestinal tract | Biology |
86113 | https://en.wikipedia.org/wiki/Winding%20number | Winding number | In mathematics, the winding number or winding index of a closed curve in the plane around a given point is an integer representing the total number of times that the curve travels counterclockwise around the point, i.e., the curve's number of turns. For certain open plane curves, the number of turns may be a non-integer. The winding number depends on the orientation of the curve, and it is negative if the curve travels around the point clockwise.
Winding numbers are fundamental objects of study in algebraic topology, and they play an important role in vector calculus, complex analysis, geometric topology, differential geometry, and physics (such as in string theory).
Intuitive description
Suppose we are given a closed, oriented curve in the xy plane. We can imagine the curve as the path of motion of some object, with the orientation indicating the direction in which the object moves. Then the winding number of the curve is equal to the total number of counterclockwise turns that the object makes around the origin.
When counting the total number of turns, counterclockwise motion counts as positive, while clockwise motion counts as negative. For example, if the object first circles the origin four times counterclockwise, and then circles the origin once clockwise, then the total winding number of the curve is three.
Using this scheme, a curve that does not travel around the origin at all has winding number zero, while a curve that travels clockwise around the origin has negative winding number. Therefore, the winding number of a curve may be any integer. The following pictures show curves with winding numbers between −2 and 3:
Formal definition
Let γ : [0, 1] → ℝ² ∖ {(0, 0)} be a continuous closed path on the plane minus the origin. The winding number of γ around the origin is the integer wind(γ, 0) = (θ(1) − θ(0)) / (2π),
where (r, θ) is the path γ written in polar coordinates, i.e. the lifted path through the covering map p : (0, ∞) × ℝ → ℝ² ∖ {(0, 0)}, (r, θ) ↦ (r cos θ, r sin θ).
The winding number is well defined because of the existence and uniqueness of the lifted path (given the starting point in the covering space) and because all the fibers of p are of the form {(r, θ + 2πn) : n ∈ ℤ} (so the above expression does not depend on the choice of the starting point). It is an integer because the path is closed.
Alternative definitions
Winding number is often defined in different ways in various parts of mathematics. All of the definitions below are equivalent to the one given above:
Alexander numbering
A simple combinatorial rule for defining the winding number was proposed by August Ferdinand Möbius in 1865, and again independently by James Waddell Alexander II in 1928.
Any curve partitions the plane into several connected regions, one of which is unbounded. The winding numbers of the curve around two points in the same region are equal. The winding number around (any point in) the unbounded region is zero. Finally, the winding numbers for any two adjacent regions differ by exactly 1; the region with the larger winding number appears on the left side of the curve (with respect to motion down the curve).
Differential geometry
In differential geometry, parametric equations are usually assumed to be differentiable (or at least piecewise differentiable). In this case, the polar coordinate θ is related to the rectangular coordinates x and y by the equation dθ = (x dy − y dx) / (x² + y²),
which is found by differentiating the definition θ = arctan(y/x).
By the fundamental theorem of calculus, the total change in θ is equal to the integral of dθ. We can therefore express the winding number of a differentiable curve C about the origin as the line integral wind(C, 0) = (1/2π) ∮_C (x dy − y dx) / (x² + y²).
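To make the formula concrete, here is a minimal numerical sketch in Python (the function name winding_number and the sample square are illustrative assumptions, not taken from any library): it approximates the line integral for a polygonal curve by summing the signed change in θ across each edge and dividing by 2π.

```python
import math

def winding_number(points, origin=(0.0, 0.0)):
    """Approximate winding number of a closed polygonal curve around a point.

    `points` is an ordered list of (x, y) vertices; the curve is closed by
    joining the last vertex back to the first. The signed change in the polar
    angle theta is accumulated edge by edge, mirroring the line integral of
    d(theta) = (x dy - y dx) / (x^2 + y^2).
    """
    ox, oy = origin
    total_turn = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i][0] - ox, points[i][1] - oy
        x1, y1 = points[(i + 1) % n][0] - ox, points[(i + 1) % n][1] - oy
        # Signed angle from (x0, y0) to (x1, y1): atan2 of cross and dot products.
        total_turn += math.atan2(x0 * y1 - y0 * x1, x0 * x1 + y0 * y1)
    return round(total_turn / (2 * math.pi))

# A square traversed counterclockwise winds once around the origin.
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
print(winding_number(square))  # 1
```

Rounding the final ratio simply absorbs floating-point error; for a closed curve avoiding the chosen point, the exact value is an integer.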
The one-form dθ (defined on the complement of the origin) is closed but not exact, and it generates the first de Rham cohomology group of the punctured plane. In particular, if ω is any closed differentiable one-form defined on the complement of the origin, then the integral of ω along closed loops gives a multiple of the winding number.
Complex analysis
Winding numbers play a very important role throughout complex analysis (cf. the statement of the residue theorem). In the context of complex analysis, the winding number of a closed curve γ in the complex plane can be expressed in terms of the complex coordinate z = x + iy. Specifically, if we write z = re^(iθ), then dz = e^(iθ) dr + ire^(iθ) dθ,
and therefore dz/z = dr/r + i dθ = d(ln r) + i dθ.
As γ is a closed curve, the total change in ln r is zero, and thus the integral of dz/z is equal to i multiplied by the total change in θ. Therefore, the winding number of the closed path γ about the origin is given by the expression (1/(2πi)) ∮_γ dz/z.
More generally, if γ is a closed curve parameterized by t ∈ [α, β], the winding number of γ about a point z₀, also known as the index of z₀ with respect to γ, is defined for complex z₀ ∉ γ([α, β]) as Ind_γ(z₀) = (1/(2πi)) ∮_γ dz/(z − z₀).
This is a special case of the famous Cauchy integral formula.
Some of the basic properties of the winding number in the complex plane are given by the following theorem:
Theorem. Let γ be a closed path and let Ω be the set complement of the image of γ, that is, Ω = ℂ ∖ γ([α, β]). Then the index of z with respect to γ, Ind_γ(z), is (i) integer-valued, i.e., Ind_γ(z) ∈ ℤ for all z ∈ Ω; (ii) constant over each component (i.e., maximal connected subset) of Ω; and (iii) zero if z is in the unbounded component of Ω.
As an immediate corollary, this theorem gives the winding number of a circular path γ about a point z₀. As expected, the winding number counts the number of (counterclockwise) loops γ makes around z₀:
Corollary. If γ is the path defined by γ(t) = z₀ + re^(int), 0 ≤ t ≤ 2π, then Ind_γ(z₀) = n.
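The corollary can be checked numerically. The sketch below (the helper name index_of_point, the 600-point sampling, and the example centre z₀ are assumptions made for illustration) approximates Ind_γ(z₀) by summing the change of argument of z − z₀ along a finely sampled circular path:

```python
import cmath
import math

def index_of_point(samples, z0):
    """Approximate the index of z0 with respect to a closed curve given as an
    ordered list of complex sample points (none equal to z0)."""
    total = 0.0
    n = len(samples)
    for k in range(n):
        a = samples[k] - z0
        b = samples[(k + 1) % n] - z0
        # phase(b / a) is the signed change of argument of z - z0 along one
        # short segment, valid while the segment stays away from z0.
        total += cmath.phase(b / a)
    return round(total / (2 * math.pi))

# A circle of radius 1 about z0, traversed n times, has index n.
z0 = 0.5 + 0.5j
n_turns = 3
samples = [z0 + cmath.exp(1j * n_turns * 2 * math.pi * k / 600) for k in range(600)]
print(index_of_point(samples, z0))  # 3
```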
Topology
In topology, the winding number is an alternate term for the degree of a continuous mapping. In physics, winding numbers are frequently called topological quantum numbers. In both cases, the same concept applies.
The above example of a curve winding around a point has a simple topological interpretation. The complement of a point in the plane is homotopy equivalent to the circle, such that maps from the circle to itself are really all that need to be considered. It can be shown that each such map can be continuously deformed to (is homotopic to) one of the standard maps , where multiplication in the circle is defined by identifying it with the complex unit circle. The set of homotopy classes of maps from a circle to a topological space form a group, which is called the first homotopy group or fundamental group of that space. The fundamental group of the circle is the group of the integers, Z; and the winding number of a complex curve is just its homotopy class.
Maps from the 3-sphere to itself are also classified by an integer which is also called the winding number or sometimes Pontryagin index.
Turning number
One can also consider the winding number of the path with respect to the tangent of the path itself. As a path followed through time, this would be the winding number with respect to the origin of the velocity vector. In this case the example illustrated at the beginning of this article has a winding number of 3, because the small loop is counted.
This is only defined for immersed paths (i.e., for differentiable paths with nowhere vanishing derivatives), and is the degree of the tangential Gauss map.
This is called the turning number, rotation number, rotation index or index of the curve, and can be computed as the total curvature divided by 2π.
Polygons
In polygons, the turning number is referred to as the polygon density. For convex polygons, and more generally simple polygons (not self-intersecting), the density is 1, by the Jordan curve theorem. By contrast, for a regular star polygon {p/q}, the density is q.
Space curves
Turning number cannot be defined for space curves as degree requires matching dimensions. However, for locally convex, closed space curves, one can define the tangent turning sign as (−1)^d, where d is the turning number of the stereographic projection of its tangent indicatrix. Its two values correspond to the two non-degenerate homotopy classes of locally convex curves.
Winding number and Heisenberg ferromagnet equations
The winding number is closely related with the (2 + 1)-dimensional continuous Heisenberg ferromagnet equations and its integrable extensions: the Ishimori equation etc. Solutions of the last equations are classified by the winding number or topological charge (topological invariant and/or topological quantum number).
Applications
Point in polygon
A point's winding number with respect to a polygon can be used to solve the point in polygon (PIP) problem – that is, it can be used to determine if the point is inside the polygon or not.
Generally, the ray casting algorithm is a better alternative for the PIP problem, as it does not require trigonometric functions, unlike the winding number algorithm. Nevertheless, the winding number algorithm can be sped up so that it, too, does not require calculations involving trigonometric functions. The sped-up version of the algorithm, also known as Sunday's algorithm, is recommended in cases where non-simple polygons must also be handled.
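A minimal Python sketch of such a trigonometry-free test is given below (the names is_left and winding_number_pip and the sample square are illustrative assumptions, written in the spirit of Sunday's algorithm rather than as a verbatim reproduction): it counts signed upward and downward edge crossings of a horizontal ray, using only comparisons and one cross product per edge.

```python
def is_left(x0, y0, x1, y1, px, py):
    """Positive if (px, py) is left of the directed edge (x0, y0) -> (x1, y1)."""
    return (x1 - x0) * (py - y0) - (px - x0) * (y1 - y0)

def winding_number_pip(point, polygon):
    """Winding number of `polygon` (ordered (x, y) vertices) around `point`.

    A nonzero result means the point is inside; no trigonometric calls are used.
    """
    px, py = point
    wn = 0
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        if y0 <= py:
            # Upward crossing with the point strictly left of the edge: +1 turn.
            if y1 > py and is_left(x0, y0, x1, y1, px, py) > 0:
                wn += 1
        else:
            # Downward crossing with the point strictly right of the edge: -1 turn.
            if y1 <= py and is_left(x0, y0, x1, y1, px, py) < 0:
                wn -= 1
    return wn

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(winding_number_pip((1, 1), square) != 0)  # True: inside
print(winding_number_pip((3, 1), square) != 0)  # False: outside
```

Because the result is the full winding number rather than just a parity, the test also behaves sensibly for self-intersecting (non-simple) polygons, which is why this variant is recommended in that case.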
| Mathematics | Topology | null |
86333 | https://en.wikipedia.org/wiki/Microbat | Microbat | Microbats constitute the suborder Microchiroptera within the order Chiroptera. Bats have long been differentiated into Megachiroptera (megabats) and Microchiroptera, based on their size, the use of echolocation by the Microchiroptera and other features; molecular evidence suggests a somewhat different subdivision, as the microbats have been shown to be a paraphyletic group.
Characteristics
Microbats are long. Most microbats feed on insects, but some of the larger species hunt birds, lizards, frogs, smaller bats or even fish. Only three species of microbat feed on the blood of large mammals or birds ("vampire bats"); these bats live in South and Central America.
Although most "Leaf-nose" microbats are fruit and nectar-eating, the name “leaf-nosed” isn't a designation meant to indicate the preferred diet among said variety. Three species follow the bloom of columnar cacti in northwest Mexico and the Southwest United States northward in the northern spring and then the blooming agaves southward in the northern fall (autumn). Other leaf-nosed bats, such as Vampyrum spectrum of South America, hunt a variety of prey such as lizards and birds. The horseshoe bats of Europe, as well as California leaf-nosed bats, have a very intricate leaf-nose for echolocation, and feed primarily on insects.
Differences from megabats
Microbats use echolocation, whereas megabats typically do not. (The Egyptian fruit bat Rousettus egyptiacus is an exception; it does not use the laryngeal echolocation method of microbats, and is instead thought to produce clicks using its nasal passages and the back of its tongue.)
Microbats lack the claw at the second finger of the forelimb. This finger appears thinner and almost bonded by tissue with the third finger for extra support during flight.
Megabats lack tails, with the exception of a few genera such as Nyctimene, whereas this trait only occurs in certain species of microbats.
The ears of microbats possess a tragus (thought to be crucial in echolocation) and are relatively larger than megabat ears, whereas megabat ears are comparatively small and lack a tragus.
Megabat eyes are quite large, whereas microbat eyes are comparatively smaller.
Dentition
The form and function of microbat teeth differ as a result of the various diets these bats can have. Teeth primarily serve to break down food; therefore, the shape of the teeth correlates with specific feeding behaviors. In comparison to megabats, which feed only on fruit and nectar, microbats exhibit a range of diets and have been classified as insectivores, carnivores, sanguinivores, frugivores, and nectarivores. The size and function of the canines and molars accordingly differ among microbats in these groups.
The diverse diets of microbats are reflected in their dentition: their cheek teeth display a morphology derived from dilambdodont teeth, which are characterized by a W-shaped ectoloph, or stylar shelf. A W-shaped dilambdodont upper molar includes a metacone and paracone, which are located at the bottom of the "W", while the rest of the "W" is formed by crests that run from the metacone and paracone to the cusps of the stylar shelf.
Microbats display differences in the size and shape of their canines and molars, in addition to having distinctive variations among their skull features that contribute to their ability to feed effectively. Frugivorous microbats have small stylar shelf areas, short molariform rows, and wide palates and faces. In addition to having wide faces, frugivorous microbats have short skulls, which place the teeth closer to the fulcrum of the jaw lever, allowing an increase in jaw strength. Frugivorous microbats also possess a different pattern on their molars compared to carnivorous, insectivorous, nectarivorous, and sanguinivorous microbats. In contrast, insectivorous microbats are characterized by having larger but fewer teeth, long canines, and shortened third upper molars, while carnivorous microbats have large upper molars. Generally, microbats that are insectivores, carnivores, and frugivores have large teeth and small palates; however, the opposite is true for microbats that are nectarivores. Though differences exist between the palate and teeth sizes of microbats, the proportion of the sizes of these two structures is maintained among microbats of various sizes.
Echolocation
Echolocation is the process by which an animal produces a sound of a certain wavelength and then listens to and compares the reflected echoes to the original sound emitted. Bats use echolocation to form images of their surrounding environment and the organisms that inhabit it by emitting ultrasonic waves via their larynx. The difference between the ultrasonic waves produced by the bat and what the bat hears provides the bat with information about its environment. Echolocation aids the bat not only in detecting prey, but also in orientation during flight.
Production of ultrasonic waves
Most microbats generate ultrasound with their larynx and emit the sound through their nose or mouth. In mammals, sound is produced by the vocal folds, owing to the elastic membranes that compose these folds. Vocalization requires these elastic membranes because they act as a source to transform airflow into acoustic pressure waves. Energy is supplied to the elastic membranes from the lungs, resulting in the production of sound. The larynx houses the vocal cords and forms the passageway for the expiratory air that produces sound. Microbat calls range in frequency from 14,000 to over 100,000 hertz, well beyond the range of the human ear (typical human hearing range is considered to be from 20 to 20,000 Hz). The emitted vocalizations form a broad beam of sound used to probe the environment, as well as to communicate with other bats. At the molecular level, it has been found that CPLX1 is involved in this ultrasonic wave production.
Laryngeally echolocating microbats
Laryngeal echolocation is the dominant form of echolocation in microbats; however, it is not the only way in which microbats can produce ultrasonic waves. Excluding non-echolocating and laryngeally echolocating microbats, other species of microbats and megabats have been shown to produce ultrasonic waves by clapping their wings, clicking their tongues, or using their nose. Laryngeally echolocating bats, in general, produce ultrasonic waves with a larynx that is specialized to produce sounds of short wavelength. The larynx is located at the cranial end of the trachea and is surrounded by cricothyroid muscles and thyroid cartilage. For reference, in humans, this is the area where the Adam's apple is located. Phonation of ultrasonic waves is produced through the vibrations of the vocal membranes in the expiratory air. The intensity at which these vocal folds vibrate varies with activity and between bat species. A characteristic of laryngeally echolocating microbats that distinguishes them from other echolocating microbats is the articulation of their stylohyal bone with their tympanic bone. The stylohyal bones are part of the hyoid apparatus that help support the throat and larynx. The tympanic bone forms the floor of the middle ear. In addition to the connection between the stylohyal bone and the tympanic bone being an indicator of laryngeally echolocating microbats, another definitive marker is the presence of a flattened and expanded stylohyal bone at the cranial end.
Microbats that laryngeally echolocate must be able to distinguish between the pulse that they produce and the returning echo that follows, processing and understanding the ultrasonic waves at a neuronal level in order to accurately obtain information about their surrounding environment and their orientation in it. The connection between the stylohyal bone and the tympanic bone enables the bat to neurally register the outgoing and incoming ultrasonic waves produced by the larynx. Furthermore, the stylohyal bones connect the larynx to the tympanic bones via a cartilaginous or fibrous connection (depending on the species of bat). Mechanically, the importance of this connection is that it supports the larynx by anchoring it to the surrounding cricothyroid muscles, as well as drawing it closer to the nasal cavity during phonation. The stylohyal bones are often reduced in many other mammals; however, they are more prominent in laryngeally echolocating bats and are part of the mammalian hyoid apparatus. The hyoid apparatus functions in breathing, swallowing, and phonation in microbats as well as other mammals. An important feature of the bony connection in laryngeally echolocating microbats is the extended articulation of the ventral portion of the tympanic bones and the proximal end of the stylohyal bone that bends around it to make this connection.
Classification
While bats have been traditionally divided into megabats and microbats, recent molecular evidence has shown the superfamily Rhinolophoidea to be more genetically related to megabats than to microbats, indicating the microbats are paraphyletic. To resolve the paraphyly of microbats, the Chiroptera were redivided into suborders Yangochiroptera (which includes Nycteridae, vespertilionoids, noctilionoids, and emballonuroids) and Yinpterochiroptera, which includes megabats, rhinopomatids, Rhinolophidae, and Megadermatidae.
This is the classification according to Simmons and Geisler (1998):
Superfamily Emballonuroidea
Family Emballonuridae (sac-winged bats or sheath-tailed bats)
Superfamily Rhinopomatoidea
Family Rhinopomatidae (mouse-tailed bats)
Family Craseonycteridae (bumblebee bat or Kitti's hog-nosed bat)
Superfamily Rhinolophoidea
Family Rhinolophidae (horseshoe bats)
Family Nycteridae (hollow-faced bats or slit-faced bats)
Family Megadermatidae (false vampires)
Superfamily Vespertilionoidea
Family Vespertilionidae (vesper bats or evening bats)
Superfamily Molossoidea
Family Molossidae (free-tailed bats)
Family Antrozoidae (pallid bats)
Superfamily Nataloidea
Family Natalidae (funnel-eared bats)
Family Myzopodidae (sucker-footed bats)
Family Thyropteridae (disk-winged bats)
Family Furipteridae (smoky bats)
Superfamily Noctilionoidea
Family Noctilionidae (bulldog bats or fisherman bats)
Family Mystacinidae (New Zealand short-tailed bats)
Family Mormoopidae (ghost-faced bats or moustached bats)
Family Phyllostomidae (leaf-nosed bats)
| Biology and health sciences | Bats | null |
86347 | https://en.wikipedia.org/wiki/Group%20%28periodic%20table%29 | Group (periodic table) | In chemistry, a group (also known as a family) is a column of elements in the periodic table of the chemical elements. There are 18 numbered groups in the periodic table; the 14 f-block columns, between groups 2 and 3, are not numbered. The elements in a group have similar physical or chemical characteristics of the outermost electron shells of their atoms (i.e., the same core charge), because most chemical properties are dominated by the orbital location of the outermost electron.
The modern numbering system of "group 1" to "group 18" has been recommended by the International Union of Pure and Applied Chemistry (IUPAC) since 1988. The 1-18 system is based on each atom's s, p and d electrons beyond those in atoms of the preceding noble gas. Two older incompatible naming schemes can assign the same number to different groups depending on the system being used. The older schemes were used by the Chemical Abstract Service (CAS, more popular in the United States), and by IUPAC before 1988 (more popular in Europe). The system of eighteen groups is generally accepted by the chemistry community, but some dissent exists about membership of elements number 1 and 2 (hydrogen and helium). Similar variation on the inner transition metals continues to exist in textbooks, although the correct positioning has been known since 1948 and was twice endorsed by IUPAC in 1988 (together with the 1–18 numbering) and 2021.
Groups may also be identified using their topmost element, or have a specific name. For example, group 16 is also described as the "oxygen group" and as the "chalcogens". An exception is the "iron group", which usually refers to group 8, but in chemistry may also mean iron, cobalt, and nickel, or some other set of elements with similar chemical properties. In astrophysics and nuclear physics, it usually refers to iron, cobalt, nickel, chromium, and manganese.
Group names
Modern group names are the numbers 1–18, with the 14 f-block columns remaining unnumbered (together making the 32 columns of the periodic table). Trivial names (like halogens) are also common. Historically, several sets of group names have been used, based on Roman numerals I–VIII and "A" and "B" suffixes.
List of group names
Coinage metals: authors differ on whether roentgenium (Rg) is considered a coinage metal. It is in group 11, like the other coinage metals, and is expected to be chemically similar to gold. On the other hand, being extremely radioactive and short-lived, it cannot actually be used for coinage as the name suggests, and on that basis it is sometimes excluded.
triels (group 13), from Greek tri: three, III
tetrels (group 14), from Greek tetra: four, IV
pentels (group 15), from Greek penta: five, V
CAS and old IUPAC numbering (A/B)
Two earlier group number systems exist: CAS (Chemical Abstracts Service) and old IUPAC. Both use numerals (Arabic or Roman) and letters A and B. Both systems agree on the numbers. The numbers indicate approximately the highest oxidation number of the elements in that group, and so indicate similar chemistry with other elements with the same numeral. The number proceeds in a linearly increasing fashion for the most part, once on the left of the table, and once on the right (see List of oxidation states of the elements), with some irregularities in the transition metals. However, the two systems use the letters differently. For example, potassium (K) has one valence electron. Therefore, it is located in group 1. Calcium (Ca) is in group 2, for it contains two valence electrons.
In the old IUPAC system the letters A and B were designated to the left (A) and right (B) part of the table, while in the CAS system the letters A and B are designated to main group elements (A) and transition elements (B). The old IUPAC system was frequently used in Europe, while the CAS is most common in America. The new IUPAC scheme was developed to replace both systems as they confusingly used the same names to mean different things. The new system simply numbers the groups increasingly from left to right on the standard periodic table. The IUPAC proposal was first circulated in 1985 for public comments, and was later included as part of the 1990 edition of the Nomenclature of Inorganic Chemistry.
Non-columnwise groups
While groups are defined to be columns in the periodic table, as described above, there are also sets of elements named "group" that are not a column:
Similar sets: noble metals, coinage metals, precious metals, refractory metals.
| Physical sciences | Periodic table | Chemistry |
86350 | https://en.wikipedia.org/wiki/Period%20%28periodic%20table%29 | Period (periodic table) | A period on the periodic table is a row of chemical elements. All elements in a row have the same number of electron shells. Each next element in a period has one more proton and is less metallic than its predecessor. Arranged this way, elements in the same group (column) have similar chemical and physical properties, reflecting the periodic law. For example, the halogens lie in the second-to-last group (group 17) and share similar properties, such as high reactivity and the tendency to gain one electron to arrive at a noble-gas electronic configuration. , a total of 118 elements have been discovered and confirmed.
Modern quantum mechanics explains these periodic trends in properties in terms of electron shells. As atomic number increases, shells fill with electrons in approximately the order shown in the ordering rule diagram. The filling of each shell corresponds to a row in the table.
In the f-block and p-block of the periodic table, elements within the same period generally do not exhibit trends and similarities in properties (vertical trends down groups are more significant). However, in the d-block, trends across periods become significant, and in the f-block elements show a high degree of similarity across periods.
Periods
There are currently seven complete periods in the periodic table, comprising the 118 known elements. Any new elements will be placed into an eighth period; see extended periodic table. The elements are colour-coded below by their block: red for the s-block, yellow for the p-block, blue for the d-block, and green for the f-block.
Period 1
The first period contains fewer elements than any other, with only two, hydrogen and helium. They therefore do not follow the octet rule, but rather a duplet rule. Chemically, helium behaves like a noble gas, and thus is taken to be part of the group 18 elements. However, in terms of its electronic structure it belongs to the s-block, and is therefore sometimes classified as a group 2 element, or simultaneously both 2 and 18. Hydrogen readily loses and gains an electron, and so behaves chemically as both a group 1 and a group 17 element.
Hydrogen (H) is the most abundant of the chemical elements, constituting roughly 75% of the universe's elemental mass. Ionized hydrogen is just a proton. Stars in the main sequence are mainly composed of hydrogen in its plasma state. Elemental hydrogen is relatively rare on Earth, and is industrially produced from hydrocarbons such as methane. Hydrogen can form compounds with most elements and is present in water and most organic compounds.
Helium (He) exists only as a gas except in extreme conditions. It is the second-lightest element and is the second-most abundant in the universe. Most helium was formed during the Big Bang, but new helium is created through nuclear fusion of hydrogen in stars. On Earth, helium is relatively rare, only occurring as a byproduct of the natural decay of some radioactive elements. Such 'radiogenic' helium is trapped within natural gas in concentrations of up to seven percent by volume.
Period 2
Period 2 elements involve the 2s and 2p orbitals. They include the biologically most essential elements besides hydrogen: carbon, nitrogen, and oxygen.
Lithium (Li) is the lightest metal and the least dense solid element. In its non-ionized state it is one of the most reactive elements, and so is only ever found naturally in compounds. It is the heaviest primordial element forged in large quantities during the Big Bang.
Beryllium (Be) has one of the highest melting points of all the light metals. Small amounts of beryllium were synthesised during the Big Bang, although most of it decayed or reacted further within stars to create larger nuclei, like carbon, nitrogen or oxygen. Beryllium is classified by the International Agency for Research on Cancer as a group 1 carcinogen. Between 1% and 15% of people are sensitive to beryllium and may develop an inflammatory reaction in their respiratory system and skin, called chronic beryllium disease.
Boron (B) does not occur naturally as a free element, but in compounds such as borates. It is an essential plant micronutrient, required for cell wall strength and development, cell division, seed and fruit development, sugar transport and hormone development, though high levels are toxic.
Carbon (C) is the fourth-most abundant element in the universe by mass after hydrogen, helium and oxygen and is the second-most abundant element in the human body by mass after oxygen, the third-most abundant by number of atoms. There are an almost infinite number of compounds that contain carbon due to carbon's ability to form long stable chains of C—C bonds. All organic compounds, those essential for life, contain at least one atom of carbon; combined with hydrogen, oxygen, nitrogen, sulfur, and phosphorus, carbon is the basis of every important biological compound.
Nitrogen (N) is found mainly as a mostly inert diatomic gas, N2, which makes up 78% of the Earth's atmosphere by volume. It is an essential component of proteins and therefore of life.
Oxygen (O) comprises 21% of the atmosphere by volume and is required for respiration by all (or nearly all) animals, as well as being the principal component of water. Oxygen is the third-most abundant element in the universe, and oxygen compounds dominate the Earth's crust.
Fluorine (F) is the most reactive element in its non-ionized state, and so is never found that way in nature.
Neon (Ne) is a noble gas used in neon lighting.
Period 3
All period three elements occur in nature and have at least one stable isotope. All but the noble gas argon are essential to basic geology and biology.
Sodium (Na) is an alkali metal. It is present in Earth's oceans in large quantities in the form of sodium chloride (table salt).
Magnesium (Mg) is an alkaline earth metal. Magnesium ions are found in chlorophyll.
Aluminium (Al) is a post-transition metal. It is the most abundant metal in the Earth's crust.
Silicon (Si) is a metalloid. It is a semiconductor, making it the principal component in many integrated circuits. Silicon dioxide is the principal constituent of sand. As carbon is to biology, silicon is to geology.
Phosphorus (P) is a nonmetal essential to DNA. It is highly reactive, and as such is never found in nature as a free element.
Sulfur (S) is a nonmetal. It is found in two amino acids: cysteine and methionine.
Chlorine (Cl) is a halogen. Since it is one of the most reactive elements, it is often found on the Earth's surface as sodium chloride. Its compounds are used as disinfectants, especially in swimming pools.
Argon (Ar) is a noble gas, making it almost entirely nonreactive. Incandescent lamps are often filled with noble gases such as argon in order to preserve the filaments at high temperatures.
Period 4
Period 4 includes the biologically essential elements potassium and calcium, and is the first period in the d-block with the lighter transition metals. These include iron, the heaviest element forged in main-sequence stars and a principal component of the Earth, as well as other important metals such as cobalt, nickel, and copper. Almost all have biological roles.
Completing the fourth period are six p-block elements: gallium, germanium, arsenic, selenium, bromine, and krypton.
Period 5
Period 5 has the same number of elements as period 4 and follows the same general structure, but with one more post-transition metal and one fewer nonmetal. Of the three heaviest elements with biological roles, two (molybdenum and iodine) are in this period; tungsten, in period 6, is heavier, along with several of the early lanthanides. Period 5 also includes technetium, the lightest exclusively radioactive element.
Period 6
Period 6 is the first period to include the f-block, with the lanthanides (also known as the rare earth elements), and includes the heaviest stable elements. Many of these heavy metals are toxic and some are radioactive, but platinum and gold are largely inert.
Period 7
All elements of period 7 are radioactive. This period contains the heaviest element which occurs naturally on Earth, plutonium. All of the subsequent elements in the period have been synthesized artificially. Whilst five of these (from americium to einsteinium) are now available in macroscopic quantities, most are extremely rare, having only been prepared in microgram amounts or less. Some of the later elements have only ever been identified in laboratories in quantities of a few atoms at a time.
Although the rarity of many of these elements means that experimental results are not very extensive, periodic and group trends in behaviour appear to be less well defined for period 7 than for other periods. Whilst francium and radium do show typical properties of groups 1 and 2, respectively, the actinides display a much greater variety of behaviour and oxidation states than the lanthanides. These peculiarities of period 7 may be due to a variety of factors, including a large degree of spin–orbit coupling and relativistic effects, ultimately caused by the very high positive electrical charge from their massive atomic nuclei.
Period 8
No element of the eighth period has yet been synthesized. A g-block is predicted. It is not clear if all elements predicted for the eighth period are in fact physically possible. Therefore, there may not be a ninth period.
| Physical sciences | Periodic table | Chemistry |
86359 | https://en.wikipedia.org/wiki/Swamp | Swamp | A swamp is a forested wetland. Swamps are considered to be transition zones because both land and water play a role in creating this environment. Swamps vary in size and are located all around the world. The water of a swamp may be fresh water, brackish water, or seawater. Freshwater swamps form along large rivers or lakes where they are critically dependent upon rainwater and seasonal flooding to maintain natural water level fluctuations. Saltwater swamps are found along tropical and subtropical coastlines. Some swamps have hammocks, or dry-land protrusions, covered by aquatic vegetation, or vegetation that tolerates periodic inundation or soil saturation. The two main types of swamp are "true" or swamp forests and "transitional" or shrub swamps. In the boreal regions of Canada, the word swamp is colloquially used for what is more formally termed a bog, fen, or muskeg. Some of the world's largest swamps are found along major rivers such as the Amazon, the Mississippi, and the Congo.
Differences between marshes and swamps
Swamps and marshes are specific types of wetlands that form along waterbodies containing rich, hydric soils. Marshes are wetlands, continually or frequently flooded by nearby running bodies of water, that are dominated by emergent soft-stem vegetation and herbaceous plants. Swamps are wetlands consisting of saturated soils or standing water and are dominated by water-tolerant woody vegetation such as shrubs, bushes, and trees.
Hydrology
Swamps are characterized by their saturated soils and slow-moving waters. The water that accumulates in swamps comes from a variety of sources including precipitation, groundwater, tides and/or freshwater flooding. These hydrologic pathways all contribute to how energy and nutrients flow in and out of the ecosystem. As water flows through the swamp, nutrients, sediment and pollutants are naturally filtered out. Chemicals like phosphorus and nitrogen that end up in waterways get absorbed and used by the aquatic plants within the swamp, purifying the water. Any remaining or excess chemicals present will accumulate at the bottom of the swamp, being removed from the water and buried within the sediment. The hydrology of a swamp is a key factor in its biogeochemical environment, which includes the levels and availability of resources like oxygen, nutrients, pH, and toxicity.
Values and ecosystem services
Swamps and other wetlands have traditionally held a very low property value compared to fields, prairies, or woodlands. They have a reputation for being unproductive land that cannot easily be utilized for human activities, other than hunting, trapping, or fishing. Farmers, for example, typically drained swamps next to their fields so as to gain more land usable for planting crops, both historically, and to a lesser extent, presently. On the other hand, swamps can (and do) play a beneficial ecological role in the overall functions of the natural environment and provide a variety of resources that many species depend on. Swamps and other wetlands have been shown to be a natural form of flood management and defense against flooding. Where flooding does occur, swamps absorb and use the excess water within the wetland, preventing it from traveling and flooding surrounding areas. Dense vegetation within the swamp also provides soil stability to the land, holding soils and sediment in place whilst preventing erosion and land loss. Swamps are an abundant and valuable source of fresh water and oxygen for all life, and they are often breeding grounds for a wide variety of species. Floodplain swamps are an important resource in the production and distribution of fish. Two-thirds of the fish and shellfish commercially harvested worldwide are dependent on wetlands.
Impacts and conservation
Historically, humans have been known to drain and/or fill swamps and other wetlands in order to create more space for human development and to reduce the threat of diseases borne by swamp insects. Wetlands are removed and replaced with land that is then used for things like agriculture, real estate, and recreational uses. Many swamps have also undergone intensive logging and farming, requiring the construction of drainage ditches and canals. These ditches and canals contributed to drainage and, along the coast, allowed salt water to intrude, converting swamps to marsh or even to open water. Large areas of swamp were therefore lost or degraded. Louisiana provides a classic example of wetland loss from these combined factors. Europe has likely lost nearly half its wetlands. New Zealand lost 90 percent of its wetlands over a period of 150 years. Ecologists recognize that swamps provide ecological services including flood control, fish production, water purification, carbon storage, and wildlife habitats. In many parts of the world authorities protect swamps. In parts of Europe and North America, swamp restoration projects are becoming widespread. The United States government began enforcing stricter laws and management programs in the 1970s in efforts to protect and restore these ecosystems. Often the simplest steps to restoring swamps involve plugging drainage ditches and removing levees.
Conservationists work to preserve swamps such as those in northwest Indiana in the United States Midwest that were preserved as part of the Indiana Dunes.
Notable examples
Swamps can be found on all continents except Antarctica.
The largest swamp in the world is the Amazon River floodplain, which is particularly significant for its large number of fish and tree species.
Africa
The Sudd and the Okavango Delta are Africa's best known marshland areas. The Bangweulu Floodplains make up Africa's largest swamp.
Asia
The Mesopotamian Marshes is a large swamp and river system in southern Iraq, traditionally inhabited in part by the Marsh Arabs.
In Asia, tropical peat swamps are located in mainland East Asia and Southeast Asia. In Southeast Asia, peatlands are mainly found in low altitude coastal and sub-coastal areas and extend inland for distance more than along river valleys and across watersheds. They are mostly to be found on the coasts of East Sumatra, Kalimantan (Central, East, South and West Kalimantan provinces), West Papua, Papua New Guinea, Brunei, Peninsular Malaya, Sabah, Sarawak, Southeast Thailand, and the Philippines (Riley et al.,1996). Indonesia has the largest area of tropical peatland. Of the total tropical peat swamp, about are located in Indonesia (Page, 2001; Wahyunto, 2006).
The Vasyugan Swamp is a large swamp in the western Siberia area of the Russian Federation. This is one of the largest swamps in the world, covering an area larger than Switzerland.
North America
The Atchafalaya Swamp at the lower end of the Mississippi River is the largest swamp in the United States. It is an important example of the southern cypress swamp but it has been greatly altered by logging, drainage, and levee construction. Other famous swamps in the United States are the forested portions of the Everglades, Okefenokee Swamp, Barley Barber Swamp, Great Cypress Swamp and the Great Dismal Swamp. The Okefenokee is located in extreme southeastern Georgia and extends slightly into northeastern Florida. The Great Cypress Swamp is mostly in Delaware, but extends into Maryland on the Delmarva Peninsula. Point Lookout State Park on the southern tip of Maryland contains many swamps and marshes. The Great Dismal Swamp lies in extreme southeastern Virginia and extreme northeastern North Carolina. Both are National Wildlife Refuges. Another swamp area, Reelfoot Lake of extreme western Tennessee and Kentucky, was created by the 1811–12 New Madrid earthquakes. Caddo Lake, the Great Dismal and Reelfoot are swamps centered at large lakes. Swamps are often associated with bayous in the southeastern United States, especially in the Gulf Coast region. A baygall is a type of swamp found in the forest of the Gulf Coast states in the USA.
List of major swamps
The world's largest wetlands include significant areas of swamp, such as in the Amazon and Congo River basins. Further north, however, the largest wetlands are bogs.
Africa
Bangweulu Swamps, Zambia
Mare aux Songes, Mauritius*
Niger Delta, Nigeria
Okavango Swamp, Botswana
Sudd, South Sudan
Asia
Asmat Swamp, Indonesia
Candaba Swamp in Apalit and Candaba, Pampanga and Pulilan, Bulacan, Philippines
Mangrove Swamp in Karachi, Pakistan
Myristica Swamp in Western Ghats, India
Ratargul Swamp Forest in Sylhet, Bangladesh
Sundarbans in India and Bangladesh
Vasyugan Swamp, Russia
Negombo Swamp, Sri Lanka
Australia
Banksia Swamp, Victoria, Australia
Becher Point Wetlands, Western Australia
Burraga Swamp, New South Wales, Australia
Coomonderry Swamp
Coastal Swamp Oak Forest, Queensland/New South Wales, Australia
Coastal Upland Swamps, New South Wales, Australia
Cumbung Swamp, New South Wales, Australia
Fivebough and Tuckerbil Swamps, New South Wales, Australia
Koo-Wee-Rup Swamp, Victoria, Australia
Noosa Everglades, Queensland, Australia
Toolibin Lake, Western Australia
West Melbourne Swamp, Victoria, Australia
Europe
Pripyat Marshes, Belarus
Šúr, Slovakia
Kopački rit, Croatia
North America
Atchafalaya National Wildlife Refuge, Louisiana, United States
Big Cypress National Preserve, Florida, United States
Barley Barber Swamp, Florida, United States
Cache River, Illinois, United States
Caddo Lake, Texas/Louisiana, United States
Cibuco Swamp, Puerto Rico
Congaree Swamp, South Carolina, United States
Everglades, Florida, United States
First Landing State Park, Virginia, United States
Grand Kankakee Marsh, Indiana, United States
Great Black Swamp, Indiana/Ohio, United States
Great Cypress Swamp, Delaware and Maryland, United States, also known as Great Pocomoke Swamp
Great Dismal Swamp, North Carolina/Virginia, United States
Great Swamp National Wildlife Refuge, New Jersey, United States
Green Swamp, Florida, United States
Green Swamp, North Carolina, United States
Honey Island Swamp, Louisiana, United States
Hudson Bay Lowlands, Ontario, Canada
Limberlost, Indiana, United States
Louisiana swamplands, Louisiana, United States
Mingo National Wildlife Refuge, Puxico, Missouri, United States
Mobile-Tensaw River Delta, Alabama, United States
Okefenokee Swamp, Georgia/Florida, United States
Pantanos de Centla, Tabasco/Campeche, Mexico
Reelfoot Lake, Tennessee/Kentucky, United States
Texas Swamplands, Texas, United States
Tortuguero National Park, Limón, Costa Rica
South America
Caribbean Lowlands, Colombia
Esteros del Iberá, Argentina
Lahuen Ñadi, Chile
Pantanal, Brazil, Bolivia and Paraguay
Paraná Delta, Argentina
| Physical sciences | Wetlands | null |
86367 | https://en.wikipedia.org/wiki/Megabat | Megabat | Megabats constitute the family Pteropodidae of the order Chiroptera. They are also called fruit bats, Old World fruit bats, or—especially the genera Acerodon and Pteropus—flying foxes. They are the only member of the superfamily Pteropodoidea, which is one of two superfamilies in the suborder Yinpterochiroptera. Internal divisions of Pteropodidae have varied since subfamilies were first proposed in 1917. From three subfamilies in the 1917 classification, six are now recognized, along with various tribes. As of 2018, 197 species of megabat had been described.
The leading theory of the evolution of megabats has been determined primarily by genetic data, as the fossil record for this family is the most fragmented of all bats. They likely evolved in Australasia, with the common ancestor of all living pteropodids existing approximately 31 million years ago. Many of their lineages probably originated in Melanesia, then dispersed over time to mainland Asia, the Mediterranean, and Africa. Today, they are found in tropical and subtropical areas of Eurasia, Africa, and Oceania.
The megabat family contains the largest bat species, with individuals of some species weighing up to and having wingspans up to . Not all megabats are large-bodied; nearly a third of all species weigh less than . They can be differentiated from other bats due to their dog-like faces, clawed second digits, and reduced uropatagium. A small number of species have tails. Megabats have several adaptations for flight, including rapid oxygen consumption, the ability to sustain heart rates of more than 700 beats per minute, and large lung volumes.
Most megabats are nocturnal or crepuscular, although a few species are active during the daytime. During the period of inactivity, they roost in trees or caves. Members of some species roost alone, while others form colonies of up to a million individuals. During the period of activity, they use flight to travel to food resources. With few exceptions, they are unable to echolocate, relying instead on keen senses of sight and smell to navigate and locate food. Most species are primarily frugivorous and several are nectarivorous. Other less common food resources include leaves, pollen, twigs, and bark.
They reach sexual maturity slowly and have a low reproductive output. Most species have one offspring at a time after a pregnancy of four to six months. This low reproductive output means that after a population loss their numbers are slow to rebound. A quarter of all species are listed as threatened, mainly due to habitat destruction and overhunting. Megabats are a popular food source in some areas, leading to population declines and extinction. They are also of interest to those involved in public health as they are natural reservoirs of several viruses that can affect humans.
Taxonomy and evolution
Taxonomic history
The family Pteropodidae was first described in 1821 by British zoologist John Edward Gray. He named the family "Pteropidae" (after the genus Pteropus) and placed it within the now-defunct order Fructivorae. Fructivorae contained one other family, the now-defunct Cephalotidae, containing one genus, Cephalotes (now recognized as a synonym of Dobsonia). Gray's spelling was possibly based on a misunderstanding of the suffix of "Pteropus". "Pteropus" comes from Ancient Greek pteron, meaning "wing", and pous, meaning "foot". The Greek word pous of Pteropus is from the stem word pod-; therefore, Latinizing Pteropus correctly results in the prefix "Pteropod-". French biologist Charles Lucien Bonaparte was the first to use the corrected spelling Pteropodidae in 1838.
In 1875, the zoologist George Edward Dobson was the first to split the order Chiroptera (bats) into two suborders: Megachiroptera (sometimes listed as Macrochiroptera) and Microchiroptera, which are commonly abbreviated to megabats and microbats. Dobson selected these names to allude to the body size differences of the two groups, with many fruit-eating bats being larger than insect-eating bats. Pteropodidae was the only family he included within Megachiroptera.
A 2001 study found that the dichotomy of megabats and microbats did not accurately reflect their evolutionary relationships. Instead of Megachiroptera and Microchiroptera, the study's authors proposed the new suborders Yinpterochiroptera and Yangochiroptera. This classification scheme has been verified several times subsequently and remains widely supported as of 2019. Since 2005, this suborder has alternatively been called "Pteropodiformes". Yinpterochiroptera contained species formerly included in Megachiroptera (all of Pteropodidae), as well as several families formerly included in Microchiroptera: Megadermatidae, Rhinolophidae, Nycteridae, Craseonycteridae, and Rhinopomatidae. Two superfamilies comprise Yinpterochiroptera: Rhinolophoidea—containing the above families formerly in Microchiroptera—and Pteropodoidea, which only contains Pteropodidae.
In 1917, Danish mammalogist Knud Andersen divided Pteropodidae into three subfamilies: Macroglossinae, Pteropinae (corrected to Pteropodinae), and Harpyionycterinae. A 1995 study found that Macroglossinae as previously defined, containing the genera Eonycteris, Notopteris, Macroglossus, Syconycteris, Melonycteris, and Megaloglossus, was paraphyletic, meaning that the subfamily did not group all the descendants of a common ancestor. Subsequent publications consider Macroglossini as a tribe within Pteropodinae that contains only Macroglossus and Syconycteris. Eonycteris and Melonycteris are within other tribes in Pteropodinae, Megaloglossus was placed in the tribe Myonycterini of the subfamily Rousettinae, and Notopteris is of uncertain placement.
Other subfamilies and tribes within Pteropodidae have also undergone changes since Andersen's 1917 publication. In 1997, the pteropodids were classified into six subfamilies and nine tribes based on their morphology, or physical characteristics. A 2011 genetic study concluded that some of these subfamilies were paraphyletic and therefore they did not accurately depict the relationships among megabat species. Three of the subfamilies proposed in 1997 based on morphology received support: Cynopterinae, Harpyionycterinae, and Nyctimeninae. The other three clades recovered in this study consisted of Macroglossini, Epomophorinae + Rousettini, and Pteropodini + Melonycteris. A 2016 genetic study focused only on African pteropodids (Harpyionycterinae, Rousettinae, and Epomophorinae) also challenged the 1997 classification. All species formerly included in Epomophorinae were moved to Rousettinae, which was subdivided into additional tribes. The genus Eidolon, formerly in the tribe Rousettini of Rousettinae, was moved to its own subfamily, Eidolinae.
In 1984, an additional pteropodid subfamily, Propottininae, was proposed, representing one extinct species described from a fossil discovered in Africa, Propotto leakeyi. In 2018 the fossils were reexamined and determined to represent a lemur. As of 2018, there were 197 described species of megabat, around a third of which are flying foxes of the genus Pteropus.
Evolutionary history
Fossil record and divergence times
The fossil record for pteropodid bats is the most incomplete of any bat family. Although the poor skeletal record of Chiroptera as a whole probably reflects how fragile bat skeletons are, Pteropodidae still have the most incomplete record despite generally having the largest and sturdiest skeletons. It is also surprising that Pteropodidae are the least represented because they were the first major group to diverge. Several factors could explain why so few pteropodid fossils have been discovered: tropical regions where their fossils might be found are under-sampled relative to Europe and North America; conditions for fossilization are poor in the tropics, which could lead to fewer fossils overall; and even when fossils are formed, they may be destroyed by subsequent geological activity. It is estimated that more than 98% of pteropodid fossil history is missing. Even without fossils, the age and divergence times of the family can still be estimated by using computational phylogenetics. Pteropodidae split from the superfamily Rhinolophoidea (which contains all the other families of the suborder Yinpterochiroptera) approximately 58 Mya (million years ago). The ancestor of the crown group of Pteropodidae, or all living species, lived approximately 31 Mya.
Biogeography
The family Pteropodidae likely originated in Australasia based on biogeographic reconstructions. Other biogeographic analyses have suggested that the Melanesian Islands, including New Guinea, are a plausible candidate for the origin of most megabat subfamilies, with the exception of Cynopterinae; the cynopterines likely originated on the Sunda Shelf based on results of a Weighted Ancestral Area Analysis of six nuclear and mitochondrial genes. From these regions, pteropodids colonized other areas, including continental Asia and Africa. Megabats reached Africa in at least four distinct events. The four proposed events are represented by (1) Scotonycteris, (2) Rousettus, (3) Scotonycterini, and (4) the "endemic Africa clade", which includes Stenonycterini, Plerotini, Myonycterini, and Epomophorini, according to a 2016 study. It is unknown when megabats reached Africa, but several tribes (Scotonycterini, Stenonycterini, Plerotini, Myonycterini, and Epomophorini) were present by the Late Miocene. How megabats reached Africa is also unknown. It has been proposed that they could have arrived via the Middle East before it became more arid at the end of the Miocene. Conversely, they could have reached the continent via the Gomphotherium land bridge, which connected Africa and the Arabian Peninsula to Eurasia. The genus Pteropus (flying foxes), which is not found on mainland Africa, is proposed to have dispersed from Melanesia via island hopping across the Indian Ocean; this is less likely for other megabat genera, which have smaller body sizes and thus have more limited flight capabilities.
Echolocation
Megabats are the only family of bats incapable of laryngeal echolocation. It is unclear whether the common ancestor of all bats was capable of echolocation, and thus echolocation was lost in the megabat lineage, or multiple bat lineages independently evolved the ability to echolocate (the superfamily Rhinolophoidea and the suborder Yangochiroptera). This unknown element of bat evolution has been called a "grand challenge in biology". A 2017 study of bat ontogeny (embryonic development) found evidence that megabat embryos at first have large, developed cochlea similar to echolocating microbats, though at birth they have small cochlea similar to non-echolocating mammals. This evidence supports that laryngeal echolocation evolved once among bats, and was lost in pteropodids, rather than evolving twice independently. Megabats in the genus Rousettus are capable of primitive echolocation through clicking their tongues. Some species—the cave nectar bat (Eonycteris spelaea), lesser short-nosed fruit bat (Cynopterus brachyotis), and the long-tongued fruit bat (Macroglossus sobrinus)—have been shown to create clicks similar to those of echolocating bats using their wings.
Both echolocation and flight are energetically expensive processes. Echolocating bats couple sound production with the mechanisms engaged for flight, allowing them to reduce the additional energy burden of echolocation. Instead of pressurizing a bolus of air for the production of sound, laryngeally echolocating bats likely use the force of the downbeat of their wings to pressurize the air, cutting energetic costs by synchronizing wingbeats and echolocation. The loss of echolocation (or conversely, the lack of its evolution) may be due to the uncoupling of flight and echolocation in megabats. The larger average body size of megabats compared to echolocating bats suggests that a larger body size disrupts the flight-echolocation coupling, making echolocation too energetically expensive to be conserved in megabats.
List of genera
The family Pteropodidae is divided into six subfamilies represented by 46 genera:
Family Pteropodidae
subfamily Cynopterinae
genus Aethalops – pygmy fruit bats
genus Alionycteris
genus Balionycteris
genus Chironax
genus Cynopterus – dog-faced fruit bats or short-nosed fruit bats
genus Dyacopterus – Dayak fruit bats
genus Haplonycteris
genus Latidens
genus Megaerops
genus Otopteropus
genus Penthetor
genus Ptenochirus – musky fruit bats
genus Sphaerias
genus Thoopterus
subfamily Eidolinae
genus Eidolon – straw-colored fruit bats
subfamily Harpyionycterinae
genus Aproteles
genus Boneia
genus Dobsonia – naked-backed fruit bats
genus Harpyionycteris
subfamily Nyctimeninae
genus Nyctimene – tube-nosed fruit bats
genus Paranyctimene
subfamily Pteropodinae
genus Melonycteris
tribe Pteropodini
genus Acerodon
genus Pteralopex
genus Pteropus – flying foxes
genus Styloctenium
subfamily Rousettinae
tribe Eonycterini
genus Eonycteris – dawn fruit bats
tribe Epomophorini
genus Epomophorus – epauletted fruit bats
genus Epomops – epauletted bats
genus Hypsignathus
genus Micropteropus – dwarf epauletted bats
genus Nanonycteris
tribe incertae sedis
genus Pilonycteris
tribe Myonycterini
genus Megaloglossus
genus Myonycteris – little collared fruit bats
tribe Plerotini
genus Plerotes
tribe Rousettini
genus Rousettus – rousette fruit bats
tribe Scotonycterini
genus Casinycteris
genus Scotonycteris
tribe Stenonycterini
genus Stenonycteris
Incertae sedis
genus Notopteris – long-tailed fruit bats
genus Mirimiri
genus Neopteryx
genus Desmalopex
genus Turkanycteris
tribe Macroglossini
genus Macroglossus – long-tongued fruit bats
genus Syconycteris – blossom bats
Description
Appearance
Megabats take their name from their larger weight and size; the largest, the great flying fox (Pteropus neohibernicus), weighs up to ; some members of Acerodon and Pteropus have wingspans reaching up to . Despite the fact that body size was a defining characteristic that Dobson used to separate microbats and megabats, not all species of megabat are larger than microbats; the spotted-winged fruit bat (Balionycteris maculata), a megabat, weighs only . The flying foxes of Pteropus and Acerodon are often taken as exemplars of the whole family in terms of body size. In reality, these genera are outliers, creating a misconception of the true size of most megabat species. A 2004 review stated that 28% of megabat species weigh less than .
Megabats can be distinguished from microbats in appearance by their dog-like faces, by the presence of claws on the second digit (see Megabat#Postcrania), and by their simple ears. The simple appearance of the ear is due in part to the lack of tragi (cartilage flaps projecting in front of the ear canal), which are found in many microbat species. Megabats of the genus Nyctimene appear less dog-like, with shorter faces and tubular nostrils. A 2011 study of 167 megabat species found that while the majority (63%) have fur that is a uniform color, other patterns are seen in this family. These include countershading in four percent of species, a neck band or mantle in five percent of species, stripes in ten percent of species, and spots in nineteen percent of species.
Unlike microbats, megabats have a greatly reduced uropatagium, which is an expanse of flight membrane that runs between the hind limbs. Additionally, the tail is absent or greatly reduced, with the exception of Notopteris species, which have a long tail. Most megabat wings insert laterally (attach to the body directly at the sides). In Dobsonia species, the wings attach nearer the spine, giving them the common name of "bare-backed" or "naked-backed" fruit bats.
Skeleton
Skull and dentition
Megabats have large orbits, which are bordered by well-developed postorbital processes posteriorly. The postorbital processes sometimes join to form the postorbital bar. The snout is simple in appearance and not highly modified, unlike the snouts seen in some other bat families. The length of the snout varies among genera. The premaxilla is well-developed and usually free, meaning that it is not fused with the maxilla; instead, it articulates with the maxilla via ligaments, making it freely movable. The premaxilla always lacks a palatal branch. In species with a longer snout, the skull is usually arched. In genera with shorter faces (Penthetor, Nyctimene, Dobsonia, and Myonycteris), the skull has little to no bending.
Megabat species have relatively small incisors and large canines. The premolars and molars are adapted to crush and pierce fruit, their primary food source.
The most complete dental formula is: I 2/2, C 1/1, P 3/3, M 2/3 × 2 = 34. The dental formula of 34 teeth is a homologous trait for megabats. The total number of teeth varies among megabat species, and can range from 24 to 34. For example, some species of megabats have only 2 molars on either side of the lower jaw instead of 3. Others may lack one or more pairs of incisors on the upper or lower jaw.
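Because the dental formula lists per-side tooth counts for one half of the upper and lower jaws, the stated total of 34 follows from summing the counts and doubling. The snippet below is a generic illustration of that arithmetic rather than code from any cited source.

```python
# Total tooth count from a dental formula given as per-side counts.
# Megabats' most complete formula: I 2/2, C 1/1, P 3/3, M 2/3.

def total_teeth(formula):
    """formula maps tooth class -> (upper per side, lower per side)."""
    per_side = sum(upper + lower for upper, lower in formula.values())
    return 2 * per_side  # doubled for the left and right sides

pteropodid = {"I": (2, 2), "C": (1, 1), "P": (3, 3), "M": (2, 3)}
print(total_teeth(pteropodid))  # 34
```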
All megabats have two to four each of upper and lower incisors, with the exceptions of Bulmer's fruit bat (Aproteles bulmerae), which completely lacks incisors, and the São Tomé collared fruit bat (Myonycteris brachycephala), which has two upper and three lower incisors. This makes the latter the only mammal species with an asymmetrical dental formula.
All species have two upper and lower canine teeth. The number of premolars is variable, with four or six each of upper and lower premolars.
The first upper and lower molars are always present, meaning that all megabats have at least four molars. The remaining molars may be present, present but reduced, or absent. Megabat molars and premolars are simplified, with a reduction in the cusps and ridges resulting in a more flattened crown.
Like most mammals, megabats are diphyodont, meaning that the young have a set of deciduous teeth (milk teeth) that falls out and is replaced by permanent teeth. For most species, there are 20 deciduous teeth. As is typical for mammals, the deciduous set does not include molars.
Postcrania
The scapulae (shoulder blades) of megabats have been described as the most primitive of any chiropteran family. The shoulder is overall of simple construction, but has some specialized features. The primitive insertion of the omohyoid muscle from the clavicle (collarbone) to the scapula is laterally displaced (more towards the side of the body)—a feature also seen in the Phyllostomidae. The shoulder also has a well-developed system of muscular slips (narrow bands of muscle that augment larger muscles) that anchor the tendon of the occipitopollicalis muscle (muscle in bats that runs from base of neck to the base of the thumb) to the skin.
While microbats only have claws on the thumbs of their forelimbs, most megabats have a clawed second digit as well; only Eonycteris, Dobsonia, Notopteris, and Neopteryx lack the second claw. The first digit is the shortest, while the third digit is the longest. The second digit is incapable of flexion. Megabats' thumbs are longer relative to their forelimbs than those of microbats.
Megabats' hindlimbs have the same skeletal components as humans. Most megabat species have an additional structure called the calcar, a cartilage spur arising from the calcaneus. Some authors alternately refer to this structure as the uropatagial spur to differentiate it from microbats' calcars, which are structured differently. The structure exists to stabilize the uropatagium, allowing bats to adjust the camber of the membrane during flight. Megabats lacking the calcar or spur include Notopteris, Syconycteris, and Harpyionycteris. The entire leg is rotated at the hip compared to normal mammal orientation, meaning that the knees face posteriorly. All five digits of the foot flex in the direction of the sagittal plane, with no digit capable of flexing in the opposite direction, as in the feet of perching birds.
Internal systems
Flight is very energetically expensive, requiring several adaptations to the cardiovascular system. During flight, bats can raise their oxygen consumption by twenty times or more for sustained periods; human athletes can achieve an increase of a factor of twenty for a few minutes at most. A 1994 study of the straw-colored fruit bat (Eidolon helvum) and hammer-headed bat (Hypsignathus monstrosus) found a mean respiratory exchange ratio (carbon dioxide produced:oxygen used) of approximately 0.78. Among these species and two others, the gray-headed flying fox (Pteropus poliocephalus) and the Egyptian fruit bat (Rousettus aegyptiacus), maximum heart rates in flight varied between 476 beats per minute (gray-headed flying fox) and 728 beats per minute (Egyptian fruit bat). The maximum number of breaths per minute ranged from 163 (gray-headed flying fox) to 316 (straw-colored fruit bat). Additionally, megabats have exceptionally large lung volumes relative to their sizes. While terrestrial mammals such as shrews have a lung volume of 0.03 cm3 per gram of body weight (0.05 in3 per ounce of body weight), species such as Wahlberg's epauletted fruit bat (Epomophorus wahlbergi) have lung volumes 4.3 times greater, at 0.13 cm3 per gram (0.22 in3 per ounce).
Megabats have rapid digestive systems, with a gut transit time of half an hour or less. The digestive system is structured for a herbivorous diet, sometimes restricted to soft fruit or nectar. The length of the digestive system is short for a herbivore (as well as shorter than those of insectivorous microchiropterans), as the fibrous content is mostly separated by the action of the palate, tongue, and teeth, and then discarded. Many megabats have U-shaped stomachs. There is no distinct difference between the small and large intestine, nor a distinct beginning of the rectum. They have very high densities of intestinal microvilli, creating a large surface area for the absorption of nutrients.
Biology and ecology
Genome size
Like all bats, megabats have much smaller genomes than other mammals. A 2009 study of 43 megabat species found that their genomes ranged from 1.86 picograms (pg; 978 Mbp per pg) in the straw-colored fruit bat to 2.51 pg in Lyle's flying fox (Pteropus lylei). All values were much lower than the mammalian average of 3.5 pg. Megabats have even smaller genomes than microbats, with a mean genome size of 2.20 pg compared to 2.58 pg. It was speculated that this difference could be related to the fact that the megabat lineage has experienced an extinction of LINE1, a type of long interspersed nuclear element. LINE1 constitutes 15–20% of the human genome and is considered the most prevalent long interspersed nuclear element among mammals.
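Using the conversion given above (978 Mbp per pg), the genome sizes quoted in picograms can be re-expressed in base pairs. The short snippet below simply applies that conversion to the figures in this paragraph.

```python
# Convert C-values in picograms (pg) to megabase pairs (Mbp)
# using the 978 Mbp/pg conversion cited above.
MBP_PER_PG = 978

for label, pg in [("straw-colored fruit bat", 1.86),
                  ("Lyle's flying fox", 2.51),
                  ("mammalian average", 3.5)]:
    print(f"{label}: {pg} pg ~ {pg * MBP_PER_PG:.0f} Mbp")
# straw-colored fruit bat: 1.86 pg ~ 1819 Mbp
# Lyle's flying fox: 2.51 pg ~ 2455 Mbp
# mammalian average: 3.5 pg ~ 3423 Mbp
```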
Senses
Sight
With very few exceptions, megabats do not echolocate, and therefore rely on sight and smell to navigate. They have large eyes positioned at the front of their heads. These are larger than those of the common ancestor of all bats, with one study suggesting a trend of increasing eye size among pteropodids. A study that examined the eyes of 18 megabat species determined that the common blossom bat (Syconycteris australis) had the smallest eyes at a diameter of , while the largest eyes were those of the large flying fox (Pteropus vampyrus) at in diameter. Megabat irises are usually brown, but they can be red or orange, as in Desmalopex, Mirimiri, Pteralopex, and some Pteropus.
At high brightness levels, megabat visual acuity is poorer than that of humans; at low brightness it is superior. One study that examined the eyes of some Rousettus, Epomophorus, Eidolon, and Pteropus species determined that the first three genera possess a tapetum lucidum, a reflective structure in the eyes that improves vision at low light levels, while the Pteropus species do not. All species examined had retinae with both rod cells and cone cells, but only the Pteropus species had S-cones, which detect the shortest wavelengths of light; because the spectral tuning of the opsins was not discernible, it is unclear whether the S-cones of Pteropus species detect blue or ultraviolet light. Pteropus bats are dichromatic, possessing two kinds of cone cells. The other three genera, with their lack of S-cones, are monochromatic, unable to see color. All genera had very high densities of rod cells, resulting in high sensitivity to light, which corresponds with their nocturnal activity patterns. In Pteropus and Rousettus, measured rod cell densities were 350,000–800,000 per square millimeter, equal to or exceeding other nocturnal or crepuscular animals such as the house mouse, domestic cat, and domestic rabbit.
Smell
Megabats use smell to find food sources like fruit and nectar. They have a keen sense of smell that rivals that of the domestic dog. Tube-nosed fruit bats such as the eastern tube-nosed bat (Nyctimene robinsoni) have stereo olfaction, meaning they are able to map and follow odor plumes three-dimensionally.
As in most (or perhaps all) other bat species, megabat mothers and offspring use scent to recognize each other, and scent is also used to recognize individuals. In flying foxes, males have enlarged androgen-sensitive sebaceous glands on their shoulders that they use for scent-marking their territories, particularly during the mating season. The secretions of these glands vary by species: of the 65 chemical compounds isolated from the glands of four species, no compound was found in all species. Males also engage in urine washing, or coating themselves in their own urine.
Taste
Megabats possess the TAS1R2 gene, meaning they have the ability to detect sweetness in foods. This gene is present among all bats except vampire bats. Like all other bats, megabats cannot taste umami, due to the absence of the TAS1R1 gene. Among other mammals, only giant pandas have been shown to lack this gene. Megabats also have multiple TAS2R genes, indicating that they can taste bitterness.
Reproduction and life cycle
Megabats, like all bats, are long-lived for mammals of their size. Some captive megabats have had lifespans exceeding thirty years. Relative to their sizes, megabats have low reproductive outputs and delayed sexual maturity, with females of most species not giving birth until the age of one or two. Some megabats appear to be able to breed throughout the year, but the majority of species are likely seasonal breeders. Mating occurs at the roost. Gestation length is variable, but is four to six months in most species. Different species of megabats have reproductive adaptations that lengthen the period between copulation and giving birth. Some species, such as the straw-colored fruit bat, have the reproductive adaptation of delayed implantation, meaning that copulation occurs in June or July, but the zygote does not implant into the uterine wall until months later in November. Fischer's pygmy fruit bat (Haplonycteris fischeri), with the adaptation of post-implantation delay, has the longest gestation length of any bat species, at up to 11.5 months. The post-implantation delay means that development of the embryo is suspended for up to eight months after implantation in the uterine wall, which accounts for its very long pregnancy. Shorter gestation lengths are found in the greater short-nosed fruit bat (Cynopterus sphinx), with a period of three months.
The litter size of all megabats is usually one. There are scarce records of twins in the following species: Madagascan flying fox (Pteropus rufus), Dobson's epauletted fruit bat (Epomops dobsoni), the gray-headed flying fox, the black flying fox (Pteropus alecto), the spectacled flying fox (Pteropus conspicillatus), the greater short-nosed fruit bat, Peters's epauletted fruit bat (Epomophorus crypturus), the hammer-headed bat, the straw-colored fruit bat, the little collared fruit bat (Myonycteris torquata), the Egyptian fruit bat, and Leschenault's rousette (Rousettus leschenaultii). In the cases of twins, it is rare that both offspring survive. Because megabats, like all bats, have low reproductive rates, their populations are slow to recover from declines.
At birth, megabat offspring are, on average, 17.5% of their mother's post-partum weight. This is the smallest offspring-to-mother ratio for any bat family; across all bats, newborns are 22.3% of their mother's post-partum weight. Megabat offspring are not easily categorized into the traditional categories of altricial (helpless at birth) or precocial (capable at birth). Species such as the greater short-nosed fruit bat are born with their eyes open (a sign of precocial offspring), whereas the Egyptian fruit bat offspring's eyes do not open until nine days after birth (a sign of altricial offspring).
As with nearly all bat species, males do not assist females in parental care.
The young stay with their mothers until they are weaned; how long weaning takes varies throughout the family. Megabats, like all bats, have relatively long nursing periods: offspring will nurse until they are approximately 71% of adult body mass, compared to 40% of adult body mass in non-bat mammals. Species in the genus Micropteropus wean their young by seven to eight weeks of age, whereas the Indian flying fox (Pteropus medius) does not wean its young until five months of age. Very unusually, male individuals of two megabat species, the Bismarck masked flying fox (Pteropus capistratus) and the Dayak fruit bat (Dyacopterus spadiceus), have been observed producing milk, but there has never been an observation of a male nursing young. It is unclear if the lactation is functional and males actually nurse pups or if it is a result of stress or malnutrition.
Behavior and social systems
Many megabat species are highly gregarious or social. Megabats will vocalize to communicate with each other, creating noises described as "trill-like bursts of sound", honking, or loud, bleat-like calls in various genera. At least one species, the Egyptian fruit bat, is capable of a kind of vocal learning called vocal production learning, defined as "the ability to modify vocalizations in response to interactions with conspecifics". Young Egyptian fruit bats are capable of acquiring a dialect by listening to their mothers, as well as other individuals in their colonies. It has been postulated that these dialect differences may result in individuals of different colonies communicating at different frequencies, for instance.
Megabat social behavior includes using sexual behaviors for more than just reproduction. Evidence suggests that female Egyptian fruit bats take food from males in exchange for sex. Paternity tests confirmed that the males from which each female scrounged food had a greater likelihood of fathering the scrounging female's offspring.
Homosexual fellatio has been observed in at least one species, the Bonin flying fox (Pteropus pselaphon). This same-sex fellatio is hypothesized to encourage colony formation of otherwise-antagonistic males in colder climates.
Megabats are mostly nocturnal and crepuscular, though some have been observed flying during the day. A few island species and subspecies are diurnal, hypothesized as a response to a lack of predators.
Diurnal taxa include a subspecies of the black-eared flying fox (Pteropus melanotus natalis), the Mauritian flying fox (Pteropus niger), the Caroline flying fox (Pteropus molossinus), a subspecies of Pteropus pelagicus (P. p. insularis), and the Seychelles fruit bat (Pteropus seychellensis).
Roosting
A 1992 summary of forty-one megabat genera noted that twenty-nine are tree-roosting genera. A further eleven genera roost in caves, and the remaining six genera roost in other kinds of sites (human structures, mines, and crevices, for example). Tree-roosting species can be solitary or highly colonial, forming aggregations of up to one million individuals. Cave-roosting species form aggregations ranging from ten individuals up to several thousand. Highly colonial species often exhibit roost fidelity, meaning that their trees or caves may be used as roosts for many years. Solitary species or those that aggregate in smaller numbers have less fidelity to their roosts.
Diet and foraging
Most megabats are primarily frugivorous. Throughout the family, a diverse array of fruit is consumed from nearly 188 plant genera. Some species are also nectarivorous, meaning that they also drink nectar from flowers. In Australia, Eucalyptus flowers are an especially important food source. Other food resources include leaves, shoots, buds, pollen, seed pods, sap, cones, bark, and twigs. They are prodigious eaters and can consume up to 2.5 times their own body weight in fruit per night.
Megabats fly to roosting and foraging resources. They typically fly straight and relatively fast for bats; some species are slower with greater maneuverability. Species can commute in a night. Migratory species of the genera Eidolon, Pteropus, Epomophorus, Rousettus, Myonycteris, and Nanonycteris can migrate distances up to . Most megabats have below-average aspect ratios; the aspect ratio is a measure relating wingspan and wing area. Wing loading, which measures weight relative to wing area, is average or higher than average in megabats.
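For reference, aspect ratio and wing loading are standard quantities in flight morphology: aspect ratio is wingspan squared divided by wing area, and wing loading is weight (body mass times gravity) divided by wing area. The sketch below shows the calculation with made-up values for a hypothetical flying-fox-sized bat; the numbers are assumptions, not measurements from any cited study.

```python
# Standard wing metrics used in bat flight morphology (illustrative values only).

def aspect_ratio(wingspan_m, wing_area_m2):
    """Dimensionless; higher values mean longer, narrower wings."""
    return wingspan_m ** 2 / wing_area_m2

def wing_loading_n_per_m2(mass_kg, wing_area_m2, g=9.81):
    """Weight per unit wing area, in newtons per square metre."""
    return mass_kg * g / wing_area_m2

# Hypothetical flying-fox-sized bat: 1.0 kg, 1.2 m wingspan, 0.2 m^2 wing area.
print(aspect_ratio(1.2, 0.2))            # ~7.2
print(wing_loading_n_per_m2(1.0, 0.2))   # ~49 N/m^2
```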
Seed dispersal
Megabats play an important role in seed dispersal. As a result of their long evolutionary history, some plants have evolved characteristics compatible with bat senses, including fruits that are strongly scented, brightly colored, and prominently exposed away from foliage. The bright colors and positioning of the fruit may reflect megabats' reliance on visual cues and inability to navigate through clutter. In a study that examined the fruits of more than forty fig species, only one fig species was consumed by both birds and megabats; most species are consumed by one or the other. Bird-consumed figs are frequently red or orange, while megabat-consumed figs are often yellow or green. Most seeds are excreted shortly after consumption due to a rapid gut transit time, but some seeds can remain in the gut for more than twelve hours. This heightens megabats' capacity to disperse seeds far from parent trees. As highly mobile frugivores, megabats have the capacity to restore forest between isolated forest fragments by dispersing tree seeds to deforested landscapes. This dispersal ability is limited to plants with small seeds that are less than in length, as seeds larger than this are not ingested.
Predators and parasites
Megabats, especially those living on islands, have few native predators. Non-native predators of flying foxes include domestic cats and rats. The mangrove monitor, which is a native predator for some megabat species but an introduced predator for others, opportunistically preys on megabats, as it is a capable tree climber. Another species, the brown tree snake, can seriously impact megabat populations; as a non-native predator in Guam, the snake consumes so many offspring that it reduced the recruitment of the population of the Mariana fruit bat (Pteropus mariannus) to essentially zero. The island is now considered a sink for the Mariana fruit bat, as its population there relies on bats immigrating from the nearby island of Rota to bolster it rather than successful reproduction. Predators that are naturally sympatric with megabats include reptiles such as crocodilians, snakes, and large lizards, as well as birds like falcons, hawks, and owls. The saltwater crocodile is a known predator of megabats, based on analysis of crocodile stomach contents in northern Australia. During extreme heat events, megabats like the little red flying fox (Pteropus scapulatus) must cool off and rehydrate by drinking from waterways, making them susceptible to opportunistic depredation by freshwater crocodiles.
Megabats are the hosts of several parasite taxa. Known parasites include Nycteribiidae and Streblidae species ("bat flies"), as well as mites of the genus Demodex. Blood parasites of the family Haemoproteidae and intestinal nematodes of Toxocaridae also affect megabat species.
Range and habitat
Megabats are widely distributed in the tropics of the Old World, occurring throughout Africa, Asia, Australia, and throughout the islands of the Indian Ocean and Oceania. As of 2013, fourteen genera of megabat are present in Africa, representing twenty-eight species. Of those twenty-eight species, twenty-four are only found in tropical or subtropical climates. The remaining four species are mostly found in the tropics, but their ranges also encompass temperate climates. In respect to habitat types, eight are exclusively or mostly found in forested habitat; nine are found in both forests and savannas; nine are found exclusively or mostly in savannas; and two are found on islands. Only one African species, the long-haired rousette (Rousettus lanosus), is found mostly in montane ecosystems, but an additional thirteen species' ranges extend into montane habitat.
Outside of Southeast Asia, megabats have relatively low species richness in Asia. The Egyptian fruit bat is the only megabat whose range is mostly in the Palearctic realm; it and the straw-colored fruit bat are the only species found in the Middle East. The northernmost extent of the Egyptian fruit bat's range is the northeastern Mediterranean. In East Asia, megabats are found only in China and Japan. In China, only six species of megabat are considered resident, while another seven are present marginally (at the edge of their ranges), questionably (due to possible misidentification), or as accidental migrants. Four megabat species, all Pteropus, are found in Japan, but none on its five main islands. In South Asia, megabat species richness ranges from two species in the Maldives to thirteen species in India. In Southeast Asia, megabat species richness ranges from as few as five species in the small country of Singapore to seventy-six species in Indonesia. Of the ninety-eight species of megabat found in Asia, forest is a habitat for ninety-five of them. Other habitat types include human-modified land (66 species), caves (23 species), savanna (7 species), shrubland (4 species), rocky areas (3 species), grassland (2 species), and desert (1 species).
In Australia, five genera and eight species of megabat are present. These genera are Pteropus, Syconycteris, Dobsonia, Nyctimene, and Macroglossus. Pteropus species of Australia are found in a variety of habitats, including mangrove-dominated forests, rainforests, and the wet sclerophyll forests of the Australian bush. Australian Pteropus are often found in association with humans, as they situate their large colonies in urban areas, particularly in May and June when the greatest proportions of Pteropus species populations are found in these urban colonies.
In Oceania, the countries of Palau and Tonga have the fewest megabat species, with one each. Papua New Guinea has the greatest number of species with thirty-six. Of the sixty-five species of Oceania, forest is a habitat for fifty-eight. Other habitat types include human-modified land (42 species), caves (9 species), savanna (5 species), shrubland (3 species), and rocky areas (3 species). An estimated nineteen percent of all megabat species are endemic to a single island; of all bat families, only Myzopodidae—containing two species, both single-island endemics—has a higher rate of single-island endemism.
Relationship to humans
Food
Megabats are killed and eaten as bushmeat throughout their range. Bats are consumed extensively throughout Asia, as well as in islands of the West Indian Ocean and the Pacific, where Pteropus species are heavily hunted. In continental Africa where no Pteropus species live, the straw-colored fruit bat, the region's largest megabat, is a preferred hunting target.
In Guam, consumption of the Mariana fruit bat exposes locals to the neurotoxin beta-Methylamino-L-alanine (BMAA) which may later lead to neurodegenerative diseases. BMAA may become particularly biomagnified in humans who consume flying foxes; flying foxes are exposed to BMAA by eating cycad fruits.
As disease reservoirs
Megabats are the reservoirs of several viruses that can affect humans and cause disease. They can carry filoviruses, including the Ebola virus (EBOV) and Marburgvirus. The presence of Marburgvirus, which causes Marburg virus disease, has been confirmed in one species, the Egyptian fruit bat. The disease is rare, but the fatality rate of an outbreak can reach up to 88%. The virus was first recognized after simultaneous outbreaks in the German cities of Marburg and Frankfurt as well as Belgrade, Serbia, in 1967, where 31 people became ill and seven died. The outbreak was traced to laboratory work with vervet monkeys from Uganda. The virus can pass from a bat host to a human (who has usually spent a prolonged period in a mine or cave where Egyptian fruit bats live); from there, it can spread person-to-person through contact with infected bodily fluids, including blood and semen. The United States Centers for Disease Control and Prevention lists a total of 601 confirmed cases of Marburg virus disease from 1967 to 2014, of which 373 people died (62% overall mortality).
Species that have tested positive for the presence of EBOV include Franquet's epauletted fruit bat (Epomops franqueti), the hammer-headed fruit bat, and the little collared fruit bat. Additionally, antibodies against EBOV have been found in the straw-colored fruit bat, Gambian epauletted fruit bat (Epomophorus gambianus), Peters's dwarf epauletted fruit bat (Micropteropus pusillus), Veldkamp's dwarf epauletted fruit bat (Nanonycteris veldkampii), Leschenault's rousette, and the Egyptian fruit bat. Much of how humans contract the Ebola virus is unknown. Scientists hypothesize that humans initially become infected through contact with an infected animal such as a megabat or non-human primate. Megabats are presumed to be a natural reservoir of the Ebola virus, but this has not been firmly established. Microbats are also being investigated as the reservoir of the virus, with the greater long-fingered bat (Miniopterus inflatus) once found to harbor a fifth of the virus's genome (though not testing positive for the actual virus) in 2019. Due to the likely association between Ebola infection and "hunting, butchering and processing meat from infected animals", several West African countries banned bushmeat (including megabats) or issued warnings about it during the 2013–2016 epidemic; many bans have since been lifted.
Other megabats implicated as disease reservoirs are primarily Pteropus species. Notably, flying foxes can transmit Australian bat lyssavirus, which, along with the rabies virus, causes rabies. Australian bat lyssavirus was first identified in 1996; it is very rarely transmitted to humans. Transmission occurs from the bite or scratch of an infected animal but can also occur from getting the infected animal's saliva in a mucous membrane or an open wound. Exposure to flying fox blood, urine, or feces cannot cause infections of Australian bat lyssavirus. Since 1994, there have been three records of people becoming infected with it in Queensland—each case was fatal.
Flying foxes are also reservoirs of henipaviruses such as Hendra virus and Nipah virus. Hendra virus was first identified in 1994; it rarely occurs in humans. From 1994 to 2013, there have been seven reported cases of Hendra virus affecting people, four of which were fatal. The hypothesized primary route of human infection is via contact with horses that have come into contact with flying fox urine. There are no documented instances of direct transmission between flying foxes and humans. As of 2012, there is a vaccine available for horses to decrease the likelihood of infection and transmission.
Nipah virus was first identified in 1998 in Malaysia. Since 1998, there have been several Nipah outbreaks in Malaysia, Singapore, India, and Bangladesh, resulting in over 100 casualties. A 2018 outbreak in Kerala, India, resulted in 19 humans becoming infected—17 died. The overall fatality rate is 40–75%. Humans can contract Nipah virus from direct contact with flying foxes or their fluids, through exposure to an intermediate host such as domestic pigs, or from contact with an infected person. A 2014 study of the Indian flying fox and Nipah virus found that while Nipah virus outbreaks are more likely in areas preferred by flying foxes, "the presence of bats in and of itself is not considered a risk factor for Nipah virus infection." Rather, the consumption of date palm sap is a significant route of transmission. The practice of date palm sap collection involves placing collecting pots at date palm trees. Indian flying foxes have been observed licking the sap as it flows into the pots, as well as defecating and urinating in proximity to the pots. In this way, humans who drink palm wine can be exposed to henipaviruses. The use of bamboo skirts on collecting pots lowers the risk of contamination from bat urine.
Flying foxes can transmit several non-lethal diseases as well, such as Menangle virus and Nelson Bay virus. These viruses rarely affect humans, and few cases have been reported. Megabats are not suspected to be vectors of coronaviruses.
In culture
Megabats, particularly flying foxes, are featured in indigenous cultures and traditions. Folk stories from Australia and Papua New Guinea feature them.
They were also included in Indigenous Australian cave art, as evinced by several surviving examples.
Indigenous societies in Oceania used parts of flying foxes for functional and ceremonial weapons. In the Solomon Islands, people created barbs out of their bones for use in spears. In New Caledonia, ceremonial axes made of jade were decorated with braids of flying fox fur. Flying fox wings were depicted on the war shields of the Asmat people of Indonesia; they believed that the wings offered protection to their warriors.
There are modern and historical references to flying fox byproducts used as currency. In New Caledonia, braided flying fox fur was once used as currency.
On the island of Makira, which is part of the Solomon Islands, indigenous peoples still hunt flying foxes for their teeth as well as for bushmeat.
The canine teeth are strung together on necklaces that are used as currency. Teeth of the insular flying fox (Pteropus tonganus) are particularly prized, as they are usually large enough to drill holes in. The Makira flying fox (Pteropus cognatus) is also hunted, despite its smaller teeth. Deterring people from using flying fox teeth as currency may be detrimental to the species, with Lavery and Fasi noting, "Species that provide an important cultural resource can be highly treasured." Emphasizing sustainable hunting of flying foxes to preserve cultural currency may be more effective than encouraging the abandonment of cultural currency. Even if flying foxes were no longer hunted for their teeth, they would still be killed for bushmeat; therefore, retaining their cultural value may encourage sustainable hunting practices. Lavery stated, "It's a positive, not a negative, that their teeth are so culturally valuable. The practice of hunting bats shouldn't necessarily be stopped, it needs to be managed sustainably."
Conservation
Status
As of 2014, the International Union for Conservation of Nature (IUCN) evaluated a quarter of all megabat species as threatened, which includes species listed as critically endangered, endangered, and vulnerable. Megabats are substantially threatened by humans, as they are hunted for food and medicinal uses.
Additionally, they are culled for actual or perceived damage to agriculture, especially to fruit production. As of 2019, the IUCN had evaluations for 187 megabat species. The status breakdown is as follows:
Extinct: 4 species (2.1%)
Critically endangered: 8 species (4.3%)
Endangered: 16 species (8.6%)
Vulnerable: 37 species (19.8%)
Near-threatened: 13 species (7.0%)
Least-concern: 89 species (47.6%)
Data deficient: 20 species (10.7%)
Factors causing decline
Anthropogenic sources
Megabats are threatened by habitat destruction by humans. Deforestation of their habitats has resulted in the loss of critical roosting habitat. Deforestation also results in the loss of food resources, as native fruit-bearing trees are felled. Habitat loss and the resulting urbanization lead to the construction of new roadways, making megabat colonies easier to access for overharvesting. Additionally, habitat loss via deforestation compounds natural threats, as fragmented forests are more susceptible to damage from typhoon-force winds. Cave-roosting megabats are threatened by human disturbance at their roost sites. Guano mining is a livelihood in some countries within their range, bringing people to caves. Caves are also disturbed by mineral mining and cave tourism.
Megabats are also killed by humans, intentionally and unintentionally. Half of all megabat species are hunted for food, in comparison to only eight percent of insectivorous species, while human persecution stemming from perceived damage to crops is also a large source of mortality. Some megabats have been documented to have a preference for native fruit trees over fruit crops, but deforestation can reduce their food supply, causing them to rely on fruit crops. They are shot, beaten to death, or poisoned to reduce their populations. Mortality also occurs via accidental entanglement in netting used to prevent the bats from eating fruit. Culling campaigns can dramatically reduce megabat populations. In Mauritius, over 40,000 Mauritian flying foxes were culled between 2014 and 2016, reducing the species' population by an estimated 45%. Megabats are also killed by electrocution. In one Australian orchard, it is estimated that over 21,000 bats were electrocuted to death in an eight-week period. Farmers construct electrified grids over their fruit trees to kill megabats before they can consume their crop. The grids are questionably effective at preventing crop loss, with one farmer who operated such a grid estimating they still lost of fruit to flying foxes in a year. Some electrocution deaths are also accidental, such as when bats fly into overhead power lines.
Climate change causes flying fox mortality and is a source of concern for species persistence. Extreme heat waves in Australia have been responsible for the deaths of more than 30,000 flying foxes from 1994 to 2008. Females and young bats are most susceptible to extreme heat, which affects a population's ability to recover. Megabats are threatened by sea level rise associated with climate change, as several species are endemic to low-lying atolls.
Natural sources
Because many species are endemic to a single island, they are vulnerable to random events such as typhoons. A 1979 typhoon halved the remaining population of the Rodrigues flying fox (Pteropus rodricensis). Typhoons result in indirect mortality as well: because typhoons defoliate the trees, they make megabats more visible and thus more easily hunted by humans. Food resources for the bats become scarce after major storms, and megabats resort to riskier foraging strategies such as consuming fallen fruit off the ground. There, they are more vulnerable to depredation by domestic cats, dogs, and pigs. As many megabat species are located in the tectonically active Ring of Fire, they are also threatened by volcanic eruptions. Flying foxes, including the endangered Mariana fruit bat, have been nearly exterminated from the island of Anatahan following a series of eruptions beginning in 2003.
| Biology and health sciences | Bats | null |
86387 | https://en.wikipedia.org/wiki/Vespertilionidae | Vespertilionidae | Vespertilionidae is a family of microbats, of the order Chiroptera, flying, insect-eating mammals variously described as the common, vesper, or simple nosed bats. The vespertilionid family is the most diverse and widely distributed of bat families, specialised in many forms to occupy a range of habitats and ecological circumstances, and it is frequently observed or the subject of research. The facial features of the species are often simple, as they mainly rely on vocally emitted echolocation. The tails of the species are enclosed by the lower flight membranes between the legs. Over 300 species are distributed all over the world, on every continent except Antarctica. It owes its name to the genus Vespertilio, which takes its name from a word for bat, , derived from the Latin term meaning 'evening'; they are termed "evening bats" and were once referred to as "evening birds". (The term "evening bat" also often refers more specifically to one of the species, Nycticeius humeralis.)
Evolution
They have traditionally been allied with the suborder Microchiroptera, the families of microbats separated from the flying foxes and fruit bats of the megabat group Megachiroptera. Treatments of bat taxonomy have also placed the family within the Vespertilioniformes, or Yangochiroptera, as part of the superfamily Vespertilionoidea.
Molecular data indicate the Vespertilionidae diverged from the Molossidae in the early Eocene period. The family is thought to have originated somewhere in Laurasia, possibly North America. A recently extinct species, Synemporion keana, is known from the Holocene of Hawaii.
Characteristics
All species are carnivorous, and most are insectivores; exceptions are bats of the genera Myotis and Pizonyx, which catch fish, and the larger Nyctalus species, which are known to capture small passerine birds in flight. The dentition of the family varies between species; the dental formula of the family is:
They rely mainly on echolocation to navigate and obtain food, but they lack the elaborate nose appendages found in other microbat families that focus nasally emitted ultrasound. Instead, the ultrasound signal is usually produced orally, and many species have large external ears to capture and reflect sound, enabling them to discriminate and extract information.
The vespertilionids employ a range of flight techniques. The wing surface is extended to the lower limbs, and the tails of this family are enclosed in an interfemoral membrane. Some are relatively slow-flying genera, such as Pipistrellus, that manipulate the configuration of their broader wing shape and may give a fluttery appearance as they forage and glean. Others are specialised as long-winged genera, such as Lasiurus and Nyctalus, that use rapid pursuit to capture insects. The size range of the family is in head and body length; this excludes the tail, which is itself quite long in many species. They are generally brown or grey in colour, often with the nondescript appearance of a 'little brown bat', although some species have fur that is brightly coloured, with reds, oranges, and yellows all being known. The patterns of the superficial appearance include white patches or stripes that may distinguish some species.
Most species roost in caves, although some make use of hollow trees, rocky crevices, animal burrows, or other forms of shelter. Colony sizes also vary greatly, with some roosting alone, and others in groups up to a million individuals. Species native to temperate latitudes typically hibernate to avoid cooler weather, while a few of the tropical species employ aestivation as a method of evading extremes of climate.
Systematics
The four subfamilies of Vespertilionidae group the presumably related tribes and genera of extant and extinct taxa.
The subfamilial treatments, based on morphological, geographical, and ecological comparisons, have been recombined since the incorporation of phylogenetic evidence from molecular genetics; only the Murininae and Kerivoulinae have remained unchanged in light of genetic analysis.
Subfamilies that were once recognized as valid, such as the Nyctophilinae, are considered dubious, as molecular evidence suggests they are paraphyletic in their arrangements.
Within the concept Yangochiroptera, an acknowledged cladistic treatment, the closest relatives to the family are the free-tailed bats of family Molossidae.
The monotypic genus Tomopeas, represented by the blunt-eared bat (Tomopeas ravum), is acknowledged as the potentially closest link between the Vespertilionidae and Molossidae, as it is the most basal member of the Molossidae and has intermediate characteristics of both families.
Classification
The grouping of these subfamilies follows the classification published by the American Society of Mammalogists. Other authorities recognize up to three additional subfamilies: Antrozoinae (treated here as the separate family of pallid bats), Tomopeatinae (now regarded as a subfamily of the free-tailed bats), and Nyctophilinae (here included in Vespertilioninae).
Four subfamilies are recognized by Mammal Species of the World (2005); the highly diverse Vespertilioninae is further divided into tribes. Newer or resurrected genera are noted. The genus Cistugo is no longer included following its move to the separate family Cistugidae. Miniopterinae is likewise no longer recognized as a subfamily, as it was elevated to family status.
A 2021 study attempted to resolve the systematic relationships among the pipistrelle-like bats of sub-Saharan Africa and Madagascar, with systematic inferences based on genetic and morphological analyses of more than 400 individuals across all named genera and the majority of described African pipistrelle-like bat species, with a focus on previously unstudied samples of East African bats. The study proposed a revision of the pipistrelle-like bats in East Africa and described multiple new genera and species.
Family Vespertilionidae
subfamily Kerivoulinae
genus Kerivoula – painted bats
genus Phoniscus
subfamily Myotinae
genus Eudiscopus
genus Myotis – mouse-eared bats
genus Submyotodon – broad-muzzled bats
subfamily Murininae
genus Harpiocephalus – hairy-winged bats
genus Harpiola
genus Murina – tube-nosed insectivorous bats
subfamily Vespertilioninae
tribe Antrozoini
genus Antrozous
genus Bauerus
genus Rhogeessa
tribe Eptesicini
genus Arielulus
genus Eptesicus – house bats
genus Glauconycteris – butterfly bats
genus Hesperoptenus – false serotine bats
genus Histiotus – big-eared brown bats
genus Ia
genus Lasionycteris
genus Scoteanax – greater broad-nosed bats
genus Scotomanes
genus Scotorepens – lesser broad-nosed bats
genus Thainycteris
tribe incertae sedis
genus Rhyneptesicus
tribe Lasiurini
genus Aeorestes – hoary bats
genus Dasypterus – yellow bats
genus Lasiurus – hairy-tailed bats
tribe Nycticeiini
genus Nycticeius – evening bats
tribe Perimyotini
genus Parastrellus
genus Perimyotis
tribe Pipistrellini
genus Glischropus – thick-thumbed bats
genus Nyctalus – noctule bats
genus Pipistrellus – true pipistrelles
genus Scotoecus – house bats
genus Scotozous
genus Vansonia
tribe Plecotini
genus Barbastella – barbastelles or barbastelle bats
genus Corynorhinus – American lump-nosed bats
genus Euderma
genus Idionycteris
genus Otonycteris
genus Plecotus – lump-nosed bats
tribe Scotophilini
genus Scotophilus – Old World yellow bats
tribe Vespertilionini
genus Afronycteris
genus Cassistrellus – helmeted bats
genus Chalinolobus – wattled bats
genus Falsistrellus – false pipistrelles
genus Hypsugo – Asian pipistrelles
genus Laephotis – long-eared bats
genus Mimetillus – mimic bats
genus Mirostrellus
genus Neoromicia
genus Nycticeinops
genus Nyctophilus – New Guinean and Australian big-eared bats
genus Pharotis
genus Philetor
genus Pseudoromicia
genus Tylonycteris – bamboo bats
genus Vespadelus
genus Vespertilio – frosted bats
| Biology and health sciences | Bats | Animals |
86681 | https://en.wikipedia.org/wiki/Tandem%20bicycle | Tandem bicycle | A tandem bicycle or twin is a bicycle (occasionally a tricycle) designed to be ridden by more than one person. The term tandem refers to the seating arrangement (fore to aft, not side by side), not the number of riders. Patents related to tandem bicycles date from the mid-1880s. Tandems can reach higher speeds than the same riders on single bicycles, and tandem bicycle racing exists. As with bicycles for single riders, there are many variations that have been developed over the years.
Terminology
The term tandem refers to the seating arrangement (fore to aft, not side by side), not the number of riders. A bike with two riders side by side is called a sociable.
Tandem bicycles are sometimes called "Daisy Bells". This is in reference to "Daisy Bell (Bicycle Built for Two)" which is a popular song, written in 1892 by British songwriter Harry Dacre, with the well-known chorus, "Daisy, Daisy / Give me your answer, do. / I'm half crazy / all for the love of you", ending with the words, "a bicycle built for two".
On conventional tandems, the front rider steers as well as pedals the bicycle and is known as the captain, pilot, or steersman; the rear rider only pedals and is known as the stoker, navigator or rear admiral. On most tandems the two sets of cranks are mechanically linked by a timing chain and turn at the same rate.
History
Patents related to tandem bicycles date from the mid-1880s. In approximately 1898, Mikael Pedersen developed a two-rider tandem version of his Pedersen bicycle that weighed 24 pounds, and a four-rider, or "quad", that weighed 64 pounds. Tandems were also used in the Second Anglo-Boer War. Tandem popularity began to decline after World War II until a revival started in the late 1960s. In the UK, the Tandem Club was founded in 1971; new tandems came onto the market from the French companies Lejeune and Gitane, and in the USA Bill McCready founded Santana Cycles in 1976. Modern technology has improved component and frame designs, and many tandems are as well-built as modern high-end road and off-road bikes.
In popular culture
A song written in 1892 has a man asking "Daisy Bell" to marry him, saying, "It won't be a stylish marriage. I can't afford a carriage. But you'll look sweet upon the seat of a bicycle built for two!"
In their Columbia Pictures comedy shorts, The Three Stooges occasionally ride a tandem bicycle as a recurring gag.
In the BBC and LWT TV series The Goodies, the trio are well known for riding a tandem bicycle as their main means of transport while travelling around the English countryside and London.
Performance
Compared to a conventional bicycle, a tandem has double the pedalling power, without necessarily doubling the speed, and with only slightly more frictional loss in the drivetrain. It has about the same wind resistance as a conventional bicycle. High-performance tandems may weigh less than twice as much as a single bike, so the power-to-weight ratio may be slightly better than that of a single bike and rider. On flat terrain and downhill, most of the power produced by cyclists is used to overcome wind resistance, so tandems can reach higher speeds than the same riders on single bicycles. They are not necessarily slower on climbs, but are perceived as such, in part due to the need for a high level of coordination between the riders, especially if the physical abilities of the two riders are very different, requiring a compromise on cadence.
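A rough way to quantify the flat-ground advantage is to assume the riders' power is spent almost entirely against aerodynamic drag, for which power grows roughly with the cube of speed. Under that assumption, doubling the power while keeping about the same drag area yields roughly a 26% speed increase, which is why a tandem is faster but not twice as fast. The figures in the sketch below (rider power, drag area) are illustrative assumptions, not measured values.

```python
# Rough estimate of flat-ground speed gain for a tandem vs. a solo bike,
# assuming power is spent almost entirely against aerodynamic drag
# (P ~ v^3) and that the tandem's drag area is about the same as a solo's.

def drag_limited_speed(power_w, cda_m2, air_density=1.225):
    """Speed (m/s) at which aerodynamic drag power equals rider power."""
    return (2 * power_w / (air_density * cda_m2)) ** (1 / 3)

cda = 0.35                               # assumed effective drag area, m^2
solo   = drag_limited_speed(250, cda)    # one rider at an assumed 250 W
tandem = drag_limited_speed(500, cda)    # two riders, ~same drag area
print(f"solo ~{solo*3.6:.1f} km/h, tandem ~{tandem*3.6:.1f} km/h")  # ~37.9 vs ~47.7
print(f"speed gain ~{(tandem/solo - 1)*100:.0f}%")                  # ~26%
```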
The tandem velomobile bicycle record was set at 83 km/h in 2013.
Uses
Tandem bicycles are used in competitions such as the Paralympics with blind and visually impaired cyclists riding as stokers with fully sighted captains.
Cycling at the Summer Olympics featured a men's tandem event in 1908 and from 1920 to 1972.
Tandems may also be used for bicycle touring and may provide a solution to the problem of riders with different abilities that wish to tour together. Each rider may exert themselves as they wish and all riders travel at the same speed.
The UK has specialist time trial events with national competition records over 10, 25, 50 and 100-mile distances against the clock, as well as 12- and 24-hour time trials run by the CTT and the VTTA. There are also place-to-place records administered by the RRA.
Variations
More than two riders
Tandems can have more than two riders – tandem refers to the arrangement of the riders one behind the other rather than the number of riders. Bicycles for three, four, or five riders are referred to as "triples" or "triplets", "quads" or "quadruplets", and "quints" or "quintuplets" respectively. One example familiar to UK TV viewers was the "trandem" ridden by The Goodies; it was originally a two-rider tandem with an extra "dummy" seat attached, and a full three-rider version was later built for them by Raleigh.
A marching band in Bruges, Belgium, uses a six-place tandem bicycle fitted to carry certain of their instruments in a way that allows them to play music while underway. In the '80s or '90s, an eight-seat tandem bicycle was built and demonstrated in Philadelphia.
Independent pedaling
Some designs such as the DaVinci can allow independent pedaling through the use of multiple freewheels. In another design, the rear rider steers and propels the rear wheel with pedals, and the front rider propels the front wheel with both hands and feet.
Seating arrangements
Tandems come with both upright and recumbent seating.
The Bilenky Viewpoint and the Hase Pino are hybrid upright/recumbent tandems steered by the captain who sits upright in the rear, while the stoker rides in a recumbent position in the front. Both also feature independent stoker pedaling.
The "Buddy Bike" is designed to allow a child to sit on the front saddle with an adult on the rear saddle and steering with extra-long handlebars.
Double steering
Both riders, in the case of just two, may be able to steer. The Star Cycle Company, of Wolverhampton, England, marketed its "Combination Roadster tandem" in 1896. It had a link from the second set of handlebars to the front fork. Others include the 1897 Geneva, and the 1898 Stearns.
Tricycles
Tandems are also available as tricycles; the conventional tandem trike has a small but devoted following in the United Kingdom and is available in one-wheel and two-wheel drive designs. Recumbent tandem tricycles are also gaining popularity throughout the world.
Short wheelbase
There are short wheelbase models, with the rear rider sitting over the rear wheel, either just in front of or even behind the rear axle.
Folding
Several manufacturers offer folding tandems, either with small wheels or not, to facilitate packing and travelling.
Couplers
It is possible to add couplers either during manufacturing or as a retrofit so that the frame can be disassembled into smaller pieces to facilitate packing and travel. Santana manufactures a "triplet" (or quad) that can be transformed into a tandem by simply removing the center section of the frame.
Tandem specific components
Tandems are subjected to unique stresses caused by additional riders and weight requiring solutions specific to tandem construction. The phrase "Tandem Specific" was popularized by its use in Santana tandem catalogs during the 1990s.
Drive train
To transfer power from all pedals to the rear wheel requires a drive train. Typically, the forward crankset is connected by a left-side timing chain to the rear crankset, which in turn is connected by a right-side chain to the rear wheel. This configuration is called crossover rear drive, and requires both of the rear cranks to have chainrings. To work reliably, both of the left-side cranks must be tandem- or left-drive specific to accept the left-hand threading used on left pedals.
The second most popular solution, due to not requiring tandem-specific cranks, is called single side rear drive. The forward crankset is connected by a right-side timing chain to the rear crankset, which in turn is connected by a right-side chain to the rear wheel. This requires that one of the rear chainrings be devoted to the timing chain and limits shifting options.
The least popular solution is to run a drive chain from the forward crankset all the way to the rear wheel, and also run a timing chain from the front crankset to the rear crankset. This is less popular because it requires considerably more chain than the first two arrangements. Such a setup is called a crossover front drive.
A rare solution to the requirement of coordinated pedaling is the use of a jackshaft plus two freehubs, thus allowing one rider to coast while the other continues to pedal. This also allows riders to select different crank positions, such as in-phase (IP) or out-of-phase (OOP), while pedaling together. DaVinci Tandems use a unique "Independent Drive" whereby the intermediate shaft transfers the power from the stoker and captain cranks to a converter that allows up to four chainrings. This variant also allows stoker and captain cranks to freewheel (coast) independently.
Crankset
The front crankset typically has only one chainring. The rear crankset typically has many chainrings, sometimes on both sides. On a tandem where the pedaling is designed to be in sync, both cranksets use timing-chain chainrings of the same size. The drive-chain chainrings can be a single gear or shifted with a derailleur.
To maintain the necessary tension on the timing chain, many tandems use an eccentric that is placed in the front rider's bottom bracket shell. An alternate solution is to implement a pulley, or idler, on the bottom of the timing chain to take up slack. Idlers add friction and a potential point of failure to the drive train.
Fork
Tandems have very different weight distribution and loads on wheels, brakes, and forks. A tandem-specific fork is designed to handle this. Custom tandem makers such as Co-Motion make specific forks for tandem, triple, and quad bikes. Brake forces can be substantial. On any bicycle, the front brake (and thus the fork) is critical to safe and efficient braking. On tandems with disc brakes, it is important that heat transfer and dissipation have been engineered properly. In particular, carbon tandem forks can provide all the benefits of comfort and control, but must also be designed to handle the increased heat load.
Handlebars and stem
Stoker handlebars are typically connected to a stoker stem that is clamped around the captain's seatpost. The stoker handlebars are typically bullhorns or drop bars with "dummy levers" instead of brake levers for gripping.
Wheels
Because of the extra weight and stresses, tandem wheels may use a higher spoke count, sturdier rims, higher pressure tires, a stronger freewheel, dishless spoke configuration, or asymmetric wheels. Tandems wear out rear wheels faster than front wheels; therefore, they may use non-symmetrical wheel setups, such as more spokes or a sturdier rim on the rear wheel.
The dish of a wheel measures the amount of asymmetry between the rim and the hub flanges. To accommodate a large cassette, more space is needed on the drive side of the axle; this increases the complexity of manufacturing and truing the wheel. Tandem rear wheels tend to run a wider hub/axle to allow the right-side hub flange to be further right of wheel center and thus reduce the total dish of the wheel. Some modern tandems use a 160-mm-wide axle that allows a wheel that is completely "dishless" (i.e. symmetric). The disadvantage is that this may increase the Q-factor of the stoker's cranks and may also cause "heel-strike" of the stoker's shoes on the chain stays. Others use shorter axles (often 145 mm wide), thereby trading a small decrease in the strength of the wheel for a similar decrease in the bending moment of the axle spindle. Rear hubs may also be threaded on the left side to allow the use of a drum brake.
Specialty wheel makers such as Aerospoke, or Shimano with its "Sweet-16", may build "tandem certified" racing wheelsets. The Aerospoke tandem wheelset is more heavily built than the company's road wheelset and uses special tandem hubs that can be removed, which facilitates stacking the rims flat in a travel case.
Brakes
A tandem bicycle has about twice the kinetic energy of a single bicycle traveling at the same forward speed. This may be more than the brakes fitted to a single bicycle, especially rim brakes, can handle. Two alternatives have been employed to solve this problem: drum brakes and disc brakes.
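The factor of two follows directly from the kinetic-energy relation E = ½mv²: a tandem carries roughly twice the moving mass of a solo machine at the same speed. The short Python sketch below illustrates this; the rider and bicycle masses are illustrative assumptions, not figures from this article.

    # Illustrative sketch only: the masses below are assumed values, not sourced figures.
    def kinetic_energy(mass_kg, speed_ms):
        """Kinetic energy in joules: E = 0.5 * m * v**2."""
        return 0.5 * mass_kg * speed_ms ** 2

    speed = 10.0                                   # m/s (36 km/h), the same for both bicycles
    solo = kinetic_energy(75 + 10, speed)          # one 75 kg rider + a 10 kg bicycle
    tandem = kinetic_energy(2 * 75 + 20, speed)    # two riders + a heavier tandem frame

    print(round(tandem / solo, 2))                 # -> 2.0, about twice the energy to dissipate

The ratio is insensitive to the speed chosen, since the v² term cancels; it depends only on the ratio of the total masses.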
The Arai drum brake is used during long downhill descents where a rim brake might overheat the tire and possibly cause it to fail. The drum of the brake screws onto the left side of the tandem hub, which must be threaded for the drum. The shoe plate slips over the axle and a small reaction arm from the shoe plate engages with the bicycle frame to prevent the plate from turning. The drum brake is typically controlled by a friction shift lever like a BARCON or similar. The brake is designed to be engaged continuously during a descent to maintain a steady speed. The standard brakes can be used in addition as necessary.
Some modern tandems use disc brakes, with all the advantages and disadvantages that entails.
Riding techniques
The rear rider starts clipped in while the front rider holds the tandem upright. For teams that become accustomed to the rear rider always being clipped in, the advantage of this technique is most obvious when starting at the foot of a bridge or on a hill; teams that do not practice it often reserve this type of start for such situations. The technique allows the rear rider to apply continuous power while the front rider steadies the tandem during the initial take-off, reducing the risk of the tandem toppling over when starting on an incline. The rear rider continues to pedal while the front rider clips in the foot that was used to steady the tandem.
Crank phase
Riders may choose to synchronise their pedalling through in-phase (IP) or out-of-phase (OOP) pedalling. In in-phase pedalling, each rider's cranks are at the same or directly opposite clock positions at any point in time. In out-of-phase pedalling, the riders' cranks are in differing, non-opposite positions, which allows a wide range of variation. Some tandem riders arrange their cranks 90° out of phase to produce what is called the "4 banger" arrangement. In practice, OOP setups range from a mere two-tooth phase difference between cranks to a full 90° phase difference. Generally, OOP provides the greatest benefit to a tandem team with disparate leg strength: when the tandem is pedalled IP, it is possible, and often happens, that the stronger rider literally drops the pedals out from beneath the feet of the weaker rider, leaving the latter unable to contribute meaningfully. Using OOP makes a significant difference in gearing choice, as each rider carries the full mass of the tandem in their power stroke, so lower gears are preferred; for the same reason, riding OOP can help develop leg strength. Some argue that OOP produces a smoother power stroke, or that it reduces stress on the drive train because the peak power applied at any instant is roughly halved and distributed over the chainrings.
Manufacturers
Since the market for tandem bicycles is significantly smaller than the market for single bikes, there are far fewer tandem bicycle manufacturers than single-bicycle manufacturers. There are a few builders who specialize in tandems, as well as single-bike makers who offer tandem models. Current notable tandem bicycle manufacturers include:
Bike Friday
Bilenky Cycle Works
Bohemian Bicycles
Burley Design (now out of production)
Calfee Design
Cannondale Bicycle Corporation
Co-Motion Cycles
Cyfac International
da Vinci Designs
Dawes Cycles
Gazelle
Hase Spezialräder (Hase Bikes)
KHS Bicycles
MTBT Tandems (Fandango)
Órbita bicycles
Ridebo
Santana Cycles
Schwinn Bicycle Company
Torker
Trek Bicycles
Ventana Mountain Bikes USA
| Technology | Human-powered transport | null |
86777 | https://en.wikipedia.org/wiki/Northern%20pike | Northern pike | The northern pike (Esox lucius) is a species of carnivorous fish of the genus Esox (pikes). They are commonly found in moderately salty and fresh waters of the Northern Hemisphere (i.e. holarctic in distribution). They are known simply as a pike (plural: pike) in Great Britain, Ireland, most of Eastern Europe, Canada and the U.S., although in the Midwest, they may be called a Northern.
Pike can grow to a relatively large size. Their average length is about , with maximum recorded lengths of up to and maximum weights of . The IGFA currently recognises a pike caught by Lothar Louis on Greffern Lake, Germany, on 16 October 1986, as the all-tackle world-record holding northern pike. Northern pike grow to larger sizes in Eurasia than in North America, and in coastal Eurasian regions than inland ones.
Etymology
The northern pike gets its common name from its resemblance to the pole-weapon known as the pike (from the Middle English for 'pointed'). Various other unofficial trivial names are common pike, Lakes pike, great northern pike, great northern, northern (in the U.S. Upper Midwest and in the Canadian provinces of Alberta, Manitoba and Saskatchewan), jackfish, jack, slough shark, snake, slimer, slough snake, gator (due to a head similar in shape to that of an alligator), hammer handle, and other such names as "long head" or "pointy nose". Numerous other names can be found in Field Museum Zool. Leaflet Number 9. Its earlier common name, the luci (now lucy) or luce when fully grown, was used to form its taxonomic name (Esox lucius) and is used in heraldry.
Description
Northern pike are most often olive green, shading from yellow to white along the belly. The flank is marked with short, light bar-like spots and a few to many dark spots on the fins. Sometimes, the fins are reddish. Younger pike have yellow stripes along a green body; later, the stripes divide into light spots and the body turns from green to olive green. The lower half of the gill cover lacks scales, and it has large sensory pores on its head and on the underside of its lower jaw which are part of the lateral line system. Unlike the similar-looking and closely related muskellunge, the northern pike has light markings on a dark body background and fewer than six sensory pores on the underside of each side of the lower jaw.
A hybrid between northern pike and muskellunge is known as a tiger muskellunge (Esox masquinongy × lucius or Esox lucius × masquinongy, depending on the sex of each of the contributing species). In the hybrids, the males are invariably sterile, while females are often fertile, and may back-cross with the parent species. Another form of northern pike, the silver pike, is not a subspecies but rather a mutation that occurs in scattered populations. Silver pike, sometimes called silver muskellunge, lack the rows of spots and appear silver, white, or silvery-blue in color. When ill, silver pike have been known to display a somewhat purplish hue; long illness is also the most common cause of male sterility.
In Italy, the newly identified species Esox cisalpinus ("southern pike") was long thought to be a color variation of the northern pike, but was in 2011 announced to be a species of its own.
Length and weight
Northern pike in North America seldom reach the size of their European counterparts; one of the largest specimens on record was caught in Great Sacandaga Lake, New York, on 15 September 1940 by Peter Dubuc. Reports of far larger pike have been made, but these are either misidentifications of the pike's larger relative, the muskellunge, or simply have not been properly documented and belong in the realm of legend.
As northern pike grow longer, they increase in weight, and the relationship between length and weight is not linear. The relationship between total length (L, in inches) and total weight (W, in pounds) for nearly all species of fish can be expressed by an equation of the form W = cL^b.
Invariably, b is close to 3.0 for all species, and c is a constant that varies among species. For northern pike, b = 3.096 and c = 0.000180 (c = 7.089 enables one to put length in meters and weight in kilograms). The relationship described in this section suggests a northern pike will weigh about , while a northern pike will weigh about .
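To make the relationship concrete, the minimal Python sketch below (an illustration, not material from the article) evaluates W = cL^b with the constants quoted above, using the stated unit convention for each value of c. The example lengths (40 inches and 1.0 metre) are chosen only for illustration and are not the specific lengths referred to in the text above.

    # Illustrative sketch of the length-weight relationship W = c * L**b.
    def pike_weight(length, c, b=3.096):
        """Estimate northern pike weight from length using W = c * L**b."""
        return c * length ** b

    # With c = 0.000180: length in inches, weight in pounds.
    print(round(pike_weight(40, c=0.000180), 1))   # a 40 in pike -> ~16.4 lb
    # With c = 7.089: length in metres, weight in kilograms.
    print(round(pike_weight(1.0, c=7.089), 1))     # a 1.0 m pike -> ~7.1 kg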
Age
Northern pike typically live 10–15 years, but sometimes up to 25 years.
Habitat
Pike are found in sluggish streams and shallow, weedy places in lakes and reservoirs, as well as in cold, clear, rocky waters. They are typical ambush predators; they lie in wait for prey, holding perfectly still for long periods, and then exhibit remarkable acceleration as they strike. They inhabit any water body that contains fish, but suitable places for spawning are also essential. Because of their cannibalistic nature, young pike need places where they can take shelter between plants so they are not eaten. In both cases, rich submerged vegetation is needed. Pike are seldom found in brackish water, except for the Baltic Sea area, where they spend time both in the mouths of rivers and in the open brackish waters of the Baltic Sea. It is normal for pike to return to fresh water after a period in these brackish waters. They seem to prefer water with less turbidity, but that is likely related to their dependence on the presence of vegetation.
Geographic distribution
Esox lucius is found in fresh water throughout the Northern Hemisphere, including Russia, Europe, and North America. It has also been introduced to lakes in Morocco, and is even found in brackish water of the Baltic Sea, but they are confined to the low-salinity water at the surface of the sea, and are seldom seen in brackish water elsewhere.
Within North America, northern pike populations are found in Maine, New Hampshire, Vermont, Massachusetts, Rhode Island, Connecticut, New York, New Jersey, Pennsylvania, Maryland, West Virginia, Ohio, Michigan, Indiana, Illinois, Wisconsin, Minnesota, Iowa, Missouri, North Dakota, South Dakota, Nebraska, Kansas, Montana, Idaho, Utah, Colorado, Oklahoma, northern Texas, northern New Mexico, northern Arizona, Alaska, the Yukon, the Northwest Territories, Alberta, Saskatchewan, Manitoba, Ontario, and Québec (pike are rare in British Columbia and east coast provinces). Watersheds in which pike are found include the Ohio Valley, the upper Mississippi River and its tributaries, and the Great Lakes Basin. They are also stocked in, or have been introduced to, some western lakes and reservoirs for sport fishing, although some fisheries managers believe this practice often threatens other species of fish such as bass, trout, and salmon, causing government agencies to attempt to exterminate the pike by poisoning lakes, such as Stormy Lake, Alaska. E. lucius is a severe invasive predator in Box Canyon Reservoir on the Pend Oreille River in northeastern Washington.
Behaviour
Aggression
The northern pike is a relatively aggressive species, especially with regard to feeding. For example, when food sources are scarce, cannibalism develops, starting at around five weeks of age in a small percentage of populations. This cannibalism occurs when the ratio of predator to prey is two to one. This is to be expected because, when food is scarce, northern pike fight for survival, for example by turning on smaller pike to feed; the same is seen in other species such as tiger salamanders. Usually, pike tend to feed on smaller fish, such as the banded killifish. However, once pike grow beyond a certain length, they feed on larger fish.
Because of cannibalism when food is short, pike suffer a fairly high young mortality rate. Cannibalism is more prevalent in cool summers, as the upcoming pike have slow growth rates in that season and might not be able to reach a size to deter the larger pike. Cannibalism is likely to arise in low growth and low food conditions. Pike do not discriminate siblings well, so cannibalism between siblings is likely.
Aggression also arises from a need for space. Young pike tend to have their food stolen by larger pike. Pike are aggressive if not given enough space because they are territorial. They use a form of foraging known as ambush foraging. Unlike species such as perch, pike rely on short bursts of energy instead of actively chasing down prey, so a fair amount of inactive time occurs until they find prey. Hunting efficiency decreases with competition; the larger the pike, the larger the area controlled by that particular pike. An inverse relation exists between vegetation density and pike size, which is due to the possibility of cannibalism by the largest pike: smaller pike need more vegetation to avoid being eaten, while large pike do not have this worry and can afford the advantage of a large line of sight. They prefer habitat with tree structure.
There has been at least one instance of a pike attacking a dog.
Pike are occasionally preyed upon by otters.
Physical behavioural traits
Pike are capable of "fast start" movements, which are sudden high-energy bursts of unsteady swimming. Many other fish exhibit this movement as well. Most fish use this mechanism to avoid life-threatening situations. For the pike, however, it is a tool used to capture prey from their sedentary positions. They flash out in such bursts and capture their prey. These fast starts terminate when the pike has reached maximum velocity. During such motions, pike make "S" conformations while swimming at high rates. To decelerate, they, simply make a "C" conformation, exponentially slowing down their speed so that they can "stop". An interesting behavioural trait that pike have is that they have short digestion times and long feeding periods. They can undergo many of these fast bursts to collect as much prey as they can. Pike are least active during the night.
Reproduction
Pike have a strong homing behaviour; they inhabit certain areas by nature. During the summer, they tend to group closer to vegetation than during the winter. The exact reason is not clear, but is likely a result of foraging or possibly reproductive needs to safeguard young. The pike's diel rhythm changes significantly over the year. On sunny days, pike stay closer to the shallow shore; on windy days, they are further from shore. When close to the shore, pike have a preference for shallow, vegetated areas. Pike are more stationary in reservoirs than in lakes. A possible explanation is that lakes have more prey to feed upon, or that in reservoirs prey will ultimately cross paths with the pike; as such, this could be a form of energy conservation. Pike breed in the spring.
Pike are physically capable of breeding at an age of about two years, spawning in spring when the water temperature first reaches about . They have a tendency to lay a large number of eggs, likely in order to produce as many surviving offspring as possible, as many most likely die early in life. In females, the gonads enlarge when it is time to shed their eggs. However, after they are shed, these eggs will not hatch if the water is below . Male pike arrive at the breeding grounds before females do, preceding them by a few weeks, and the males stay after the spawning is finished. Parental stock is vital for pike success. Egg survival has been shown to be positively correlated with the number of eggs laid. For breeding, the more stable the water, the greater the fitness of the pike. Mortality results from toxic concentrations of iron or rapid temperature changes, and adult abundance and the strength of the resulting year classes are not related. Mortality is concentrated at two points of development: one during the embryo stage, between fertilization and closure of the blastopore, and the second between hatching and the termination of the alevin stage.
The colour of the sticky eggs is yellow to orange; the diameter is . The embryos are in length and able to swim after hatching, but stay on the bottom for some time. The embryonic stage is five to 16 days, dependent on water temperature (at and , respectively). Under natural circumstances, the survival from free-swimming larva to 75-mm pike is around 5%.
Food
The young, free-swimming pike feed on small invertebrates starting with Daphnia, and quickly move on to bigger prey, such as Asellus and Gammarus. When the body length is , they start feeding on small fish.
A pike has a very typical hunting behaviour; it is able to remain stationary in the water by moving the last fin rays of the dorsal fins and the pectoral fins. Before striking, it bends its body and darts out to the prey using the large surface of its caudal fin, dorsal fin, and anal fin to propel itself. The fish has a distinctive habit of catching its prey sideways in the mouth, immobilising it with its sharp, backward-pointing teeth, and then turning the prey headfirst to swallow it. It eats mainly fish and frogs, but also small mammals and birds fall prey to pike. Young pike have been found dead from choking on a pike of a similar size, an observation referred to by the English poet Ted Hughes in his poem "Pike". Northern pike also feed on insects, crayfish, and leeches. They are not very particular and eat spiny fish like perch, and will even take fish as small as sticklebacks if they are the only available prey.
Pike are known to occasionally hunt and consume larger water birds. Examples include an incident in 2016 when an individual was observed trying to drown and eat a great crested grebe; an incident in which a pike choked to death after killing and attempting to eat a tufted duck; and an incident in 2015 in which an attack by a large pike, between three and four feet long, was implicated as a possible cause of the injury and death of an adult mute swan on Lower Lough Erne, Northern Ireland. It is generally believed, however, that such attacks are only rare occurrences.
The northern pike is a largely solitary predator. It migrates during the spawning season, and it follows prey fish like common roaches to their deeper winter quarters. Sometimes, divers observe groups of similar-sized pike that appear to cooperate by starting to hunt at the same time, which has given rise to "wolfpack" theories. Large pike can be caught on dead immobile fish, so these pike are thought to move about in a rather large territory to find food. Large pike are also known to cruise large water bodies at a few metres deep, probably pursuing schools of prey fish. Smaller pike are more strictly ambush predators, probably because of their vulnerability to cannibalism. Pike are often found near the exit of culverts, which can be attributed to the presence of schools of prey fish and the opportunity for ambush. Being potamodromous, all esocids tend to display limited migration, although some local movement may be of key significance for population dynamics. In the Baltic, they are known to follow herring schools, so they have some seasonal migration.
Importance to humans
Although it is generally known as a "sporting" quarry, some anglers release pike they have caught because the flesh is considered bony, especially due to the substantial (epipleural) "Y-bones". The white and mild-tasting flesh of pikes nonetheless has a long and distinguished history in cuisine and is popular fare in Europe and parts of North America. Among fishing communities where pike is popular fare, the ability of a filleter to effectively remove the bones from the fillets while minimizing the amount of flesh lost in the process (known as "de-boning") is a highly valued skill. There are methods for filleting pike and leaving the "y-bones" in the fish's body; this does leave some flesh on the fish but avoids the sometimes difficult process of "de-boning". Larger fish are more easily filleted (and much easier to de-bone), while smaller ones are often processed as forcemeat to eliminate their many small bones, and then used in preparations such as quenelles and fish mousses. Historical references to cooking pike go as far back as the Romans. Fishing for pike is said to be very exciting with their aggressive hits and aerial acrobatics. Pike are among the largest North American freshwater game fish.
Because of their prolific and predatory nature, laws have been enacted in some places to help stop the spread of northern pike outside of their native range. For instance, in California, anglers are required by law to remove the head from a pike once it has been caught. In Alaska, pike are native north and west of the Alaska Range, but have been illegally introduced to south-central Alaska by game fishermen. In south-central Alaska, no limit is imposed in most areas. Pike are seen as a threat to native wild stocks of salmon by some fishery managers.
Notably in Britain and Ireland, pike are greatly admired as a sporting fish and they are returned alive to the water to safeguard future sport and maintain the balance of a fishery. The Pike Anglers Club has campaigned since 1977 to preserve pike and to ensure that pike removal stops, arguing that the removal of pike from waters can lead to an explosion of smaller fish, which is damaging to both the sport fishery and the environment.
Sport fishing
Pike angling is becoming an increasingly popular pastime in Europe. Effective methods for catching include dead baits, lure fishing, and jerk baiting. They are prized as game fish for their large size and aggressive nature.
Lake fishing for pike from the shore is especially effective during spring, when the big pike move into the shallows to spawn in weedy areas, and later many remain there to feed on other spawning coarse fish species to regain their condition after spawning. Smaller jack pike often remain in the shallows for their own protection, and for the small fish food available there. For the hot summer and during inactive phases, the larger female pike tend to retire to deeper water and/or places with better cover. This gives the boat angler good fishing during the summer and winter seasons. Trolling (towing a lure or bait behind a moving boat) is a popular technique.
The use of float tubes is another method of fishing for pike on small to medium-sized still waters. Fly fishing for pike is another established way of catching these fish, and the float tube is now recognized as an especially suitable watercraft for pike fly-fishing. Pike have also been caught this way using patterns that imitate small fry or invertebrates.
In recent decades, more pike are released back to the water after catching (catch and release), but they can easily be damaged when handled. Handling those fish with dry hands can easily damage their mucus-covered skin and possibly lead to their deaths from infections.
Since they have very sharp and numerous teeth, care is required in unhooking a pike. Barbless trebles are recommended when angling for this species, as they simplify unhooking. This is undertaken using long forceps, with 30-cm artery clamps being the ideal tool. When the pike is held from below on the lower jaw, it will open its mouth. It should be kept out of the water for the minimum amount of time possible, and should be given extra time to recover if being weighed and photographed before release. It is also recommended that anglers use an unhooking mat to protect the fish from harm. If practicing live release, it is recommended to call the fish "caught" when it is alongside the boat, and to remove the hook by grabbing it with needle-nosed pliers while the fish is still submerged, giving it a flip in the direction that turns the hook out of the mouth. This avoids damage to the fish and the stress of being out of water.
In Finland, catching a kymppihauki, a pike weighing at least , is considered to qualify the angler as a master fisherman.
Many countries have banned the use of live fish for bait, but pike can be caught with dead fish, which they locate by smell. For this technique, fat marine fish like herring, sardines and mackerel are often used. Compared to other fish like the eel, the pike does not have a good sense of smell, but it is still more than adequate to find the baitfish. Baitfish can be used as groundbait, but also below a float carried by the wind. This method is often used in wintertime and best done in lakes near schools of preyfish or at the deeper parts of shallow water bodies, where pike and preyfish tend to gather in great numbers.
Pike make use of the lateral line system to follow the vortices produced by the perceived prey, and the whirling movement of the spinner is probably a good way to imitate or exaggerate these. Jerkbaits are also effective and can produce spectacular bites with pike attacking these erratic-moving lures at full speed. For trolling, big plugs or softbaits can be used. Spoons with mirror finishes are very effective when the sun is at a sharp angle to the water in the mornings or evenings because they generate the vibrations previously discussed and cause a glint of reflective sunlight that mimics the flash of white-bellied prey.
When fishing in shallow water for smaller pike, lighter and smaller lures are frequently used. The humble 'woolly bugger' fly is a favourite lure among keen fly fishermen of the southern hemisphere. Fly fishing for pike is an established aspect of the sport and there are now numerous dedicated products to use specifically to target these fish.
In mythology
In the Finnish epic poetry Kalevala, wise demigod Väinämöinen creates a magical kantele (string instrument) from the jawbone of a giant pike.
| Biology and health sciences | Fishes | null |
86778 | https://en.wikipedia.org/wiki/Pike%20%28weapon%29 | Pike (weapon) | A pike is a long thrusting spear formerly used in European warfare from the Late Middle Ages and most of the early modern period, and wielded by foot soldiers deployed in pike square formation, until it was largely replaced by bayonet-equipped muskets. The pike was particularly well known as the primary weapon of Spanish tercios, Swiss mercenary, German Landsknecht units and French sans-culottes. A similar weapon, the sarissa, had been used in antiquity by Alexander the Great's Macedonian phalanx infantry.
Design
The pike was a long weapon, varying considerably in size, from long. Generally, a spear becomes a pike when it is too long to be wielded with one hand in combat. It was approximately in weight, with the 16th-century military writer Sir John Smythe recommending lighter rather than heavier pikes. It had a wooden shaft with an iron or steel spearhead affixed. The shaft near the head was often reinforced with metal strips called "cheeks" or langets. When the troops of opposing armies both carried the pike, it often grew in a sort of arms race, getting longer in both shaft and head length to give one side's pikemen an edge in combat. The extreme length of such weapons required a strong wood such as well-seasoned ash for the pole, which was tapered towards the point to prevent the pike from sagging on the ends, although drooping or slight flection of the shaft was always a problem in pike handling. It is a common mistake to refer to a bladed polearm as a pike; such weapons are more generally known as halberds, glaives, ranseurs, bills, or voulges.
The great length of the pikes allowed a great concentration of spearheads to be presented to the enemy, with their wielders at a greater distance, but also made pikes unwieldy in close combat. This meant that pikemen had to be equipped with an additional, shorter weapon such as a dagger or sword in order to defend themselves should the fighting degenerate into a melee. In general, however, pikemen attempted to avoid such disorganized combat, in which they were at a disadvantage. To compound their difficulties in a melee, the pikeman often did not have a shield, or had only a small shield which would be of limited use in close-quarters fighting.
Tactics
The pike, being unwieldy, was typically used in a deliberate, defensive manner, often alongside other missile and melee weapons. However, better-trained troops were capable of using the pike in an aggressive attack with each rank of pikemen being trained to hold their pikes so that they presented enemy infantry with four or five layers of spearheads bristling from the front of the formation.
As long as it kept good order, such a formation could roll right over enemy infantry, but it did have weaknesses. The men were all moving forward facing in a single direction and could not turn quickly or efficiently to protect the vulnerable flanks or rear of the formation. Nor could they maintain cohesion over uneven ground, as the Scots discovered to their cost at the Battle of Flodden. The huge block of men carrying such unwieldy spears could be difficult to maneuver in any way other than straightforward movement.
As a result, such mobile pike formations sought to have supporting troops protect their flanks or would maneuver to smash the enemy before they could be outflanked themselves. There was also the risk that the formation would become disordered, leading to a confused melee in which pikemen had the vulnerabilities mentioned above.
According to Sir John Smythe, there were two ways for two opposing pike formations to confront one another: cautious or aggressive. The cautious approach involved fencing at the length of the pike, while the aggressive approach involved quickly closing distance, with each of the first five ranks giving a single powerful thrust. In the aggressive approach, the first rank would then immediately resort to swords and daggers if the thrusts from the first five ranks failed to break the opposing pike formation. Smythe considered the cautious approach laughable.
Although primarily a military weapon, the pike could be surprisingly effective in single combat and a number of 16th-century sources explain how it was to be used in a dueling situation; fencers of the time often practiced with and competed against each other with long staves in place of pikes. George Silver considered the pike one of the more advantageous weapons for single combat in the open, giving it odds over all weapons shorter than or the sword and dagger/shield combination.
History
Ancient Europe
Although very long spears had been used since the dawn of organized warfare (notably illustrated in art showing Sumerian and Minoan warriors and hunters), the earliest recorded use of a pike-like weapon in the tactical method described above involved the Macedonian sarissa, used by the troops of Alexander the Great's father, Philip II of Macedon, and successive dynasties, which dominated warfare for several centuries in many countries.
After the fall of the last successor of Macedon, the pike largely fell out of use for the next 1,000 or so years. The one exception to this appears to have been in Germany, where Tacitus recorded Germanic tribesmen in the 2nd century AD as using "over-long spears". He consistently refers to the spears used by the Germans as being "massive" and "very long" suggesting that he is describing in essence a pike. Julius Caesar, in his De Bello Gallico, describes the Helvetii as fighting in a tight, phalanx-like formation with spears jutting out over their shields. Caesar was probably describing an early form of the shieldwall so popular in later times.
Medieval Europe revival
In the Middle Ages, the principal users of the pike were urban militia troops such as the Flemings or the peasant array of the lowland Scots. For example, the Scots used a spear formation known as the schiltron in several battles during the Wars of Scottish Independence including the Battle of Bannockburn in 1314, and the Flemings used their geldon long spear to absorb the attack of French knights at the Battle of the Golden Spurs in 1302, before other troops in the Flemish formation counterattacked the stalled knights with goedendags. Both battles were seen by contemporaries as stunning victories of commoners over superbly equipped, mounted, military professionals, where victory was owed to the use of the pike and the brave resistance of the commoners who wielded them.
These formations were essentially immune to the attacks of mounted men-at-arms as long as the knights obligingly threw themselves on the spear wall and the foot soldiers remained steady under the morale challenge of facing a cavalry charge, but the closely packed nature of pike formations rendered them vulnerable to enemy archers and crossbowmen who could shoot them down with impunity, especially when the pikemen did not have adequate armor. Many defeats, such as at Roosebeke and Halidon Hill, were suffered by the militia pike armies when faced by cunning foes who employed their archers and crossbowmen to thin the ranks of the pike blocks before charging in with their (often dismounted) men-at-arms.
Medieval pike formations tended to have better success when they operated in an aggressive fashion. The Scots at the Battle of Stirling Bridge (1297), for example, utilized the momentum of their charge to overrun an English army while the Englishmen were crossing a narrow bridge. At the Battle of Laupen (1339), Bernese pikemen overwhelmed the infantry forces of the opposing Habsburg/Burgundian army with a massive charge before wheeling over to strike and rout the Austro-Burgundian horsemen as well. At the same time, however, such aggressive action required considerable tactical cohesiveness or suitable terrain to protect the vulnerable flanks of the pike formations, especially from the attack of mounted men-at-arms. When these features were not available, militia often suffered costly failures, such as at the battles of Mons-en-Pevele (1304), Cassel (1328), Roosebeke (1382) and Othee (1408). The constant success of the Swiss mercenaries in the later period was attributed to their extreme discipline and tactical unity, owing to their semi-professional nature, which allowed a pike block to somewhat alleviate the threat presented by flanking attacks.
Perhaps copying the nearby Swiss model, the pike also saw a certain diffusion in the duchy of Milan in the last two years of the 14th century. In 1391, a decree by Gian Galeazzo Visconti ordered pikes in Milan to be at least 10 local feet long, equivalent to 4.35 m (14.3 ft), and their tips to be reinforced with iron strips to prevent enemies, given their length, from cutting or breaking them. A second decree of 1397 provided that half the infantry of the duchy be armed with pikes.
It was not uncommon for aggressive pike formations to be composed of dismounted men-at-arms, as at the Battle of Sempach (1386), where the dismounted Austrian vanguard, using their lances as pikes, had some initial success against their predominantly halberd-equipped Swiss adversaries. Dismounted Italian men-at-arms also used the same method to defeat the Swiss at the Battle of Arbedo (1422). Equally, well-armored Scottish nobles (accompanied even by King James IV) were recorded as forming the leading ranks of Scottish pike blocks at the Battle of Flodden (1513), incidentally rendering the whole formation resistant to English archery.
Renaissance Europe heyday
The Swiss solved the pike's earlier problems and brought a renaissance to pike warfare in the 15th century, establishing strong training regimens to ensure they were masters of handling the Spiess (the German term for "skewer") on maneuvers and in combat; they also introduced marching to drums for this purpose. This meant that the pike blocks could rise to the attack, making them less passive and more aggressive formations, but sufficiently well trained that they could go on the defensive when attacked by cavalry. German soldiers known as Landsknechts later adopted Swiss methods of pike handling.
The Scots predominantly used shorter spears in their schiltron formation; their attempt to adopt the longer Continental pike was dropped for general use after its ineffective use led to humiliating defeat at the Battle of Flodden.
Such Swiss and Landsknecht phalanxes also contained men armed with two-handed swords, or Zweihänder, and halberdiers for close combat against both infantry and attacking cavalry.
The Swiss were confronted by the German Landsknechts, who used tactics similar to those of the Swiss but relied more heavily on pikes held in the more difficult German thrust (gripping the pike at its end with two hands, with its weight in the lower third), which was utilized in a more flexible attacking column.
The high military reputation of the Swiss and the Landsknechts again led to the employment of mercenary units across Europe in order to train other armies in their tactics. These two, and others who had adopted their tactics, faced off in several wars, leading to a series of further developments.
These formations had great successes on the battlefield, starting with the astonishing victories of the Swiss cantons against Charles the Bold of Burgundy in the Burgundian Wars, in which the Swiss participated in 1476 and 1477. In the Battles of Grandson, Morat, and Nancy, the Swiss not only successfully resisted the attacks of enemy knights, as the relatively passive Scottish and Flemish infantry squares had done in the earlier Middle Ages, but also marched to the attack with great speed and in good formation, their attack columns steamrolling the Burgundian forces, sometimes with great massacre.
The deep pike attack column remained the primary form of effective infantry combat for the next forty years, and the Swabian War saw the first conflict in which both sides had large formations of well-trained pikemen. After that war, its combatants—the Swiss (thereafter generally serving as mercenaries) and their Landsknecht imitators—would often face each other again in the Italian Wars, which would become in many ways the military proving ground of the Renaissance.
The so-called Schefflin was a polearm, closely related to the pike, which from the late 1400s and throughout the 16th century saw widespread use in the German-speaking world. It served as a multipurpose weapon for both infantry (in the manner of pikes) and light cavalry (in the manner of demi-lances). Characteristically, it featured a large, hollow-made and leaf-shaped head of about or more, which was attached to a long and slender shaft. Apart from being used by soldiers in battle, a tassel fixed to the socket of the head together with optional further embellishment made the Schefflin an appropriate main weapon for princely bodyguards and courtly officials. There seems to be a close relation between the contemporary German term Schefflin and the West European terms javeline (French) and javelin (English), both referring to some type of cavalry spear. Although rarely noticed, many of these weapons have survived to this day. Some pieces, of which many are said to have been used by the personal entourage of Henry VIII, are kept at the Royal Armouries in Leeds.
Ancient China
Pikes and long halberds were in use in ancient China from the Warring States period, beginning in the 5th century BC. Infantrymen used a variety of long polearm weapons, but the most popular were the dagger-axe, the pike-like long spear, and the ji. The dagger-axe and ji came in various lengths, from ; the weapon consisted of a thrusting spear with a slashing blade appended to it. Dagger-axes and ji were extremely popular weapons in various kingdoms, especially for the Qin state and Qin dynasty, and possibly the succeeding Han dynasty, which produced halberd and pike-like weapons, as well as long pikes during the war against the Xiongnu.
Classical Japan
During the continuous European development of the pike, Japan experienced a parallel evolution of pole weapons.
In Classical Japan, the Japanese style of warfare was generally fast-moving and aggressive, with far shallower formations than their European equivalents. The naginata and yari were more commonly used than swords for Japanese ashigaru foot soldiers and dismounted samurai due to their greater reach. Naginata, first used around 750 AD, had curved sword-like blades on wooden shafts with often spiked metal counterweights. They were typically used with a slashing action and forced the introduction of shin guards as cavalry battles became more important. Yari were spears of varying lengths; their straight blades usually had sharpened edges or protrusions from the central blade, and were fitted to a hollowed shaft with an extremely long tang.
Medieval Japan
During the later half of the 16th century in Medieval Japan, pikes used were generally long, but sometimes up to in length. By this point, pikemen were becoming the main forces in armies. They formed lines, combined with arquebusiers and spearmen. Formations were generally only two or three rows deep.
Pike and shot
In the aftermath of the Italian Wars, from the late 15th century to the late 16th century, most European armies adopted the use of the pike, often in conjunction with primitive firearms such as the arquebus and caliver, to form large pike and shot formations.
The quintessential example of this development was the Spanish tercio, which consisted of a large square of pikemen with small, mobile squadrons of arquebusiers moving along its perimeter, as well as traditional men-at-arms. These three elements formed a mutually supportive combination of tactical roles: the arquebusiers harried the enemy line, the pikemen protected the arquebusiers from enemy cavalry charges, and the men-at-arms, typically armed with swords and javelins, fought off enemy pikemen when two opposing squares made contact. The Tercio deployed smaller numbers of pikemen than the huge Swiss and Landsknecht columns, and their formation ultimately proved to be much more flexible on the battlefield.
Mixed formations of men quickly became the norm for European infantrymen, with many, but not all, seeking to imitate the Tercio; in England, a combination of billmen, longbowmen, and men-at-arms remained the norm, though this changed when the supply of yew on the island dwindled.
The percentage of men armed with firearms in Tercio-like formations steadily increased as firearms advanced in technology. This advance is often believed to have caused the demise of cavalry, when in fact it revived it. From the late 16th century and into the 17th century, smaller pike formations were used, invariably defending attached musketeers, often as a central block with two sub-units of shooters, called "sleeves of shot", on either side of the pikes. Although the cheaper and more versatile infantry increasingly adopted firearms, cavalry's proportion in the army remained high.
During the English Civil War (1642–1651) the New Model Army (1646–1660) initially had two musketeers for each pikeman. Two musketeers for each pikeman was not the mix agreed upon throughout Europe, and when in 1658 Oliver Cromwell, by then the Lord Protector, sent a contingent of the New Model Army to Flanders to support his French allies under the terms of their treaty of friendship (the Treaty of Paris, 1657), he supplied regiments with equal numbers of musketeers and pikemen. On the battlefield, the musketeers lacked protection against enemy cavalry, and the two types of foot soldier supported each other.
The post-Restoration English Army used pikemen, and by 1697 (the last year of the Nine Years' War) English infantry battalions fighting in the Low Countries still had two musketeers to every pikeman and fought in the now traditional style of pikemen five ranks deep in the centre, with six ranks of musketeers on each side.
According to John Kersey in 1706, the pike was typically in length.
End of the pike era
The mid-17th century to the early 18th century saw the decline of the pike in most European armies. This started with the proliferation of the flintlock musket, which gave the musketeer a faster rate of fire than he had possessed before, incentivizing a higher ratio of shot to pikes on the battlefield. It continued with the development of the plug bayonet, followed by the socket bayonet in the 1680s and 1690s. The plug bayonet did not replace the pike, as it required a soldier to surrender his ability to shoot or reload in order to fix it, but the socket bayonet solved that issue. The bayonet added a long blade to the end of the musket, allowing the musket to act as a spear-like weapon when held out with both hands. Although they did not have the full reach of pikes, bayonets were effective against cavalry charges, which had been the main weakness of musketeer formations, and allowed armies to massively expand their potential firepower by giving every infantryman a firearm; pikemen were no longer needed to protect musketeers from cavalry. Furthermore, improvements in artillery caused most European armies to abandon large formations in favor of multiple staggered lines, both to minimize casualties and to present a larger frontage for volley fire. Thick hedges of bayonets proved to be an effective anti-cavalry solution, and improved musket firepower was now so deadly that combat was often decided by shooting alone.
A common end date for the use of the pike in most infantry formations is 1700, such as the Prussian and Austrian armies. Others, including the Swedish and Russian armies, continued to use the pike as an effective weapon for several more decades, until the 1720s and 1730s (the Swedes of King Charles XII in particular using it to great effect until 1721). At the start of the Great Northern War in 1700, Russian line infantry companies had 5 NCOs, 84 musketeers, and 18 pikemen, the musketeers initially being equipped with sword-like plug bayonets; they did not fully switch to socket bayonets until 1709. A Swedish company consisted of 82 musketeers, 48 pikemen, and 16 grenadiers. The Army of the Holy Roman Empire maintained a ratio of 2 muskets to 1 pike in the middle to late 17th century, officially abandoning the pike in 1699. The French, meanwhile, had a ratio of 3-4 muskets to 1 pike by 1689. Both sides of the Wars of the Three Kingdoms in the 1640s and 1650s preferred a ratio of 2 muskets to 1 pike, but this was not always possible.
During the American Revolution (1775–1783), pikes called "trench spears" made by local blacksmiths saw limited use until enough bayonets could be procured for general use by both Continental Army and attached militia units.
Throughout the Napoleonic era, the spontoon, a type of shortened pike that typically had a pair of blades or lugs mounted to the head, was retained as a symbol by some NCOs; in practice it was probably more useful for gesturing and signaling than as a weapon for combat.
As late as Poland's Kościuszko Uprising in 1794, the pike reappeared as a child of necessity and became, for a short period, a surprisingly effective weapon on the battlefield. In this case, General Thaddeus Kosciuszko, facing a shortage of firearms and bayonets to arm landless serf partisans recruited straight from the wheat fields, had their sickles and scythes heated and straightened out into something resembling crude "war scythes". These weaponized agricultural accouterments were then used in battle both as cutting weapons and as makeshift pikes. The peasant "pikemen" armed with these crude instruments played a pivotal role in securing a near-impossible victory against a far larger and better-equipped Russian army at the Battle of Racławice, which took place on 4 April 1794.
Civilian pikemen played a similar role, though outnumbered and outgunned, in the 1798 rising in Ireland four years later. Here, especially in the Wexford Rebellion and in Dublin, the pike was used mainly as a weapon by men and women fighting on foot against cavalry armed with guns.
Improvised pikes, made from bayonets on poles, were used by escaped convicts during the Castle Hill rebellion of 1804.
As late as the Napoleonic Wars, at the beginning of the 19th century, even the Russian militia (mostly landless peasants, like the Polish partisans before them) could be found carrying shortened pikes into battle. As the 19th century progressed, the obsolete pike would still find a use in such countries as Ireland, Russia, China, and Australia, generally in the hands of desperate peasant rebels who did not have access to firearms. John Brown purchased a large number of pikes and brought them to his raid on Harpers Ferry.
One attempt to resurrect the pike as a primary infantry weapon occurred during the American Civil War (1861–1865) when the Confederate States of America planned to recruit twenty regiments of pikemen in 1862. In April 1862 it was authorised that every Confederate infantry regiment would include two companies of pikemen, a plan supported by Robert E. Lee. Many pikes were produced but were never used in battle and the plan to include pikemen in the army was abandoned.
Shorter versions of pikes called boarding pikes were also used on warships—typically to repel boarding parties, up to the late 19th century.
The great Hawaiian warrior king Kamehameha I had an elite force of men armed with very long spears who seem to have fought in a manner identical to European pikemen, despite the usual conception of his people's general disposition for individualistic dueling as their method of close combat. It is not known whether Kamehameha himself introduced this tactic or if it was taken from the use of traditional Hawaiian weapons.
The pike was issued as a British Home Guard weapon in 1942 after the War Office acted on a letter from Winston Churchill saying "every man must have a weapon of some kind, be it only a mace or pike". However, these hand-held weapons never left the stores after the pikes had "generated an almost universal feeling of anger and disgust from the ranks of the Home Guard, demoralised the men and led to questions being asked in both Houses of Parliament". The pikes, made from obsolete Lee–Enfield rifle bayonet blades welded to a steel tube, took the name of "Croft's Pikes" after Henry Page Croft, the Under-Secretary of State for War who attempted to defend the fiasco by stating that they were a "silent and effective weapon".
In Spain, beginning in 1715 and ending in 1977, there were night patrol guards in cities, called serenos, who carried a short pike called a chuzo.
Pikes live on today only in ceremonial roles, being used to carry the colours of an infantry regiment and with the Company of Pikemen and Musketeers of the Honourable Artillery Company, or by some of the infantry units on duty during their rotation as guard for the President of the Italian Republic at the Quirinal Palace in Rome, Italy.
| Technology | Polearms | null |
86801 | https://en.wikipedia.org/wiki/Iris%20%28anatomy%29 | Iris (anatomy) | The iris (plural: irides or irises) is a thin, annular structure in the eye in most mammals and birds that is responsible for controlling the diameter and size of the pupil, and thus the amount of light reaching the retina. In optical terms, the pupil is the eye's aperture, while the iris is the diaphragm. Eye color is defined by the iris.
Etymology
The word "iris" is derived from the Greek word for "rainbow", also its goddess plus messenger of the gods in the Iliad, because of the many colours of this eye part.
Structure
The iris consists of two layers: the front pigmented fibrovascular layer known as a stroma and, behind the stroma, pigmented epithelial cells.
The stroma is connected to a sphincter muscle (sphincter pupillae), which contracts the pupil in a circular motion, and a set of dilator muscles (dilator pupillae), which pull the iris radially to enlarge the pupil, pulling it in folds.
The sphincter pupillae is the opposing muscle of the dilator pupillae. The pupil's diameter, and thus the inner border of the iris, changes size when constricting or dilating. The outer border of the iris does not change size. The constricting muscle is located on the inner border.
The back surface is covered by a heavily pigmented epithelial layer that is two cells thick (the iris pigment epithelium), but the front surface has no epithelium. This anterior surface projects as the dilator muscles. The high pigment content blocks light from passing through the iris to the retina, restricting it to the pupil. The outer edge of the iris, known as the root, is attached to the sclera and the anterior ciliary body. The iris and ciliary body together are known as the anterior uvea. Just in front of the root of the iris is the region referred to as the trabecular meshwork, through which the aqueous humour constantly drains out of the eye, with the result that diseases of the iris often have important effects on intraocular pressure and indirectly on vision. The iris along with the anterior ciliary body provide a secondary pathway for aqueous humour to drain from the eye.
The iris is divided into two major regions:
The pupillary zone is the inner region whose edge forms the boundary of the pupil.
The ciliary zone is the rest of the iris that extends to its origin at the ciliary body.
The collarette is the thickest region of the iris, separating the pupillary portion from the ciliary portion. The collarette is a vestige of the coating of the embryonic pupil. It is typically defined as the region where the sphincter muscle and dilator muscle overlap. Radial ridges extend from the periphery to the pupillary zone to supply the iris with blood vessels. The root of the iris is its thinnest and most peripheral region.
The muscle cells of the iris are smooth muscle in mammals and amphibians, but are striated muscle in reptiles (including birds). Many fish have neither, and, as a result, their irises are unable to dilate and contract, so that the pupil always remains of a fixed size.
Front
The crypts of Fuchs are a series of openings located on either side of the collarette that allow the stroma and deeper iris tissues to be bathed in aqueous humor. Collagen trabeculae that surround the border of the crypts can be seen in blue irises.
Midway between the collarette and the origin of the iris lies a series of folds; these result from changes in the surface of the iris as it dilates.
Crypts on the base of the iris are additional openings that can be observed close to the outermost part of the ciliary portion of the iris.
Back
The radial contraction folds of Schwalbe are a series of very fine radial folds in the pupillary portion of the iris extending from the pupillary margin to the collarette. They are associated with the scalloped appearance of the pupillary ruff.
The structural folds of Schwalbe are radial folds extending from the border of the ciliary and pupillary zones that are much broader and more widely spaced, continuous with the "valleys" between the ciliary processes.
Some of the circular contraction folds are a fine series of ridges that run near the pupillary margin and vary in thickness of the iris pigment epithelium; others are in ciliary portion of iris.
Microanatomy
From anterior (front) to posterior (back), the layers of the iris are:
Anterior limiting layer
Stroma of iris
Iris sphincter muscle
Iris dilator muscle (myoepithelium)
Anterior pigment epithelium
Posterior pigment epithelium
Development
The stroma and the anterior border layer of the iris are derived from the neural crest, and behind the stroma of the iris, the sphincter pupillae and dilator pupillae muscles, as well as the iris epithelium, develop from optic cup neuroectoderm.
Function
The iris controls the size of the pupil by contracting the iris sphincter muscle and/or the iris dilator muscle. The size of the pupils depends on many factors (including light, emotional state, cognitive load, arousal, and stimulation), and can range from less than 2 mm in diameter to as large as 9 mm in diameter. However, maximal pupil diameter varies considerably among individual humans and decreases with age. The irises also contract the pupils when accommodation is initiated, to increase the depth of field.
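Since the pupil acts as the eye's aperture and its diameter can range from under 2 mm to about 9 mm, the light admitted scales roughly with pupil area. A minimal illustrative sketch, treating the pupil as a simple circular aperture and using only the diameters quoted above:

```python
import math

def pupil_area_mm2(diameter_mm: float) -> float:
    """Area of a circular pupil, in square millimetres."""
    return math.pi * (diameter_mm / 2.0) ** 2

constricted = pupil_area_mm2(2.0)  # ~3.1 mm^2
dilated = pupil_area_mm2(9.0)      # ~63.6 mm^2

# To first order, the light reaching the retina scales with aperture area,
# so full dilation admits roughly 20 times more light than full constriction.
print(f"area ratio: {dilated / constricted:.1f}x")
```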
Very few humans possess the ability to exert direct voluntary control over their iris muscles, which grants them the ability to dilate and constrict their pupils on command. However, there is no clear purpose or advantage to this.
Eye color
The iris is usually strongly pigmented, with the color typically ranging between brown, hazel, green, gray, and blue. Occasionally, the color of the iris is due to a lack of pigmentation, as in the pinkish-white of oculocutaneous albinism, or to obscuration of its pigment by blood vessels, as in the red of an abnormally vascularised iris. Despite the wide range of colors, the only pigment that contributes substantially to normal iris color is the dark pigment melanin. The quantity of melanin pigment in the iris is one factor in determining the phenotypic eye color of an organism. Structurally, this huge molecule is only slightly different from its equivalent found in skin and hair. Iris color is due to variable amounts of eumelanin (brown/black melanins) and pheomelanin (red/yellow melanins) produced by melanocytes. More of the former is found in brown-eyed people and of the latter in blue- and green-eyed people. The limbal ring appears as a dark ring encircling the iris on some individuals, but is a result of the optical properties of the region between the cornea and sclera, not of pigments in the iris.
Genetic and physical factors determining iris color
Iris color is a highly complex phenomenon consisting of the combined effects of texture, pigmentation, fibrous tissue, and blood vessels within the iris stroma, which together make up an individual's epigenetic constitution in this context. An organism's "eye color" is actually the color of one's iris, the cornea being transparent and the white sclera entirely outside the area of interest.
Melanin is yellowish to dark hazel in the stromal pigment cells, and black in the iris pigment epithelium, which lies in a thin but very opaque layer across the back of the iris. Most human irises also show a condensation of the brownish stromal melanin in the thin anterior border layer, which by its position has an overt influence on the overall color. The degree of dispersion of the melanin, which is in subcellular bundles called melanosomes, has some influence on the observed color, but melanosomes in the iris of humans and other vertebrates are not mobile, and the degree of pigment dispersion cannot be reversed. Abnormal clumping of melanosomes does occur in disease and may lead to irreversible changes in iris color (see heterochromia, below). Colors other than brown or black are due to selective reflection and absorption from the other stromal components. Sometimes, lipofuscin, a yellow "wear and tear" pigment, also enters into the visible eye color, especially in aged or diseased green eyes.
The optical mechanisms by which the nonpigmented stromal components influence eye color are complex, and many erroneous statements exist in the literature. Simple selective absorption and reflection by biological molecules (hemoglobin in the blood vessels, collagen in the vessel and stroma) is the most important element. Rayleigh scattering and Tyndall scattering, (which also happen in the sky) and diffraction also occur. Raman scattering, and constructive interference, as in the feathers of birds, do not contribute to the color of the eye, but interference phenomena are important in the brilliantly colored iris pigment cells (iridophores) in many animals. Interference effects can occur at both molecular and light-microscopic scales, and are often associated (in melanin-bearing cells) with quasicrystalline formations, which enhance the optical effects. Interference is recognised by characteristic dependence of color on the angle of view, as seen in eyespots of some butterfly wings, although the chemical components remain the same.
White babies are usually born blue-eyed since no pigment is in the stroma, and their eyes appear blue due to scattering and selective absorption from the posterior epithelium. If melanin is deposited substantially, brown or black color is seen; if not, they will remain blue or gray.
Not all of the factors contributing to eye color and its variation are fully understood. Autosomal recessive/dominant traits in iris color are also inherited in other species, but coloration can follow a different pattern.
Different colors in the two eyes
Heterochromia (also known as a heterochromia iridis or heterochromia iridum) is an ocular condition in which one iris is a different color from the other iris (complete heterochromia), or where the part of one iris is a different color from the remainder (partial heterochromia or sectoral heterochromia). Uncommon in humans, it is often an indicator of ocular disease, such as chronic iritis or diffuse iris melanoma, but may also occur as a normal variant. Sectors or patches of strikingly different colors in the same iris are less common. Anastasius the First was dubbed dikoros (having two irises) for his patent heterochromia since his right iris had a darker color than the left one.
In contrast, heterochromia and variegated iris patterns are common in veterinary practice. Siberian Husky dogs show heterochromia, possibly analogous to the genetically determined Waardenburg syndrome of humans. Some white cat fancies (e.g., white Turkish Angora or white Turkish Van cats) may show striking heterochromia, with the most common pattern being one uniformly blue, the other copper, orange, yellow, or green. Striking variation within the same iris is also common in some animals, and is the norm in some species. Several herding breeds, particularly those with a blue merle coat color (such as Australian Shepherds and Border Collies) may show well-defined blue areas within a brown iris, as well as separate blue and darker eyes. Some horses (usually within the white, spotted, palomino, or cremello groups of breeds) may show amber, brown, white and blue all within the same eye, without any sign of eye disease.
One eye with a white or bluish-white iris is also known as a "walleye".
Clinical significance
Angle closure glaucoma
Aniridia
Anisocoria
Horner's syndrome
Iridocyclitis
Iridoplegia
Iritis
Miosis/Mydriasis
Synechia
Third nerve palsy
Alternative medicine
Iridology
Iridology (also known as iridodiagnosis) is an alternative medicine technique whose proponents believe that patterns, colors, and other characteristics of the iris can be examined to determine information about a patient's systemic health. Practitioners match their observations to "iris charts", which divide the iris into zones corresponding to specific parts of the human body. Iridologists see the eyes as "windows" into the body's state of health.
Iridology is not supported by quality research studies, and is considered pseudoscience.
| Biology and health sciences | Visual system | Biology |
86996 | https://en.wikipedia.org/wiki/Greenhouse | Greenhouse | A greenhouse is a structure that is designed to regulate the temperature and humidity of the environment inside. There are different types of greenhouses, but they all have large areas covered with transparent materials that let sunlight in and trap the resulting warmth. The most common materials used in modern greenhouses for walls and roofs are rigid plastic made of polycarbonate, plastic film made of polyethylene, or glass panes. When the inside of a greenhouse is exposed to sunlight, the temperature increases, providing a sheltered environment for plants to grow even in cold weather.
The terms greenhouse, glasshouse, and hothouse are often used interchangeably to refer to buildings used for cultivating plants. The specific term used depends on the material and heating system used in the building. Nowadays, greenhouses are more commonly constructed with a variety of materials, such as wood and polyethylene plastic. A glasshouse, on the other hand, is a traditional type of greenhouse made only of glass panes that allow light to enter. The term hothouse indicates that the greenhouse is artificially heated. However, both heated and unheated structures can generally be classified as greenhouses.
Greenhouses can range in size from small sheds to industrial-sized buildings and enormous glasshouses. The smallest example is a miniature greenhouse known as a cold frame, typically used at home, whereas large commercial greenhouses are high tech production facilities for vegetables, flowers or fruits. The glass greenhouses are filled with equipment including screening installations, heating, cooling, and lighting, and may be controlled by a computer to optimize conditions for plant growth. Different techniques are then used to manage growing conditions, including air temperature, relative humidity and vapour-pressure deficit, in order to provide the optimum environment for cultivation of a specific crop.
History
Roman Empire
Before the development of greenhouses, agricultural practices were constrained by weather conditions. Depending on their climatic zone, communities were limited to a select range of species and to the times of year in which they could grow plants. Yet around 30 CE, the Roman Empire made the first recorded attempt at creating an artificial growing environment. Due to emperor Tiberius's declining health, the royal physicians recommended that the emperor eat one cucumber a day. Cucumbers, however, are quite tender plants and do not grow easily year-round. Therefore, the Romans designed an artificial environment, like a greenhouse, to have cucumbers available for the emperor all year. Cucumbers were planted in wheeled carts which were put in the sun daily, then taken inside to keep them warm at night. The cucumbers were stored under frames or in cucumber houses glazed with either oiled cloth known as specularia or with sheets of selenite (a.k.a. lapis specularis), according to the description by Pliny the Elder.
15th-century Korea
The next biggest breakthrough in greenhouse design came from Korea in the 15th century during the Joseon dynasty. In the 1450s, Soon ui Jeon described the first artificially heated greenhouse in his manuscript called Sangayorok. Soon ui Jeon was a physician to the royal family, and Sangayorok was intended to provide the nobility with important agricultural and housekeeping knowledge. Within the section of agricultural techniques, Soon ui Jeon wrote how to build a greenhouse that was able to cultivate vegetables and other plants in the winter. The Korean design adds an ondol system to the structure. An ondol is a Korean heating system used in domestic spaces, which runs a flue pipe from a heat source underneath the flooring. In addition to the ondol, a cauldron filled with water was also heated to create steam and increase the temperature and humidity in the greenhouse. These Korean greenhouses were the first active greenhouses that controlled temperature, rather than only relying on energy from the sun. The design still included passive heating methods, such as semi-transparent oiled hanji windows to capture light and cob walls to retain heat, but the furnace provided extra control over the artificial environment. The Annals of the Joseon Dynasty confirm that greenhouse-like structures incorporating ondol were constructed to provide heat for mandarin orange trees during the winter of 1438.
17th century
The concept of greenhouses also appeared in the Netherlands and then England in the 17th century, along with the plants. Some of these early attempts required enormous amounts of work to close up at night or to winterize. There were serious problems with providing adequate and balanced heat in these early greenhouses. The first 'stove' (heated) greenhouse in the UK was completed at Chelsea Physic Garden by 1681. Today, the Netherlands has many of the largest greenhouses in the world, some of them so vast that they are able to produce millions of vegetables every year.
Experimentation with greenhouse design continued during the 17th century in Europe, as technology produced better glass and construction techniques improved. The greenhouse at the Palace of Versailles was an example of their size and elaborateness; it was more than long, wide, and high.
18th century
Andrew Faneuil, a prosperous Boston merchant, built the first American greenhouse in 1737.
When returning to Mount Vernon after the war, George Washington learned of the greenhouse built at the Carroll estate of Mount Clare (Maryland). It was designed by Margaret Tilghman Carroll, an industrious gardener who cultivated citrus trees in this orangery.
In 1784 Washington wrote requesting details about the design of her greenhouse, and she complied.
19th century
The French botanist Charles Lucien Bonaparte is often credited with building the first practical modern greenhouse in Leiden, Holland, during the 1800s to grow medicinal tropical plants.
Originally only on the estates of the rich, the growth of the science of botany caused greenhouses to spread to the universities. The French called their first greenhouses orangeries, since they were used to protect orange trees from freezing. As pineapples became popular, pineries, or pineapple pits, were built.
19th-century England
The largest glasshouses yet conceived were constructed in England during the Victorian era. As a direct result of colonial expansion, the purpose of glasshouses changed from agriculture to horticulture. The accelerated transfer of plants and horticultural knowledge between colonies contributed to the Victorian fascination with 'exotic' plants and environments. Glasshouses became spectacles to entertain the general public. The curated environments in glasshouses aimed to capture "the Western imagination of an idealised landscape" and support the fantasy of the cultural 'other'. As a consequence, the collections of plants were presented as true reflections of the world, yet were actually stereotypical arrangements of 'exotic' plants meant to symbolize exactly where British colonies were and how far their authority reached. To uphold British hegemony, glasshouses became arguments of colonial power, flaunting the "absolute control of colonized environments and flora...[using plants] as a symbol of British Imperial power".
A prominent design from the 19th century was the glasshouse with sufficient height for sizeable trees, called a palm house. These were normally in public gardens or parks and exemplified the 19th-century development of glass and iron architecture. This technology was widely used in railway stations, markets, exhibition halls, and other large buildings that needed a large, open internal area. One of the earliest examples of a palm house is in the Belfast Botanic Gardens. Designed by Charles Lanyon, the building was completed in 1840. It was constructed by iron-maker Richard Turner, who would later build the Palm House, Kew Gardens at the Royal Botanic Gardens, Kew, London, in 1848. This came shortly after the Chatsworth Great Conservatory (1837–40) and shortly before The Crystal Palace (1851), both designed by Joseph Paxton, and both now lost.
Other large greenhouses built in the 19th century included the New York Crystal Palace, Munich's Glaspalast and the Royal Greenhouses of Laeken (1874–1895) for King Leopold II of Belgium. In Japan, the first greenhouse was built in 1880 by Samuel Cocking, a British merchant who exported herbs.
20th century
In the 20th century, the geodesic dome was added to the many types of greenhouses. Notable examples are the Eden Project in Cornwall, The Rodale Institute in Pennsylvania, the Climatron at the Missouri Botanical Garden in St. Louis, Missouri, and Toyota Motor Manufacturing Kentucky. The pyramid is another popular shape for large, high greenhouses; there are several pyramidal greenhouses at the Muttart Conservatory in Alberta.
Greenhouse structures adapted in the 1960s when wider sheets of polyethylene (polythene) film became widely available. Hoop houses were made by several companies and were also frequently made by the growers themselves. Constructed of aluminum extrusions, special galvanized steel tubing, or even just lengths of steel or PVC water pipe, construction costs were greatly reduced. This resulted in many more greenhouses being constructed on smaller farms and garden centers. Polyethylene film durability increased greatly when more effective UV-inhibitors were developed and added in the 1970s; these extended the usable life of the film from one or two years up to three and eventually four or more years.
Gutter-connected greenhouses became more prevalent in the 1980s and 1990s. These greenhouses have two or more bays connected by a common wall, or row of support posts. Heating inputs were reduced as the ratio of floor area to exterior wall area was increased substantially. Gutter-connected greenhouses are now commonly used both in production and in situations where plants are grown and sold to the public as well. Gutter-connected greenhouses are commonly covered with structured polycarbonate materials, or a double layer of polyethylene film with air blown between to provide increased heating efficiencies.
Theory of operation
The warmer temperature in a greenhouse occurs because incident solar radiation passes through the transparent roof and walls and is absorbed by the floor, earth, and contents, which become warmer. These in turn warm up the surrounding air within the greenhouse. As the structure is not open to the atmosphere, the warmed air cannot escape via convection, so the temperature inside the greenhouse rises.
Quantitative studies suggest that the effect of infrared radiative cooling is not negligibly small, and may have economic implications in a heated greenhouse. Analysis of issues of near-infrared radiation in a greenhouse with screens of a high coefficient of reflection concluded that installation of such screens reduced heat demand by about 8%, and application of dyes to transparent surfaces was suggested. Composite less-reflective glass, or less effective but cheaper anti-reflective coated simple glass, also produced savings.
Ventilation
Ventilation is one of the most important components in a successful greenhouse. If there is no proper ventilation, greenhouses and their growing plants can become prone to problems. The main purposes of ventilation is to regulate the temperature and humidity to the optimal level, and to ensure movement of air and thus prevent the build-up of plant pathogens (such as Botrytis cinerea) that prefer still air conditions. Ventilation also ensures a supply of fresh air for photosynthesis and plant respiration, and may enable important pollinators to access the greenhouse crop.
Ventilation can be achieved via the use of vents – often controlled automatically via a computer – and recirculation fans.
Heating
Heating (or the electricity to provide it) is one of the most significant costs in the operation of greenhouses across the globe, especially in colder climates. The main problem with heating a greenhouse, as opposed to a building with solid opaque walls, is the amount of heat lost through the greenhouse covering. Since the coverings need to allow light to filter into the structure, they conversely cannot insulate very well. With traditional plastic greenhouse coverings having an R-value of around 2, a great amount of money is therefore spent to continually replace the heat lost. When supplemental heat is needed, most greenhouses use natural gas or electric furnaces.
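As a rough illustration of why a covering with an R-value of about 2 is expensive to heat, steady-state conduction loss can be estimated with Q = A × ΔT / R (US customary units, the system in which that R-value is normally quoted). The surface area and temperature difference in the sketch below are hypothetical figures chosen for illustration, not values from the text:

```python
def conduction_loss_btu_per_hr(area_ft2: float, delta_t_f: float, r_value: float) -> float:
    """Steady-state conductive heat loss through the covering, in BTU/hr.

    Q = A * dT / R, with the R-value in ft^2.F.hr/BTU (US customary units).
    """
    return area_ft2 * delta_t_f / r_value

# Hypothetical example: 2,000 ft^2 of plastic covering (R ~ 2),
# keeping the interior 30 F warmer than the outside air.
greenhouse = conduction_loss_btu_per_hr(area_ft2=2000, delta_t_f=30, r_value=2.0)
insulated_wall = conduction_loss_btu_per_hr(area_ft2=2000, delta_t_f=30, r_value=20.0)

print(f"greenhouse covering: {greenhouse:,.0f} BTU/hr")       # ~30,000 BTU/hr to replace continuously
print(f"insulated solid wall: {insulated_wall:,.0f} BTU/hr")  # ~3,000 BTU/hr for comparison
```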
Passive heating methods exist which seek heat using low energy input. Solar energy can be captured from periods of relative abundance (day time/summer), and released to boost the temperature during cooler periods (night time/winter). Waste heat from livestock can be used to heat greenhouses, e.g., placing a chicken coop inside a greenhouse recovers the heat generated by the chickens, which would otherwise be wasted. Some greenhouses also rely on geothermal heating.
Cooling
Cooling is typically done by opening windows in the greenhouse when it gets too warm for the plants inside it. This can be done manually or in an automated manner. Window actuators can open windows in response to a temperature difference, or windows can be opened by electronic controllers. Electronic controllers are often used to monitor the temperature and adjust the furnace operation to the conditions. This can be as simple as a basic thermostat, but can be more complicated in larger greenhouse operations.
For very hot situations, a shade house providing cooling by shade may be used.
Lighting
During the day, light enters the greenhouse via the windows and is used by the plants. Some greenhouses are also equipped with grow lights (often LED lights), which are switched on at night to increase the amount of light the plants receive, thereby increasing the yield of certain crops.
Carbon dioxide enrichment
The benefits of carbon dioxide enrichment to about 1100 parts per million in greenhouse cultivation to enhance plant growth have been known for nearly 100 years. After the development of equipment for the controlled serial enrichment of carbon dioxide, the technique was established on a broad scale in the Netherlands. Secondary metabolites, e.g., cardiac glycosides in Digitalis lanata, are produced in higher amounts by greenhouse cultivation at enhanced temperature and at enhanced carbon dioxide concentration. Carbon dioxide enrichment can also reduce greenhouse water usage by a significant fraction by mitigating the total air-flow needed to supply adequate carbon for plant growth and thereby reducing the quantity of water lost to evaporation. Commercial greenhouses are now frequently located near appropriate industrial facilities for mutual benefit. For example, Cornerways Nursery in the UK is strategically placed near a major sugar refinery, consuming both waste heat and CO2 from the refinery which would otherwise be vented to atmosphere. The refinery reduces its carbon emissions, whilst the nursery enjoys boosted tomato yields and does not need to provide its own greenhouse heating.
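To get a feel for the quantities involved, the mass of CO2 needed to raise a fixed volume of greenhouse air from ambient concentration to roughly 1,100 ppm can be estimated with the ideal-gas molar volume. In the sketch below the greenhouse volume and the ambient concentration of about 420 ppm are illustrative assumptions; in practice enrichment must be applied continuously because ventilation and plant uptake remove CO2:

```python
def co2_mass_kg(volume_m3: float, ambient_ppm: float, target_ppm: float,
                molar_volume_l: float = 24.45) -> float:
    """Mass of CO2 (kg) needed to raise the CO2 mole fraction of a fixed air
    volume from ambient_ppm to target_ppm.

    Uses the ideal-gas molar volume near 25 C (~24.45 L/mol) and a CO2 molar
    mass of 44 g/mol.
    """
    total_mol_air = volume_m3 * 1000.0 / molar_volume_l    # litres -> moles of air
    delta_fraction = (target_ppm - ambient_ppm) * 1e-6     # ppm -> mole fraction
    return total_mol_air * delta_fraction * 44.0 / 1000.0  # moles of CO2 -> kg

# Hypothetical 1,000 m^3 greenhouse enriched from ~420 ppm to ~1,100 ppm:
print(f"{co2_mass_kg(1000, 420, 1100):.2f} kg of CO2")      # roughly 1.2 kg per complete air fill
```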
Enrichment only becomes effective where, by Liebig's law, carbon dioxide has become the limiting factor. In a controlled greenhouse, irrigation may be trivial, and soils may be fertile by default. In less-controlled gardens and open fields, rising CO2 levels only increase primary production to the point of soil depletion (assuming no droughts, flooding, or both), as demonstrated prima facie by CO2 levels continuing to rise. In addition, laboratory experiments, free air carbon enrichment (FACE) test plots, and field measurements provide replicability.
Types
In domestic greenhouses, the glass used is typically 3mm (or ⅛″) 'horticultural glass' grade, which is good quality glass that should not contain air bubbles (which can produce scorching on leaves by acting like lenses).
Plastics mostly used are polyethylene film and multi-wall sheets of polycarbonate material, or PMMA acrylic glass.
Commercial glass greenhouses are often high-tech production facilities for vegetables or flowers. The glass greenhouses are filled with equipment such as screening installations, heating, cooling and lighting, and may be automatically controlled by a computer.
Dutch Light
In the UK and other Northern European countries a pane of horticultural glass referred to as "Dutch Light" was historically used as a standard unit of construction, having dimensions of 28¾″ x 56″ (approx. 730 mm x 1422 mm). This size gives a larger glazed area when compared with using smaller panes such as the 600 mm width typically used in modern domestic designs which then require more supporting framework for a given overall greenhouse size. A style of greenhouse having sloped sides (resulting in a wider base than at eaves height) and using these panes uncut is also often referred to as "Dutch Light design", and a cold frame using a full- or half-pane as being of "Dutch" or "half-Dutch" size.
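The practical consequence of the larger pane is simply that fewer vertical glazing bars are needed for a wall of a given length. A small sketch of the comparison, using the 730 mm Dutch Light width and the 600 mm width mentioned above and a hypothetical 10 m greenhouse side:

```python
import math

def glazing_bars(wall_length_mm: float, pane_width_mm: float) -> int:
    """Number of vertical supporting bars needed to glaze a wall of the given
    length: one bar at each pane joint plus one at each end."""
    panes = math.ceil(wall_length_mm / pane_width_mm)
    return panes + 1

wall = 10_000  # hypothetical 10 m side
print(glazing_bars(wall, 730))  # Dutch Light width: 14 panes -> 15 bars
print(glazing_bars(wall, 600))  # modern domestic width: 17 panes -> 18 bars
```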
Greenhouses with spectrally selective solar modules
Chinese Solar Greenhouse
Chinese solar greenhouses are designed to maximize solar energy, making them highly efficient in colder climates without the need for additional heating systems. Originating in 1978, these greenhouses feature three solid walls, often made of brick or clay, with a transparent south-facing side that captures solar heat during the day. This design can keep the interior up to 25°C (45°F) warmer than the outside, even in winter. Over time, innovations like modern insulation materials and automated night curtains have been incorporated, enhancing their efficiency and maintaining a stable environment for crops.
Despite their simplicity and cost-effectiveness, Chinese solar greenhouses have some limitations, such as the need for proper orientation to maximize sunlight and challenges with the durability of plastic film coverings. Nevertheless, they remain a practical solution for year-round farming in regions with significant temperature variations, and are widely used across northern China.
Uses
Greenhouses allow for greater control over the growing environment of plants. Depending upon the technical specification of a greenhouse, key factors that may be controlled include temperature, levels of light and shade, irrigation, fertilizer application, and atmospheric humidity. Greenhouses may be used to overcome shortcomings in the growing qualities of a piece of land, such as a short growing season or poor light levels, and they can thereby improve food production in marginal environments. Shade houses are used specifically to provide shade in hot, dry climates.
As they may enable certain crops to be grown throughout the year, greenhouses are increasingly important in the food supply of high-latitude countries. One of the largest complexes in the world is in Almería, Andalucía, Spain, where greenhouses cover almost .
Greenhouses are often used for growing flowers, vegetables, fruits, and transplants. Special greenhouse varieties of certain crops, such as tomatoes, are generally used for commercial production.
Many vegetables and flowers can be grown in greenhouses in late winter and early spring, and then transplanted outside as the weather warms. Seed tray racks can also be used to stack seed trays inside the greenhouse for later transplanting outside. Hydroponics (especially hydroponic A-frames) can be used to make the most use of the interior space when growing crops to mature size inside the greenhouse.
Bumblebees can be used as pollinators, but other types of bees have also been used, as has artificial pollination.
The relatively closed environment of a greenhouse has its unique management requirements, compared with outdoor production. Pests and diseases, and extremes of temperature and humidity, have to be controlled, and irrigation is necessary to provide water. Most greenhouses use sprinklers or drip lines. Significant inputs of heat and light may be required, particularly with winter production of warm-weather vegetables.
Greenhouses also have applications outside of the agriculture industry. GlassPoint Solar, located in Fremont, California, encloses solar fields in greenhouses to produce steam for solar-enhanced oil recovery. For example, in November 2017 GlassPoint announced that it is developing a solar enhanced oil recovery facility near Bakersfield, CA that uses greenhouses to enclose its parabolic troughs.
An "alpine house" is a specialized greenhouse used for growing alpine plants. The purpose of an alpine house is to mimic the conditions in which alpine plants grow; particularly to protect from wet conditions in winter. Alpine houses are often unheated since the plants grown there are hardy, or require at most protection from hard frost in the winter. They are designed to have excellent ventilation.
Adoption
Worldwide, there are an estimated nine million acres (about thirty-six and a half thousand square kilometers) of greenhouses.
Netherlands
The Netherlands has some of the largest greenhouses in the world. Such is the scale of food production in the country that in 2017, greenhouses occupied nearly 5,000 hectares.
Greenhouses began to be built in the Westland region of the Netherlands in the mid-19th century. The addition of sand to bogs and clay soil created fertile soil for agriculture, and around 1850, grapes were grown in the first greenhouses, simple glass constructions with one of the sides consisting of a solid wall. By the early 20th century, greenhouses began to be constructed with all sides built using glass, and they began to be heated. This also allowed for the production of fruits and vegetables that did not ordinarily grow in the area. Today, the Westland and the area around Aalsmeer have the highest concentration of greenhouse agriculture in the world. The Westland produces mostly vegetables, besides plants and flowers; Aalsmeer is noted mainly for the production of flowers and potted plants. Since the 20th century, the area around Venlo and parts of Drenthe have also become important regions for greenhouse agriculture.
Since 2000, technical innovations have included the "closed greenhouse", a completely closed system allowing the grower complete control over the growing process while using less energy. Floating greenhouses are used in watery areas of the country.
The Netherlands has around 4,000 greenhouse enterprises that operate over 9,000 hectares of greenhouses and employ some 150,000 workers, producing €7.2 billion worth of vegetables, fruit, plants, and flowers, some 80% of which is exported.
| Technology | Buildings and infrastructure | null |
87019 | https://en.wikipedia.org/wiki/Ductility | Ductility | Ductility refers to the ability of a material to sustain significant plastic deformation before fracture. Plastic deformation is the permanent distortion of a material under applied stress, as opposed to elastic deformation, which is reversible upon removing the stress. Ductility is a critical mechanical performance indicator, particularly in applications that require materials to bend, stretch, or deform in other ways without breaking. The extent of ductility can be quantitatively assessed using the percent elongation at break, given by the equation:
%EL = ((l_f − l_0) / l_0) × 100
where l_f is the length of the material after fracture and l_0 is the original length before testing. This formula quantifies how much a material can stretch under tensile stress before failure, providing key insight into its ductile behavior. Ductility is an important consideration in engineering and manufacturing. It defines a material's suitability for certain manufacturing operations (such as cold working) and its capacity to absorb mechanical overload, such as in an engine. Metals generally described as ductile include gold and copper, while platinum is the most ductile of all metals in pure form. However, not all metals fail in a ductile manner; some, such as cast iron, fail in a brittle manner. Polymers can generally be viewed as ductile materials, as they typically allow for plastic deformation.
Inorganic materials, including a wide variety of ceramics and semiconductors, are generally characterized by their brittleness. This brittleness primarily stems from their strong ionic or covalent bonds, which maintain the atoms in a rigid, densely packed arrangement. Such a rigid lattice structure restricts the movement of atoms or dislocations, which is essential for plastic deformation. The significant difference in ductility observed between metals and inorganic semiconductors or insulators can be traced back to each material’s inherent characteristics, including the nature of their defects, such as dislocations, and their specific chemical bonding properties. Consequently, unlike ductile metals and some organic materials with ductility (%EL) from 1.2% to over 1200%, brittle inorganic semiconductors and ceramic insulators typically show much smaller ductility at room temperature.
Malleability, a similar mechanical property, is characterized by a material's ability to deform plastically without failure under compressive stress. Historically, materials were considered malleable if they were amenable to forming by hammering or rolling. Lead is an example of a material which is relatively malleable but not ductile.
Materials science
Ductility is especially important in metalworking, as materials that crack, break or shatter under stress cannot be manipulated using metal-forming processes such as hammering, rolling, drawing or extruding. Malleable materials can be formed cold using stamping or pressing, whereas brittle materials may be cast or thermoformed.
High degrees of ductility occur due to metallic bonds, which are found predominantly in metals; this leads to the common perception that metals are ductile in general. In metallic bonds valence shell electrons are delocalized and shared between many atoms. The delocalized electrons allow metal atoms to slide past one another without being subjected to strong repulsive forces that would cause other materials to shatter.
The ductility of steel varies depending on the alloying constituents. Increasing the levels of carbon decreases ductility. Many plastics and amorphous solids, such as Play-Doh, are also malleable. The most ductile metal is platinum and the most malleable metal is gold. When highly stretched, such metals distort via formation, reorientation and migration of dislocations and crystal twins without noticeable hardening.
Quantification
Basic definitions
The quantities commonly used to define ductility in a tension test are relative elongation in percent (%EL) and reduction of area (RA) at fracture. Fracture strain is the engineering strain at which a test specimen fractures during a uniaxial tensile test. Percent elongation, or engineering strain at fracture, can be written as:
%EL = ((l_f − l_0) / l_0) × 100
Percent reduction in area can be written as:
%RA = ((A_0 − A_f) / A_0) × 100
where the area of concern is the cross-sectional area of the gauge of the specimen, A_0 is the original cross-sectional area, and A_f is the cross-sectional area at fracture.
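Both measures follow directly from before-and-after measurements of the gauge section. A minimal sketch, assuming a round gauge section; the specimen dimensions below are hypothetical values chosen only for illustration:

```python
import math

def percent_elongation(l0_mm: float, lf_mm: float) -> float:
    """Percent elongation at fracture: 100 * (lf - l0) / l0."""
    return 100.0 * (lf_mm - l0_mm) / l0_mm

def percent_reduction_in_area(d0_mm: float, df_mm: float) -> float:
    """Percent reduction in area: 100 * (A0 - Af) / A0, for a round gauge section."""
    a0 = math.pi * (d0_mm / 2.0) ** 2
    af = math.pi * (df_mm / 2.0) ** 2
    return 100.0 * (a0 - af) / a0

# Hypothetical specimen: 50 mm gauge length and 12.5 mm diameter, fracturing
# at 65 mm gauge length with a 9.0 mm diameter at the neck.
print(f"%EL = {percent_elongation(50.0, 65.0):.1f}%")        # 30.0%
print(f"%RA = {percent_reduction_in_area(12.5, 9.0):.1f}%")  # ~48.2%
```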
According to Shigley's Mechanical Engineering Design, 'significant' denotes about 5.0 percent elongation.
Effect of sample dimensions
An important point concerning the value of the ductility (nominal strain at failure) in a tensile test is that it commonly exhibits a dependence on sample dimensions. However, a universal parameter should exhibit no such dependence (and, indeed, there is no dependence for properties such as stiffness, yield stress and ultimate tensile strength). This occurs because the measured strain (displacement) at fracture commonly incorporates contributions from both the uniform deformation occurring up to the onset of necking and the subsequent deformation of the neck (during which there is little or no deformation in the rest of the sample). The significance of the contribution from neck development depends on the "aspect ratio" (length / diameter) of the gauge length, being greater when the ratio is low. This is a simple geometric effect, which has been clearly identified. There have been both experimental studies and theoretical explorations of the effect, mostly based on Finite Element Method (FEM) modelling. Nevertheless, it is not universally appreciated and, since the range of sample dimensions in common use is quite wide, it can lead to highly significant variations (by factors of up to 2 or 3) in ductility values obtained for the same material in different tests.
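The geometric effect can be made concrete by splitting the measured extension into a uniform part (proportional to the gauge length) and a localized contribution from the neck (roughly independent of it). The strain value and neck extension in this sketch are hypothetical, chosen only to show how the same material yields different nominal ductility values with different gauge lengths:

```python
def nominal_elongation_percent(uniform_strain: float, neck_extension_mm: float,
                               gauge_length_mm: float) -> float:
    """Nominal strain at failure when the measured extension combines uniform
    elongation with a fixed, localized extension in the neck."""
    return 100.0 * (uniform_strain + neck_extension_mm / gauge_length_mm)

# Same hypothetical material (uniform strain 0.15, ~2 mm of extension in the neck),
# tested with two different gauge lengths:
for gauge_mm in (25.0, 100.0):
    print(f"gauge {gauge_mm:.0f} mm: "
          f"%EL = {nominal_elongation_percent(0.15, 2.0, gauge_mm):.0f}%")
# gauge 25 mm  -> 23%
# gauge 100 mm -> 17%
```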
A more meaningful representation of ductility would be obtained by identifying the strain at the onset of necking, which should be independent of sample dimensions. This point can be difficult to identify on a (nominal) stress-strain curve, because the peak (representing the onset of necking) is often relatively flat. Moreover, some (brittle) materials fracture before the onset of necking, such that there is no peak. In practice, for many purposes it is preferable to carry out a different kind of test, designed to evaluate the toughness (energy absorbed during fracture), rather than use ductility values obtained in tensile tests.
In an absolute sense, "ductility" values are therefore virtually meaningless. The actual (true) strain in the neck at the point of fracture bears no direct relation to the raw number obtained from the nominal stress-strain curve; the true strain in the neck is often considerably higher. Also, the true stress at the point of fracture is usually higher than the apparent value according to the plot. The load often drops while the neck develops, but the sectional area in the neck is also dropping (more sharply), so the true stress there is rising. There is no simple way of estimating this value, since it depends on the geometry of the neck. While the true strain at fracture is a genuine indicator of "ductility", it cannot readily be obtained from a conventional tensile test.
The Reduction in Area (RA) is defined as the decrease in sectional area at the neck (usually obtained by measurement of the diameter at one or both of the fractured ends), divided by the original sectional area. It is sometimes stated that this is a more reliable indicator of the "ductility" than the elongation at failure (partly in recognition of the fact that the latter is dependent on the aspect ratio of the gauge length, although this dependence is far from being universally appreciated). There is something in this argument, but the RA is still some way from being a genuinely meaningful parameter. One objection is that it is not easy to measure accurately, particularly with samples that are not circular in section. Rather more fundamentally, it is affected by both the uniform plastic deformation that took place before necking and by the development of the neck. Furthermore, it is sensitive to exactly what happens in the latter stages of necking, when the true strain is often becoming very high and the behavior is of limited significance in terms of a meaningful definition of strength (or toughness). There has again been extensive study of this issue.
Ductile–brittle transition temperature
Metals can undergo two different types of fracture: brittle fracture or ductile fracture. Failure propagation occurs faster in brittle materials because, unlike ductile materials, they cannot blunt a growing crack through plastic deformation. Thus, ductile materials are able to sustain more stress, owing to their ability to absorb more energy prior to failure than brittle materials can. The plastic deformation results in the material following a modification of the Griffith equation, where the critical fracture stress increases due to the plastic work required to extend the crack adding to the work necessary to form the crack - work corresponding to the increase in surface energy that results from the formation of an additional crack surface. The plastic deformation of ductile metals is important as it can be a sign of the potential failure of the metal. Yet, the point at which a material exhibits ductile rather than brittle behavior depends not only on the material itself but also on the temperature at which the stress is applied. The temperature at which the material changes from brittle to ductile, or vice versa, is crucial for the design of load-bearing metallic products and is known as the ductile-brittle transition temperature (DBTT). Below the DBTT, the material is unable to deform plastically, and the crack propagation rate increases rapidly, leading to rapid brittle failure. Furthermore, the DBTT is important since, once a material is cooled below the DBTT, it has a much greater tendency to shatter on impact instead of bending or deforming (low temperature embrittlement). Thus, the DBTT indicates the temperature below which a material's ability to deform in a ductile manner decreases and the rate of crack propagation drastically increases. In other words, solids are very brittle at very low temperatures, and their toughness becomes much higher at elevated temperatures.
For more general applications, it is preferred to have a lower DBTT to ensure the material has a wider ductility range. This ensures that sudden cracks are inhibited, so that failures in the metal body are prevented. It has been determined that the more slip systems a material has, the wider the range of temperatures over which ductile behavior is exhibited. This is because the slip systems allow more motion of dislocations when a stress is applied to the material. Thus, in materials with fewer slip systems, dislocations are often pinned by obstacles, leading to strain hardening, which increases the material's strength and makes it more brittle. For this reason, FCC (face centered cubic) structures are ductile over a wide range of temperatures, BCC (body centered cubic) structures are ductile only at high temperatures, and HCP (hexagonal closest packed) structures are often brittle over wide ranges of temperatures. This leads to each of these structures having different performances as they approach failure (fatigue, overload, and stress cracking) under various temperatures, and shows the importance of the DBTT in selecting the correct material for a specific application. For example, zamak 3 exhibits good ductility at room temperature but shatters when impacted at sub-zero temperatures. The DBTT is a very important consideration in selecting materials that are subjected to mechanical stresses. A similar phenomenon, the glass transition temperature, occurs with glasses and polymers, although the mechanism is different in these amorphous materials. The DBTT is also dependent on the size of the grains within the metal: typically, smaller grain size leads to an increase in tensile strength, resulting in an increase in ductility and a decrease in the DBTT. This increase in tensile strength is due to the smaller grain sizes producing grain boundary hardening within the material, where dislocations require a larger stress to cross the grain boundaries and continue to propagate through the material. It has been shown that by continuing to refine ferrite grains from 40 microns down to 1.3 microns, it is possible to eliminate the DBTT entirely, so that brittle fracture never occurs in ferritic steel (as the DBTT required would be below absolute zero).
In some materials, the transition is sharper than others and typically requires a temperature-sensitive deformation mechanism. For example, in materials with a body-centered cubic (bcc) lattice the DBTT is readily apparent, as the motion of screw dislocations is very temperature sensitive because the rearrangement of the dislocation core prior to slip requires thermal activation. This can be problematic for steels with a high ferrite content. This famously resulted in serious hull cracking in Liberty ships in colder waters during World War II, causing many sinkings. DBTT can also be influenced by external factors such as neutron radiation, which leads to an increase in internal lattice defects and a corresponding decrease in ductility and increase in DBTT.
The most accurate method of measuring the DBTT of a material is by fracture testing. Typically, four-point bend testing at a range of temperatures is performed on pre-cracked bars of polished material. Two fracture tests are typically utilized to determine the DBTT of specific metals: the Charpy V-notch test and the Izod test. The Charpy V-notch test determines the impact energy absorption ability, or toughness, of the specimen by measuring the potential energy difference resulting from the collision between a mass on a free-falling pendulum and the machined V-shaped notch in the sample, resulting in the pendulum breaking through the sample. The DBTT is determined by repeating this test over a variety of temperatures and noting when the resulting fracture changes to a brittle behavior, which occurs when the absorbed energy is dramatically decreased. The Izod test is essentially the same as the Charpy test, with the only differentiating factor being the placement of the sample; in the former the sample is placed vertically, while in the latter the sample is placed horizontally with respect to the bottom of the base.
For experiments conducted at higher temperatures, dislocation activity increases. At a certain temperature, dislocations shield the crack tip to such an extent that the applied deformation rate is not sufficient for the stress intensity at the crack-tip to reach the critical value for fracture (KiC). The temperature at which this occurs is the ductile–brittle transition temperature. If experiments are performed at a higher strain rate, more dislocation shielding is required to prevent brittle fracture, and the transition temperature is raised.
| Physical sciences | Solid mechanics | Physics |
87026 | https://en.wikipedia.org/wiki/Tapestry | Tapestry | Tapestry is a form of textile art, traditionally woven by hand on a loom. Normally it is used to create images rather than patterns. Tapestry is relatively fragile, and difficult to make, so most historical pieces are intended to hang vertically on a wall (or sometimes in tents), or sometimes horizontally over a piece of furniture such as a table or bed. Some periods made smaller pieces, often long and narrow and used as borders for other textiles. Most weavers use a natural warp thread, such as wool, linen, or cotton. The weft threads are usually wool or cotton but may include silk, gold, silver, or other alternatives.
In late medieval Europe, tapestry was the grandest and most expensive medium for figurative images in two dimensions, and despite the rapid rise in importance of painting it retained this position in the eyes of many Renaissance patrons until at least the end of the 16th century, if not beyond. The European tradition continued to develop and reflect wider changes in artistic styles until the French Revolution and Napoleonic Wars, before being revived on a smaller scale in the 19th century.
Technically, tapestry is weft-faced weaving, in which all the warp threads are hidden in the completed work, unlike most woven textiles, where both the warp and the weft threads may be visible. In tapestry weaving, weft yarns are typically discontinuous (unlike brocade); the artisan interlaces each coloured weft back and forth in its own small pattern area. It is a plain weft-faced weave having weft threads of different colours worked over portions of the warp to form the design. European tapestries are normally made to be seen only from one side, and often have a plain lining added on the back. However, other traditions, such as Chinese kesi and that of pre-Columbian Peru, make tapestry to be seen from both sides.
Tapestry should be distinguished from the different technique of embroidery, although large pieces of embroidery with images are sometimes loosely called "tapestry", as with the famous Bayeux Tapestry, which is in fact embroidered. From the Middle Ages on European tapestries could be very large, with images containing dozens of figures. They were often made in sets, so that a whole room could be hung with them.
Terms and etymology
In English, "tapestry" has two senses, both of which apply to most of the works discussed here. Firstly it means work using the tapestry weaving technique described above and below, and secondly it means a rather large textile wall hanging with a figurative design. Some embroidered works, like the Bayeux Tapestry, meet the second definition but not the first. The situation is complicated by the French equivalent tapisserie also covering needlepoint work, which can lead to confusion, especially with pieces such as furniture covers, where both techniques are used.
According to the Oxford English Dictionary, the earliest use in English was in a will of 1434, mentioning a "Lectum meum de tapstriwerke cum leonibus cum pelicano". They give a wide definition, covering: "A textile fabric decorated with designs of ornament or pictorial subjects, painted, embroidered, or woven in colours, used for wall hangings, curtains, covers for seats, ..." before mentioning "especially" those woven in a tapestry weave.
The word tapestry derives from Old French , from , meaning "to cover with heavy fabric, to carpet", in turn from , "heavy fabric", via Latin ( ), which is the Latinisation of the Greek (; , ), "carpet, rug". The earliest attested form of the word is the Mycenaean Greek , , written in the Linear B syllabary.
"Tapestry" was not the common English term until near the end of the classic period for them. If not just called "hangings" or "cloths", they were known as "arras", from the period when Arras was the leading production centre. Arazzo is still the term for tapestry in Italian, while a number of European languages use variants based on Gobelins, after the French factory; for example both Danish and Hungarian use gobelin (and in Danish tapet means wallpaper). Thomas Campbell argues that in documents relating to the Tudor royal collection from 1510 onwards "arras" specifically meant tapestries using gold thread.
Production
Tapestry is a type of weaving. Various designs of looms can be used, including upright or "high-warp" looms, where the tapestry is stretched vertically in front of the weaver, or horizontal "low-warp" looms, which were usual in large medieval and Renaissance workshops, but later mostly used for smaller pieces. The weaver always works on the back of the piece, and is normally following a full-size drawn or painted cartoon, or possibly another tapestry; depending on the set up, this reverses (is a mirror image of) the tapestry image. The cartoon was generally created from a smaller modello, which in "industrial" workshops from at least the late Middle Ages on was produced by a professional artist, who often had little or no further involvement in the process. The cartoon was traced onto the warp lines by the weaver, and then placed where it could still be seen, sometimes through a mirror, when it hung behind the weaver. With low-warp looms the cartoon was usually cut into strips and placed beneath the weaving, where the weaver could see it through the "web" of threads. The Raphael Cartoons, which are very rare examples of surviving cartoons, were cut in this way.
In European "industrial" tapestries the warp threads were normally wool, but in more artisanal settings, and older ones, linen was often used. The weft threads were wool, with silk, silver or gold thread used in the most expensive tapestries. Some famous designs, such as the Sistine Chapel tapestries and the Story of Abraham set probably first made for King Henry VIII, survive in versions with precious metals and other versions without. Using silk might increase the cost by four times, and adding gold thread increased the cost enormously, to perhaps fifty times that of wool alone.
The weavers were usually male, as the work was physically demanding; spinning the threads was usually a female preserve. Apart from the design and materials, the quality of tapestries varies with the tightness of the weaving. One modern measure of this is the number of warp threads per centimetre. It is estimated that a single weaver could produce a square yard of medium quality tapestry in a month, but only half that of the finest quality.
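Those production rates make it easy to see why large hangings were so costly in labour. A rough sketch, with a hypothetical tapestry size and team size; the rates of one square yard per weaver-month for medium quality, and half that for the finest quality, are the figures quoted above:

```python
def weaver_months(area_sq_yd: float, sq_yd_per_weaver_month: float) -> float:
    """Total weaver-months needed to produce a tapestry of the given area."""
    return area_sq_yd / sq_yd_per_weaver_month

# Hypothetical hanging 5 yards high by 8 yards wide, woven by a team of five
# weavers working side by side at a wide low-warp loom.
area = 5 * 8  # 40 square yards
for quality, rate in [("medium quality", 1.0), ("finest quality", 0.5)]:
    total = weaver_months(area, rate)
    print(f"{quality}: {total:.0f} weaver-months (~{total / 5:.0f} months for five weavers)")
```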
Function
The success of decorative tapestry can be partially explained by its portability (Le Corbusier once called tapestries "nomadic murals"). The fully hand-woven tapestry form is more suitable for creating new figurative designs than other types of woven textile, and the looms could be much larger. Kings and noblemen could fold up and transport tapestries from one residence to another. Many kings had "wardrobe" departments with their own buildings devoted to the care, repair, and movement of tapestries, which were folded into large canvas bags and carried on carts. In churches, they were displayed on special occasions. Tapestries were also draped on the walls of palaces and castles for insulation during winter, as well as for decorative display. For special ceremonial processions such as coronations, royal entries and weddings, they would sometimes be displayed outside. The largest and best tapestries, designed for more public spaces in palaces, were only displayed on special occasions, reducing wear and fading. Presumably the smaller personal rooms were hung permanently.
Many smaller pieces were made as covers for furniture or cushions, or curtains and bed hangings. Others, especially in the case of those made for patrons outside the top of the elite, were cut up and reused for such functions when they, or tapestries in general, came to seem old-fashioned. Bags, and sometimes clothing were other re-uses. The Beauvais Manufactory became rather a specialist in furniture upholstery, which enabled it to survive after the French Revolution when this became the main remaining market. In the case of tapestries with precious metal thread, they might be burned to recover the metal, as Charles V's soldiers did to some of the Sistine Chapel tapestries, and the French Directory government did in the 1790s to most of the royal collection from the Renaissance.
In the Middle Ages and the Renaissance, a rich tapestry panel woven with symbolic emblems, mottoes, or coats of arms called a baldachin, canopy of state or cloth of state was hung behind and over a throne as a symbol of authority. The seat under such a canopy of state would normally be raised on a dais.
As paintings came to be regarded as more important works of art, typically by the 17th century, tapestries in palaces were moved less, and came to be regarded as more or less permanent fittings for a particular room. It was at this point that many old tapestries were cut to allow fitting around doors and windows. They also often suffered the indignity of having paintings hung on top of them. Some new tapestries were made to fit around a specific room; the design of the Gobelins set from Croome Court, now in New York, has a large field with an ornamental design that could easily be adjusted in size to fit the measurements of the customer's room.
Early history
Ancient
Much is unclear about the early history of tapestry, as actual survivals are very rare, and literary mentions in Greek, Roman and other literature almost never give enough detail to establish that a tapestry technique is being described. From ancient Egypt, tapestry weave pieces using linen were found in the tombs of both Thutmose IV (d. 1391 or 1388 BC) and Tutankhamen (c. 1323 BC), the latter a glove and a robe.
Pieces in wool, given a wide range of dates around two millennia ago, have been found in a cemetery at Sanpul (Shampula) and other sites near Khotan in the Tarim Basin. They appear to have been made in a variety of places, including the Hellenistic world. The largest fragments, known as the Sampul tapestry and probably Hellenistic in origin, apparently came from a large wall-hanging, but had been reused to make a pair of trousers.
Early and High medieval
The Hestia Tapestry from Byzantine Egypt around 500–550, is a largely intact wool piece with many figures around the enthroned goddess Hestia, who is named in Greek letters. It is 114 x 136.5 cm (44.9 x 53.7 inches) with a rounded top, and was presumably hung in a home, showing the persistence of Greco-Roman paganism at this late date. The Cleveland Museum of Art has a comparable enthroned Virgin Mary of similar date. Many of the small borders and patches with images with which the early Byzantine world liked to decorate their clothing were in tapestry.
A number of survivals from around the year 1000 show the development of a frieze shape, of a large long tapestry that is relatively short in height. These were apparently designed to hang around a hall or church, probably rather high; surviving examples have nearly all been preserved in churches, but may originally have been secular. The Cloth of Saint Gereon, from around 1000, has a repeat pattern centred on medallions with a motif of a bull being attacked by a griffin, taken from Byzantine silk (or its Persian equivalent) but probably woven locally in the Rhineland. It survived in a church in Cologne, Germany.
The five strips of Överhogdal tapestries, from Sweden and dated to within 70 years of 1100, have designs in which animals greatly outnumber human figures, and have been given various interpretations. One strip has geometrical motifs. The Skog tapestry, also from Sweden but probably early 14th-century, is comparable in style.
The most famous frieze hanging is the Bayeux Tapestry, actually an embroidery, which is 68.38 metres long and 0.5 metres wide and would have been even longer originally. This was made in England, probably in the 1070s, and the narrative of the Norman Conquest of England in 1066 is very clear, explained by tituli in Latin. This may have been an Anglo-Saxon genre, as the Liber Eliensis records that the widow of the Anglo-Saxon commander Byrhtnoth gave Ely Abbey a tapestry or hanging celebrating his deeds, presumably in the style of the Bayeux Tapestry, the only surviving example of such a work. This was given immediately after his death in 991 at the Battle of Maldon, so had probably been hanging in his home previously.
A group with narrative religious scenes in a clearly Romanesque style that relates to Rhineland illuminated manuscripts of the same period was made for Halberstadt Cathedral in Germany around 1200, and shaped differently to fit specific spaces. These may well have been made by nuns, or the secular canonesses of nearby Quedlinburg Abbey.
In this period repeated decorative motifs, increasingly often heraldic, and comparable to the styles of imported luxury fabrics such as Byzantine silk, seem to have been the common designs. Of the tapestries mentioned above, the Cloth of St Gereon best represents this style.
Peak period, after about 1350
A decisive shift in European tapestry history came around 1350, and in many respects set the pattern for the industry until the end of its main period of importance, in the upheavals following the French Revolution. The tapestries made for the very small number of customers able to commission the best pieces were now extremely large, and extremely expensive, very often made in sets, and often showed complicated narrative or allegorical scenes with large numbers of figures. They were made in large workshops concentrated in a number of cities in a relatively small region of northern France and the Southern Netherlands (partly to be near supplies of English wool). By convention all these are often called "Flemish tapestries", although most of the production centres were not in fact in the County of Flanders.
Before reaching the weaving workshop, the commissioning process typically involved a patron, an artist, and a merchant or dealer who sorted out the arrangements and contracts. Some tapestries seem to have been made for stock, before a customer had emerged. The financing of the considerable costs of setting up a workshop is often obscure, especially in the early period, but some workshops were supported by rulers or other wealthy people. The merchants or dealers were very likely also involved.
Weaving centres
Where surviving tapestries from before around 1600 were made is often unclear; from 1528 Brussels, by then clearly the main centre, required its weavers to mark tapestries of any size with the city's mark and that of the weaver or merchant. At any one time from 1350 to 1600 probably only one or two centres could produce the largest and finest royal orders, and groups of highly skilled weavers migrated to new centres, often driven to move by wars or the plague. At first Paris led the field, but the English occupation there after 1418 sent many to Arras, already a centre. Arras in turn was sacked in 1477, leading to the rise of Tournai, until a serious plague early in the next century. Brussels had been growing in importance, and now became the most important centre, which it remained until the Eighty Years War disrupted all the Netherlands. Brussels had a revival in the early 17th century, but from around 1650 the French factories were increasingly overtaking it, and remained dominant until both fashion and the upheavals of the French Revolution and the Napoleonic Wars brought the virtual end of the traditional demand for large tapestries.
There was always some tapestry weaving, mostly in rather smaller workshops making smaller pieces, in other towns in northern France and the Low Countries. This was also the case in other parts of Europe, especially Italy and Germany. From the mid-16th century many rulers encouraged or directly established workshops capable of high-quality work in their domains. This was most successful in France, but Tuscany, Spain, England and eventually Russia had high-quality workshops, normally beginning with the importation of a group of skilled workers from the "Flemish" centres.
Patrons
The main weaving centres were ruled by the French and Burgundian branches of the House of Valois, who were extremely important patrons in the Late Medieval period. This began with the four sons of John II of France (d. 1362), whose inventories reveal they owned hundreds of tapestries between them. Almost the only clear survival from these collections, and the most famous tapestry from the 14th century, is the huge Apocalypse Tapestry, a very large set made for Louis I, Duke of Anjou in Paris between 1377 and 1382.
Another of the brothers, Philip the Bold, Duke of Burgundy (d. 1404) was probably an even more extravagant spender, and presented many tapestries to other rulers around Europe. Several of the tapestry-weaving centres were in his territories, and his gifts can be seen as a rather successful attempt to spread the taste for large Flemish tapestries to other courts, as well as being part of his attempt to promote the status of his duchy. Apart from Burgundy and France, tapestries were given to several of the English Plantagenets, and the rulers of Austria, Prussia, Aragon, Milan, and at his specific request, to the Ottoman Sultan Bayezid I (as part of a ransom deal for the duke's son). None of the tapestries Philip commissioned appear to survive. Philip's taste for tapestries was to continue very strongly in his descendants, including the Spanish Habsburgs.
Subjects and style
The new style of grand tapestries, large and often made in sets, mostly showed narrative subjects with large numbers of figures. The iconography of a high proportion of narrative tapestries goes back to written sources, the Bible and Ovid's Metamorphoses being two popular choices.
It is a feature of tapestry weaving, in contrast to painting, that weaving an area of the work containing only relatively plain areas of the composition, such as sky, grass or water, still involves a relatively large amount of slow and skilled work. This, together with the client's expectation of an effect of overpowering magnificence, and the remoteness of the main centres from Italian influence, led to northern compositions remaining crammed with figures and other details long after classicizing trends in Italian Renaissance painting had reduced the crowding in paintings.
An important challenge to the northern style was the arrival in Brussels, probably in 1516, of the Raphael Cartoons for the pope's Sistine Chapel commission of a grand set depicting the Acts of the Apostles. These were sent from Rome and used the latest monumental classicizing High Renaissance style, which was also reaching the north through prints.
Hunting
Hunting scenes were also very popular. These were usually given no specific setting, although sometimes the commissioner and other figures might be given portraits. The four Devonshire Hunting Tapestries (1430–1450, V&A), probably made in Arras, are perhaps the largest set of 15th-century survivals, showing the hunting of bears, boars, deer, swans, otters, and falconry. Very fashionably dressed ladies and gentlemen stroll around beside the slaughter. Another set, from after 1515, shows a similar late-medieval style, although partly made with silk, so extra-expensive.
But the twelve pieces in Les Chasses de Maximilien (1530s, Louvre), made in Brussels for a Habsburg patron, show an advanced Renaissance compositional style adapted to tapestries. These have a hunting scene for each month in the year, and also show specific locations around the city. Goya was still designing hunting scenes in the 1770s.
Military
After a probable gap since the 11th century, in the late 14th century sets of tapestries returned as the grandest medium for "official military art", usually celebrating the victories of the person commissioning them. Philip the Bold commissioned a Battle of Roosbeke set two years after his victory in 1382, which was five metres high and totalled over 41 metres in width. John of Gaunt, Duke of Lancaster insisted it was changed when Philip displayed it at a diplomatic meeting in Calais in 1393 to negotiate a peace treaty; Gaunt regarded the subject-matter as inappropriate for the occasion. The Portuguese Pastrana Tapestries (1470s) were an early example, and a rare survival from so early.
Many sets were produced of the lives of classical heroes that included many battle scenes. Not only the Trojan War, Alexander the Great, Julius Caesar and Constantine I were commemorated, but also less likely figures such as Cyrus the Great of ancient Persia.
There were many 16th-century sets of contemporary wars, especially celebrating Habsburg victories. Charles V commissioned a large set after his decisive victory at the Battle of Pavia in 1525; a set is now in the Museo di Capodimonte in Naples. When he led an expedition to North Africa, culminating in the Conquest of Tunis in 1535 (no more lasting than that of Tangier depicted in the Pastrana tapestries), he took the Flemish artist Jan Cornelisz Vermeyen with him, mainly to produce drawings for the set of tapestries ordered on his return.
Contemporary military subjects became rather less popular as many 16th-century wars became religious; sometimes allegorical subjects were chosen to cover these. But the Battle of Lepanto was commemorated with a Brussels set, and the defeat of the Spanish Armada with the Armada Tapestries (1591); these were made in Delft, by a team who also made many tapestries of Dutch naval victories. The Armada set were destroyed in the Burning of Parliament in 1834, but are known from prints. Both sets adopted a high and distant aerial view, which continued in many later sets of land battles, often combined with a few large figures in the foreground. The French tapestries commissioned by Louis XIV of the victories early in his reign were of this type. Right at the end of the 16th century, a set (now in Madrid) was commissioned of the Triumphs and battles of Archduke Albert, who had just been made sovereign of the Spanish Netherlands (his military career had in fact been rather unsuccessful). The city council of Antwerp ordered it from the workshop of Maarten Reymbouts the Younger in Brussels, to be first seen on the occasion of his Royal entry to Antwerp in late 1599.
A set produced for John Churchill, 1st Duke of Marlborough showing his victories was varied for different clients, and even sold to one of his opponents, Maximilian II Emanuel, Elector of Bavaria, after reworking the generals' faces and other details.
Millefleur style
Millefleur (or millefleurs) was a background style of many different small flowers and plants, usually shown on a green ground, as though growing in grass. Often various animals are added, usually all at about the same size, so that a rabbit or dove and a unicorn are not much different in size. Trees are usually far too small and out of scale with the flowers around them, a feature also generally found in medieval painting.
The millefleur style was used for a range of different subjects from about 1400 to 1550, but mainly between about 1480 and 1520. In many subjects the millefleur background stretches to the top of the tapestry, eliminating any sky; the minimization of sky was already a feature of tapestry style; the Devonshire Hunting Tapestries show an early stage of the style. Prominent millefleur backgrounds, as opposed to those mostly covered with figures, are especially a feature of allegorical and courtly subjects. The Lady and the Unicorn set in Paris is a famous example, from around 1500.
Millefleur backgrounds became very common for heraldic tapestries, which were one of the most popular relatively small types, usually more tall than wide. These usually featured the coat of arms of the patron in the centre, with a wide floral field. They would often be hung behind the patron when he sat in state or dined, and were made for many nobles who could not afford the huge narrative sets bought by royalty. Enghien was a smaller weaving centre that seems to have specialized in these. Earlier types of heraldic tapestries had often repeated elements of the heraldry in patterns.
Landscape
After about 1520 the top workshops moved away from millefleur settings towards naturalistic landscape, with all the elements sized at a consistent perspective scale. Tapestries whose main content was landscape and animals are known as verdure subjects (from the French for "greenery"). This genre has suffered more than most from colour changes as the greens of tapestries are especially prone to fade, or turn to blues. Smaller tapestries of this type remained popular until the 18th century, and had the advantage that workshops could make them without a specific order, and distribute them across Europe via a network of dealers. From about 1600 they followed the wider trends in European landscape painting and prints. Oudenarde specialized in these, but they were produced in many towns. As with paintings, the addition of a figure or two could elevate such pieces to a depiction of a story from classical mythology, or a hunting subject.
Arrival of Renaissance style and subjects
Tapestry weavers in the Netherlands had become very comfortable working with the Gothic style by the late 15th century, and were slow to reflect the stylistic changes of the Italian Renaissance; perhaps pressure from the customers for tapestries led the way. Prints enabled Italian designs to be seen in the north.
A distinctive Italian subject was the Petrarchan triumph, derived from his poem-cycle I trionfi (before 1374). The first recorded tapestries were a three-piece set ordered by Duke Philip the Bold of Burgundy from Paris in 1399. A set made in the 1450s for Giovanni de' Medici, a leading patron of the latest Florentine style, used cartoons sent from Italy to the Netherlandish weavers. But the subjects suited the tapestry weavers' style, as most designs included packed crowds of elaborately-dressed figures, and there were moral messages to be drawn.
16th century
The 16th century continued the taste for tapestry, and was arguably the finest period in the history of the medium. By now the tapestry-producing towns were mostly ruled by the Habsburg family, who replaced the Valois as the dominant patrons. At the start of the century Tournai was perhaps still the largest weaving centre, but after a plague it was replaced by Brussels, which as the Netherlandish administrative capital of the Valois and Habsburgs in recent decades was probably already the main centre for the highest quality weaving by 1500. But there were many other towns where tapestries were woven.
Tapestries were commissioned in the Netherlands by rulers across Europe, from King Henry VIII in England, to Pope Leo X and Sigismund II Augustus of Poland and Lithuania. Ownership of smaller tapestries was also spreading more widely through the nobility and bourgeoisie. From 1528 tapestries of larger sizes made in Brussels had to be so marked, and with the maker's or dealer's mark, making the task of the historian much easier. After an agreement between the relevant guilds in 1476, the cartoons for the main designs had to be supplied by a member of the painters' guild, while the weavers could elaborate these with detail, especially in millefleur designs. This ensured a high quality of design for Brussels pieces.
At the beginning of the century Late Gothic styles held sway, and both the most famous sets of millefleur "unicorn" tapestries were made around 1500, perhaps to designs from Paris: The Lady and the Unicorn (now Paris), and The Hunt of the Unicorn (now New York). Pope Leo's set for the Sistine Chapel, designed by Raphael in 1515–16, marked the introduction of the full Italian High Renaissance style to tapestry, and the top northern designers now attempted to adopt it, which was rather a struggle for them, although the wide distribution of prints across Europe gave them one easy route, which many took. Les Chasses de Maximilien (The Hunts of Maximilian) was a series of twelve huge Brussels tapestries designed by Bernard van Orley in the 1530s for the Habsburgs, one of the most successful efforts to achieve an up-to-date Renaissance style. Technically, Brussels tapestries in the last quarter of the 15th century had already become sophisticated enough to begin to incorporate more illusionistic elements, distinguishing between different textures in their subject-matter, and including portraits of individuals (now mostly unknown) rather than generic figures.
Over the century oil paintings mostly moved from a panel support to canvas, allowing a far greater size, and began to compete seriously with tapestries. The authenticity of the master's touch that paintings allowed, but tapestry did not, became appreciated by the most sophisticated patrons, including the Habsburgs. However, Charles V and Philip II of Spain continued to spend huge sums on tapestries, apparently believing them the most magnificent form of decoration, and one that maintained continuity with their Burgundian ancestors.
17th century
The early part of the 17th century saw the taste for tapestry among the elite continuing, although painting was steadily gaining ground. Brussels remained much the most important weaving centre, and Rubens, mostly based in Antwerp not far away, brought the grand Baroque style to the medium, with Jacob Jordaens and others also designing many. In later generations important designers included Justus van Egmont (d. 1674), Ludwig van Schoor (d. 1702) and Jan van Orley (d. 1735, the last of a long-lasting dynasty). The Brussels workshops declined somewhat in the second half of the century, both as large Flemish Baroque paintings took some of their market, and French competition squeezed the remaining niche for tapestries.
Production in Paris revived from 1608, flagging in the civil wars of the 1640s, but starting again in 1658 when Nicolas Fouquet founded a workshop. After his fall Colbert mostly merged this into the new Gobelins Manufactory he founded for the king in 1663, which continues to this day. The Beauvais Manufactory, always a private enterprise, was founded by Colbert in 1664, but only became significant some twenty years later. Aubusson tapestry, probably a continuation of earlier small workshops, continued but was to become more significant in the next century. The Gobelins works, fed designs in the latest Style Louis XIV by the court artists, became increasingly dominant over the rest of the century, and by 1700 was the most admired and imitated workshop in Europe.
The Mortlake Tapestry Works outside London were founded in 1619, with encouragement from King Charles I of England, using Flemish weavers at the start, and in the 1620s and 1630s were producing some of the best quality tapestry in Europe. The Medici workshop in Florence continued, and from 1630 was joined by one in Rome, started by Cardinal Francesco Barberini with the inevitable imported Flemish director. Both the Mortlake and Rome workshops petered out around the end of the century. In Germany, workshops were established in Munich in 1604, and some nine further cities by the end of the century, many sponsored by the local ruler.
18th century
Around the start of the century there was increased interest in landscape subjects, some still with hunting scenes, but others showing genre subjects of rural life.
Few new workshops were begun in the century, the main exception being the Royal Tapestry Factory in Madrid. This was started in 1720, soon after Spain lost its territories in Flanders under the Treaty of Utrecht. Philip V of Spain brought Jacob van der Goten and six of his sons to Madrid. Much the best known tapestries are those designed by Francisco Goya from 1775. These mostly show genre scenes of lovers or country people recreating. Both his cartoons and the tapestries made from them mostly survive, with many of the cartoons in the Prado, and the tapestries still in the royal palaces. As with Raphael's cartoons for the Sistine Chapel tapestries, modern critics tend to prefer the cartoons. The works were privately owned by the van der Gotens and descendants until 1997, and the last member of the family resigned as chair in 2002. Apart from pauses during wars, the works has continued to produce tapestries.
Around the mid-century, the new Rococo style proved very effective in tapestries, now a good deal smaller than before. François Boucher produced 45 cartoons for Beauvais, and then by 1753 followed the animal painter Jean-Baptiste Oudry as artistic director at Gobelins. Oudry's best known set was the eight-strong The Pastoral Amusements made from the 1720s onwards in many repetitions.
During the second half of the century, the main Brussels workshops gradually closed, the last in 1794. Tapestry suited neither Neoclassicism nor Romanticism very well, and this together with the disruptions of the French Revolution and Napoleonic Wars brought the production of large figurative tapestries almost to a halt across Europe.
19th century
In the 19th century, William Morris resurrected the art of tapestry-making in the medieval style at Merton Abbey. Morris & Co. made successful series of tapestries for home and ecclesiastical uses, with figures based on cartoons by Edward Burne-Jones. The set of six Holy Grail tapestries of the 1890s, repeated a number of times, are the largest they made, and perhaps the most successful.
Traditional tapestries are still made at the Gobelins factory in Paris, and the royal factory in Madrid. They and a few other old European workshops also repair and restore old tapestries; the main British workshop is at Hampton Court Palace, a department of the Royal Collection Trust.
Outside Europe
The Chinese kesi is a tapestry weave, normally using silk on a small scale compared to European wall-hangings. Clothing for the court was one of the main uses. The density of knots is typically very high, with a gown of the best quality perhaps involving as much work as a much larger European tapestry. Initially used for small pieces, often with animal, bird and flower decoration, or dragons for imperial clothing, under the Ming dynasty it was used to copy paintings.
The Story of Troy is an unusual set of seven large tapestry hangings made in China for the Portuguese governor of Macao in the 1620s, blending Western and Chinese styles. Most of the hangings are embroidery, but the faces and flesh parts of the figures are appliqué painted silk satin pieces, reflecting a Chinese technique often used for Buddhist banners, and the larger forms of thangka.
Kilims and Navajo rugs are also types of tapestry work, the designs of both mostly restricted to geometrical patterns similar to those of other rug weaving techniques.
Contemporary tapestry
What distinguishes the contemporary field from its pre-World War II history is the predominance of the artist as weaver in the contemporary medium. This trend has its roots in France during the 1950s, where one of the "cartoonists" for the Aubusson tapestry studios, Jean Lurçat, spearheaded a revival of the medium by streamlining colour selection, thereby simplifying production, and by organizing a series of Biennial exhibits held in Lausanne, Switzerland. The Polish work submitted to the first Biennale, which opened in 1962, was quite novel. Traditional workshops in Poland had collapsed as a result of the war. Art supplies in general were also hard to acquire. Many Polish artists had learned to weave as part of their art school training and began creating highly individualistic work by using atypical materials like jute and sisal. With each Biennale, works exploring innovative constructions from a wide variety of fiber grew in popularity around the world.
There were many weavers in pre-war United States, but there had never been a prolonged system of workshops for producing tapestries. Therefore, weavers in America were primarily self-taught and chose to design as well as weave their art. Through these Lausanne exhibitions, US artists/weavers, and others in countries all over the world, were excited about the Polish trend towards experimental forms. Throughout the 1970s almost all weavers had explored some manner of techniques and materials in vogue at the time. What this movement contributed to the newly realized field of art weaving, termed "contemporary tapestry", was the option for working with texture, with a variety of materials and with the freedom for individuality in design
In the 1980s it became clear that the process of weaving weft-faced tapestry had another benefit, that of stability. The artists who chose tapestry as their medium developed a broad range of personal expression, styles and subject matter, stimulated and nourished by an international movement to revive and renew tapestry traditions from all over the world. Competing for commissions and expanding exhibition venues were essential factors in how artists defined and accomplished their goals.
Much of the impetus in the 1980s for working in this more traditional process came from the Bay Area in Northern California where, twenty years earlier, Mark Adams, an eclectic artist, had two exhibitions of his tapestry designs. He went on to design many large tapestries for local buildings. Hal Painter, another well-respected artist in the area, became a prolific tapestry artist during the decade, weaving his own designs. He was one of the main artists to "...create the atmosphere which helped give birth to the second phase of the contemporary textile movement – textiles as art – that recognition that textiles no longer had to be utilitarian, functional, to serve as interior decoration."
Early in the 1980s many artists committed to getting more professional and often that meant travelling to attend the rare educational programmes offered by newly formed ateliers, such as the San Francisco Tapestry Workshop, or to far-away institutions they identified as fitting their needs. This phenomenon was happening in Europe and Australia as well as in North America.
Opportunities to enter juried tapestry exhibitions opened up in 1986, when the American Tapestry Alliance (ATA), founded in 1982, organised the first of its biennial juried exhibitions. The biennials were planned to coincide with the Handweavers Guild of America's "Convergence" conferences. The new potential for seeing the work of other tapestry artists and the ability to observe how one's own work might fare in such venues profoundly increased the awareness of a community of like-minded artists. Regional groups were formed for producing exhibits and sharing information.
The desire of many artists for greater interaction escalated as an international tapestry symposium in Melbourne, Australia in 1988 led to a second organization committed to tapestry, the International Tapestry Network (ITNET). Its goal was to connect American tapestry artists with the burgeoning international community. ITNET's magazines were discontinued in 1997 as communicating digitally became a more useful tool for interactions. As the world has moved into the digital age, tapestry artists around the world continue to share and inspire each other's work.
By the new millennium however, fault lines had surfaced within the field. Many universities that previously had strong weaving components in their art departments, such as San Francisco State University, no longer offered handweaving as an option as they shifted their focus to computerized equipment. A primary cause for discarding the practice was the fact that only one student could use the equipment for the duration of a project whereas in most media, like painting or ceramics, the easels or potters' wheels were used by several students in a day. Worldwide, people from all different cultures began adopting these forms of decor for professional and personal use.
At the same time, "fiber art" had become one of the most popular mediums in their art programmes. Young artists were interested in exploring a wider scope of processes for creating art through the materials classified as fibre. This shift to more multimedia and sculptural forms and the desire to produce work more quickly had the effect of pushing contemporary tapestry artists inside and outside the academic institutions to ponder how they might keep pace in order to sustain visibility in their art form.
Susan Iverson, a professor in the School of the Arts at Virginia Commonwealth University, explains her reasons:
I came to tapestry after several years of exploring complex weaves. I became enamored with tapestry because of its simplicity — its straightforward qualities. It allowed me to investigate form or image or texture, and it had the structural integrity to hold its own form. I loved the substantial quality of a tapestry woven with heavy threads—its object quality.
Another prominent artist, Joan Baxter, states:
My passion for tapestry arrived suddenly on the first day of my introduction to it in my first year at ECA [Edinburgh College of Art.] I don't remember ever having consciously thought about tapestry before that day but I somehow knew that eventually I'd be really good at this. From that day I have been able to plough a straight path deeper and deeper into tapestry, through my studies in Scotland and Poland, my 8 years as a studio weaver in England and Australia and since 1987 as an independent tapestry artist. The demanding creative ethos of the tapestry department gave me the confidence, motivation and self-discipline I needed to move out into the world as a professional tapestry weaver and artist. What was most inspiring for me as a young student was that my tutors in the department were all practising, exhibiting artists engaging positively with what was then a cutting edge international Fibre Art movement.
Archie Brennan, now in his sixth decade of weaving, says of tapestry:
500 years ago it was already extremely sophisticated in its development-- aesthetically, technically and in diversity of purpose. Today, its lack of a defined purpose, its rarity, gives me an opportunity to seek new roles, to extend its historic language and, above all, to dominate my compulsive, creative drive. In 1967, I made a formal decision to step away from the burgeoning and exciting fiber arts movement and to refocus on woven tapestry's long-established graphic pictorial role.
Jacquard tapestries, colour and the human eye
The term tapestry may also be used to describe large figurative weft-faced textiles made on Jacquard looms. Before the 1990s tapestry upholstery fabrics and reproductions of the famous tapestries of the Middle Ages had been produced using Jacquard techniques, but more recently artists such as Chuck Close, Patrick Lichty, and the workshop Magnolia Editions have adapted the computerised Jacquard process to producing fine art.
Typically, tapestries are translated from the original design via a process resembling paint-by-numbers: a cartoon is divided into regions, each of which is assigned a solid colour based on a standard palette. However, in Jacquard weaving, the repeating series of multicoloured warp and weft threads can be used to create colours that are optically blended – i.e., the human eye apprehends the threads' combination of values as a single colour.
This method can be likened to pointillism, which originated from discoveries made in the tapestry medium. The style's emergence in the 19th century can be traced to the influence of Michel Eugène Chevreul, a French chemist responsible for developing the colour wheel of primary and intermediary hues. Chevreul worked as the director of the dye works at Les Gobelins tapestry works in Paris, where he noticed that the perceived colour of a particular thread was influenced by its surrounding threads, a phenomenon he called "simultaneous contrast". Chevreul's work was a continuation of theories of colour elaborated by Leonardo da Vinci and Goethe; in turn, his work influenced painters including Eugène Delacroix and Georges-Pierre Seurat.
The principles articulated by Chevreul also apply to contemporary television and computer displays, which use tiny dots of red, green and blue (RGB) light to render colour, with each composite being called a pixel.
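The two colour processes described above, assigning each region of a cartoon to the nearest colour in a fixed thread palette and letting finely interleaved threads blend optically in the eye, can be sketched in a few lines of code. This is only an illustrative sketch: the palette, thread colours and function names are hypothetical, and the equal-weight averaging in linear RGB is a simplification of how the eye actually fuses adjacent threads.

```python
# Illustrative sketch (hypothetical palette and colours, not from the article).
# Part 1: "paint-by-numbers" - assign a cartoon colour to the nearest colour
#         in a fixed thread palette.
# Part 2: optical blending - approximate the hue the eye perceives when two
#         thread colours are finely interleaved, by averaging in linear RGB.

def srgb_to_linear(channel: int) -> float:
    """Decode one 8-bit sRGB channel (0-255) to linear light (0.0-1.0)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(value: float) -> int:
    """Encode linear light (0.0-1.0) back to one 8-bit sRGB channel."""
    v = 12.92 * value if value <= 0.0031308 else 1.055 * value ** (1 / 2.4) - 0.055
    return round(min(max(v, 0.0), 1.0) * 255)

def nearest_palette_colour(colour, palette):
    """Pick the palette entry closest to `colour` using squared RGB distance."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(colour, p)))

def optical_blend(threads):
    """Average interleaved thread colours in linear RGB, roughly as the eye does."""
    blended = []
    for i in range(3):
        avg = sum(srgb_to_linear(t[i]) for t in threads) / len(threads)
        blended.append(linear_to_srgb(avg))
    return tuple(blended)

if __name__ == "__main__":
    palette = [(180, 30, 30), (220, 190, 40), (40, 80, 160), (240, 240, 230)]
    print(nearest_palette_colour((200, 60, 50), palette))   # closest: the red thread
    print(optical_blend([(180, 30, 30), (220, 190, 40)]))   # red + yellow reads as orange
```

Averaging in linear (gamma-decoded) RGB rather than directly on the 8-bit values is the usual choice here, since it approximates how light, rather than pixel codes, mixes.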
List of famous tapestries
The Trojan War tapestry referred to by Homer in Book III of the Iliad, where Iris disguises herself as Laodice and finds Helen "working at a great web of purple linen, on which she was embroidering the battles between Trojans and Achaeans, that Ares had made them fight for her sake." Though the composition of the Iliad spanned a period of approximately 700 years, it is worth noting that this method of weaving was in common use in or before the eighth century BC.
The Sampul tapestry, woollen wall hanging, 3rd–2nd century BC, Sampul, Ürümqi Xinjiang Museum.
The Hestia Tapestry, 6th century, Byzantine Egypt, Dumbarton Oaks Collection.
The Cloth of Saint Gereon – early 11th-century, the oldest European tapestry still extant.
Tapestry of Creation, 11th-century, Spain. Large needlework hanging with religious scenes
The Överhogdal tapestries - Viking hangings of 1040 to 1170.
The Bayeux Tapestry is an embroidered cloth — not an actual tapestry — nearly 70 metres (230 ft) long, which depicts the events leading up to the Norman conquest of England, likely made in England — not Bayeux — in the 1070s
The Apocalypse Tapestry depicts scenes from the Book of Revelation. It was woven between 1373 and 1382. Originally 140 m (459 ft), the surviving 100m are displayed in the Château d'Angers, in Angers.
The six-part piece La Dame à la Licorne (The Lady and the Unicorn), stored in l'Hôtel de Cluny, Paris.
The Devonshire Hunting Tapestries, four Flemish tapestries dating from the mid-fifteenth century depict men and women in fashionable dress of the early fifteenth century hunting in a forest. The tapestries formerly belonged to the Duke of Devonshire and are now in the Victoria and Albert Museum.
The Justice of Trajan and Herkinbald, a tapestry dating from about 1450.
The Triumph of Fame, a tapestry made in Flanders in the 1500s.
The Hunt of the Unicorn is a seven piece tapestry from 1495 to 1505, currently displayed at The Cloisters, Metropolitan Museum of Art in New York.
Les Chasses de Maximilien (The Hunts of Maximilian) is a series of twelve tapestries woven in Brussels after the designs of Bernard van Orley.
The Life and Miracles of St Adelphus, a late 15th-century or early 16th-century cycle of tapestries (four surviving parts), possibly based on designs by Jost Haller, total length , in the Église Saint-Pierre-et-Saint-Paul, Neuwiller-lès-Saverne.
The tapestries for the Sistine Chapel, designed by Raphael in 1515–16, for which the Raphael Cartoons, or painted designs, also survive.
The Jagiellonian tapestries, (mid 16th century) a collection of 134 tapestries at the Wawel Castle in Kraków, Poland displaying various religious, natural, and royal themes. These famous tapestries, created in Arras, were collected by Polish Kings Sigismund I the Old and Sigismund II Augustus, whose reigns were between 1506 and 1572.
The Valois Tapestries are a cycle of 8 hangings depicting royal festivities in France in the 1560s and 1570s
The History of Constantine, a series of tapestries designed by Peter Paul Rubens and Italian artist Pietro da Cortona in 1622.
The Death of Polydorus, one of a set of seven tapestries showing a scene from the Iliad by Homer.
The biggest collection of Flanders tapestry is the Spanish royal collection, which holds 8,000 metres of historical tapestry from Flanders, as well as Spanish tapestries designed by Goya and others. There is a special museum in the Royal Palace of La Granja de San Ildefonso, and others are displayed in various historic buildings.
Tentures des Indes, a ten-piece tapestry set made between 1708 and 1710, is the only intact collection in existence made by the famous French manufacturer, the Gobelins Manufactory. The pieces are still hanging in their original place in the Tapestry chamber at the Grandmaster's Palace, Valletta, Malta.
The Pastoral Amusements, also known as "Les Amusements champêtres", a series of 8 Beauvais Tapestries designed by Jean-Baptiste Oudry between 1720 and 1730.
The Prestonpans Tapestry is a 104 metres long embroidery which tells the story of Bonnie Prince Charlie and the Battle of Prestonpans.
Le Bouquet (1951) by Marc Saint-Saens is among the best and most representative French tapestries of the fifties. It is a tribute to Saint-Saens's predilection for scenes from nature and rustic life.
Triumph of Peace (1953) by Peter Colfs. On display in the Delegates' lobby of the General Assembly at the United Nations Headquarters, it was, at 43.5 x 28.5 feet (13.3 m x 8.7 m), the largest mural tapestry in the world at the time of its production.
Christ in Glory (1962), for Coventry Cathedral, designed by Graham Sutherland. Up until the 1990s this was the world's largest vertical tapestry.
The World Trade Center Tapestry, a large 1973 tapestry by Joan Miró and Josep Royo.
The Quaker Tapestry (1981–1989) is a modern set of embroidery panels that tell the story of Quakerism from the 17th century to the present day.
The New World Tapestry is a 267-foot-long embroidery, begun in the 1980s, depicting the colonisation of the Americas between 1583 and 1648; it was displayed at the British Empire and Commonwealth Museum, now defunct.
The Great Tapestry of Scotland is a modern series of embroidered cloths, made up of 160 hand stitched panels, depicting aspects of the history of Scotland from 8500 BC until 2013. At 143 metres (469 ft) long, it is the longest tapestry in the world.
| Technology | Techniques_2 | null |
87089 | https://en.wikipedia.org/wiki/Snake%20lemma | Snake lemma | The snake lemma is a tool used in mathematics, particularly homological algebra, to construct long exact sequences. The snake lemma is valid in every abelian category and is a crucial tool in homological algebra and its applications, for instance in algebraic topology. Homomorphisms constructed with its help are generally called connecting homomorphisms.
Statement
In an abelian category (such as the category of abelian groups or the category of vector spaces over a given field), consider a commutative diagram:

$$\begin{array}{ccccccccc}
 & & A & \overset{f}{\longrightarrow} & B & \overset{g}{\longrightarrow} & C & \longrightarrow & 0 \\
 & & \big\downarrow{\scriptstyle a} & & \big\downarrow{\scriptstyle b} & & \big\downarrow{\scriptstyle c} & & \\
0 & \longrightarrow & A' & \overset{f'}{\longrightarrow} & B' & \overset{g'}{\longrightarrow} & C' & &
\end{array}$$
where the rows are exact sequences and 0 is the zero object.
Then there is an exact sequence relating the kernels and cokernels of a, b, and c:

$$\ker a \;\longrightarrow\; \ker b \;\longrightarrow\; \ker c \;\overset{d}{\longrightarrow}\; \operatorname{coker} a \;\longrightarrow\; \operatorname{coker} b \;\longrightarrow\; \operatorname{coker} c$$
where d is a homomorphism, known as the connecting homomorphism.
Furthermore, if the morphism f is a monomorphism, then so is the morphism $\ker a \to \ker b$, and if g' is an epimorphism, then so is $\operatorname{coker} b \to \operatorname{coker} c$.
The cokernels here are: $\operatorname{coker} a = A'/\operatorname{im} a$, $\operatorname{coker} b = B'/\operatorname{im} b$, $\operatorname{coker} c = C'/\operatorname{im} c$.
Explanation of the name
To see where the snake lemma gets its name, expand the diagram above as follows:

$$\begin{array}{ccccccccc}
 & & \ker a & \longrightarrow & \ker b & \longrightarrow & \ker c & & \\
 & & \big\downarrow & & \big\downarrow & & \big\downarrow & & \\
 & & A & \overset{f}{\longrightarrow} & B & \overset{g}{\longrightarrow} & C & \longrightarrow & 0 \\
 & & \big\downarrow{\scriptstyle a} & & \big\downarrow{\scriptstyle b} & & \big\downarrow{\scriptstyle c} & & \\
0 & \longrightarrow & A' & \overset{f'}{\longrightarrow} & B' & \overset{g'}{\longrightarrow} & C' & & \\
 & & \big\downarrow & & \big\downarrow & & \big\downarrow & & \\
 & & \operatorname{coker} a & \longrightarrow & \operatorname{coker} b & \longrightarrow & \operatorname{coker} c & &
\end{array}$$
and then the exact sequence that is the conclusion of the lemma can be drawn on this expanded diagram in the reversed "S" shape of a slithering snake.
Construction of the maps
The maps between the kernels and the maps between the cokernels are induced in a natural manner by the given (horizontal) maps because of the diagram's commutativity. The exactness of the two induced sequences follows in a straightforward way from the exactness of the rows of the original diagram. The important statement of the lemma is that a connecting homomorphism d exists which completes the exact sequence.
In the case of abelian groups or modules over some ring, the map d can be constructed as follows:
Pick an element x in ker c and view it as an element of C; since g is surjective, there exists y in B with g(y) = x. Because of the commutativity of the diagram, we have g'(b(y)) = c(g(y)) = c(x) = 0 (since x is in the kernel of c), and therefore b(y) is in the kernel of g' . Since the bottom row is exact, we find an element z in A' with f '(z) = b(y). z is unique by injectivity of f '. We then define d(x) = z + im(a). Now one has to check that d is well-defined (i.e., d(x) only depends on x and not on the choice of y), that it is a homomorphism, and that the resulting long sequence is indeed exact. One may routinely verify the exactness by diagram chasing (see the proof of Lemma 9.1 in ).
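In symbols, the chase just described can be condensed into a single formula; here $g^{-1}(x)$ stands for any choice of preimage of $x$ under $g$, and the resulting class does not depend on that choice:

$$d\colon \ker c \longrightarrow \operatorname{coker} a, \qquad d(x) \;=\; f'^{-1}\!\bigl(b(g^{-1}(x))\bigr) + \operatorname{im} a .$$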
Once that is done, the theorem is proven for abelian groups or modules over a ring. For the general case, the argument may be rephrased in terms of properties of arrows and cancellation instead of elements. Alternatively, one may invoke Mitchell's embedding theorem.
Naturality
In the applications, one often needs to show that long exact sequences are "natural" (in the sense of natural transformations). This follows from the naturality of the sequence produced by the snake lemma.
If
is a commutative diagram with exact rows, then the snake lemma can be applied twice, to the "front" and to the "back", yielding two long exact sequences; these are related by a commutative diagram of the form

$$\begin{array}{ccccccccccc}
\ker a & \to & \ker b & \to & \ker c & \overset{d}{\to} & \operatorname{coker} a & \to & \operatorname{coker} b & \to & \operatorname{coker} c \\
\downarrow & & \downarrow & & \downarrow & & \downarrow & & \downarrow & & \downarrow \\
\ker a' & \to & \ker b' & \to & \ker c' & \overset{d'}{\to} & \operatorname{coker} a' & \to & \operatorname{coker} b' & \to & \operatorname{coker} c'
\end{array}$$
Example
Let $k$ be a field and let $V$ be a $k$-vector space. $V$ is a $k[t]$-module, with $t\colon V \to V$ acting as a $k$-linear transformation, so we can tensor $V$ and $k$ over $k[t]$.
Given a short exact sequence of $k$-vector spaces $0 \to M \to N \to P \to 0$, we can induce an exact sequence $M \otimes_{k[t]} k \to N \otimes_{k[t]} k \to P \otimes_{k[t]} k \to 0$ by right exactness of the tensor product. But the sequence $0 \to M \otimes_{k[t]} k \to N \otimes_{k[t]} k \to P \otimes_{k[t]} k \to 0$ is not exact in general. Hence, a natural question arises: why is this sequence not exact?
According to the diagram above, we can induce an exact sequence by applying the snake lemma. Thus, the snake lemma reflects the tensor product's failure to be exact.
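The diagram referred to above is not reproduced here; in the usual presentation of this example (stated here as an assumption about what it showed), each of $M$, $N$, $P$ carries a $k[t]$-module structure given by an endomorphism $t$ commuting with the maps, and the vertical arrows of the diagram are multiplication by $t$. The snake lemma then yields

$$0 \to \ker(t|_M) \to \ker(t|_N) \to \ker(t|_P) \to M \otimes_{k[t]} k \to N \otimes_{k[t]} k \to P \otimes_{k[t]} k \to 0,$$

using $X \otimes_{k[t]} k \cong X/tX = \operatorname{coker}(t|_X)$; the kernel terms, isomorphic to $\operatorname{Tor}_1^{k[t]}(M,k)$, $\operatorname{Tor}_1^{k[t]}(N,k)$ and $\operatorname{Tor}_1^{k[t]}(P,k)$ respectively, measure exactly the failure of exactness on the left.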
In the category of groups
Whether the snake lemma holds in the category of groups depends on the definition of cokernel. If $f\colon A \to B$ is a homomorphism of groups, the universal property of the cokernel is satisfied by the natural map $B \to B/N(\operatorname{im} f)$, where $N(\operatorname{im} f)$ is the normalization of the image of $f$ (its normal closure in $B$). The snake lemma fails with this definition of cokernel: The connecting homomorphism can still be defined, and one can write down a sequence as in the statement of the snake lemma. This will always be a chain complex, but it may fail to be exact.
If one simply replaces the cokernels in the statement of the snake lemma with the (right) cosets $A'/\operatorname{im} a$, $B'/\operatorname{im} b$ and $C'/\operatorname{im} c$, the lemma is still valid. The quotients however are not groups, but pointed sets (a short sequence of pointed sets $X \overset{f}{\to} Y \overset{g}{\to} Z$ with maps $f$ and $g$ is called exact if $\operatorname{im} f = g^{-1}(\ast)$, where $\ast$ denotes the base point of $Z$).
Counterexample to snake lemma with categorical cokernel
Consider the alternating group $A_5$: this contains a subgroup isomorphic to the symmetric group $S_3$, which in turn can be written as a semidirect product of cyclic groups: $S_3 \simeq C_3 \rtimes C_2$. This gives rise to the following diagram with exact rows:
Note that the middle column is not exact: $C_2$ is not a normal subgroup in the semidirect product.
Since $A_5$ is simple, the right vertical arrow has trivial cokernel. Meanwhile the quotient group $S_3/C_3$ is isomorphic to $C_2$. The sequence in the statement of the snake lemma is therefore
,
which indeed fails to be exact.
In popular culture
The proof of the snake lemma is taught by Jill Clayburgh's character at the very beginning of the 1980 film It's My Turn.
| Mathematics | Category theory | null |
87160 | https://en.wikipedia.org/wiki/Haast%27s%20eagle | Haast's eagle | Haast's eagle (Hieraaetus moorei) is an extinct species of eagle that lived in the South Island of New Zealand, commonly accepted to be the pouākai of Māori mythology. It is the largest eagle known to have existed, with an estimated weight of , compared to the next-largest and extant harpy eagle (Harpia harpyja), at up to . Its massive size is explained as an evolutionary response to the size of its prey—the flightless moa—the largest of which could weigh . Haast's eagle became extinct around 1445, following the arrival of the Māori, who hunted moa to extinction, introduced the Polynesian rat (Rattus exulans), and destroyed large tracts of forest by fire.
Taxonomy
Haast's eagle was first scientifically described by Julius von Haast in 1871 from remains discovered by the Canterbury Museum taxidermist, Frederick Richardson Fuller, in a former marsh. Haast named the eagle Harpagornis moorei after George Henry Moore, the owner of the Glenmark Estate, where the bones of the bird were found. The genus name was from the Greek harpax, meaning "grappling hook", and ornis, meaning "bird". DNA analysis later showed that this bird is related most closely to the much smaller little eagle (Hieraaetus morphnoides) as well as the booted eagle (Hieraaetus pennatus) and not, as previously thought, to the large wedge-tailed eagle (Aquila audax). Harpagornis moorei was therefore reclassified as Hieraaetus moorei.
H. moorei is estimated to have diverged from these smaller eagles as recently as 1.8 million to 700,000 years ago. If this estimate is correct, its increase in weight by ten to fifteen times is an exceptionally rapid weight increase. The suggested increase in the average weight of Haast's eagle over that period would therefore represent the largest, fastest evolutionary increase in average weight of any known vertebrate species. This was made possible in part by the presence of large prey and the absence of competition from other large predators, an example of ecological release and island gigantism. A recent mitochondrial DNA study found it to be more closely related to the little eagle than the booted eagle, with an estimated divergence from the little eagle around 2.2 million years ago. It was placed in the genus Aquila by recent taxonomists.
Description
Haast's eagle was one of the largest known true raptors. In length and weight, it was even larger than the largest living vulture (the Andean condor). Another giant bird (an eagle in name only), the Woodward's eagle of North America, more recently and scantily described from the fossil record, rivaled the Haast's eagle at least in total length. Female eagles were larger than males. Most estimates place the female Haast's eagles in the range of and males around . A comparison with living eagles of the Australasian region resulted in estimated masses in Haast's eagles of for males and for females. One source estimates that the largest females could have weighed more than . The largest extant eagles, none of which are verified to exceed in a wild state, are about forty percent smaller in body size than Haast's eagles.
It had a relatively short wingspan for its size. It is estimated that the grown female typically spanned up to , possibly up to in a few cases. This wingspan is broadly similar to the larger range of female size in some extant eagles: the wedge-tailed eagle, golden eagle (A. chrysaetos), martial eagle (Polemaetus bellicosus), white-tailed eagle (Haliaeetus albicilla) and Steller's sea eagle (Haliaeetus pelagicus) are all known to exceed in wingspan. Several of the largest extant Old World vultures, if not in mean mass or other linear measurements, probably exceed Haast's eagle in average wingspan as well. Haast's eagle's relatively short wingspan has sometimes led to it being incorrectly portrayed as having evolved toward flightlessness, even though evidence strongly suggests that it flew. Instead, its short and broad wings represent an evolutionary departure from the mode of its ancestors' soaring flight in favour of navigating through a crowded woodland environment. Haast's eagles are likely to have hunted within the dense shrubland and forests of New Zealand, somewhat akin to other forest-dwelling raptors like the goshawks or harpy eagle.
Some wing and leg remains of Haast's eagles permit direct comparison with living eagles. The harpy eagle, the Philippine eagle (Pithecophaga jefferyi), and the Steller's sea eagle are the largest and most powerful living eagles, and the first two also have a similarly reduced relative wing-length as an adaptation to forest-dwelling. A lower mandible from the Haast's eagle measured and the tarsus in several Haast's eagle fossils has been measured from . In comparison, the largest beaks of eagles today (from the Philippine and the Steller's sea eagle) reach a little more than ; and the longest tarsal measurements (from the Philippine and the Papuan eagle, Harpyopsis novaeguineae) top out around .
The talons of the Haast's eagle were similar in length to those of the harpy eagle, with a front-left talon length of and a hallux-claw of possibly up to . The Philippine eagle might be a particularly appropriate living species to compare with the Haast's eagle, because it too evolved in an insular environment from smaller ancestors (apparently basal snake eagles) to island gigantism in the absence of large carnivorous mammals and other competing predators. The eagle's talons are similar to modern eagles, suggesting that it used its talons for hunting and not scavenging. The strong legs and massive flight muscles of these eagles would have enabled the birds to take off with a jumping start from the ground, despite their great weight. The tail was almost certainly long, in excess of in female specimens, and very broad. This characteristic would compensate for the reduction in wing area by providing additional lift. Total length is estimated to have been up to in females, with a standing height of approximately tall or perhaps slightly greater.
Māori cave art depicts the Haast's eagle with a pale head. These Māori rock art drawings can still be found in modern-day South Canterbury near Timaru. Combined with its vulture-like feeding behaviour, this might suggest it had a bald head, or had shorter feathers on its head than elsewhere on its body.
Behaviour and ecology
The Haast's eagle predominantly preyed on large, flightless bird species, including the moa; this dependence ultimately led to the eagle's own extinction once its prey disappeared. Moa would be up to fifteen times the weight of the eagle. Its large beak could also be used to rip into the internal organs of its prey, and death would then have been caused by blood loss. Due to the absence of other large predators or kleptoparasites, a Haast's eagle could easily have monopolised a single large kill over a number of days. Its prey, the moa, could weigh up to .
A 2021 analysis showed that, while predatory, the bill of the Haast's eagle was functionally closer to that of the Andean condor (Vultur gryphus) than to that of other eagles. This is also supported by historic Māori Cave art which depicts the Haast's eagle being pale-headed. It also suggests that it deployed feeding tactics more similar to those of vultures after making a kill, plunging its head into the body cavity to devour the vital organs of its prey. This may have been an adaptation as a result of the bird hunting animals much larger than itself.
Extinction
Until recent human colonisation that introduced rodents and cats, the only placental land mammals found on the islands of New Zealand were three species of bat. Birds occupied or dominated all major niches in the New Zealand animal ecology. Moa were grazers, functionally similar to large ungulates, such as deer or cattle in other habitats, and Haast's eagles were the hunters who filled the niche occupied elsewhere by apex mammalian predators.
One study estimated the total population of Haast's eagle at 3,000 to 4,500 breeding pairs. Early Māori settlers arrived in New Zealand sometime between AD 1250 and AD 1275. The Māori preyed heavily on large flightless birds, including all moa species. The hunting pressure from both the Māori and the eagle eventually led the moa to extinction by around 1440 to 1445. Both eagles and Māori likely competed for the same foods. Unlike the adaptable humans, the eagles were specialized hunters, dependent on the native medium and large-sized flightless birds. The loss of its primary prey caused the Haast's eagle to become extinct at about the same time.
Relationship with humans
Some believe that these birds are described in many legends of Māori mythology, under the names pouākai, Hakawai (or Hōkioi in the North Island). According to an account given to Sir George Grey—an early governor of New Zealand—Hōkioi were huge black-and-white birds with yellow-green tinged wings and a red crest. In Māori mythology, Pouākai would prey on and kill humans along with moa, which scientists believe could have been possible if the name relates to the eagle, given the massive size and strength of the bird. However, it has also been argued that the "hakawai" and "hōkioi" legends refer to the Austral snipe—in particular the extinct South Island species.
In popular culture
Artwork depicting Haast's eagle now may be viewed at OceanaGold's Heritage and Art Park at Macraes, Otago, New Zealand. The sculpture, weighing approximately , standing tall, and depicted with a wingspan of is constructed from stainless steel tube and sheet and was designed and constructed by Mark Hill, a sculptor from Arrowtown, New Zealand. The Haast's eagle also appeared in a 2003 BBC documentary Monsters We Met.
There is also a statue depicting the Haast's eagle in Karamea, West Coast. This statue was unveiled by the community and the Ngāti Waewae iwi.
| Biology and health sciences | Accipitrimorphae | Animals |
87168 | https://en.wikipedia.org/wiki/Arachnid | Arachnid | Arachnids are arthropods in the class Arachnida () of the subphylum Chelicerata. Arachnida includes, among others, spiders, scorpions, ticks, mites, pseudoscorpions, harvestmen, camel spiders, whip spiders and vinegaroons.
Adult arachnids have eight legs attached to the cephalothorax. In some species the frontmost pair of legs has converted to a sensory function, while in others, different appendages can grow large enough to take on the appearance of extra pairs of legs.
Almost all extant arachnids are terrestrial, living mainly on land. However, some inhabit freshwater environments and, with the exception of the pelagic zone, marine environments as well. They comprise over 110,000 named species, of which 51,000 are species of spiders.
The term is derived from the Greek word (aráchnē, 'spider'), from the myth of the hubristic human weaver Arachne, who was turned into a spider.
Morphology
Almost all adult arachnids have eight legs, unlike adult insects which all have six legs. However, arachnids also have two further pairs of appendages that have become adapted for feeding, defense, and sensory perception. The first pair, the chelicerae, serve in feeding and defense. The next pair, the pedipalps, have been adapted for feeding, locomotion, and/or reproductive functions. In scorpions, pseudoscorpions, and ricinuleids the pedipalps end in a pair of pincers, while in whip scorpions, Schizomida, Amblypygi, and most harvestmen, they are raptorial and used for prey capture. In Solifugae, the palps are quite leg-like, so that these animals appear to have ten legs. The larvae of mites and Ricinulei have only six legs; a fourth pair usually appears when they moult into nymphs. However, mites are variable: as well as eight, there are adult mites with six or, as in Eriophyoidea, even four legs. While the adult males in some members of Podapolipidae have six legs, the adult females have only a single pair.
Arachnids are further distinguished from insects by the fact they do not have antennae or wings. Their body is organized into two tagmata, called the prosoma and opisthosoma, also referred to as the cephalothorax and abdomen. However, there are questions about the validity of the latter terms. While the term cephalothorax implies a fused cephalon (head) and thorax, there is currently neither fossil nor embryological evidence that arachnids ever had a separate thorax-like division. Likewise, the 'abdomen' of many arachnids contains organs atypical of an abdomen, such as a heart and respiratory organs.
The cephalothorax is usually covered by a single, unsegmented carapace. The abdomen is segmented in the more primitive forms, but varying degrees of fusion between the segments occur in many groups. It is typically divided into a preabdomen and postabdomen, although this is only clearly visible in scorpions, and in some orders, such as the mites, the abdominal sections are completely fused. A telson is present in scorpions, where it has been modified to a stinger, and into a flagellum in the Palpigradi, Schizomida (very short) and whip scorpions. At the base of the flagellum in the two latter groups there are glands which produce acetic acid as a chemical defense. Except for a pair of pectines in scorpions, and the spinnerets in spiders, the abdomen has no appendages.
Like all arthropods, arachnids have an exoskeleton, and they also have an internal structure of cartilage-like tissue, called the endosternite, to which certain muscle groups are attached. The endosternite is even calcified in some Opiliones.
Locomotion
Most arachnids lack extensor muscles in the distal joints of their appendages. Spiders and whip scorpions extend their limbs hydraulically using the pressure of their hemolymph. Solifuges and some harvestmen extend their knees by the use of highly elastic thickenings in the joint cuticle. Scorpions, pseudoscorpions and some harvestmen have evolved muscles that extend two leg joints (the femur-patella and patella-tibia joints) at once. The equivalent joints of the pedipalps of scorpions though, are extended by elastic recoil.
Physiology
There are characteristics that are particularly important for the terrestrial lifestyle of arachnids, such as internal respiratory surfaces in the form of tracheae, or modification of the book gill into a book lung, an internal series of vascular lamellae used for gas exchange with the air. While the tracheae are often individual systems of tubes, similar to those in insects, ricinuleids, pseudoscorpions, and some spiders possess sieve tracheae, in which several tubes arise in a bundle from a small chamber connected to the spiracle. This type of tracheal system has almost certainly evolved from the book lungs, and indicates that the tracheae of arachnids are not homologous with those of insects.
Further adaptations to terrestrial life are appendages modified for more efficient locomotion on land, internal fertilisation, special sensory organs, and water conservation enhanced by efficient excretory structures as well as a waxy layer covering the cuticle.
The excretory glands of arachnids include up to four pairs of coxal glands along the side of the prosoma, and one or two pairs of Malpighian tubules, emptying into the gut. Many arachnids have only one or the other type of excretory gland, although several do have both. The primary nitrogenous waste product in arachnids is guanine.
Arachnid blood is variable in composition, depending on the mode of respiration. Arachnids with an efficient tracheal system do not need to transport oxygen in the blood, and may have a reduced circulatory system. In scorpions and some spiders, however, the blood contains haemocyanin, a copper-based pigment with a similar function to haemoglobin in vertebrates. The heart is located in the forward part of the abdomen, and may or may not be segmented. Some mites have no heart at all.
Diet and digestive system
Arachnids are mostly carnivorous, feeding on the pre-digested bodies of insects and other small animals. However, ticks and many mites are parasites, some of which are carriers of disease. The diet of mites also includes tiny animals, fungi, plant juices and decomposing matter. The diet of harvestmen is almost as varied, encompassing predators, decomposers and omnivores that feed on decaying plant and animal matter, droppings, animals and mushrooms. The harvestmen and some mites, such as the house dust mite, are also the only arachnids able to ingest solid food, which exposes them to internal parasites, although it is not unusual for spiders to eat their own silk. One species of spider is mostly herbivorous. Scorpions, spiders and pseudoscorpions secrete venom from specialized glands to kill prey or defend themselves. Their venom also contains pre-digestive enzymes that help break down the prey. The saliva of ticks contains anticoagulants and anticomplements, and several species produce a neurotoxin.
Arachnids produce digestive enzymes in their stomachs, and use their pedipalps and chelicerae to pour them over their dead prey. The digestive juices rapidly turn the prey into a broth of nutrients, which the arachnid sucks into a pre-buccal cavity located immediately in front of the mouth. Behind the mouth is a muscular, sclerotised pharynx, which acts as a pump, sucking the food through the mouth and on into the oesophagus and stomach. In some arachnids, the oesophagus also acts as an additional pump.
The stomach is tubular in shape, with multiple diverticula extending throughout the body. The stomach and its diverticula both produce digestive enzymes and absorb nutrients from the food. It extends through most of the body, and connects to a short sclerotised intestine and anus in the hind part of the abdomen.
Senses
Arachnids have two kinds of eyes: the lateral and median ocelli. The lateral ocelli evolved from compound eyes and may have a tapetum, which enhances the ability to collect light. With the exception of scorpions, which can have up to five pairs of lateral ocelli, there are never more than three pairs present. The median ocelli develop from a transverse fold of the ectoderm. The ancestors of modern arachnids probably had both types, but modern ones often lack one type or the other. The cornea of the eye also acts as a lens, and is continuous with the cuticle of the body. Beneath this is a transparent vitreous body, and then the retina and, if present, the tapetum. In most arachnids, the retina probably does not have enough light sensitive cells to allow the eyes to form a proper image.
In addition to the eyes, almost all arachnids have two other types of sensory organs. The most important to most arachnids are the fine sensory hairs that cover the body and give the animal its sense of touch. These can be relatively simple, but many arachnids also possess more complex structures, called trichobothria.
Finally, slit sense organs are slit-like pits covered with a thin membrane. Inside the pit, a small hair touches the underside of the membrane, and detects its motion. Slit sense organs are believed to be involved in proprioception, and possibly also hearing.
Reproduction
Arachnids may have one or two gonads, which are located in the abdomen. The genital opening is usually located on the underside of the second abdominal segment. In most species, the male transfers sperm to the female in a package, or spermatophore. The males in harvestmen and some mites have a penis. Complex courtship rituals have evolved in many arachnids to ensure the safe delivery of the sperm to the female. Members of many orders exhibit sexual dimorphism.
Arachnids usually lay yolky eggs, which hatch into immatures that resemble adults. Scorpions, however, are either ovoviviparous or viviparous, depending on species, and bear live young. Also some mites are ovoviviparous and viviparous, even if most lay eggs. In most arachnids only the females provide parental care, with harvestmen being one of the few exceptions.
Taxonomy and evolution
Phylogeny
The phylogenetic relationships among the main subdivisions of arthropods have been the subject of considerable research and dispute for many years. A consensus emerged from about 2010 onwards, based on both morphological and molecular evidence; extant (living) arthropods are a monophyletic group and are divided into three main clades: chelicerates (including arachnids), pancrustaceans (the paraphyletic crustaceans plus insects and their allies), and myriapods (centipedes, millipedes and allies). The three groups are related as shown in the cladogram below. Including fossil taxa does not fundamentally alter this view, although it introduces some additional basal groups.
The extant chelicerates comprise two marine groups: Sea spiders and horseshoe crabs, and the terrestrial arachnids. These have been thought to be related as shown below. (Pycnogonida (sea spiders) may be excluded from the chelicerates, which are then identified as the group labelled "Euchelicerata".) A 2019 analysis nests Xiphosura deeply within Arachnida.
Discovering relationships within the arachnids has proven difficult , with successive studies producing different results. A study in 2014, based on the largest set of molecular data to date, concluded that there were systematic conflicts in the phylogenetic information, particularly affecting the orders Acariformes, Parasitiformes and Pseudoscorpiones, which have had much faster evolutionary rates. Analyses of the data using sets of genes with different evolutionary rates produced mutually incompatible phylogenetic trees. The authors favoured relationships shown by more slowly evolving genes, which demonstrated the monophyly of Chelicerata, Euchelicerata and Arachnida, as well as of some clades within the arachnids. The diagram below summarizes their conclusions, based largely on the 200 most slowly evolving genes; dashed lines represent uncertain placements.
Tetrapulmonata, here consisting of Araneae, Amblypygi and Uropygi (Thelyphonida s.s.) (Schizomida was not included in the study), received strong support. Somewhat unexpectedly, there was support for a clade comprising Opiliones, Ricinulei and Solifugae, a combination not found in most other studies. In early 2019, a molecular phylogenetic analysis placed the horseshoe crabs, Xiphosura, as the sister group to Ricinulei. It also grouped pseudoscorpions with mites and ticks, which the authors considered may be due to long branch attraction. The addition of Scorpiones to produce a clade called Arachnopulmonata was also well supported. Pseudoscorpiones may also belong here, as all six orders share the same ancient whole genome duplication, and analyses support pseudoscorpions as the sister group of scorpions. Genetic analysis has not yet been done for Ricinulei, Palpigradi, or Solifugae, but horseshoe crabs have gone through two whole genome duplications, which gives them five Hox clusters with 34 Hox genes, the highest number found in any invertebrate, yet it is not clear if the oldest genome duplication is related to the one in Arachnopulmonata.
More recent phylogenomic analyses that have densely sampled both genomic datasets and morphology have supported horseshoe crabs as nested inside Arachnida, suggesting a complex history of terrestrialization. Morphological analyses including fossils tend to recover the Tetrapulmonata, including the extinct group the Haptopoda, but recover other ordinal relationships with low support.
Fossil history
The Uraraneida are an extinct order of spider-like arachnids from the Devonian and Permian.
A fossil arachnid in 100-million-year-old amber from Myanmar, Chimerarachne yingi, has spinnerets (to produce silk); it also has a tail, like the Palaeozoic Uraraneida, some 200 million years after other known fossils with tails. The fossil resembles the most primitive living spiders, the mesotheles.
Taxonomy
The subdivisions of the arachnids are usually treated as orders. Historically, mites and ticks were treated as a single order, Acari. However, molecular phylogenetic studies suggest that the two groups do not form a single clade, with morphological similarities being due to convergence. They are now usually treated as two separate taxa – Acariformes, mites, and Parasitiformes, ticks – which may be ranked as orders or superorders. The arachnid subdivisions are listed below alphabetically; numbers of species are approximate.
Extant forms
Acariformes – mites (32,000 species)
Amblypygi – "blunt rump" tail-less whip scorpions with front legs modified into whip-like sensory structures as long as 25 cm or more (250 species)
Araneae – spiders (51,000 species)
Opiliones – phalangids, harvestmen or daddy-long-legs (6,700 species)
Palpigradi – microwhip scorpions (130 species)
Parasitiformes – ticks (12,000 species)
Pseudoscorpionida – pseudoscorpions (4,000 species)
Ricinulei – ricinuleids, hooded tickspiders (100 species)
Schizomida – "split middle" whip scorpions with divided exoskeletons (350 species)
Scorpiones – scorpions (2,700 species)
Solifugae – solpugids, windscorpions, sun spiders or camel spiders (1,200 species)
Uropygi (also called Thelyphonida) – whip scorpions or vinegaroons, forelegs modified into sensory appendages and a long tail on abdomen tip (120 species)
Extinct forms
†Haptopoda – extinct arachnids apparently part of the Tetrapulmonata, the group including spiders and whip scorpions (1 species)
†Phalangiotarbida – extinct arachnids of uncertain affinity (30 species)
†Trigonotarbida – extinct (late Silurian – Early Permian)
†Uraraneida – extinct spider-like arachnids, but with a "tail" and no spinnerets (2 species)
It is estimated that 110,000 arachnid species have been described, and that there may be over a million in total.
| Biology and health sciences | Arachnids | null |
87179 | https://en.wikipedia.org/wiki/Closed-circuit%20television | Closed-circuit television | Closed-circuit television (CCTV), also known as video surveillance, is the use of closed-circuit television cameras to transmit a signal to a specific place on a limited set of monitors. It differs from broadcast television in that the signal is not openly transmitted, though it may employ point-to-point, point-to-multipoint (P2MP), or mesh wired or wireless links. Even though almost all video cameras fit this definition, the term is most often applied to those used for surveillance in areas that require additional security or ongoing monitoring (videotelephony is seldom called "CCTV").
The deployment of this technology has facilitated significant growth in state surveillance, a substantial rise in the methods of advanced social monitoring and control, and a host of crime prevention measures throughout the world. Though surveillance of the public using CCTV is common in many areas around the world, video surveillance has generated significant debate about balancing its use with individuals' right to privacy even when in public.
In industrial plants, CCTV equipment may be used to observe parts of a process from a central control room, especially if the environments observed are dangerous or inaccessible to humans. CCTV systems may operate continuously or only as required to monitor a particular event. A more advanced form of CCTV, using digital video recorders (DVRs), provides recording for possibly many years, with a variety of quality and performance options and extra features (such as motion detection and email alerts). More recently, decentralized IP cameras, perhaps equipped with megapixel sensors, support recording directly to network-attached storage devices or internal flash for stand-alone operation.
History
An early mechanical CCTV system was developed in June 1927 by Russian physicist Léon Theremin. Originally requested by CTO (the Soviet Council of Labor and Defense), the system consisted of a manually-operated scanning-transmitting camera and wireless shortwave transmitter and receiver, with a resolution of a hundred lines. Having been commandeered by Kliment Voroshilov, Theremin's CCTV system was demonstrated to Joseph Stalin, Semyon Budyonny, and Sergo Ordzhonikidze, and subsequently installed in the courtyard of the Moscow Kremlin to monitor approaching visitors.
Another early CCTV system was installed by Siemens AG at Test Stand VII in Peenemünde, Nazi Germany, in 1942, for observing the launch of V-2 rockets.
In the United States, the first commercial closed-circuit television system became available in 1949 from Remington Rand and designed by CBS Laboratories, called "Vericon". Vericon was advertised as not requiring a government permit due to the system using cabled connections between camera and monitor rather than over-the-air transmission.
Technology
The earliest video surveillance systems involved constant monitoring because there was no way to record and store information. The development of reel-to-reel media enabled the recording of surveillance footage. These systems required magnetic tapes to be changed manually, with the operator having to manually thread the tape from the tape reel through the recorder onto a take-up reel. Due to these shortcomings, video surveillance was not widespread.
Later, videocassette recorder technology became available in the 1970s, making it easier to record and erase information, and the use of video surveillance became more common. During the 1990s, digital multiplexing was developed, allowing several cameras to record at once, as well as time lapse and motion-only recording. This saved time and money which then led to an increase in the use of CCTV. Recently, CCTV technology has been shifting towards Internet-based products and systems, and other technological developments.
Application
Early CCTV systems were installed in central London by the Metropolitan Police between 1960 and 1965. By 1963, CCTV was being used in Munich to monitor traffic. Closed-circuit television was used as a form of pay-per-view theatre television for sports such as professional boxing and professional wrestling, and from 1964 through 1970, the Indianapolis 500 automobile race. Boxing telecasts were broadcast live to a select number of venues, mostly theaters, with arenas, stadiums, schools, and convention centres also being less often used venues, where viewers paid for tickets to watch the fight live. The first fight with a closed-circuit telecast was Joe Louis vs. Joe Walcott in 1948.
Closed-circuit telecasts peaked in popularity with Muhammad Ali in the 1960s and 1970s, with "The Rumble in the Jungle" fight drawing 50 million CCTV viewers worldwide in 1974, and the "Thrilla in Manila" drawing 100 million CCTV viewers worldwide in 1975. In 1985, the WrestleMania I professional wrestling show was seen by over one million viewers with this scheme. As late as 1996, the Julio César Chávez vs. Oscar De La Hoya boxing fight had 750,000 viewers. Although closed-circuit television was gradually replaced by pay-per-view home cable television in the 1980s and 1990s, it is still in use today for most awards shows and other events that are transmitted live to most venues but do not air as such on network television, and are later re-edited for broadcast.
In September 1968, Olean, New York, was the first city in the United States to install CCTV video cameras along its main business street in an effort to fight crime. Marie Van Brittan Brown received a patent for the design of a CCTV-based home security system in 1969. Another early appearance was in 1973 in Times Square in New York City. The NYPD installed it to deter crime in the area; however, crime rates did not appear to drop much due to the cameras. Nevertheless, during the 1980s, video surveillance began to spread across the country specifically targeting public areas. It was seen as a cheaper way to deter crime compared to increasing the size of the police departments. Some businesses, especially those prone to theft, also began to use video surveillance. From the mid-1990s on, police departments across the country installed an increasing number of cameras in various public spaces including housing projects, schools, and public parks. CCTV later became common in banks and stores to discourage theft by recording evidence of criminal activity. In 1997, 3,100 CCTV systems were installed in public housing and residential areas in New York City.
Experiments in the UK during the 1970s and 1980s, including outdoor CCTV in Bournemouth in 1985, led to several larger trial programs later that decade. The first use by local government was in King's Lynn, Norfolk, in 1987.
Uses
Crime prevention
A 2008 report by UK Police Chiefs concluded that only 3% of crimes were solved by CCTV. In London, a Metropolitan Police report showed that in 2008 only one crime was solved per 1000 cameras. In some cases CCTV cameras have become a target of attacks themselves. A 2009 systematic review by researchers from Northeastern University and the University of Cambridge used meta-analytic techniques to pool the average effect of CCTV on crime across 41 different studies. The studies included in the meta-analysis used quasi-experimental evaluation designs that involved before-and-after measures of crime in experimental and control areas. However, researchers have argued that the British car park studies included in the meta-analysis cannot accurately control for the fact that CCTV was introduced simultaneously with a range of other security-related measures. Second, some have noted that, in many of the studies, there may be issues with selection bias since the introduction of CCTV was potentially endogenous to previous crime trends. In particular, the estimated effects may be biased if CCTV is introduced in response to crime trends.
By 2012, cities such as Manchester in the UK were using DVR-based technology to improve accessibility for crime prevention. In 2013, the City of Philadelphia Auditor found that the $15 million system was operational only 32% of the time. There is anecdotal evidence that CCTV aids in the detection and conviction of offenders; for example, UK police forces routinely seek CCTV recordings after crimes. Cameras have also been installed on public transport in the hope of deterring crime.
A 2017 review published in the Journal of Scandinavian Studies in Criminology and Crime Prevention compiles seven studies that use such research designs. The studies found that CCTV reduced crime by 24–28% in public streets and urban subway stations. It also found that CCTV could decrease unruly behaviour in football stadiums and theft in supermarkets/mass merchant stores. However, there was no evidence of CCTV having desirable effects in parking facilities or suburban subway stations. Furthermore, the review indicates that CCTV is more effective in preventing property crimes than violent crimes. However, a 2019 systematic review covering 40 years of evaluations reported that the most consistent crime-reduction effects of CCTV were in car parks.
A more open question is whether most CCTV is cost-effective. While low-quality domestic kits are cheap, the professional installation and maintenance of high definition CCTV is expensive. Gill and Spriggs did a cost-effectiveness analysis (CEA) of CCTV in crime prevention that showed little monetary saving with the installation of CCTV as most of the crimes prevented resulted in little monetary loss. Critics however noted that benefits of non-monetary value cannot be captured in a traditional cost effectiveness analysis and were omitted from their study.
In October 2009, an "Internet Eyes" website was announced which would pay members of the public to view CCTV camera images from their homes and report any crimes they witnessed. The site aimed to add "more eyes" to cameras which might be insufficiently monitored. Civil liberties campaigners criticized the idea as "a distasteful and a worrying development". Russia has also implemented a video surveillance system called 'Safe City', which has the capability to recognize facial features and moving objects, sending the data automatically to government authorities. However, the widespread tracking of individuals through video surveillance has raised significant privacy issues.
Forensics
Material collected by surveillance cameras has been used as a tool in post-event forensics to identify tactics and perpetrators of terrorist attacks. Furthermore, there are various projects—such as INDECT—that aim to detect suspicious behaviours of individuals and crowds. It has been argued that terrorists will not be deterred by cameras, that terror attacks are not really the subject of the current use of video surveillance and that terrorists might even see it as an extra channel for propaganda and publication of their acts. In Germany, calls for extended video surveillance by the country's main political parties, SPD, CDU, and CSU have been dismissed as "little more than a placebo for a subjective feeling of security" by a member of the Left party.
In Singapore, since 2012, thousands of CCTV cameras have helped deter loan sharks, nab litterbugs, and stop illegal parking, according to government figures. In 2013, Oaxaca, Mexico, hired deaf police officers to lip read conversations to uncover criminal conspiracies.
Body-worn cameras
In recent years, the use of body-worn video cameras has been introduced for a number of uses. For example, as a new form of surveillance in law enforcement, there are surveillance cameras that are worn by the police officer and are usually located on a police officer's chest or head. According to the Bureau of Justice Statistics (BJS), in the United States, in 2016, about 47% of the 15,328 general-purpose law enforcement agencies had acquired body-worn cameras.
Traffic flow monitoring
Many cities and motorway networks have extensive traffic-monitoring systems. Many of these cameras, however, are owned by private companies and transmit data to drivers' GPS systems.
Highways England has a publicly owned CCTV network of over 3000 pan–tilt–zoom cameras covering the British motorway and trunk road network. These cameras are primarily used to monitor traffic conditions and are not used as speed cameras. With the addition of fixed cameras for the active traffic management system, the number of cameras on the Highways England's CCTV network is likely to increase significantly over the next few years. The London congestion charge is enforced by cameras positioned at the boundaries of and inside the congestion charge zone, which automatically read the number plates of vehicles that enter the zone. If the driver does not pay the charge then a fine will be imposed. Similar systems are being developed as a means of locating cars reported stolen. Other surveillance cameras serve as traffic enforcement cameras.
In Mecca, Saudi Arabia, CCTV cameras are used for monitoring (and thus managing) the flow of crowds. In the Philippines, barangay San Antonio used CCTV cameras and artificial intelligence software to detect the formation of crowds during an outbreak of a disease. Security personnel were sent whenever a crowd formed at a particular location in the city.
Use in homes and buildings
In schools
In the United States, Britain, Canada, Australia, and New Zealand, CCTV is widely used in schools to prevent bullying and vandalism, monitor visitors, and maintain a record of evidence of a crime. There are some restrictions: cameras are not typically installed in areas where there is a "reasonable expectation of privacy", such as bathrooms, gym locker areas, and private offices. Cameras are generally acceptable in parking lots, cafeterias, and supply rooms, though some teachers object to the installation of cameras. A study of high school students in Israeli schools shows that students' views on CCTV used in school are based on how they think of their teachers, school, and authorities. It also stated that most students do not want CCTV installed inside a classroom.
In private and public places
Many homeowners choose to install CCTV systems either inside or outside their own homes, sometimes both. Modern CCTV systems can be monitored through mobile phone apps with internet coverage. Some systems also provide motion detection, so when movement is detected, an alert can be sent to a phone.
On a driver-only operated train, CCTV cameras may allow the driver to confirm that people are clear of doors before closing them and starting the train. A trial by RET in 2011 with facial recognition cameras mounted on trams ensured that people who had been banned from them did not sneak on anyway. CCTV has also been frequently operated in many department stores and shopping malls to mitigate concerns of potential theft. In some countries, malls must obtain approval from the Ministry of Interior (MOI) or Information Commissioner's Office (ICO) before installing CCTVs. Some organizations also use CCTV to monitor the actions of workers in a workplace.

Many sporting events in the United States use CCTV inside the venue, either to display on the stadium or arena's scoreboard or in the concourse or restroom areas to allow people to view action outside the seating bowl. The cameras send the feed to a central control centre where a producer selects feeds to send to the television monitors that people can view. In a trial with CCTV cameras, football club fans no longer needed to identify themselves manually, but could pass freely after being authorized by the facial recognition system.
Criminal use
Criminals may use surveillance cameras to monitor the public. For example, a hidden camera at an ATM can capture people's PINs as they are entered without their knowledge. The devices are small enough not to be noticed, and are placed where they can monitor the keypad of the machine as people enter their PINs. Images may be transmitted wirelessly to the criminal. Even lawful surveillance cameras sometimes have their data received by people who have no legal right to receive it.
Prevalence
In Asia
About 65% of the world's CCTV cameras are installed in Asia. There, surveillance camera systems and services are used across a range of human activities, including but not limited to business and related industries, transportation, sports, and care for the environment.
In 2018, China was reported to have over 170 million CCTV cameras. In 2023, China was estimated to have a surveillance network of around 540–626 million cameras, though numbers differ significantly between sources. Beijing, China's capital city, has the most cameras of any city overall, with a total of 1.15 million installed. The cameras are used to record details such as gender, age, and ethnicity. Cameras have been used in a southern Chinese city to issue tickets to people for infractions. In India, the cities of Hyderabad and Delhi, the capital, have around 900,000 and 450,000 cameras, respectively. The city of Chennai has the highest density per area of CCTV cameras worldwide, with 657 cameras per square kilometer in 2020 (from 280,000 CCTVs). Chinese and Indian cities thus have some of the highest densities and largest total numbers of CCTV cameras in the world.
South Korea's military has removed over 1,300 Chinese-made surveillance cameras from its bases for security reasons. In Hong Kong, the police have stated that they plan to install up to 7,000 surveillance cameras across the territory in roughly three years, up from an estimated 600 installed cameras in 2024; this amounts to roughly 2,000 planned cameras every year starting from 2025. Earlier, in June 2024, it had also been loosely planned that the cameras would be integrated with facial recognition artificial intelligence. The plan has been criticized for its potential to bring Hong Kong closer to the "intense surveillance of mainland China". In Japan, Nikkei Business estimated that the total number of security cameras was approximately 5 million in 2018. In Singapore, the total number of CCTV cameras was estimated at around 90,000 in 2021.
In the Americas
In 2009, there were an estimated 15,000 CCTV systems in Chicago, many linked to an integrated camera network. New York City's Domain Awareness System has 6,000 video surveillance cameras linked together, there are over 4,000 cameras on the subway system (although nearly half of them do not work), and two-thirds of large apartment and commercial buildings use video surveillance cameras. In Washington, D.C., there are more than 30,000 surveillance cameras in schools, and the Metro has nearly 6,000 cameras in use across the system.
There were an estimated 30 million surveillance cameras in the United States in 2011. Video surveillance has been common in the United States since the 1990s; for example, one manufacturer reported net earnings of $120 million in 1995. With lower cost and easier installation, sales of home security cameras increased in the early 21st century. Following the September 11 attacks, the use of video surveillance in public places became more common to deter future terrorist attacks. Under the Homeland Security Grant Program, government grants are available for cities to install surveillance camera networks. In 2018, there are approximately 70 million surveillance cameras in the United States.
In Canada, Project SCRAM is a policing effort by the Canadian policing service Halton Regional Police Service to register and help consumers understand privacy and safety issues related to the installations of home security systems. The project service has not been extended to commercial businesses.
In Latin America, the CCTV market is growing rapidly with the increase in property crime. In Brazil, CCTV usage is only permitted in public areas, though individuals must be informed about the presence of the camera according to the Brazilian LGPD (which broadly aligns with the EU's GDPR), the Brazilian Civil Code, and the Brazilian Association of Technical Standards. However, the Smart Sampa project in Brazil, launched in 2023 with plans to deploy 20,000 facial recognition cameras by 2024, has been criticized for its potential to be "biased against Black individuals" and for its overall data privacy risks.
In Russia
In 2017, the Moscow network included 160,000 CCTV cameras, covering 95 percent of residential buildings; over 3,500 of the cameras were connected to the General Centre for Data Storage and Processing. Video recordings are used to solve 70 percent of offenses and crimes. As of 2024, there are over 1 million video surveillance cameras in Russia, with about 230,000 in use in Moscow alone. According to data from the Russian Minister for Digital Development, Maksut Shadayev, one in three of all CCTV cameras in Russia is connected to a facial recognition system. A leaked document revealed that the president of Russia, Vladimir Putin, called on the Russian security services to fund "a massive AI-based surveillance apparatus". Spending of over was planned for the system in 2024–2026.
In Europe
In the United Kingdom
In the United Kingdom, the vast majority of CCTV cameras are operated not by government bodies, but by private individuals or companies, especially to monitor the interiors of shops and businesses. According to Freedom of Information Act 2000 requests, the total number of local government-operated CCTV cameras across the entirety of the UK was around 52,000.
An article published in CCTV Image magazine estimated the number of private and local government-operated cameras in the United Kingdom was 1.85 million in 2011. The estimate was based on extrapolating from a comprehensive survey of public and private cameras within the Cheshire Constabulary jurisdiction. This works out as an average of one camera for every 32 people in the UK, although the density of cameras varies greatly from place to place. The Cheshire report also claims that the average person on a typical day would be seen by 70 CCTV cameras.
The Cheshire figure is regarded as more dependable than a previous study by Michael McCahill and Clive Norris of UrbanEye published in 2002. Based on a small sample in Putney High Street, McCahill and Norris extrapolated the number of surveillance cameras in Greater London to be around 500,000 and the total number of cameras in the UK to be around 4.2 million. According to their estimate, the UK has one camera for every 14 people. Although it has been acknowledged for several years that the methodology behind this figure is flawed, it has been widely quoted. Furthermore, the figure of 500,000 for Greater London is often confused with the figure for the police and local government-operated cameras in the City of London, which was about 650 in 2011.
The CCTV User Group estimated that there were around 1.5 million private and local government CCTV cameras in city centres, stations, airports, and major retail areas in the UK. Research conducted by the Scottish Centre for Crime and Justice Research and based on a survey of all Scottish local authorities identified that there are over 2,200 public space CCTV cameras in Scotland. The UK has often been cited as having one of the highest numbers of CCTV cameras of any country in Europe.
In Africa
In South Africa, due to the high crime rate, CCTV surveillance is widely prevalent. The first IP camera was released in 1996 by Axis Communications, but IP cameras did not arrive in South Africa until 2008. To regulate the number of suppliers, the Private Security Industry Regulation Act was passed in 2001, requiring all security companies to be registered with the Private Security Industry Regulatory Authority (PSIRA). In Egypt, the capital city of Cairo has approximately 47,000 cameras, while the New Administrative Capital had more than 6,000 surveillance cameras as of 2023. In South Sudan, the Ministry of Interior has reinstated the operation of CCTV surveillance cameras in Juba after the cameras had been inactive for over four years; South Sudan also launched a drone security system in Juba in 2024.
Privacy
Proponents of CCTV cameras argue that cameras are effective at deterring and solving crime, and that appropriate regulation and legal restrictions on surveillance of public spaces can provide sufficient protections so that an individual's right to privacy can reasonably be weighed against the benefits of surveillance. However, anti-surveillance activists have held that there is a right to privacy in public areas, that the development of CCTV in public areas, linked to databases of people's pictures and identity, presents a breach of civil liberties and the loss of anonymity in public places.
Furthermore, some scholars have argued that situations wherein a person's rights can be justifiably compromised are so rare as to not sufficiently warrant the frequent compromising of public privacy rights that occurs in regions with widespread CCTV surveillance. For example, in her book Setting the Watch: Privacy and the Ethics of CCTV Surveillance, Beatrice von Silva-Tarouca Larsen argues that CCTV surveillance is ethically permissible only in "certain restrictively defined situations", such as when a specific location has a "comprehensively documented and significant criminal threat".
Law by countries
In the United States, the Constitution does not explicitly include the right to privacy although the Supreme Court has said several of the amendments to the Constitution implicitly grant this right. Access to video surveillance recordings may require a judge's writ, which is readily available. However, there is little legislation and regulation specific to video surveillance. In Canada, the use of video surveillance has grown very rapidly. In Ontario, both the municipal and provincial versions of the Freedom of Information and Protection of Privacy Act outline guidelines that control how images and information can be gathered by this method and or released.
All countries in the European Union are signatories to the European Convention on Human Rights, which protects individual rights, including the right to privacy. The General Data Protection Regulation (GDPR) required that the footage should only be retained for as long as necessary for the purpose for which it was collected. In Sweden, the use of CCTV in public spaces is regulated both nationally and via GDPR. In an opinion poll commissioned by Lund University in August 2017, the general public of Sweden was asked to choose one measure that would ensure their need for privacy when subject to CCTV operation in public spaces: 43% favored regulation in the form of clear routines for managing, storing, and distributing image material generated from surveillance cameras, 39% favored regulation in the form of clear signage informing that camera surveillance in public spaces is present, 10% favored regulation in the form of having restrictive policies for issuing permits for surveillance cameras in public spaces, 6% were unsure, and 2% favored regulation in the form of having permits restricting the use of surveillance cameras during certain times.
In the United Kingdom, the Data Protection Act 1998 imposes legal restrictions on the uses of CCTV recordings and mandates the registration of CCTV systems with the Data Protection Agency. In 2004, the successor to the Data Protection Agency, the Information Commissioner's Office, clarified that this required registration of all CCTV systems with the Commissioner and prompt deletion of archived recordings. However, subsequent case law (Durant vs. FSA) limited the scope of the protection provided by this law, and not all CCTV systems are currently regulated.
A 2007 report by the UK Information Commissioner's Office highlighted the need for the public to be made more aware of the growing use of surveillance and the potential impact on civil liberties. In the same year, a campaign group claimed that the majority of CCTV cameras in the UK are operated illegally or are in breach of privacy guidelines. In response, the Information Commissioner's Office rebutted the claim and added that any reported abuses of the Data Protection Act are swiftly investigated. Despite such concerns about privacy and other issues arising from the use of CCTV, commercial establishments in the UK continue to install CCTV systems. In 2012, the UK government enacted the Protection of Freedoms Act, which includes several provisions related to controlling the storage and use of information about individuals. Under this Act, the Home Office published a code of practice in 2013 for the use of surveillance cameras by government and local authorities. The code states that "surveillance by consent should be regarded as analogous to policing by consent."
In the Philippines, the main laws governing CCTV usage are Data Privacy Act of 2012 and the Cybercrime Prevention Act of 2012. The Data Privacy Act of 2012 (Republic Act No. 10173) is the primary law that governs data privacy in the Philippines. The Act mandates that the privacy of individuals must be respected and protected. The law applies to CCTV cameras as they collect and process personal data. The Cybercrime Prevention Act of 2012 (Republic Act No. 10175) includes provisions that apply to CCTV usage. Under the Act, the unauthorized access to, interception of, or interference with data is a criminal offense. This means that unauthorized access to CCTV footage could potentially be considered a cybercrime.
Technological developments
Computer-controlled identification
Computer-controlled cameras can identify, track, and categorize objects in their field of view. Video content analysis, also referred to as video analytics, is the capability of automatically analyzing video to detect and determine temporal events not based on a single image but rather on object classification. Advanced VCA applications can measure object speed. Some video analytics applications can be used to apply rules to designated areas. These rules can relate to access control. For example, they can describe which objects can enter into a specific area. There are different approaches to implementing VCA technology. Data may be processed on the camera itself (edge processing) or by a centralized server. Artificial intelligence-powered CCTV cameras have also been further tested to detect congestion, be used as a facial recognition system, and predict signs of criminal activities.
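As a rough illustration of the kind of rule described above (a designated area plus a condition on what happens inside it), the following is a minimal sketch of a frame-differencing motion rule applied to a region of interest. It is a generic example, not any particular vendor's VCA implementation; the frame sizes, region coordinates, and thresholds are assumptions chosen only for illustration.

```python
# Minimal sketch of a rule-based video-analytics check: report whether motion
# occurred inside a designated area by differencing two consecutive frames.
# Frame contents, region coordinates, and thresholds are illustrative assumptions.
import numpy as np

def motion_in_region(prev_frame: np.ndarray, frame: np.ndarray,
                     region: tuple[int, int, int, int],
                     pixel_delta: int = 25, min_changed: int = 500) -> bool:
    """Return True if enough pixels changed inside `region` (x, y, w, h)."""
    x, y, w, h = region
    a = prev_frame[y:y+h, x:x+w].astype(np.int16)   # widen to avoid uint8 wraparound
    b = frame[y:y+h, x:x+w].astype(np.int16)
    changed = np.abs(b - a) > pixel_delta            # per-pixel change mask
    return int(changed.sum()) >= min_changed         # the "rule" applied to the region

# Example with synthetic 8-bit grayscale frames: an object "appears" in the region.
prev = np.zeros((480, 640), dtype=np.uint8)
curr = prev.copy()
curr[150:200, 150:250] = 200                         # bright object inside the watched area
print(motion_in_region(prev, curr, region=(100, 100, 300, 200)))  # True
```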
Compression
There is a cost in the retention of the images produced by CCTV systems. The amount and quality of data stored on storage media is subject to compression ratios, images stored per second, and image size, and is affected by the retention period of the videos or images. DVRs store images in a variety of proprietary file formats. CCTV security cameras can either store the images on a local hard disk drive, an SD card, or in the cloud. Recordings may be retained for a preset amount of time and then automatically archived, overwritten, or deleted, the period being determined by the organisation that generated them.
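To make the storage trade-off concrete, the sketch below estimates how much disk space a recorder needs from an assumed average per-camera bitrate (which already reflects compression), camera count, duty cycle, and retention period. The figures used in the example are assumptions for illustration, not specifications of any real system.

```python
# Rough storage estimate for a CCTV recorder: storage grows linearly with
# bitrate, number of cameras, duty cycle, and retention period.
def storage_terabytes(bitrate_mbps: float, cameras: int, retention_days: int,
                      duty_cycle: float = 1.0) -> float:
    """Return approximate storage in (decimal) terabytes.

    bitrate_mbps   -- average encoded bitrate per camera, in megabits/second
    cameras        -- number of cameras recorded
    retention_days -- how long footage is kept before being overwritten
    duty_cycle     -- fraction of time actually recording (1.0 = continuous,
                      lower if motion-only recording is used)
    """
    seconds = retention_days * 24 * 3600
    total_megabits = bitrate_mbps * seconds * cameras * duty_cycle
    return total_megabits / 8 / 1_000_000   # megabits -> megabytes -> terabytes

# Example: 16 cameras at 2 Mbit/s each, kept for 30 days, recording continuously.
print(f"{storage_terabytes(2.0, 16, 30):.1f} TB")    # roughly 10 TB
```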
IP cameras
A growing branch in CCTV is internet protocol cameras (IP cameras). It is estimated that 2014 was the first year that IP cameras outsold analog cameras. IP cameras use the Internet Protocol (IP) used by most local area networks (LANs) to transmit video across data networks in digital form. The video can optionally be transmitted across the public internet, allowing users to view their cameras remotely on a computer or phone via an internet connection. IP cameras are considered part of the Internet of things (IoT) and have many of the same benefits and security risks as other IP-enabled devices. Smart doorbells are one example of a type of CCTV that uses IP to allow it to send alerts.
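As a small sketch of consuming such a digital stream, the snippet below opens an RTSP feed from an IP camera with the OpenCV library and displays it frame by frame. The camera address and credentials are placeholders, not a real device; as noted below, professional deployments usually keep such streams on a private network or VPN rather than the open internet.

```python
# Minimal sketch of viewing an IP camera's RTSP stream on a LAN with OpenCV.
# The address and credentials are placeholders, not a real camera.
import cv2

cap = cv2.VideoCapture("rtsp://user:password@192.0.2.20:554/stream1")

while True:
    ok, frame = cap.read()
    if not ok:                                  # stream dropped or unreachable
        break
    cv2.imshow("IP camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):       # press 'q' to stop viewing
        break

cap.release()
cv2.destroyAllWindows()
```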
Main types of IP cameras include fixed cameras, pan–tilt–zoom (PTZ) cameras, and multi-sensor cameras. Fixed cameras' resolution typically does not exceed 20 megapixels. The main feature of a PTZ is its remote directional and optical zoom capability. With multi-sensor cameras, wider areas can be monitored. Industrial video surveillance systems use network video recorders to support IP cameras. These devices are responsible for the recording, storage, video stream processing, and alarm management. Since 2008, IP video surveillance manufacturers can use a standardized network interface (ONVIF) to support compatibility between systems. For professional or public infrastructure security applications, IP video is restricted to within a private network or VPN.
Networking CCTV cameras
The city of Chicago operates a networked video surveillance system which combines CCTV video feeds of government agencies with those of the private sector, installed in city buses, businesses, public schools, subway stations, housing projects, etc. Even homeowners are able to contribute footage. It is estimated to incorporate the video feeds of a total of 15,000 cameras. The system is used by Chicago's Office of Emergency Management in case of an emergency call: it detects the caller's location and instantly displays the real-time video feed of the nearest security camera to the operator, not requiring any user intervention. While the system is far too vast to allow complete real-time monitoring, it stores the video data for use as evidence in criminal cases.
Wireless security cameras
Many consumers are turning to wireless security cameras for home surveillance. Wireless cameras do not require a video cable for video/audio transmission, simply a cable for power. Wireless cameras are also easy and inexpensive to install. Previous generations of wireless security cameras relied on analogue technology; modern wireless cameras use digital technology with usually more secure and interference-free signals. Wireless mesh networks have been used for connection with the other radios in the same group. There are also cameras using solar power. Wireless IP cameras can become a client on the WLAN, and they can be configured with encryption and authentication protocols with a connection to an access point.
Talking CCTV
In Wiltshire, United Kingdom, in 2003, a pilot scheme for what is now known as "Talking CCTV" was put into action, allowing operators of CCTV cameras to communicate through the camera via a speaker when it is needed. In 2005, Ray Mallon, the mayor and former senior police officer of Middlesbrough, implemented "Talking CCTV" in his area. Other towns have had such cameras installed. In 2007, several of the devices were installed in Bridlington town centre, East Riding of Yorkshire.
Countermeasures
In December 2016, a form of anti-CCTV and anti-facial-recognition sunglasses called "reflectacles" was invented by Scott Urban, a craftsman based in Chicago. They reflect infrared and, optionally, visible light, which makes the user's face a white blur to cameras. The project passed its funding goal of $28,000, and "reflectacles" became commercially available in June 2017.
| Technology | Photography | null |
87249 | https://en.wikipedia.org/wiki/Kola%20nut | Kola nut | The kola nut (Yoruba: obi, Dagbani: guli, Hausa: goro, Igbo: ọjị, Sängö: gôro, Swahili: mukezu) is the seed of certain species of plant of the genus Cola, placed formerly in the cocoa family Sterculiaceae and now usually subsumed in the mallow family Malvaceae (as subfamily Sterculioideae). These cola species are trees native to the tropical rainforests of Africa. Their caffeine-containing seeds are used as flavoring ingredients in various carbonated soft drinks, from which the name cola originates.
Description
About across, the kola nut is a nut of evergreen trees of the genus Cola, primarily of the species Cola acuminata and Cola nitida. Cola acuminata, an evergreen tree about 20 meters in height, has long, ovoid, leathery leaves pointed at both ends. The trees have cream-white flowers with purplish-brown striations, and star-shaped fruit usually consisting of 5 follicles. Inside each follicle, about a dozen prismatic seeds develop in a white seed-shell. The nut has reddish or white flesh on the inside, and a sweet and rose-like aroma.
Kola nuts contain about 2–4% caffeine and theobromine, as well as tannins, alkaloids, saponins, and flavonoids.
Chemistry
Preliminary studies of phytochemicals in kola nut indicate the presence of various constituents: caffeine (2–3.5%), theobromine (1.0–2.5%), theophylline, methylliberine, polyphenols, catechins, and phlobaphenes (kola red), among others.
Cultivation
Originally a tree of the tropical rainforest, it needs a hot humid climate, but can withstand a dry season on sites with a high ground water level. It may be cultivated in drier areas where groundwater is available. C. nitida is a shade bearer, but develops a better spreading crown which yields more fruits in open places. Though it is a lowland forest tree, it has been found at altitudes over 300 m on deep, rich soils under heavy and evenly distributed rainfall.
Regular weeding is necessary, which can be performed manually or through the use of herbicides. Some irrigation can be provided to the plants, but it is important to remove the water through an effective drainage system, as excess water may prove to be detrimental for the growth of the plant. When not grown in adequate shade, the kola nut plant responds well to fertilizers. Usually, the plants need to be provided with windbreaks to protect them from strong gales.
Kola nuts can be harvested mechanically or by hand, by plucking them from the tree branch. Nigeria accounts for 52.4% of worldwide production, followed by the Ivory Coast and Cameroon. When kept in a cool, dry place, kola nuts can be stored for a long time.
Its value makes the kola nut one of the most important indigenous cash crops in West Africa, and it is used as a means of social mobility.
Pests and diseases
The nuts are subject to attack by the kola weevil Balanogastris kolae. The larvae of the moth Characoma strictigrapta that also attacks cacao bore into the nuts. Traders sometimes apply an extract of the bark of Rauvolfia vomitoria or the pulverised fruits of Xylopia and Capsicum to counteract the attack on nursery plants. The cacao pests Sahlbergella spp. have been found also on C. nitida as an alternative host plant. While seeds are liable to worm attack, the wood is subject to borer attack.
Production
In 2022, world production of kola nuts was , led by Nigeria with 55% of the total.
Uses
The kola nut has a bitter flavor and contains caffeine. The nut is a nervous system stimulant and is chewed in many West African countries, in both private and social settings. It is often used ceremonially, presented to chiefs or guests. Throughout history, kola nuts have been planted on graves as part of various rituals. Laborers in many countries also chew kola nuts to fight fatigue and hunger, while Brazilians and people of the West Indies use the nut as a remedy for hangovers, intoxication, and diarrhea.
In folk medicine, kola nuts are considered useful for aiding digestion when ground and mixed with honey, and are used as a remedy for coughs.
Kola nuts are perhaps best known to Western culture as a flavoring ingredient and one of the sources of caffeine in cola and other similarly flavored beverages, although kola nut extract is no longer claimed on the labels of major commercial cola drinks such as Coca-Cola.
History
Human use of the kola nut, like that of the coffee berry and tea leaf, appears to have ancient origins. The spread of the kola nut across North Africa seems to be connected to the spread of Islam there during the 17th century, as trading across the Mediterranean became established. The kola nut was particularly useful on slave ships to improve the taste of water, as enslaved Africans were often given poor-quality water to drink. A French voyager named Chevalier Des Marchais, who traveled to West Africa in the late 1720s, noted that the nut made the "bitterest, our sourest Things taste Sweet after it." These sweet alterations are attributed to the chemical substances that the nut adds to one's palate, or to the sheer amount of caffeine.
Kola nuts were used as an ingredient within Coca-Cola and Pepsi-Cola in 1886 and 1888 respectively. Kola nuts are an important part of the traditional spiritual practice, culture, and religion in West Africa, particularly Ghana, Niger, Nigeria, Sierra Leone, Democratic Republic of Congo and Liberia.
Cola recipe
In the 1880s, a pharmacist in Georgia, John Pemberton, took caffeine extracted from kola nuts and cocaine-containing extracts from coca leaves and mixed them with sugar, other flavorings, and carbonated water to invent Coca-Cola, the first widely popular cola soft drink. Although the exact details of its cola recipe remain confidential, as of 2016, the Coca-Cola formula no longer contained actual kola nut extract, and an independent test conducted to identify it failed to detect its signature proteins.
In culture
Used in cultural traditions of the Igbo people, the presentation of kola nuts to guests or in a traditional gathering shows good will. It is implemented in Yoruba religion both as an offering to orishas and as an instrument of divination.
A kola nut ceremony is briefly described in Chinua Achebe's 1958 novel Things Fall Apart. The eating of kola nuts is referred to at least ten times in the novel, showing the kola nut's significance in pre-colonial 1890s Igbo culture in Nigeria. One of these sayings on kola nut in Things Fall Apart is "He who brings kola brings life." It is also featured prominently in Chris Abani's 2004 novel GraceLand. The kola nut is also mentioned in The Color Purple by Alice Walker, although it is spelled "cola".
The kola nut is mentioned in Bloc Party's song "Where is Home?" on the album A Weekend in the City. The lyric, setting a post-funeral scene for the murder of a black boy in London, reads, "After the funeral, breaking kola nuts, we sit and reminisce about the past." The kola nut is mentioned in the At the Drive-In song "Enfilade" on the album Relationship of Command. The kola nut is repeatedly mentioned in Chimamanda Ngozi Adichie's novel Half of a Yellow Sun, which also features the phrase: "He who brings the Kola nut brings life."
Gallery
| Biology and health sciences | Nuts | Plants |
87299 | https://en.wikipedia.org/wiki/Heaviside%20step%20function | Heaviside step function | The Heaviside step function, or the unit step function, usually denoted by H or θ (but sometimes u, 1 or 𝟙), is a step function named after Oliver Heaviside, the value of which is zero for negative arguments and one for positive arguments. Different conventions concerning the value H(0) are in use. It is an example of the general class of step functions, all of which can be represented as linear combinations of translations of this one.
The function was originally developed in operational calculus for the solution of differential equations, where it represents a signal that switches on at a specified time and stays switched on indefinitely. Heaviside developed the operational calculus as a tool in the analysis of telegraphic communications and represented the function as .
Formulation
Taking the convention that , the Heaviside function may be defined as:
a piecewise function:
using the Iverson bracket notation:
an indicator function:
For the alternative convention that , it may be expressed as:
a linear transformation of the sign function,
the arithmetic mean of two Iverson brackets,
a one-sided limit of the two-argument arctangent
a hyperfunction or equivalently where is the principal value of the complex logarithm of
Other definitions which are undefined at zero include:
the derivative of the ramp function:
in terms of the absolute value function as
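These conventions differ only in the value assigned at zero, which can be made explicit in code. The following Python sketch is illustrative (the helper name heaviside and its h0 parameter are assumptions, not standard notation); NumPy's built-in np.heaviside takes the value at zero as its second argument.

```python
import numpy as np

def heaviside(x, h0=0.5):
    """Heaviside step function with a configurable value at zero.

    h0 = 0.5 gives the half-maximum convention; h0 = 1 makes H right-continuous;
    h0 = 0 makes it left-continuous.
    """
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, 1.0, np.where(x < 0, 0.0, h0))

print(heaviside([-2.0, 0.0, 3.5]))                     # [0.  0.5 1. ]
print(np.heaviside(np.array([-2.0, 0.0, 3.5]), 0.5))   # same values via NumPy's built-in
```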
Relationship with Dirac delta
The Dirac delta function is the weak derivative of the Heaviside function:
Hence the Heaviside function can be considered to be the integral of the Dirac delta function. This is sometimes written as
although this expansion may not hold (or even make sense) for , depending on which formalism one uses to give meaning to integrals involving . In this context, the Heaviside function is the cumulative distribution function of a random variable which is almost surely 0. (See Constant random variable.)
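Because the Heaviside function can be viewed as the integral of the Dirac delta, one informal numerical illustration is to integrate a narrow "nascent delta" and observe that the running integral approaches a unit step. The Gaussian width eps below is an illustrative choice, and the computation is a sketch rather than a statement about distributions.

```python
import numpy as np

def delta_approx(x, eps):
    """Gaussian nascent delta: total area 1, width of order eps, peaked at x = 0."""
    return np.exp(-(x / eps) ** 2) / (eps * np.sqrt(np.pi))

x = np.linspace(-3.0, 3.0, 6001)
dx = x[1] - x[0]
H_approx = np.cumsum(delta_approx(x, 0.05)) * dx   # running integral of the nascent delta
# H_approx is close to 0 well below zero, close to 1 well above zero,
# and close to 1/2 near x = 0, i.e. it approximates the Heaviside step.
print(H_approx[0], H_approx[len(x) // 2], H_approx[-1])
```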
Analytic approximations
Approximations to the Heaviside step function are of use in biochemistry and neuroscience, where logistic approximations of step functions (such as the Hill and the Michaelis–Menten equations) may be used to approximate binary cellular switches in response to chemical signals.
For a smooth approximation to the step function, one can use the logistic function
where a larger corresponds to a sharper transition at . If we take , equality holds in the limit:
There are many other smooth, analytic approximations to the step function. Among the possibilities are:
These limits hold pointwise and in the sense of distributions. In general, however, pointwise convergence need not imply distributional convergence, and vice versa distributional convergence need not imply pointwise convergence. (However, if all members of a pointwise convergent sequence of functions are uniformly bounded by some "nice" function, then convergence holds in the sense of distributions too.)
In general, any cumulative distribution function of a continuous probability distribution that is peaked around zero and has a parameter that controls for variance can serve as an approximation, in the limit as the variance approaches zero. For example, all three of the above approximations are cumulative distribution functions of common probability distributions: the logistic, Cauchy and normal distributions, respectively.
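The three families just mentioned (logistic, Cauchy and normal) can be compared numerically. The parametrisations below are common choices, not the only ones, and the code is an illustrative sketch.

```python
import numpy as np
from math import erf, pi

def logistic_step(x, k):
    """Logistic approximation, (1 + tanh(k x)) / 2, i.e. 1 / (1 + exp(-2 k x))."""
    return 0.5 * (1.0 + np.tanh(k * x))

def arctan_step(x, k):
    """Cauchy-CDF-style approximation, 1/2 + arctan(k x) / pi."""
    return 0.5 + np.arctan(k * x) / pi

def erf_step(x, k):
    """Normal-CDF-style approximation, (1 + erf(k x)) / 2."""
    return np.array([0.5 * (1.0 + erf(k * xi)) for xi in np.atleast_1d(x)])

x = np.array([-0.5, -0.01, 0.0, 0.01, 0.5])
for k in (1, 10, 1000):
    print(k, logistic_step(x, k).round(3), arctan_step(x, k).round(3), erf_step(x, k).round(3))
# As k grows, each approximation tends to 0 for x < 0 and to 1 for x > 0,
# while remaining exactly 1/2 at x = 0 (the half-maximum convention).
```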
Non-analytic approximations
Approximations to the Heaviside step function can also be made using a smooth transition function, such as:
Integral representations
Often an integral representation of the Heaviside step function is useful:
where the second representation is easy to deduce from the first, given that the step function is real and thus is its own complex conjugate.
Zero argument
Since the Heaviside function is usually used in integration, and the value of a function at a single point does not affect its integral, it rarely matters what particular value is chosen at zero. Indeed, when it is considered as a distribution or an element of (see space) it does not even make sense to talk of a value at zero, since such objects are only defined almost everywhere. If using some analytic approximation (as in the examples above), then often whatever happens to be the relevant limit at zero is used.
There exist various reasons for choosing a particular value.
is often used since the graph then has rotational symmetry; put another way, is then an odd function. In this case the following relation with the sign function holds for all :
Also, H(x) + H(-x) = 1 for all x.
is used when needs to be right-continuous. For instance cumulative distribution functions are usually taken to be right continuous, as are functions integrated against in Lebesgue–Stieltjes integration. In this case is the indicator function of a closed semi-infinite interval: The corresponding probability distribution is the degenerate distribution.
is used when needs to be left-continuous. In this case is an indicator function of an open semi-infinite interval:
In functional-analysis contexts from optimization and game theory, it is often useful to define the Heaviside function as a set-valued function to preserve the continuity of the limiting functions and ensure the existence of certain solutions. In these cases, the Heaviside function returns a whole interval of possible solutions, .
Discrete form
An alternative form of the unit step, defined instead as a function (that is, taking in a discrete variable ), is:
or using the half-maximum convention:
where is an integer. If is an integer, then must imply that , while must imply that the function attains unity at . Therefore the "step function" exhibits ramp-like behavior over the domain of , and cannot authentically be a step function, using the half-maximum convention.
Unlike the continuous case, the definition of is significant.
The discrete-time unit impulse is the first difference of the discrete-time step
This function is the cumulative summation of the Kronecker delta:
where
is the discrete unit impulse function.
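A quick numerical check of these two relations, over an illustrative index range: differencing the discrete step yields the Kronecker delta, and summing the delta rebuilds the step.

```python
import numpy as np

n = np.arange(-5, 6)                  # discrete index from -5 to 5
step = (n >= 0).astype(int)           # discrete unit step with H[0] = 1

impulse = np.diff(step, prepend=0)    # first difference: delta[n] = H[n] - H[n-1]
print(impulse)                        # 1 at n = 0, 0 elsewhere

print(np.cumsum(impulse))             # cumulative sum of the delta recovers `step`
```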
Antiderivative and derivative
The ramp function is an antiderivative of the Heaviside step function:
The distributional derivative of the Heaviside step function is the Dirac delta function:
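Away from zero both statements can be sanity-checked with finite differences; the grid and the use of np.gradient below are one convenient, illustrative choice.

```python
import numpy as np

def ramp(x):
    """Ramp function R(x) = max(x, 0), an antiderivative of the Heaviside step."""
    return np.maximum(x, 0.0)

x = np.linspace(-2.0, 2.0, 401)
h = x[1] - x[0]
dR = np.gradient(ramp(x), h)   # numerical derivative of the ramp
# dR is ~0 for x < 0 and ~1 for x > 0, with an intermediate value of ~0.5 straddling x = 0,
# consistent with the ramp's derivative being the Heaviside step (away from the origin).
print(dR[0], dR[-1])
```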
Fourier transform
The Fourier transform of the Heaviside step function is a distribution. Using one choice of constants for the definition of the Fourier transform we have
Here is the distribution that takes a test function to the Cauchy principal value of . The limit appearing in the integral is also taken in the sense of (tempered) distributions.
Unilateral Laplace transform
The Laplace transform of the Heaviside step function is a meromorphic function. Using the unilateral Laplace transform we have:
When the bilateral transform is used, the integral can be split in two parts and the result will be the same.
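A crude numerical sanity check of the 1/s result, truncating the integral at a finite upper limit T (an assumption of this sketch); for real s > 0 the truncation error is of order exp(-sT)/s.

```python
import numpy as np

def laplace_of_step(s, T=50.0, n=200_000):
    """Approximate the unilateral Laplace transform of H(t): the integral of exp(-s t) over [0, T]."""
    t, dt = np.linspace(0.0, T, n, retstep=True)
    return np.sum(np.exp(-s * t)) * dt

for s in (0.5, 1.0, 2.0):
    print(s, laplace_of_step(s), 1.0 / s)   # the two columns agree to a few decimal places
```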
| Mathematics | Specific functions | null |
87352 | https://en.wikipedia.org/wiki/Graph%20of%20a%20function | Graph of a function | In mathematics, the graph of a function is the set of ordered pairs , where In the common case where and are real numbers, these pairs are Cartesian coordinates of points in a plane and often form a curve.
The graphical representation of the graph of a function is also known as a plot.
In the case of functions of two variables – that is, functions whose domain consists of pairs –, the graph usually refers to the set of ordered triples where . This is a subset of three-dimensional space; for a continuous real-valued function of two real variables, its graph forms a surface, which can be visualized as a surface plot.
In science, engineering, technology, finance, and other areas, graphs are tools used for many purposes. In the simplest case one variable is plotted as a function of another, typically using rectangular axes; see Plot (graphics) for details.
A graph of a function is a special case of a relation.
In the modern foundations of mathematics, and, typically, in set theory, a function is actually equal to its graph. However, it is often useful to see functions as mappings, which consist not only of the relation between input and output, but also of which set is the domain and which set is the codomain. For example, to say that a function is onto (surjective) or not, the codomain should be taken into account. The graph of a function on its own does not determine the codomain. It is common to use both terms, function and graph of a function, since even if considered the same object, they indicate viewing it from different perspectives.
Definition
Given a function from a set (the domain) to a set (the codomain), the graph of the function is the set
which is a subset of the Cartesian product . In the definition of a function in terms of set theory, it is common to identify a function with its graph, although, formally, a function is formed by the triple consisting of its domain, its codomain and its graph.
Examples
Functions of one variable
The graph of the function defined by
is the subset of the set
From the graph, the domain is recovered as the set of the first components of the pairs in the graph.
Similarly, the range can be recovered as .
The codomain , however, cannot be determined from the graph alone.
The graph of the cubic polynomial on the real line
is
If this set is plotted on a Cartesian plane, the result is a curve (see figure).
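For a finite domain the graph can be written down literally as a set of ordered pairs, which also shows why the domain and the range, but not the codomain, can be read off from it. The cubic used here is only an illustrative choice, not necessarily the one in the figure.

```python
def graph(f, domain):
    """The graph of f restricted to a finite domain: the set of pairs (x, f(x))."""
    return {(x, f(x)) for x in domain}

G = graph(lambda x: x**3 - 9*x, range(-3, 4))
print(sorted(G))

# The domain and the range (image) are recoverable from the graph alone;
# the intended codomain is not.
domain = {x for x, _ in G}
image = {y for _, y in G}
print(domain, image)
```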
Functions of two variables
The graph of the trigonometric function
is
If this set is plotted on a three dimensional Cartesian coordinate system, the result is a surface (see figure).
Oftentimes it is helpful to show, along with the graph, the gradient of the function and several level curves. The level curves can be mapped onto the function surface or projected onto the bottom plane. The second figure shows such a drawing of the graph of the function:
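A surface plot with level curves projected onto the bottom plane can be produced as in the following sketch; the function, grid, and Matplotlib settings are illustrative and are not taken from the article's figures.

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection on older Matplotlib)

x = np.linspace(-2.0, 2.0, 200)
y = np.linspace(-2.0, 2.0, 200)
X, Y = np.meshgrid(x, y)
Z = np.sin(X**2) * np.cos(Y**2)        # an example function of two variables

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, cmap="viridis", alpha=0.8)             # the surface z = f(x, y)
ax.contour(X, Y, Z, zdir="z", offset=Z.min(), cmap="viridis")   # level curves projected below
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("f(x, y)")
plt.show()
```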
| Mathematics | Basics | null |
87372 | https://en.wikipedia.org/wiki/Additive%20function | Additive function | In number theory, an additive function is an arithmetic function f(n) of the positive integer variable n such that whenever a and b are coprime, the function applied to the product ab is the sum of the values of the function applied to a and b: f(ab) = f(a) + f(b).
Completely additive
An additive function f(n) is said to be completely additive if f(ab) = f(a) + f(b) holds for all positive integers a and b, even when they are not coprime. Totally additive is also used in this sense, by analogy with totally multiplicative functions. If f is a completely additive function then f(1) = 0.
Every completely additive function is additive, but not vice versa.
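The distinction can be checked computationally with the "big omega" and "little omega" functions defined in the examples below: Ω is completely additive, while ω is additive only when the arguments are coprime. The sketch assumes SymPy is available; factorint(n) returns the prime factorization as a dictionary of prime: exponent pairs.

```python
from math import gcd
from sympy import factorint

def big_omega(n):     # Ω(n): prime factors counted with multiplicity (completely additive)
    return sum(factorint(n).values())

def small_omega(n):   # ω(n): number of distinct prime factors (additive, not completely additive)
    return len(factorint(n))

a, b = 12, 18                                                  # not coprime (gcd = 6)
print(big_omega(a * b) == big_omega(a) + big_omega(b))         # True: completely additive
print(small_omega(a * b) == small_omega(a) + small_omega(b))   # False: a and b share the primes 2 and 3

c, d = 20, 27                                                  # coprime
assert gcd(c, d) == 1
print(small_omega(c * d) == small_omega(c) + small_omega(d))   # True: additive on coprime arguments
```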
Examples
Examples of arithmetic functions which are completely additive are:
The restriction of the logarithmic function to
The multiplicity of a prime factor p in n, that is, the largest exponent m for which p^m divides n.
a0(n) – the sum of primes dividing n counting multiplicity, sometimes called sopfr(n), the potency of n or the integer logarithm of n. For example:
a0(4) = 2 + 2 = 4
a0(20) = a0(2^2 · 5) = 2 + 2 + 5 = 9
a0(27) = 3 + 3 + 3 = 9
a0(144) = a0(2^4 · 3^2) = a0(2^4) + a0(3^2) = 8 + 6 = 14
a0(2000) = a0(2^4 · 5^3) = a0(2^4) + a0(5^3) = 8 + 15 = 23
a0(2003) = 2003
a0(54,032,858,972,279) = 1240658
a0(54,032,858,972,302) = 1780417
a0(20,802,650,704,327,415) = 1240681
The function Ω(n), defined as the total number of prime factors of n, counting multiple factors multiple times, sometimes called the "Big Omega function". For example:
Ω(1) = 0, since 1 has no prime factors
Ω(4) = 2
Ω(16) = Ω(2·2·2·2) = 4
Ω(20) = Ω(2·2·5) = 3
Ω(27) = Ω(3·3·3) = 3
Ω(144) = Ω(2^4 · 3^2) = Ω(2^4) + Ω(3^2) = 4 + 2 = 6
Ω(2000) = Ω(2^4 · 5^3) = Ω(2^4) + Ω(5^3) = 4 + 3 = 7
Ω(2001) = 3
Ω(2002) = 4
Ω(2003) = 1
Ω(54,032,858,972,279) = Ω(11 ⋅ 1993^2 ⋅ 1236661) = 4;
Ω(54,032,858,972,302) = Ω(2 ⋅ 7^2 ⋅ 149 ⋅ 2081 ⋅ 1778171) = 6
Ω(20,802,650,704,327,415) = Ω(5 ⋅ 7 ⋅ 11^2 ⋅ 1993^2 ⋅ 1236661) = 7.
Examples of arithmetic functions which are additive but not completely additive are:
ω(n), defined as the total number of distinct prime factors of n. For example:
ω(4) = 1
ω(16) = ω(2^4) = 1
ω(20) = ω(2^2 · 5) = 2
ω(27) = ω(3^3) = 1
ω(144) = ω(2^4 · 3^2) = ω(2^4) + ω(3^2) = 1 + 1 = 2
ω(2000) = ω(2^4 · 5^3) = ω(2^4) + ω(5^3) = 1 + 1 = 2
ω(2001) = 3
ω(2002) = 4
ω(2003) = 1
ω(54,032,858,972,279) = 3
ω(54,032,858,972,302) = 5
ω(20,802,650,704,327,415) = 5
a1(n) – the sum of the distinct primes dividing n, sometimes called sopf(n). For example:
a1(1) = 0
a1(4) = 2
a1(20) = 2 + 5 = 7
a1(27) = 3
a1(144) = a1(2^4 · 3^2) = a1(2^4) + a1(3^2) = 2 + 3 = 5
a1(2000) = a1(2^4 · 5^3) = a1(2^4) + a1(5^3) = 2 + 5 = 7
a1(2001) = 55
a1(2002) = 33
a1(2003) = 2003
a1(54,032,858,972,279) = 1238665
a1(54,032,858,972,302) = 1780410
a1(20,802,650,704,327,415) = 1238677
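Several of the values listed above can be reproduced directly from prime factorizations; the following sketch again assumes SymPy's factorint.

```python
from sympy import factorint

def a0(n):            # sopfr(n): sum of prime factors counted with multiplicity
    return sum(p * e for p, e in factorint(n).items())

def a1(n):            # sopf(n): sum of the distinct prime factors
    return sum(factorint(n))

def big_omega(n):     # Ω(n)
    return sum(factorint(n).values())

def small_omega(n):   # ω(n)
    return len(factorint(n))

for n in (20, 144, 2000, 2003):
    print(n, a0(n), big_omega(n), small_omega(n), a1(n))
# Expected rows, matching the lists above:
#   20   ->    9, 3, 2,    7
#   144  ->   14, 6, 2,    5
#   2000 ->   23, 7, 2,    7
#   2003 -> 2003, 1, 1, 2003
```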
Multiplicative functions
From any additive function f(n) it is possible to create a related multiplicative function g(n), that is, a function with the property that whenever a and b are coprime, g(ab) = g(a) g(b).
One such example is g(n) = 2^f(n). Likewise, if f(n) is completely additive, then g(n) = 2^f(n) is completely multiplicative. More generally, one could consider the function g(n) = c^f(n), where c is a nonzero real constant.
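A quick check of this construction with f = ω and c = 2, so that g(n) = 2^ω(n) (again assuming SymPy's factorint):

```python
from math import gcd
from sympy import factorint

def small_omega(n):        # additive: number of distinct prime factors
    return len(factorint(n))

def g(n):                  # multiplicative function built from the additive ω
    return 2 ** small_omega(n)

a, b = 20, 27              # coprime
assert gcd(a, b) == 1
assert g(a * b) == g(a) * g(b)   # g(540) = 8 = 4 * 2
print(g(a), g(b), g(a * b))
```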
Summatory functions
Given an additive function , let its summatory function be defined by . The average of is given exactly as
The summatory functions over can be expanded as where
The average of the function is also expressed by these functions as
There is always an absolute constant such that for all natural numbers ,
Let
Suppose that is an additive function with
such that as ,
Then where is the Gaussian distribution function
Examples of this result related to the prime omega function and the numbers of prime divisors of shifted primes include the following for fixed where the relations hold for :
| Mathematics | Functions: General | null |
87410 | https://en.wikipedia.org/wiki/Coral%20reef | Coral reef | A coral reef is an underwater ecosystem characterized by reef-building corals. Reefs are formed of colonies of coral polyps held together by calcium carbonate. Most coral reefs are built from stony corals, whose polyps cluster in groups.
Coral belongs to the class Anthozoa in the animal phylum Cnidaria, which includes sea anemones and jellyfish. Unlike sea anemones, corals secrete hard carbonate exoskeletons that support and protect the coral. Most reefs grow best in warm, shallow, clear, sunny and agitated water. Coral reefs first appeared 485 million years ago, at the dawn of the Early Ordovician, displacing the microbial and sponge reefs of the Cambrian.
Sometimes called rainforests of the sea, shallow coral reefs form some of Earth's most diverse ecosystems. They occupy less than 0.1% of the world's ocean area, about half the area of France, yet they provide a home for at least 25% of all marine species, including fish, mollusks, worms, crustaceans, echinoderms, sponges, tunicates and other cnidarians. Coral reefs flourish in ocean waters that provide few nutrients. They are most commonly found at shallow depths in tropical waters, but deep water and cold water coral reefs exist on smaller scales in other areas.
Shallow tropical coral reefs have declined by 50% since 1950, partly because they are sensitive to water conditions. They are under threat from excess nutrients (nitrogen and phosphorus), rising ocean heat content and acidification, overfishing (e.g., from blast fishing, cyanide fishing, spearfishing on scuba), sunscreen use, and harmful land-use practices, including runoff and seeps (e.g., from injection wells and cesspools).
Coral reefs deliver ecosystem services for tourism, fisheries and shoreline protection. The annual global economic value of coral reefs has been estimated at anywhere from US$30–375 billion (1997 and 2003 estimates) to US$2.7 trillion (a 2020 estimate) to US$9.9 trillion (a 2014 estimate).
Though the shallow water tropical coral reefs are best known, there are also deeper water reef-forming corals, which live in colder water and in temperate seas.
Formation
Most coral reefs were formed after the Last Glacial Period when melting ice caused sea level to rise and flood continental shelves. Most coral reefs are less than 10,000 years old. As communities established themselves, the reefs grew upwards, pacing rising sea levels. Reefs that rose too slowly could become drowned, without sufficient light. Coral reefs are also found in the deep sea away from continental shelves, around oceanic islands and atolls. The majority of these islands are volcanic in origin. Others have tectonic origins where plate movements lifted the deep ocean floor.
In The Structure and Distribution of Coral Reefs, Charles Darwin set out his theory of the formation of atoll reefs, an idea he conceived during the voyage of the Beagle. He theorized that uplift and subsidence of Earth's crust under the oceans formed the atolls. Darwin set out a sequence of three stages in atoll formation. A fringing reef forms around an extinct volcanic island as the island and ocean floor subside. As the subsidence continues, the fringing reef becomes a barrier reef and ultimately an atoll reef.
Darwin predicted that underneath each lagoon would be a bedrock base, the remains of the original volcano. Subsequent research supported this hypothesis. Darwin's theory followed from his understanding that coral polyps thrive in the tropics where the water is agitated, but can only live within a limited depth range, starting just below low tide. Where the level of the underlying earth allows, the corals grow around the coast to form fringing reefs, and can eventually grow to become a barrier reef.
Where the bottom is rising, fringing reefs can grow around the coast, but coral raised above sea level dies. If the land subsides slowly, the fringing reefs keep pace by growing upwards on a base of older, dead coral, forming a barrier reef enclosing a lagoon between the reef and the land. A barrier reef can encircle an island, and once the island sinks below sea level a roughly circular atoll of growing coral continues to keep up with the sea level, forming a central lagoon. Barrier reefs and atolls do not usually form complete circles but are broken in places by storms. Like sea level rise, a rapidly subsiding bottom can overwhelm coral growth, killing the coral and the reef, due to what is called coral drowning. Corals that rely on zooxanthellae can die when the water becomes too deep for their symbionts to adequately photosynthesize, due to decreased light exposure.
The two main variables determining the geomorphology, or shape, of coral reefs are the nature of the substrate on which they rest, and the history of the change in sea level relative to that substrate.
The approximately 20,000-year-old Great Barrier Reef offers an example of how coral reefs formed on continental shelves. Sea level was then lower than in the 21st century. As sea level rose, the water and the corals encroached on what had been hills of the Australian coastal plain. By 13,000 years ago, sea level had risen to lower than at present, and many hills of the coastal plains had become continental islands. As sea level rise continued, water topped most of the continental islands. The corals could then overgrow the hills, forming cays and reefs. Sea level on the Great Barrier Reef has not changed significantly in the last 6,000 years. The age of living reef structure is estimated to be between 6,000 and 8,000 years. Although the Great Barrier Reef formed along a continental shelf, and not around a volcanic island, Darwin's principles apply. Development stopped at the barrier reef stage, since Australia is not about to submerge. It formed the world's largest barrier reef, from shore, stretching for .
Healthy tropical coral reefs grow horizontally from per year, and grow vertically anywhere from per year; however, they grow only at depths shallower than because of their need for sunlight, and cannot grow above sea level.
Material
As the name implies, coral reefs are made up of coral skeletons from mostly intact coral colonies. As other chemical elements present in corals become incorporated into the calcium carbonate deposits, aragonite is formed. However, shell fragments and the remains of coralline algae such as the green-segmented genus Halimeda can add to the reef's ability to withstand damage from storms and other threats. Such mixtures are visible in structures such as Eniwetok Atoll.
In the geologic past
The times of maximum reef development were in the Middle Cambrian (513–501 Ma), Devonian (416–359 Ma) and Carboniferous (359–299 Ma), owing to extinct order Rugosa corals, and Late Cretaceous (100–66 Ma) and Neogene (23 Ma–present), owing to order Scleractinia corals.
Not all reefs in the past were formed by corals: those in the Early Cambrian (542–513 Ma) resulted from calcareous algae and archaeocyathids (small animals with conical shape, probably related to sponges) and in the Late Cretaceous (100–66 Ma), when reefs formed by a group of bivalves called rudists existed; one of the valves formed the main conical structure and the other, much smaller valve acted as a cap.
Measurements of the oxygen isotopic composition of the aragonitic skeleton of coral reefs, such as Porites, can indicate changes in sea surface temperature and sea surface salinity conditions during the growth of the coral. This technique is often used by climate scientists to infer a region's paleoclimate.
Types
Since Darwin's identification of the three classical reef formations – the fringing reef around a volcanic island becoming a barrier reef and then an atoll – scientists have identified further reef types. While some sources find only three, Thomas lists "Four major forms of large-scale coral reefs" – the fringing reef, barrier reef, atoll and table reef based on Stoddart, D.R. (1969). Spalding et al. list four main reef types that can be clearly illustrated – the fringing reef, barrier reef, atoll, and "bank or platform reef"—and notes that many other structures exist which do not conform easily to strict definitions, including the "patch reef".
Fringing reef
A fringing reef, also called a shore reef, is directly attached to a shore, or borders it with an intervening narrow, shallow channel or lagoon. It is the most common reef type. Fringing reefs follow coastlines and can extend for many kilometres. They are usually less than 100 metres wide, but some are hundreds of metres wide. Fringing reefs are initially formed on the shore at the low water level and expand seawards as they grow in size. The final width depends on where the sea bed begins to drop steeply. The surface of the fringe reef generally remains at the same height: just below the waterline. In older fringing reefs, whose outer regions pushed far out into the sea, the inner part is deepened by erosion and eventually forms a lagoon. Fringing reef lagoons can become over 100 metres wide and several metres deep. Like the fringing reef itself, they run parallel to the coast. The fringing reefs of the Red Sea are "some of the best developed in the world" and occur along all its shores except off sandy bays.
Barrier reef
Barrier reefs are separated from a mainland or island shore by a deep channel or lagoon. They resemble the later stages of a fringing reef with its lagoon but differ from the latter mainly in size and origin. Their lagoons can be several kilometres wide and 30 to 70 metres deep. Above all, the offshore outer reef edge formed in open water rather than next to a shoreline. Like an atoll, it is thought that these reefs are formed either as the seabed lowered or sea level rose. Formation takes considerably longer than for a fringing reef, thus barrier reefs are much rarer.
The best known and largest example of a barrier reef is the Australian Great Barrier Reef. Other major examples are the Mesoamerican Barrier Reef System and the New Caledonian Barrier Reef. Barrier reefs are also found on the coasts of Providencia, Mayotte, the Gambier Islands, on the southeast coast of Kalimantan, on parts of the coast of Sulawesi, southeastern New Guinea and the south coast of the Louisiade Archipelago.
Platform reef
Platform reefs, variously called bank or table reefs, can form on the continental shelf, as well as in the open ocean, in fact anywhere the seabed rises close enough to the surface of the ocean to enable the growth of zooxanthellate, reef-forming corals. Platform reefs are found in the southern Great Barrier Reef, the Swain and Capricorn Group on the continental shelf, about 100–200 km from the coast. Some platform reefs of the northern Mascarenes are several thousand kilometres from the mainland. Unlike fringing and barrier reefs, which extend only seaward, platform reefs grow in all directions. They are variable in size, ranging from a few hundred metres to many kilometres across. Their usual shape is oval to elongated. Parts of these reefs can reach the surface and form sandbanks and small islands around which fringing reefs may form. A lagoon may form in the middle of a platform reef.
Platform reefs are typically situated within atolls, where they adopt the name "patch reefs" and often span a diameter of just a few dozen meters. In instances where platform reefs develop along elongated structures, such as old and weathered barrier reefs, they tend to arrange themselves in a linear formation. This is the case, for example, on the east coast of the Red Sea near Jeddah. In old platform reefs, the inner part can be so heavily eroded that it forms a pseudo-atoll. These can be distinguished from real atolls only by detailed investigation, possibly including core drilling. Some platform reefs of the Laccadives are U-shaped, due to wind and water flow.
Atoll
An atoll or atoll reef is a more or less circular or continuous barrier reef that extends all the way around a lagoon without a central island. Atolls are usually formed from fringing reefs around volcanic islands. Over time, the island erodes away and sinks below sea level. Atolls may also be formed by the sinking of the seabed or rising of the sea level. A ring of reefs results, which encloses a lagoon. Atolls are numerous in the South Pacific, where they usually occur in mid-ocean, for example, in the Caroline Islands, the Cook Islands, French Polynesia, the Marshall Islands and Micronesia.
Atolls are found in the Indian Ocean, for example, in the Maldives, the Chagos Islands, the Seychelles and around Cocos Island. The entire Maldives consist of 26 atolls.
Other reef types or variants
Apron reef – short reef resembling a fringing reef, but more sloped; extending out and downward from a point or peninsular shore. The initial stage of a fringing reef.
Bank reef – isolated, flat-topped reef larger than a patch reef and usually on mid-shelf regions and linear or semi-circular in shape; a type of platform reef.
Patch reef – common, isolated, comparatively small reef outcrop, usually within a lagoon or embayment, often circular and surrounded by sand or seagrass. Can be considered as a type of platform reef or as features of fringing reefs, atolls and barrier reefs. The patches may be surrounded by a ring of reduced seagrass cover referred to as a grazing halo.
Ribbon reef – long, narrow, possibly winding reef, usually associated with an atoll lagoon. Also called a shelf-edge reef or sill reef.
Drying reef – a part of a reef which is above water at low tide but submerged at high tide
Habili – reef specific to the Red Sea; does not reach near enough to the surface to cause visible surf; may be a hazard to ships (from the Arabic for "unborn")
Microatoll – community of species of corals; vertical growth limited by average tidal height; growth morphologies offer a low-resolution record of patterns of sea level change; fossilized remains can be dated using radioactive carbon dating and have been used to reconstruct Holocene sea levels
Cays – small, low-elevation, sandy islands formed on the surface of coral reefs from eroded material that piles up, forming an area above sea level; can be stabilized by plants to become habitable; occur in tropical environments throughout the Pacific, Atlantic and Indian Oceans (including the Caribbean and on the Great Barrier Reef and Belize Barrier Reef), where they provide habitable and agricultural land
Seamount or guyot – formed when a coral reef on a volcanic island subsides; tops of seamounts are rounded and guyots are flat; flat tops of guyots, or tablemounts, are due to erosion by waves, winds, and atmospheric processes
Zones
Coral reef ecosystems contain distinct zones that host different kinds of habitats. Usually, three major zones are recognized: the fore reef, reef crest, and the back reef (frequently referred to as the reef lagoon).
The three zones are physically and ecologically interconnected. Reef life and oceanic processes create opportunities for the exchange of seawater, sediments, nutrients and marine life.
Most coral reefs exist in waters less than 50 m deep. Some inhabit tropical continental shelves where cool, nutrient-rich upwelling does not occur, such as the Great Barrier Reef. Others are found in the deep ocean surrounding islands or as atolls, such as in the Maldives. The reefs surrounding islands form when islands subside into the ocean, and atolls form when an island subsides below the surface of the sea.
Alternatively, Moyle and Cech distinguish six zones, though most reefs possess only some of the zones.
The reef surface is the shallowest part of the reef. It is subject to surge and tides. When waves pass over shallow areas, they shoal, as shown in the adjacent diagram. This means the water is often agitated. These are the precise conditions under which corals flourish. The light is sufficient for photosynthesis by the symbiotic zooxanthellae, and agitated water brings plankton to feed the coral.
The off-reef floor is the shallow sea floor surrounding a reef. This zone occurs next to reefs on continental shelves. Reefs around tropical islands and atolls drop abruptly to great depths and do not have such a floor. Usually sandy, the floor often supports seagrass meadows which are important foraging areas for reef fish.
The reef drop-off is, for its first 50 m, habitat for reef fish who find shelter on the cliff face and plankton in the water nearby. The drop-off zone applies mainly to the reefs surrounding oceanic islands and atolls.
The reef face is the zone above the reef floor or the reef drop-off. This zone is often the reef's most diverse area. Coral and calcareous algae provide complex habitats and areas that offer protection, such as cracks and crevices. Invertebrates and epiphytic algae provide much of the food for other organisms. A common feature on this forereef zone is spur and groove formations that serve to transport sediment downslope.
The reef flat is the sandy-bottomed flat, which can be behind the main reef, containing chunks of coral. This zone may border a lagoon and serve as a protective area, or it may lie between the reef and the shore, and in this case is a flat, rocky area. Fish tend to prefer it when it is present.
The reef lagoon is an entirely enclosed region, which creates an area less affected by wave action and often contains small reef patches.
However, the topography of coral reefs is constantly changing. Each reef is made up of irregular patches of algae, sessile invertebrates, and bare rock and sand. The size, shape and relative abundance of these patches change from year to year in response to the various factors that favor one type of patch over another. Growing coral, for example, produces constant change in the fine structure of reefs. On a larger scale, tropical storms may knock out large sections of reef and cause boulders on sandy areas to move.
Locations
Coral reefs are estimated to cover 284,300 km2 (109,800 sq mi), just under 0.1% of the oceans' surface area. The Indo-Pacific region (including the Red Sea, Indian Ocean, Southeast Asia and the Pacific) accounts for 91.9% of this total. Southeast Asia accounts for 32.3% of that figure, while the Pacific including Australia accounts for 40.8%. Atlantic and Caribbean coral reefs account for 7.6%.
Although corals exist both in temperate and tropical waters, shallow-water reefs form only in a zone extending from approximately 30° N to 30° S of the equator. Tropical corals do not grow at depths of over . The optimum temperature for most coral reefs is , and few reefs exist in waters below . When the net production by reef-building corals no longer keeps pace with relative sea level and the reef structure permanently drowns, a Darwin Point is reached. One such point exists at the northwestern end of the Hawaiian Archipelago; see Evolution of Hawaiian volcanoes#Coral atoll stage.
However, reefs in the Persian Gulf have adapted to temperatures of in winter and in summer. 37 species of scleractinian corals inhabit such an environment around Larak Island.
Deep-water coral inhabits greater depths and colder temperatures at much higher latitudes, as far north as Norway. Although deep water corals can form reefs, little is known about them.
The northernmost coral reef on Earth is located near Eilat, Israel. Coral reefs are rare along the west coasts of the Americas and Africa, due primarily to upwelling and strong cold coastal currents that reduce water temperatures in these areas (the Humboldt, Benguela, and Canary Currents, respectively). Corals are seldom found along the coastline of South Asia—from the eastern tip of India (Chennai) to the Bangladesh and Myanmar borders—as well as along the coasts of northeastern South America and Bangladesh, due to the freshwater release from the Amazon and Ganges Rivers respectively.
Significant coral reefs include:
The Great Barrier Reef—largest, comprising over 2,900 individual reefs and 900 islands stretching for over off Queensland, Australia
The Mesoamerican Barrier Reef System—second largest, stretching from Isla Contoy at the tip of the Yucatán Peninsula down to the Bay Islands of Honduras
The New Caledonia Barrier Reef—second longest double barrier reef, covering
The Andros, Bahamas Barrier Reef—third largest, following the east coast of Andros Island, Bahamas, between Andros and Nassau
The Red Sea—includes 6,000-year-old fringing reefs located along a coastline
The Florida Reef Tract—largest continental US reef and the third-largest coral barrier reef, extends from Soldier Key, located in Biscayne Bay, to the Dry Tortugas in the Gulf of Mexico
Blake Plateau has the world's largest known deep-water coral reef, comprising a 6.4 million acre reef that stretches from Miami to Charleston, S. C. Its discovery was announced in January 2024.
Pulley Ridge—deepest photosynthetic coral reef, Florida
Numerous reefs around the Maldives
The Philippines coral reef area, the second-largest in Southeast Asia, is estimated at 26,000 square kilometres. It hosts 915 reef fish species and more than 400 scleractinian coral species, 12 of which are endemic.
The Raja Ampat Islands in Indonesia's Southwest Papua province offer the highest known marine diversity.
Bermuda is known for its northernmost coral reef system, located at . The presence of coral reefs at this high latitude is due to the proximity of the Gulf Stream. Bermuda coral species represent a subset of those found in the greater Caribbean.
The world's northernmost individual coral reef is located in the Finlayson Channel, in the inside passage of British Columbia, Canada.
The world's southernmost coral reef is at Lord Howe Island, in the Pacific Ocean off the east coast of Australia.
Coral
When alive, corals are colonies of small animals embedded in calcium carbonate shells. Coral heads consist of accumulations of individual animals called polyps, arranged in diverse shapes. Polyps are usually tiny, but they can range in size from a pinhead to across.
Reef-building or hermatypic corals live only in the photic zone (above 70 m), the depth to which sufficient sunlight penetrates the water.
Zooxanthellae
Coral polyps do not photosynthesize, but have a symbiotic relationship with microscopic algae (dinoflagellates) of the genus Symbiodinium, commonly referred to as zooxanthellae. These organisms live within the polyps' tissues and provide organic nutrients that nourish the polyp in the form of glucose, glycerol and amino acids. Because of this relationship, coral reefs grow much faster in clear water, which admits more sunlight. Without their symbionts, coral growth would be too slow to form significant reef structures. Corals get up to 90% of their nutrients from their symbionts. In return, as an example of mutualism, the corals shelter the zooxanthellae, averaging one million for every cubic centimetre of coral, and provide a constant supply of the carbon dioxide they need for photosynthesis.
The varying pigments in different species of zooxanthellae give them an overall brown or golden-brown appearance and give brown corals their colors. Other pigments such as reds, blues, greens, etc. come from colored proteins made by the coral animals. Coral that loses a large fraction of its zooxanthellae becomes white (or sometimes pastel shades in corals that are pigmented with their own proteins) and is said to be bleached, a condition which, unless corrected, can kill the coral.
There are eight clades of Symbiodinium phylotypes. Most research has been conducted on clades A–D. Each clade contributes their own benefits as well as less compatible attributes to the survival of their coral hosts. Each photosynthetic organism has a specific level of sensitivity to photodamage to compounds needed for survival, such as proteins. Rates of regeneration and replication determine the organism's ability to survive. Phylotype A is found more in the shallow waters. It is able to produce mycosporine-like amino acids that are UV resistant, using a derivative of glycerin to absorb the UV radiation and allowing them to better adapt to warmer water temperatures. In the event of UV or thermal damage, if and when repair occurs, it will increase the likelihood of survival of the host and symbiont. This leads to the idea that, evolutionarily, clade A is more UV resistant and thermally resistant than the other clades.
Clades B and C are found more frequently in deeper water, which may explain their higher vulnerability to increased temperatures. Terrestrial plants that receive less sunlight because they are found in the undergrowth are analogous to clades B, C, and D. Since clades B through D are found at deeper depths, they require an elevated light absorption rate to be able to synthesize as much energy. With elevated absorption rates at UV wavelengths, these phylotypes are more prone to coral bleaching versus the shallow clade A.
Clade D has been observed to be high temperature-tolerant, and has a higher rate of survival than clades B and C during modern bleaching events.
Skeleton
Reefs grow as polyps and other organisms deposit calcium carbonate, the basis of coral, as a skeletal structure beneath and around themselves, pushing the coral head's top upwards and outwards. Waves, grazing fish (such as parrotfish), sea urchins, sponges and other forces and organisms act as bioeroders, breaking down coral skeletons into fragments that settle into spaces in the reef structure or form sandy bottoms in associated reef lagoons.
Typical shapes for coral species are named by their resemblance to terrestrial objects such as wrinkled brains, cabbages, table tops, antlers, wire strands and pillars. These shapes can depend on the life history of the coral, like light exposure and wave action, and events such as breakages.
Reproduction
Corals reproduce both sexually and asexually. An individual polyp uses both reproductive modes within its lifetime. Corals reproduce sexually by either internal or external fertilization. The reproductive cells are found on the mesenteries, membranes that radiate inward from the layer of tissue that lines the stomach cavity. Some mature adult corals are hermaphroditic; others are exclusively male or female. A few species change sex as they grow.
Internally fertilized eggs develop in the polyp for a period ranging from days to weeks. Subsequent development produces a tiny larva, known as a planula. Externally fertilized eggs develop during synchronized spawning. Polyps across a reef simultaneously release eggs and sperm into the water en masse. Spawn disperse over a large area. The timing of spawning depends on time of year, water temperature, and tidal and lunar cycles. Spawning is most successful given little variation between high and low tide. The less water movement, the better the chance for fertilization. The release of eggs or planula usually occurs at night and is sometimes in phase with the lunar cycle (three to six days after a full moon).
The period from release to settlement lasts only a few days, but some planulae can survive afloat for several weeks. During this process, the larvae may use several different cues to find a suitable location for settlement. At long distances sounds from existing reefs are likely important, while at short distances chemical compounds become important. The larvae are vulnerable to predation and environmental conditions. The lucky few planulae that successfully attach to substrate then compete for food and space.
Gallery of reef-building corals
Other reef builders
Corals are the most prodigious reef-builders. However many other organisms living in the reef community contribute skeletal calcium carbonate in the same manner as corals. These include coralline algae, some sponges and bivalves. Reefs are always built by the combined efforts of these different phyla, with different organisms leading reef-building in different geological periods.
Coralline algae
Coralline algae are important contributors to reef structure. Although their mineral deposition rates are much slower than corals, they are more tolerant of rough wave-action, and so help to create a protective crust over those parts of the reef subjected to the greatest forces by waves, such as the reef front facing the open ocean. They also strengthen the reef structure by depositing limestone in sheets over the reef surface.
Sponges
"Sclerosponge" is the descriptive name for all Porifera that build reefs. In the early Cambrian period, Archaeocyatha sponges were the world's first reef-building organisms, and sponges were the only reef-builders until the Ordovician. Sclerosponges still assist corals building modern reefs, but like coralline algae are much slower-growing than corals and their contribution is (usually) minor.
In the northern Pacific Ocean cloud sponges still create deep-water mineral-structures without corals, although the structures are not recognizable from the surface like tropical reefs. They are the only extant organisms known to build reef-like structures in cold water.
Bivalves
Oyster reefs are dense aggregations of oysters living in colonial communities. Other regionally-specific names for these structures include oyster beds and oyster banks. Oyster larvae require a hard substrate or surface to attach on, which includes the shells of old or dead oysters. Thus reefs can build up over time as new larvae settle on older individuals. Crassostrea virginica were once abundant in Chesapeake Bay and shorelines bordering the Atlantic coastal plain until the late nineteenth century. Ostrea angasi is a species of flat oyster that had also formed large reefs in South Australia.
Hippuritida, an extinct order of bivalves known as rudists, were major reef-building organisms during the Cretaceous. By the mid-Cretaceous, rudists became the dominant tropical reef-builders, becoming more numerous than scleractinian corals. During this period, ocean temperatures and salinity levels—which corals are sensitive to—were higher than they are today, which may have contributed to the success of rudist reefs.
Gastropods
Some gastropods, like family Vermetidae, are sessile and cement themselves to the substrate, contributing to the reef building.
Darwin's paradox
In The Structure and Distribution of Coral Reefs, published in 1842, Darwin described how coral reefs were found in some tropical areas but not others, with no obvious cause. The largest and strongest corals grew in parts of the reef exposed to the most violent surf and corals were weakened or absent where loose sediment accumulated.
Tropical waters contain few nutrients yet a coral reef can flourish like an "oasis in the desert". This has given rise to the ecosystem conundrum, sometimes called "Darwin's paradox": "How can such high production flourish in such nutrient poor conditions?"
Coral reefs support over one-quarter of all marine species. This diversity results in complex food webs, with large predator fish eating smaller forage fish that eat yet smaller zooplankton and so on. However, all food webs eventually depend on plants, which are the primary producers. Coral reefs typically produce 5–10 grams of carbon per square meter per day (gC·m−2·day−1) biomass.
One reason for the unusual clarity of tropical waters is their deficiency in nutrients and drifting plankton. Further, the sun shines year-round in the tropics, warming the surface layer, making it less dense than subsurface layers. The warmer water is separated from deeper, cooler water by a stable thermocline, where the temperature makes a rapid change. This keeps the warm surface waters floating above the cooler deeper waters. In most parts of the ocean, there is little exchange between these layers. Organisms that die in aquatic environments generally sink to the bottom, where they decompose, which releases nutrients in the form of nitrogen (N), phosphorus (P) and potassium (K). These nutrients are necessary for plant growth, but in the tropics, they do not directly return to the surface.
Plants form the base of the food chain and need sunlight and nutrients to grow. In the ocean, these plants are mainly microscopic phytoplankton which drift in the water column. They need sunlight for photosynthesis, which powers carbon fixation, so they are found only relatively near the surface, but they also need nutrients. Phytoplankton rapidly use nutrients in the surface waters, and in the tropics, these nutrients are not usually replaced because of the thermocline.
Explanations
Around coral reefs, lagoons fill in with material eroded from the reef and the island. They become havens for marine life, providing protection from waves and storms.
Most importantly, reefs recycle nutrients, which happens much less in the open ocean. In coral reefs and lagoons, producers include phytoplankton, as well as seaweed and coralline algae, especially small types called turf algae, which pass nutrients to corals. The phytoplankton form the base of the food chain and are eaten by fish and crustaceans. Recycling reduces the nutrient inputs needed overall to support the community.
Corals also absorb nutrients, including inorganic nitrogen and phosphorus, directly from water. Many corals extend their tentacles at night to catch zooplankton that pass near. Zooplankton provide the polyp with nitrogen, and the polyp shares some of the nitrogen with the zooxanthellae, which also require this element.
Sponges live in crevices in the reefs. They are efficient filter feeders, and in the Red Sea they consume about 60% of the phytoplankton that drifts by. Sponges eventually excrete nutrients in a form that corals can use.
The roughness of coral surfaces is key to coral survival in agitated waters. Normally, a boundary layer of still water surrounds a submerged object, which acts as a barrier. Waves breaking on the extremely rough edges of corals disrupt the boundary layer, allowing the corals access to passing nutrients. Turbulent water thereby promotes reef growth. Without the access to nutrients brought by rough coral surfaces, even the most effective recycling would not suffice.
Deep nutrient-rich water entering coral reefs through isolated events may have significant effects on temperature and nutrient systems. This water movement disrupts the relatively stable thermocline that usually exists between warm shallow water and deeper colder water. Temperature regimes on coral reefs in the Bahamas and Florida are highly variable with temporal scales of minutes to seasons and spatial scales across depths.
Water can pass through coral reefs in various ways, including current rings, surface waves, internal waves and tidal changes. Movement is generally created by tides and wind. As tides interact with varying bathymetry and wind mixes with surface water, internal waves are created. An internal wave is a gravity wave that moves along density stratification within the ocean. When a water parcel encounters a different density it oscillates and creates internal waves. While internal waves generally have a lower frequency than surface waves, they often form as a single wave that breaks into multiple waves as it hits a slope and moves upward. This vertical breakup of internal waves causes significant diapycnal mixing and turbulence. Internal waves can act as nutrient pumps, bringing plankton and cool nutrient-rich water to the surface.
The irregular structure characteristic of coral reef bathymetry may enhance mixing and produce pockets of cooler water and variable nutrient content. Arrival of cool, nutrient-rich water from depths due to internal waves and tidal bores has been linked to growth rates of suspension feeders and benthic algae as well as plankton and larval organisms. The seaweed Codium isthmocladum reacts to deep water nutrient sources because their tissues have different concentrations of nutrients dependent upon depth. Aggregations of eggs, larval organisms and plankton on reefs respond to deep water intrusions. Similarly, as internal waves and bores move vertically, surface-dwelling larval organisms are carried toward the shore. This has significant biological importance to cascading effects of food chains in coral reef ecosystems and may provide yet another key to unlocking the paradox.
Cyanobacteria provide soluble nitrates via nitrogen fixation.
Coral reefs often depend on surrounding habitats, such as seagrass meadows and mangrove forests, for nutrients. Seagrass and mangroves supply dead plants and animals that are rich in nitrogen and serve to feed fish and animals from the reef by supplying wood and vegetation. Reefs, in turn, protect mangroves and seagrass from waves and produce sediment in which the mangroves and seagrass can root.
Biodiversity
Coral reefs form some of the world's most productive ecosystems, providing complex and varied marine habitats that support a wide range of organisms. Fringing reefs just below low tide level have a mutually beneficial relationship with mangrove forests at high tide level and sea grass meadows in between: the reefs protect the mangroves and seagrass from strong currents and waves that would damage them or erode the sediments in which they are rooted, while the mangroves and sea grass protect the coral from large influxes of silt, fresh water and pollutants. This level of variety in the environment benefits many coral reef animals, which, for example, may feed in the sea grass and use the reefs for protection or breeding.
Reefs are home to a variety of animals, including fish, seabirds, sponges, cnidarians (which includes some types of corals and jellyfish), worms, crustaceans (including shrimp, cleaner shrimp, spiny lobsters and crabs), mollusks (including cephalopods), echinoderms (including starfish, sea urchins and sea cucumbers), sea squirts, sea turtles and sea snakes. Aside from humans, mammals are rare on coral reefs, with visiting cetaceans such as dolphins the main exception. A few species feed directly on corals, while others graze on algae on the reef. Reef biomass is positively related to species diversity.
The same hideouts in a reef may be regularly inhabited by different species at different times of day. Nighttime predators such as cardinalfish and squirrelfish hide during the day, while damselfish, surgeonfish, triggerfish, wrasses and parrotfish hide from eels and sharks.
The great number and diversity of hiding places in coral reefs, i.e. refuges, are the most important factor causing the great diversity and high biomass of the organisms in coral reefs.
Coral reefs also have a very high degree of microorganism diversity compared to other environments.
Algae
Reefs are chronically at risk of algal encroachment. Overfishing and excess nutrient supply from onshore can enable algae to outcompete and kill the coral. Increased nutrient levels can be a result of sewage or chemical fertilizer runoff. Runoff can carry nitrogen and phosphorus which promote excess algae growth. Algae can sometimes out-compete the coral for space. The algae can then smother the coral by decreasing the oxygen supply available to the reef. Decreased oxygen levels can slow down calcification rates, weakening the coral and leaving it more susceptible to disease and degradation. Algae inhabit a large percentage of surveyed coral locations. The algal population consists of turf algae, coralline algae and macro algae. Some sea urchins (such as Diadema antillarum) eat these algae and could thus decrease the risk of algal encroachment.
Sponges
Sponges are essential to the functioning of the coral reef system. Algae and corals in coral reefs produce organic material, which is filtered through sponges that convert it into small particles that are in turn absorbed by algae and corals. Although sponges are essential to the reef system, they are quite different from corals: corals are complex, many-celled animals, whereas sponges are very simple organisms with no true tissues. The two are alike in being immobile aquatic invertebrates, but are otherwise completely different.
Types of sponges
There are several different species of sea sponge. They come in multiple shapes and sizes and all have unique characteristics. Some types of sea sponge include the tube sponge, vase sponge, yellow sponge, bright red tree sponge, painted tunicate sponge, and sea squirt sponge.
Medicinal qualities of sea sponges
Sea sponges have provided the basis for many life-saving medications. Scientists began to study them in the 1940s and, after a few years, discovered that sea sponges contain compounds that can stop viral infections. The first drug developed from sea sponges was released in 1969.
Fish
Over 4,000 species of fish inhabit coral reefs. The reasons for this diversity remain unclear. Hypotheses include the "lottery", in which the first (lucky winner) recruit to a territory is typically able to defend it against latecomers, "competition", in which adults compete for territory, and less-competitive species must be able to survive in poorer habitat, and "predation", in which population size is a function of postsettlement piscivore mortality. Healthy reefs can produce up to 35 tons of fish per square kilometre each year, but damaged reefs produce much less.
Invertebrates
Sea urchins, Dotidae and sea slugs eat seaweed. Some species of sea urchins, such as Diadema antillarum, can play a pivotal part in preventing algae from overrunning reefs. Researchers are investigating the use of native collector urchins, Tripneustes gratilla, for their potential as biocontrol agents to mitigate the spread of invasive algae species on coral reefs. Nudibranchia and sea anemones eat sponges.
A number of invertebrates, collectively called "cryptofauna", inhabit the coral skeletal substrate itself, either boring into the skeletons (through the process of bioerosion) or living in pre-existing voids and crevices. Animals boring into the rock include sponges, bivalve mollusks, and sipunculans. Those settling on the reef include many other species, particularly crustaceans and polychaete worms.
Seabirds
Coral reef systems provide important habitats for seabird species, some endangered. For example, Midway Atoll in Hawaii supports nearly three million seabirds, including two-thirds (1.5 million) of the global population of Laysan albatross, and one-third of the global population of black-footed albatross. Each seabird species has specific sites on the atoll where they nest. Altogether, 17 species of seabirds live on Midway. The short-tailed albatross is the rarest, with fewer than 2,200 surviving after excessive feather hunting in the late 19th century.
Other
Sea snakes feed exclusively on fish and their eggs. Marine birds, such as herons, gannets, pelicans and boobies, feed on reef fish. Some land-based reptiles intermittently associate with reefs, such as monitor lizards, the marine crocodile and semiaquatic snakes, such as Laticauda colubrina. Sea turtles, particularly hawksbill sea turtles, feed on sponges.
Ecosystem services
Coral reefs deliver ecosystem services to tourism, fisheries and coastline protection. The global economic value of coral reefs has been estimated to be between US$29.8 billion and $375 billion per year. About 500 million people benefit from ecosystem services provided by coral reefs.
The economic cost over a 25-year period of destroying one square kilometre of coral reef has been estimated to be somewhere between $137,000 and $1,200,000.
To improve the management of coastal coral reefs, the World Resources Institute (WRI) developed and published tools for calculating the value of coral reef-related tourism, shoreline protection and fisheries, partnering with five Caribbean countries. As of April 2011, published working papers covered St. Lucia, Tobago, Belize, and the Dominican Republic. The WRI was "making sure that the study results support improved coastal policies and management planning". The Belize study estimated the value of reef and mangrove services at $395–559 million annually.
Bermuda's coral reefs provide economic benefits to the Island worth on average $722 million per year, based on six key ecosystem services, according to Sarkis et al (2010).
Shoreline protection
Coral reefs protect shorelines by absorbing wave energy, and many small islands would not exist without reefs. Coral reefs can reduce wave energy by 97%, helping to prevent loss of life and property damage. Coastlines protected by coral reefs are also more stable against erosion than those without. Reefs can attenuate waves as well as or better than artificial structures designed for coastal defence, such as breakwaters. An estimated 197 million people who live both below 10 m elevation and within 50 km of a reef may therefore receive risk-reduction benefits from reefs. Restoring reefs is significantly cheaper than building artificial breakwaters in tropical environments. Without the topmost meter of reefs, expected damages from flooding would double and costs from frequent storms would triple. For 100-year storm events, flood damages would increase by 91%, to US$272 billion, without the top meter.
Fisheries
About six million tons of fish are taken each year from coral reefs. Well-managed reefs have an average annual yield of 15 tons of seafood per square kilometre. Southeast Asia's coral reef fisheries alone yield about $2.4 billion annually from seafood.
Threats
Since their emergence 485 million years ago, coral reefs have faced many threats, including disease, predation, invasive species, bioerosion by grazing fish, algal blooms, and geologic hazards. Recent human activities present new threats. From 2009 to 2018, coral reefs worldwide declined 14%.
Human activities that threaten coral include coral mining, bottom trawling, and the digging of canals and accesses into islands and bays, all of which can damage marine ecosystems if not done sustainably. Other localized threats include blast fishing, overfishing, coral overmining, and marine pollution, including use of the banned anti-fouling biocide tributyltin; although absent in developed countries, these activities continue in places with few environmental protections or poor regulatory enforcement. Chemicals in sunscreens may awaken latent viral infections in zooxanthellae and impact reproduction. However, concentrating tourism activities via offshore platforms has been shown to limit the spread of coral disease by tourists.
Greenhouse gas emissions present a broader threat through sea temperature rise and sea level rise, resulting in widespread coral bleaching and loss of coral cover. Climate change also causes more frequent and more severe storms and alters ocean circulation patterns, which can destroy coral reefs. Ocean acidification also affects corals by decreasing calcification rates and increasing dissolution rates, although corals can adapt their calcifying fluids to changes in seawater pH and carbonate levels to mitigate the impact. Volcanic and human-made aerosol pollution can modulate regional sea surface temperatures.
In 2011, two researchers suggested that "extant marine invertebrates face the same synergistic effects of multiple stressors" that occurred during the end-Permian extinction, and that genera "with poorly buffered respiratory physiology and calcareous shells", such as corals, were particularly vulnerable.
Corals respond to stress by "bleaching", or expelling their colorful zooxanthellate endosymbionts. Corals with Clade C zooxanthellae are generally vulnerable to heat-induced bleaching, whereas corals with the hardier Clade A or D are generally resistant, as are tougher coral genera like Porites and Montipora.
Every 4–7 years, an El Niño event causes some reefs with heat-sensitive corals to bleach, with especially widespread bleaching in 1998 and 2010. However, reefs that experience a severe bleaching event become resistant to future heat-induced bleaching, due to rapid directional selection. Similar rapid adaptation may protect coral reefs from global warming.
A large-scale systematic study of the Jarvis Island coral community, which experienced ten El Niño-coincident coral bleaching events from 1960 to 2016, found that the reef recovered from almost complete death after severe events.
Protection
Marine protected areas (MPAs) are areas designated because they provide various kinds of protection to ocean and/or estuarine areas. They are intended to promote responsible fishery management and habitat protection. MPAs can also encompass social and biological objectives, including reef restoration, aesthetics, biodiversity and economic benefits.
The effectiveness of MPAs is still debated. For example, a study investigating the success of a small number of MPAs in Indonesia, the Philippines and Papua New Guinea found no significant differences between the MPAs and unprotected sites. Furthermore, in some cases they can generate local conflict, due to a lack of community participation, clashing views of the government and fisheries, the effectiveness of the area and funding. In some situations, as in the Phoenix Islands Protected Area, MPAs provide revenue to locals, at a level similar to the income they would have generated without controls. Overall, it appears that MPAs can provide protection to local coral reefs, but clear management and sufficient funds are required.
The Caribbean Coral Reefs – Status Report 1970–2012 states that coral decline may be reduced or even reversed. For this, overfishing needs to be stopped, especially fishing of species key to coral reefs, such as parrotfish. Direct human pressure on coral reefs should also be reduced and the inflow of sewage should be minimised. Measures to achieve this could include restricting coastal settlement, development and tourism. The report shows that healthier reefs in the Caribbean are those with large, healthy populations of parrotfish. These occur in countries that protect parrotfish and other species, like sea urchins, and that often ban fish trapping and spearfishing. Together these measures help create "resilient reefs".
Protecting networks of diverse and healthy reefs, not only climate refugia, helps ensure the greatest chance of genetic diversity, which is critical for coral to adapt to new climates. A variety of conservation methods applied across marine and terrestrial threatened ecosystems makes coral adaptation more likely and effective.
Designating a reef as a biosphere reserve, marine park, national monument or world heritage site can offer protections. For example, Belize's barrier reef, Sian Ka'an, the Galapagos islands, Great Barrier Reef, Henderson Island, Palau and Papahānaumokuākea Marine National Monument are world heritage sites.
In Australia, the Great Barrier Reef is protected by the Great Barrier Reef Marine Park Authority, and is the subject of much legislation, including a biodiversity action plan. Australia compiled a Coral Reef Resilience Action Plan. This plan consists of adaptive management strategies, including reducing carbon footprint. A public awareness plan provides education on the "rainforests of the sea" and how people can reduce carbon emissions.
Inhabitants of Ahus Island, Manus Province, Papua New Guinea, have followed a generations-old practice of restricting fishing in six areas of their reef lagoon. Their cultural traditions allow line fishing, but no net or spear fishing. Both biomass and individual fish sizes are significantly larger than in places where fishing is unrestricted.
Increased levels of atmospheric CO2 contribute to ocean acidification, which in turn damages coral reefs. To help combat ocean acidification, several countries have put laws in place to reduce greenhouse gases such as carbon dioxide. Many land use laws aim to reduce CO2 emissions by limiting deforestation. Deforestation can release significant amounts of CO2 absent sequestration via active follow-up forestry programs. Deforestation can also cause erosion, which flows into the ocean, contributing to ocean acidification. Incentives are used to reduce miles traveled by vehicles, which reduces carbon emissions into the atmosphere, thereby reducing the amount of dissolved CO2 in the ocean. State and federal governments also regulate land activities that affect coastal erosion. High-end satellite technology can monitor reef conditions.
The United States Clean Water Act puts pressure on state governments to monitor and limit run-off of polluted water.
Restoration
Coral reef restoration has grown in prominence over the past several decades because of the unprecedented reef die-offs around the planet. Coral stressors can include pollution, warming ocean temperatures, extreme weather events, and overfishing. With the deterioration of global reefs, fish nurseries, biodiversity, coastal development and livelihoods, and natural beauty are under threat. In response, researchers developed the field of coral restoration in the 1970s and 1980s.
Coral farming
Coral aquaculture, also known as coral farming or coral gardening, is showing promise as a potentially effective tool for restoring coral reefs. The "gardening" process bypasses the early growth stages of corals, when they are most at risk of dying. Coral seeds are grown in nurseries, then replanted on the reef. Coral is farmed by coral farmers whose interests range from reef conservation to increased income. Because the process is straightforward and there is substantial evidence that the technique has a significant effect on coral reef growth, coral nurseries have become the most widespread and arguably the most effective method for coral restoration.
Coral gardens take advantage of a coral's natural ability to fragment and continue growing if the fragments are able to anchor themselves onto new substrates. This method was first tested by Baruch Rinkevich in 1995, with success at the time. By today's standards, coral farming has grown into a variety of different forms, but still has the same goal of cultivating corals. Consequently, coral farming quickly replaced previously used transplantation methods, that is, the physical moving of sections or whole colonies of corals into a new area. Transplantation has seen success in the past, and decades of experiments have led to high success and survival rates. However, this method still requires the removal of corals from existing reefs. Given the current state of reefs, this kind of method should generally be avoided if possible. Saving healthy corals from eroding substrates or from reefs that are doomed to collapse could be a major advantage of utilizing transplantation.
Coral gardens generally take the same form wherever they are set up. The process begins with the establishment of a nursery where operators can observe and care for coral fragments. Nurseries should be established in areas that maximize growth and minimize mortality. Floating offshore coral trees or even aquariums are possible locations where corals can grow. After a location has been determined, collection and cultivation can occur.
The major benefit of using coral farms is that they lower polyp and juvenile mortality rates. By removing predators and recruitment obstacles, corals are able to mature without much hindrance. However, nurseries cannot stop climate stressors: warming temperatures or hurricanes can still disrupt or even kill nursery corals.
Technology is becoming more popular in the coral farming process. Teams from the Reef Restoration and Adaptation Program (RRAP) have trialled coral counting technology utilizing a prototype robotic camera. The camera uses computer vision and learning algorithms to detect and count individual coral babies and track their growth and health in real time. This technology, with research led by QUT, is intended to be used during annual coral spawning events and will provide researchers with control that is not currently possible when mass-producing corals.
Creating substrates
Efforts to expand the size and number of coral reefs generally involve supplying substrate to allow more corals to find a home. Substrate materials include discarded vehicle tires, scuttled ships, subway cars and formed concrete, such as reef balls. Reefs also grow unaided on marine structures such as oil rigs. In large restoration projects, propagated hermatypic coral on substrate can be secured with metal pins, superglue or milliput. Needle and thread can also attach ahermatypic coral to substrate.
Biorock is a substrate produced by a patented process that runs low voltage electrical currents through seawater to cause dissolved minerals to precipitate onto steel structures. The resultant white carbonate (aragonite) is the same mineral that makes up natural coral reefs. Corals rapidly colonize and grow at accelerated rates on these coated structures. The electrical currents also accelerate the formation and growth of both chemical limestone rock and the skeletons of corals and other shell-bearing organisms, such as oysters. The vicinity of the anode and cathode provides a high-pH environment which inhibits the growth of competitive filamentous and fleshy algae. The increased growth rates fully depend on the accretion activity. Under the influence of the electric field, corals display an increased growth rate, size and density.
Simply having many structures on the ocean floor is not enough to form coral reefs. Restoration projects must consider the complexity of the substrates they are creating for future reefs. In 2013, researchers conducted an experiment near Ticao Island in the Philippines in which several substrates of varying complexity were laid in nearby degraded reefs. Large-complexity plots had human-made substrates of both smooth and rough rocks with a surrounding fence, medium-complexity plots had only the human-made substrates, and small-complexity plots had neither the fence nor the substrates. After one month, researchers found a positive correlation between structure complexity and recruitment rates of larvae. The medium complexity performed the best, with larvae favoring rough rocks over smooth rocks. One year into the study, researchers visited the site and found that many of the sites were able to support local fisheries. They concluded that reef restoration can be done cost-effectively and will yield long-term benefits, provided the reefs are protected and maintained.
Relocation
One case study with coral reef restoration was conducted on the island of Oahu in Hawaii. The University of Hawaii operates a Coral Reef Assessment and Monitoring Program to help relocate and restore coral reefs in Hawaii. A boat channel from the island of Oahu to the Hawaii Institute of Marine Biology on Coconut Island was overcrowded with coral reefs. Many areas of coral reef patches in the channel had been damaged from past dredging in the channel.
Dredging covers corals with sand. Coral larvae cannot settle on sand; they can only build on existing reefs or compatible hard surfaces, such as rock or concrete. Because of this, the university decided to relocate some of the coral. They transplanted them with the help of United States Army divers, to a site relatively close to the channel. They observed little if any damage to any of the colonies during transport and no mortality of coral reefs was observed on the transplant site. While attaching the coral to the transplant site, they found that coral placed on hard rock grew well, including on the wires that attached the corals to the site.
No environmental effects were seen from the transplantation process, recreational activities were not decreased, and no scenic areas were affected.
As an alternative to transplanting coral themselves, juvenile fish can also be encouraged to relocate to existing coral reefs by auditory stimulation. In damaged sections of the Great Barrier Reef, loudspeakers playing recordings of healthy reef environments were found to attract fish twice as often as equivalent patches where no sound was played, and also increased species biodiversity by 50%.
Heat-tolerant symbionts
Another possibility for coral restoration is gene therapy: inoculating coral with genetically modified bacteria, or with naturally occurring heat-tolerant varieties of coral symbionts, may make it possible to grow corals that are more resistant to climate change and other threats. Warming oceans are forcing corals to adapt to unprecedented temperatures. Those that do not have a tolerance for the elevated temperatures experience coral bleaching and eventually mortality. Research is already underway to create genetically modified corals that can withstand a warming ocean. Madeleine J. H. van Oppen, James K. Oliver, Hollie M. Putnam, and Ruth D. Gates described four different methods, with gradually increasing levels of human intervention, for genetically modifying corals. These methods focus on altering the genetics of the zooxanthellae within coral rather than of the coral itself.
The first method is to induce acclimatization in the first generation of corals. The idea is that when adult and offspring corals are exposed to stressors, the zooxanthellae will gain a mutation. This method relies mostly on the chance that the zooxanthellae will acquire the specific trait that allows them to better survive in warmer waters. The second method focuses on identifying which kinds of zooxanthellae are within the coral and configuring how much of each zooxanthella lives within the coral at a given age. Use of zooxanthellae from the previous method would only boost success rates for this method. However, this method would currently only be applicable to younger corals, because previous experiments in manipulating zooxanthellae communities at later life stages have all failed. The third method focuses on selective breeding tactics: once selected, corals would be reared and exposed to simulated stressors in a laboratory. The last method is to genetically modify the zooxanthellae themselves. When preferred mutations are acquired, the genetically modified zooxanthellae are introduced into an aposymbiotic polyp and a new coral is produced. This method is the most laborious of the four, but researchers believe it should be utilized more and holds the most promise in genetic engineering for coral restoration.
Invasive algae
Hawaiian coral reefs smothered by the spread of invasive algae were managed with a two-pronged approach: divers manually removed invasive algae, with the support of super-sucker barges, and grazing pressure on invasive algae was increased to prevent regrowth. Researchers found that native collector urchins were reasonable candidate grazers for algae biocontrol, to extirpate the remaining invasive algae from the reef.
Invasive algae in Caribbean reefs
Macroalgae, better known as seaweed, have the potential to cause reef collapse because they can outcompete many coral species. Macroalgae can overgrow corals, shade them, block recruitment, release biochemicals that can hinder spawning, and potentially foster bacteria harmful to corals. Historically, algae growth was controlled by herbivorous fish and sea urchins; parrotfish are a prime example of reef caretakers. Consequently, these grazers can be considered keystone species for reef environments because of their role in protecting reefs.
Before the 1980s, Jamaica's reefs were thriving and well cared for; however, this changed after Hurricane Allen struck in 1980 and an unknown disease spread across the Caribbean. In the wake of these events, massive damage was caused to both the reefs and the sea urchin population across Jamaica's reefs and into the Caribbean Sea. As little as 2% of the original sea urchin population survived the disease. Primary macroalgae succeeded the destroyed reefs, and eventually larger, more resilient macroalgae took its place as the dominant organism. Parrotfish and other herbivorous fish were few in number because of decades of overfishing and bycatch at the time. Historically, the Jamaican coast had 90% coral cover, which was reduced to 5% in the 1990s. Eventually, corals were able to recover in areas where sea urchin populations were increasing. Sea urchins were able to feed and multiply and clear off substrates, leaving areas for coral polyps to anchor and mature. However, sea urchin populations are still not recovering as fast as researchers predicted, despite being highly fecund. It is unknown whether the mysterious disease is still present and preventing sea urchin populations from rebounding. Regardless, these areas are slowly recovering with the aid of sea urchin grazing. This event supports an early restoration idea of cultivating and releasing sea urchins onto reefs to prevent algal overgrowth.
Microfragmentation and fusion
In 2014, Christopher Page, Erinn Muller, and David Vaughan from the International Center for Coral Reef Research & Restoration at Mote Marine Laboratory in Summerland Key, Florida, developed a new technology called "microfragmentation", in which they use a specialized diamond band saw to cut corals into 1 cm2 fragments instead of 6 cm2 to advance the growth of brain, boulder, and star corals. The corals Orbicella faveolata and Montastraea cavernosa were outplanted off Florida's shores in several microfragment arrays. After two years, O. faveolata had grown to 6.5 times its original size, while M. cavernosa had grown to nearly twice its size. Under conventional means, both corals would have required decades to reach the same size. It is suspected that if predation events had not occurred near the beginning of the experiment, O. faveolata would have grown to at least ten times its original size. Using this method, Mote Marine Laboratory generated 25,000 corals within a single year, subsequently transplanting 10,000 of them into the Florida Keys. Shortly after, they discovered that these microfragments fused with other microfragments from the same parent coral; typically, corals that are not from the same parent fight and kill nearby corals in an attempt to survive and expand. This technique is known as "fusion" and has been shown to grow coral heads in just two years instead of the typical 25–75 years. After fusion occurs, the reef acts as a single organism rather than several independent reefs. Currently, there has been no published research into this method.
| Physical sciences | Oceanic and coastal landforms | null |
87492 | https://en.wikipedia.org/wiki/Oryza | Oryza | Oryza is a genus of plants in the grass family. It includes the major food crop rice (species Oryza sativa and Oryza glaberrima). Members of the genus grow as tall, wetland grasses, growing to tall; the genus includes both annual and perennial species.
Oryza is situated in tribe Oryzeae, which is characterized morphologically by its single-flowered spikelets whose glumes are almost completely suppressed. In Oryza, two sterile lemma simulate glumes. The tribe Oryzeae is in subfamily Ehrhartoideae, a group of Poaceae tribes with certain features of internal leaf anatomy in common. The most distinctive leaf characteristics of this subfamily are the arm cells and fusoid cells found in their leaves.
One species, Asian rice (O. sativa), provides 20% of global grain and is a food crop of major global importance. The species are divided into two subgroups within the genus.
Species
Inside the genus Oryza, species can be divided by their genome types. They include diploid species (2n = 24), comprising the cultivated rices and their close relatives, as well as tetraploid species (4n = 48). Species of the same genome type cross easily, while hybridizing different types requires techniques like embryo rescue.
Over 300 names have been proposed for species, subspecies, and other infraspecific taxa within the genus. Published sources disagree as to how many of these should be recognized as distinct species. The following list follows the World Checklist maintained by Kew Gardens in London.
Oryza australiensis (EE) – Australia
Oryza barthii (AA) – tropical Africa
Oryza brachyantha (FF) – tropical Africa
Oryza coarctata (KKLL) – India, Pakistan, Bangladesh, Myanmar
Oryza eichingeri (CC) – tropical Africa, Sri Lanka
Oryza glaberrima (AA) – African rice – tropical Africa
Oryza grandiglumis (CCDD) – Brazil, Venezuela, Fr Guiana, Colombia, Peru, Bolivia
Oryza latifolia (CCDD) – Latin America + West Indies from Sinaloa + Cuba to Argentina
Oryza longiglumis (HHJJ) – New Guinea
Oryza longistaminata (AA) – Madagascar, tropical + southern Africa
Oryza meyeriana (GG) – China, Indian Subcontinent, Southeast Asia
Oryza minuta (BBCC) – Himalayas, Southeast Asia, New Guinea, Northern Territory of Australia
Oryza neocaledonica (GG) – New Caledonia
Oryza officinalis (CC) – China, Indian Subcontinent, Southeast Asia, New Guinea, Australia
Oryza punctata (BB) – Madagascar, tropical + southern Africa
Oryza ridleyi (HHJJ) – Southeast Asia, New Guinea
Oryza rufipogon (AA) – brownbeard or red rice – China, Indian Subcontinent, Southeast Asia, New Guinea, Australia
Oryza sativa (AA) – Asian rice – China, Indian Subcontinent, Japan, Southeast Asia; naturalized many places
Oryza schlechteri (HHKK) – New Guinea
Formerly included
Many species are now regarded as better suited to other genera:
Echinochloa
Leersia
Maltebrunia
Potamophila
Prosphytochloa
Rhynchoryza
| Biology and health sciences | Poales | Plants |
87598 | https://en.wikipedia.org/wiki/Date%20palm | Date palm | Phoenix dactylifera, commonly known as the date palm, is a flowering-plant species in the palm family Arecaceae, cultivated for its edible sweet fruit called dates. The species is widely cultivated across northern Africa, the Middle East, the Horn of Africa, Australia, South Asia, and California. It is naturalized in many tropical and subtropical regions worldwide. P. dactylifera is the type species of genus Phoenix, which contains 12–19 species of wild date palms.
Date palms reach up to in height, growing singly or forming a clump with several stems from a single root system. Slow-growing, they can reach over 100 years of age when maintained properly. Date fruits (dates) are oval-cylindrical, long, and about in diameter, with colour ranging from dark brown to bright red or yellow, depending on variety. Containing 61–68 percent sugar by mass when dried, dates are consumed as sweet snacks on their own or with confections.
There is archaeological evidence of date cultivation in Arabia from the 6th millennium BCE. Dates are "emblematic of oasis agriculture and highly symbolic in Muslim, Christian, and Jewish religions".
Description
Date palms reach up to in height, growing singly or forming a clump with several stems from a single root system. Slow-growing, they can reach over 100 years of age when maintained properly. The roots have pneumatodes. The leaves are long, with spines on the petiole, and pinnate, with about 150 leaflets. The leaflets are long and wide. The full span of the crown ranges from .
The date palm is dioecious, having separate male and female plants. They can be easily grown from seed, but only 50% of seedlings will be female and hence fruit-bearing, and dates from seedling plants are often smaller and of poorer quality. Most commercial plantations thus use cuttings of heavily cropping cultivars. Plants grown from cuttings will fruit 2–3 years earlier than seedling plants.
Dates are naturally wind-pollinated, but in traditional oasis horticulture and modern commercial orchards, they are entirely hand-pollinated. Natural pollination occurs with about an equal number of male and female plants. With assistance, one male can pollinate up to 100 females. Since the males are of value only as pollinators, they are usually pruned in favor of fruit-producing female plants. Some growers do not maintain male plants, as male flowers become available at local markets at pollination time. Manual pollination is done by skilled labourers on ladders, or by use of a wind machine. In some areas, such as Iraq, the pollinator climbs the tree using a special climbing tool that wraps around the tree trunk and the climber's back (called in Arabic) to keep him attached to the trunk while climbing.
Date fruits are oval-cylindrical, long, and diameter, and when ripe, range from bright red to bright yellow in colour, depending on variety. Dates contain a single stone (seed) about long and thick. Three main cultivar groups exist: soft (e.g., Medjool); semi-dry (e.g., Deglet Noor), and dry (e.g., Thoory).
Genome
A draft genome of P. dactylifera (Khalas variety) was published in 2011 followed by more complete genome assemblies in 2013 and 2019. The later study used long-read sequencing technology. With the release of this improved genome assembly, the researchers were able to map genes for fruit color and sugar content. The NYU Abu Dhabi researchers had also re-sequenced the genomes of several date varieties to develop the first single nucleotide polymorphism map of the date palm genome in 2015.
Etymology
The species name dactylifera 'date-bearing' is Latin, and is formed with the loanword in Latin from Greek (), which means 'date' (also 'finger'), and with the native Latin , which means 'to bear'. The fruit is known as a date. The fruit's English name (through Old French, through Latin) comes from the Greek word for 'finger', , because of the fruit's elongated shape.
Distribution
The place of origin of the date palm is uncertain because of long cultivation. According to some sources it probably originated from the Fertile Crescent region straddling Egypt and Mesopotamia while others state that they are native to the Persian Gulf area. Fossil records show that the date palm has existed for at least 50 million years.
Ecology
A major palm pest, the red palm weevil (Rhynchophorus ferrugineus), currently poses a significant threat to date production in parts of the Middle East as well as to iconic landscape specimens throughout the Mediterranean world. Another significant insect pest is Ommatissus lybicus, sometimes called the "dubas bug", whose sap sucking results in sooty mould formation.
In the 1920s, eleven healthy Medjool palms were transferred from Morocco to the United States where they were tended by members of the Chemehuevi tribe in a remote region of Nevada. Nine of these survived and in 1935, cultivars were transferred to the U.S. Date Garden in Indio, California. Eventually this stock was reintroduced to Africa and led to the U.S. production of dates in Yuma, Arizona and Bard, California.
Cultivation
Dates are a traditional crop throughout the Middle East and North Africa. Dates (especially Medjool and Deglet Nour) are also cultivated in the southwestern United States, and in Sonora and Baja California in Mexico.
Date palms can take 4 to 8 years after planting before they will bear fruit, and start producing viable yields for commercial harvest between 7 and 10 years. Mature date palms can produce of dates per harvest season. They do not all ripen at the same time so several harvests are required. To obtain fruit of marketable quality, the bunches of dates must be thinned and bagged or covered before ripening so that the remaining fruits grow larger and are protected from weather and animals, such as birds, that also like to eat them.
Date palms require well-drained deep sandy loam soils with a pH of 8–11 (alkaline). The soil should have the ability to hold moisture and also be free of calcium carbonate.
Agricultural history
Dates have been cultivated in the Middle East and the Indus Valley for thousands of years, and there is archaeological evidence of date cultivation in Mehrgarh, a Neolithic civilization in western Pakistan, around 7000 BCE and in eastern Arabia between 5530 and 5320 calBC. Dates have been cultivated since ancient times from Mesopotamia to prehistoric Egypt. The ancient Egyptians used the fruits to make date wine and ate dates at harvest. Evidence of cultivation is continually found throughout later civilizations in the Indus Valley, including the Harappan period from 2600 to 1900 BCE.
One cultivar, the Judean date palm, is renowned for its long-lived orthodox seed, which successfully sprouted after accidental storage for 2,000 years. In total seven seeds about 2000 years old have sprouted and turned into trees named Methuselah, Hannah, Adam, Judith, Boaz, Jonah and Uriel. The upper survival time limit of properly stored seeds remains unknown. A genomic study from New York University Abu Dhabi Center for Genomics and Systems Biology showed that domesticated date palm varieties from North Africa, including well-known varieties such as Medjool and Deglet Nour, share large parts of their genome with Middle East date palms and the Cretan wild palms, P. theophrasti, as well as Indian wild palms, Phoenix sylvestris.
An article on date palm tree cultivation is contained in Ibn al-'Awwam's 12th-century agricultural work, Book on Agriculture.
Cultivars
A large number of date cultivars and varieties emerged over the history of its cultivation, but the exact number is difficult to assess. Hussain and El-Zeid (1975) reported 400 varieties, while Nixon (1954) named around 250. Most of these are limited to a particular region, and only a few dozen have attained broader commercial importance. The most renowned cultivars worldwide include Deglet Noor, originally of Algeria; Zahidi and Hallawi of Iraq; Medjool of Morocco; and Mazafati of Iran.
Production
In 2022, world production of dates was 9.7 million tonnes, led by Egypt, Saudi Arabia, and Algeria accounting for 46% of the total (table).
Nutrition
Date palm fruits contain 21% water, 75% carbohydrates (63% sugars and 8% dietary fiber), 2% protein, and less than 1% fat (table). In a reference amount, dates supply of food energy, and are a rich source (20% or more of the Daily Value, DV) of potassium (22% DV) and a moderate source of pantothenic acid, vitamin B6, and the dietary minerals magnesium and manganese (10–19% DV), with other micronutrients in low amounts (table).
The primary carbohydrates are monosaccharides, comprising glucose (23–30%), fructose (19–28%), and non-starch polysaccharides (7–10%) of the fruit's total weight. The sucrose content is negligible.
The glycemic index (GI) for different varieties of the date palm fruit is in the range of 38–71, with 53 on average, indicating dates are a relatively low-GI food source. The glycemic load (GL) value of date palm fruits, calculated for a serving size of three fruits (weighing 27 grams), is 9 on average, indicating that dates have a low GL.
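As a rough consistency check (not stated in the source), the quoted load follows from the standard definition of glycemic load, assuming roughly 18 g of available carbohydrate in a 27 g serving (about two-thirds of its weight, per the composition figures above):

```latex
\mathrm{GL} \approx \frac{\mathrm{GI} \times \text{available carbohydrate (g)}}{100}
            \approx \frac{53 \times 18}{100} \approx 9.5
```

which is close to the quoted average of 9.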
Uses
Fruits
Dry or soft dates are eaten out-of-hand, or may be pitted and stuffed with fillings such as almonds, walnuts, pecans, candied orange and lemon peel, tahini, marzipan or cream cheese. Pitted dates are also referred to as stoned dates. Partially dried pitted dates may be glazed with glucose syrup for use as a snack food. Dates can also be chopped and used in a range of sweet and savory dishes, from tajines (tagines) in Morocco to puddings, ka'ak (types of Arab cookies) and other dessert items. Date nut bread, a type of cake, is very popular in the United States, especially around holidays. Dates are also processed into cubes, paste called 'ajwa, spread, date syrup or "honey" called "dibs" or rub in Libya, powder (date sugar), vinegar or alcohol. Vinegar made from dates was a traditional product of the Middle East. Recent innovations include chocolate-covered dates and products such as sparkling date juice, used in some Islamic countries as a non-alcoholic version of champagne, for special occasions and religious times such as Ramadan. When Muslims break fast in the evening meal of Ramadan, it is traditional to eat a date first.
Reflecting the maritime trading heritage of Britain, imported chopped dates are added to, or form the main basis of a variety of traditional dessert recipes including sticky toffee pudding, Christmas pudding and date and walnut loaf. They are particularly available to eat whole at Christmas time. Dates are one of the ingredients of HP Sauce, a popular British condiment.
In Southeast Spain (where a large date plantation exists including UNESCO-protected Palmeral of Elche) dates (usually pitted with fried almond) are served wrapped in bacon and shallow-fried. In Palestine date syrup, termed silan, is used while cooking chicken and also for sweets and desserts, and as a honey substitute. Dates are one of the ingredients of jallab, a Middle Eastern fruit syrup. In Pakistan, a viscous, thick syrup made from the ripe fruits is used as a coating for leather bags and pipes to prevent leaking.
Forks
In the past, sticky dates were served using specialized small forks having two metal tines, called daddelgaffel in Scandinavia. Some designs were patented. These have generally been replaced by an inexpensive pale-colored knobbled plastic fork that resembles a date branch, which is traditionally included with numerous brands of prepackaged trays of dates, though this practice has declined in response to increased use of resealable packaging and calls for fewer single-use plastics.
Seeds
Date seeds are soaked and ground up for animal feed. Their oil is suitable for use in cosmetics and dermatological applications. The oil contains lauric acid (36%) and oleic acid (41%). Date palm seeds contain 0.56–5.4% lauric acid. They can also be processed chemically as a source of oxalic acid. Date seeds are also ground and used in the manner of coffee beans, or as an additive to coffee. Experimental studies have shown that feeding mice with the aqueous extract of date pits exhibit anti-genotoxic effects and reduce DNA damage induced by N-nitroso-N-methylurea.
Fruit clusters
Stripped fruit clusters are used as brooms. Recently, the floral stalks have been found to be of ornamental value in households.
Sap
Apart from P. dactylifera, wild date palms such as Phoenix sylvestris and Phoenix reclinata, depending on the region, can be also tapped for sap.
The consumption of raw date palm sap is one of the means by which the deadly Nipah virus spreads from bats to humans. The virus can be inactivated by boiling the sap down to molasses. (In Malaysia, by contrast, the vector was found to be factory farming of pigs.)
Leaves
In North Africa, date palm leaves are commonly used for making huts. Mature leaves are also made into mats, screens, baskets, and fans. Processed leaves can be used for insulating board. Dried leaf petioles are a source of cellulose pulp, used for walking sticks, brooms, fishing floats, and fuel. Leaf sheaths are prized for their scent, and fibre from them is also used for rope, coarse cloth, and large hats.
Young date leaves are cooked and eaten as a vegetable, as is the terminal bud or heart, though its removal kills the palm. The finely ground seeds are mixed with flour to make bread in times of scarcity. The flowers of the date palm are also edible. Traditionally the female flowers are the most available for sale and weigh . The flower buds are used in salad or ground with dried fish to make a condiment for bread.
In culture
In Ancient Rome, the palm fronds used in triumphal processions to symbolize victory were most likely those of P. dactylifera. The date palm was a popular garden plant in Roman peristyle gardens, though it would not bear fruit in the more temperate climate of Italy. It is recognizable in frescoes from Pompeii and elsewhere in Italy, including a garden scene from the House of the Wedding of Alexander. In later times, traders spread dates around southwest Asia, northern Africa, and Spain. Dates were introduced into California by the Spaniards by 1769, existing by then around Mission San Diego de Alcalá, and were introduced to Mexico as early as the 16th century.
Dates are mentioned more than 50 times in the Bible and 20 times in the Quran. The date palm holds great significance in the Abrahamic religions. The tree was heavily cultivated as a food source in ancient Israel, where Judaism and subsequently Christianity developed. Date palm leaves are used for Palm Sunday in the Christian religion.
Many Jewish scholars believe that the "honey" reference in Exodus chapter 3 to "a land flowing with milk and honey" is actually a reference to date "honey", and not honey from bees. In the Torah, palm trees are referenced as symbols of prosperity and triumph. Psalm 92:12 states that "The righteous shall flourish like the palm tree." Palm branches occurred as iconography in sculpture ornamenting the Second Jewish Temple in Jerusalem, on Jewish coins, and in the sculpture of synagogues. They are also used as ornamentation in the Feast of the Tabernacles. Date palms are one of the seven species of native Israeli plants revered in Judaism. The date palm has historically been considered a symbol of Judea and the Jewish people. The leaves are used as a lulav in the Jewish holiday of Sukkot. They are also commonly used as the s'chach in the construction of a sukkah.
In the Quran, Allah instructs Maryām (the Virgin Mary) to eat dates during labour pains when she gives birth to Isa (Jesus). In Islamic culture, dates and yogurt or milk are traditionally the first foods consumed for Iftar after the sun has set during Ramadan.
In Mandaeism, the date palm (Mandaic: , which can refer to both the tree and its fruit) symbolizes the cosmic tree and is often associated with the cosmic wellspring (Mandaic: ). The date palm, associated with masculinity, and wellspring, associated with femininity, are often mentioned together as heavenly symbols in Mandaean texts.
| Biology and health sciences | Monocots | null |
87837 | https://en.wikipedia.org/wiki/Ratio | Ratio | In mathematics, a ratio () shows how many times one number contains another. For example, if there are eight oranges and six lemons in a bowl of fruit, then the ratio of oranges to lemons is eight to six (that is, 8:6, which is equivalent to the ratio 4:3). Similarly, the ratio of lemons to oranges is 6:8 (or 3:4) and the ratio of oranges to the total amount of fruit is 8:14 (or 4:7).
The numbers in a ratio may be quantities of any kind, such as counts of people or objects, or such as measurements of lengths, weights, time, etc. In most contexts, both numbers are restricted to be positive.
A ratio may be specified either by giving both constituting numbers, written as "a to b" or "a:b", or by giving just the value of their quotient. Equal quotients correspond to equal ratios.
A statement expressing the equality of two ratios is called a proportion.
Consequently, a ratio may be considered as an ordered pair of numbers, a fraction with the first number in the numerator and the second in the denominator, or as the value denoted by this fraction. Ratios of counts, given by (non-zero) natural numbers, are rational numbers, and may sometimes be natural numbers.
A more specific definition adopted in physical sciences (especially in metrology) for ratio is the dimensionless quotient between two physical quantities measured with the same unit. A quotient of two quantities that are measured with units may be called a rate.
Notation and terminology
The ratio of numbers A and B can be expressed as:
the ratio of A to B
A:B
A is to B (when followed by "as C is to D"; see below)
a fraction with A as numerator and B as denominator that represents the quotient (i.e., A divided by B). This can be expressed as a simple or a decimal fraction, or as a percentage, etc.
When a ratio is written in the form A:B, the two-dot character is sometimes the colon punctuation mark (U+003A), although Unicode also provides a dedicated ratio character, U+2236.
The numbers A and B are sometimes called terms of the ratio, with A being the antecedent and B being the consequent.
A statement expressing the equality of two ratios A:B and C:D is called a proportion, written as A:B = C:D or A:B∷C:D. This latter form, when spoken or written in the English language, is often expressed as
(A is to B) as (C is to D).
A, B, C and D are called the terms of the proportion. A and D are called its extremes, and B and C are called its means. The equality of three or more ratios, like A:B = C:D = E:F, is called a continued proportion.
Ratios are sometimes used with three or even more terms, e.g., the proportion for the edge lengths of a "two by four" that is ten inches long is therefore 2:4:10
(unplaned measurements; the first two numbers are reduced slightly when the wood is planed smooth)
a good concrete mix (in volume units) is sometimes quoted as 1:2:4 of cement, sand and gravel.
For a (rather dry) mixture of 4/1 parts in volume of cement to water, it could be said that the ratio of cement to water is 4:1, that there is 4 times as much cement as water, or that there is a quarter (1/4) as much water as cement.
The meaning of such a proportion of ratios with more than two terms is that the ratio of any two terms on the left-hand side is equal to the ratio of the corresponding two terms on the right-hand side.
History and etymology
It is possible to trace the origin of the word "ratio" to the Ancient Greek (logos). Early translators rendered this into Latin as ratio ("reason"; as in the word "rational"). A more modern interpretation of Euclid's meaning is closer to computation or reckoning. Medieval writers used the word proportio ("proportion") to indicate ratio and proportionalitas ("proportionality") for the equality of ratios.
Euclid collected the results appearing in the Elements from earlier sources. The Pythagoreans developed a theory of ratio and proportion as applied to numbers. The Pythagoreans' conception of number included only what would today be called rational numbers, casting doubt on the validity of the theory in geometry where, as the Pythagoreans also discovered, incommensurable ratios (corresponding to irrational numbers) exist. The discovery of a theory of ratios that does not assume commensurability is probably due to Eudoxus of Cnidus. The exposition of the theory of proportions that appears in Book VII of The Elements reflects the earlier theory of ratios of commensurables.
The existence of multiple theories seems unnecessarily complex today, since ratios are, to a large extent, identified with quotients and their prospective values. However, this is a comparatively recent development, as can be seen from the fact that modern geometry textbooks still use distinct terminology and notation for ratios and quotients. The reasons for this are twofold: first, there was the previously mentioned reluctance to accept irrational numbers as true numbers, and second, the lack of a widely used symbolism to replace the already established terminology of ratios delayed the full acceptance of fractions as an alternative until the 16th century.
Euclid's definitions
Book V of Euclid's Elements has 18 definitions, all of which relate to ratios. In addition, Euclid uses ideas that were in such common usage that he did not include definitions for them. The first two definitions say that a part of a quantity is another quantity that "measures" it and conversely, a multiple of a quantity is another quantity that it measures. In modern terminology, this means that a multiple of a quantity is that quantity multiplied by an integer greater than one—and a part of a quantity (meaning aliquot part) is a part that, when multiplied by an integer greater than one, gives the quantity.
Euclid does not define the term "measure" as used here. However, one may infer that if a quantity is taken as a unit of measurement, and a second quantity is given as an integral number of these units, then the first quantity measures the second. These definitions are repeated, nearly word for word, as definitions 3 and 5 in book VII.
Definition 3 describes what a ratio is in a general way. It is not rigorous in a mathematical sense and some have ascribed it to Euclid's editors rather than Euclid himself. Euclid defines a ratio as between two quantities of the same type, so by this definition the ratios of two lengths or of two areas are defined, but not the ratio of a length and an area. Definition 4 makes this more rigorous. It states that a ratio of two quantities exists, when there is a multiple of each that exceeds the other. In modern notation, a ratio exists between quantities p and q, if there exist integers m and n such that mp>q and nq>p. This condition is known as the Archimedes property.
Definition 5 is the most complex and difficult. It defines what it means for two ratios to be equal. Today, this can be done by simply stating that ratios are equal when the quotients of the terms are equal, but such a definition would have been meaningless to Euclid. In modern notation, Euclid's definition of equality is that given quantities p, q, r and s, p:q∷r:s if and only if, for any positive integers m and n, np<mq, np=mq, or np>mq according as nr<ms, nr=ms, or nr>ms, respectively. This definition has affinities with Dedekind cuts as, with n and q both positive, np stands to mq as stands to the rational number (dividing both terms by nq).
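As an illustrative sketch (not part of Euclid's text or this article), the criterion of Definition 5 can be checked mechanically for rational quantities; the function name eudoxus_equal and the finite bound on the multipliers m and n are hypothetical choices, since the definition itself quantifies over all positive integers.

```python
from fractions import Fraction
from itertools import product

def eudoxus_equal(p, q, r, s, bound=20):
    """Sample Euclid's Definition 5: for every pair of positive integers (m, n)
    up to `bound`, n*p must compare to m*q (less, equal, greater) exactly the
    way n*r compares to m*s."""
    for m, n in product(range(1, bound + 1), repeat=2):
        left = (n * p > m * q) - (n * p < m * q)    # -1, 0 or +1
        right = (n * r > m * s) - (n * r < m * s)
        if left != right:
            return False
    return True

# 4:6 and 2:3 pass the sampled test, matching the modern check 4*3 == 6*2.
print(eudoxus_equal(Fraction(4), Fraction(6), Fraction(2), Fraction(3)))  # True
print(eudoxus_equal(Fraction(4), Fraction(6), Fraction(3), Fraction(4)))  # False
```

For rational quantities the sampled criterion agrees with the familiar cross-multiplication test; Euclid's formulation, however, also covers incommensurable quantities.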
Definition 6 says that quantities that have the same ratio are proportional or in proportion. Euclid uses the Greek ἀναλόγον (analogon), this has the same root as λόγος and is related to the English word "analog".
Definition 7 defines what it means for one ratio to be less than or greater than another and is based on the ideas present in definition 5. In modern notation it says that given quantities p, q, r and s, p:q>r:s if there are positive integers m and n so that np>mq and nr≤ms.
As with definition 3, definition 8 is regarded by some as being a later insertion by Euclid's editors. It defines three terms p, q and r to be in proportion when p:q∷q:r. This is extended to four terms p, q, r and s as p:q∷q:r∷r:s, and so on. Sequences that have the property that the ratios of consecutive terms are equal are called geometric progressions. Definitions 9 and 10 apply this, saying that if p, q and r are in proportion then p:r is the duplicate ratio of p:q and if p, q, r and s are in proportion then p:s is the triplicate ratio of p:q.
Number of terms and use of fractions
In general, a comparison of the quantities of a two-entity ratio can be expressed as a fraction derived from the ratio. For example, in a ratio of 2:3, the amount, size, volume, or quantity of the first entity is 2/3 that of the second entity.
If there are 2 oranges and 3 apples, the ratio of oranges to apples is 2:3, and the ratio of oranges to the total number of pieces of fruit is 2:5. These ratios can also be expressed in fraction form: there are 2/3 as many oranges as apples, and 2/5 of the pieces of fruit are oranges. If orange juice concentrate is to be diluted with water in the ratio 1:4, then one part of concentrate is mixed with four parts of water, giving five parts total; the amount of orange juice concentrate is 1/4 the amount of water, while the amount of orange juice concentrate is 1/5 of the total liquid. In both ratios and fractions, it is important to be clear what is being compared to what, and beginners often make mistakes for this reason.
Fractions can also be inferred from ratios with more than two entities; however, a ratio with more than two entities cannot be completely converted into a single fraction, because a fraction can only compare two quantities. A separate fraction can be used to compare the quantities of any two of the entities covered by the ratio: for example, from a ratio of 2:3:7 we can infer that the quantity of the second entity is 3/7 that of the third entity.
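A brief sketch of these conversions using Python's fractions module (not from the source; the variable names simply mirror the fruit examples above):

```python
from fractions import Fraction

# Ratio of oranges to apples is 2:3.
oranges, apples = 2, 3
part_to_part = Fraction(oranges, apples)             # 2/3 as many oranges as apples
part_to_whole = Fraction(oranges, oranges + apples)  # 2/5 of the fruit is oranges

# From a three-entity ratio 2:3:7 only pairwise fractions can be formed.
a, b, c = 2, 3, 7
second_to_third = Fraction(b, c)                     # 3/7

print(part_to_part, part_to_whole, second_to_third)  # 2/3 2/5 3/7
```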
Proportions and percentage ratios
If we multiply all quantities involved in a ratio by the same number, the ratio remains valid. For example, a ratio of 3:2 is the same as 12:8. It is usual either to reduce terms to the lowest common denominator, or to express them in parts per hundred (percent).
If a mixture contains substances A, B, C and D in the ratio 5:9:4:2 then there are 5 parts of A for every 9 parts of B, 4 parts of C and 2 parts of D. As 5+9+4+2=20, the total mixture contains 5/20 of A (5 parts out of 20), 9/20 of B, 4/20 of C, and 2/20 of D. If we divide all numbers by the total and multiply by 100, we have converted to percentages: 25% A, 45% B, 20% C, and 10% D (equivalent to writing the ratio as 25:45:20:10).
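The same conversion can be written as a small helper (an illustrative sketch, not from the source; the name ratio_to_percentages is hypothetical):

```python
def ratio_to_percentages(parts):
    """Convert a ratio given as a list of parts into percentages of the whole."""
    total = sum(parts)
    return [100 * p / total for p in parts]

print(ratio_to_percentages([5, 9, 4, 2]))  # [25.0, 45.0, 20.0, 10.0]
```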
If the two or more ratio quantities encompass all of the quantities in a particular situation, it is said that "the whole" contains the sum of the parts: for example, a fruit basket containing two apples and three oranges and no other fruit is made up of two parts apples and three parts oranges. In this case, , or 40% of the whole is apples and , or 60% of the whole is oranges. This comparison of a specific quantity to "the whole" is called a proportion.
If the ratio consists of only two values, it can be represented as a fraction, in particular as a decimal fraction. For example, older televisions have a 4:3 aspect ratio, which means that the width is 4/3 of the height (this can also be expressed as 1.33:1 or just 1.33 rounded to two decimal places). More recent widescreen TVs have a 16:9 aspect ratio, or 1.78 rounded to two decimal places. One of the popular widescreen movie formats is 2.35:1 or simply 2.35. Representing ratios as decimal fractions simplifies their comparison. When comparing 1.33, 1.78 and 2.35, it is obvious which format offers the wider image. Such a comparison works only when the values being compared are consistent, such as always expressing width in relation to height.
Reduction
Ratios can be reduced (as fractions are) by dividing each quantity by the common factors of all the quantities. As for fractions, the simplest form is considered that in which the numbers in the ratio are the smallest possible integers.
Thus, the ratio 40:60 is equivalent in meaning to the ratio 2:3, the latter being obtained from the former by dividing both quantities by 20. Mathematically, we write 40:60 = 2:3, or equivalently 40:60∷2:3. The verbal equivalent is "40 is to 60 as 2 is to 3."
A ratio that has integers for both quantities and that cannot be reduced any further (using integers) is said to be in simplest form or lowest terms.
Sometimes it is useful to write a ratio in the form 1:x or x:1, where x is not necessarily an integer, to enable comparisons of different ratios. For example, the ratio 4:5 can be written as 1:1.25 (dividing both sides by 4). Alternatively, it can be written as 0.8:1 (dividing both sides by 5).
Where the context makes the meaning clear, a ratio in this form is sometimes written without the 1 and the ratio symbol (:), though, mathematically, this makes it a factor or multiplier.
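The reduction described above amounts to dividing every term by the greatest common divisor of the terms, as in this minimal sketch (not from the source; the helper name reduce_ratio is hypothetical):

```python
from math import gcd
from functools import reduce

def reduce_ratio(*terms):
    """Reduce an integer ratio to lowest terms by dividing out the GCD."""
    g = reduce(gcd, terms)
    return tuple(t // g for t in terms)

print(reduce_ratio(40, 60))      # (2, 3)
print(reduce_ratio(5, 9, 4, 2))  # (5, 9, 4, 2) -- already in lowest terms

# Expressing 4:5 in the forms 1:x and x:1 for easy comparison.
a, b = 4, 5
print(1, b / a)  # 1 1.25
print(a / b, 1)  # 0.8 1
```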
Irrational ratios
Ratios may also be established between incommensurable quantities (quantities whose ratio, as the value of a fraction, amounts to an irrational number). The earliest discovered example, found by the Pythagoreans, is the ratio of the length of the diagonal to the length of a side of a square, which is the square root of 2, formally √2:1. Another example is the ratio of a circle's circumference to its diameter, which is called π, and is not just an irrational number, but a transcendental number.
Also well known is the golden ratio of two (mostly) lengths a and b, which is defined by the proportion
a : b = (a + b) : a
or, equivalently
a/b = (a + b)/a = 1 + b/a.
Taking the ratios as fractions, and taking a/b as having the value x, yields the equation
x = 1 + 1/x
or
x² − x − 1 = 0,
which has the positive, irrational solution x = a/b = (1 + √5)/2.
Thus at least one of a and b has to be irrational for them to be in the golden ratio. An example of an occurrence of the golden ratio in math is as the limiting value of the ratio of two consecutive Fibonacci numbers: even though all these ratios are ratios of two integers and hence are rational, the limit of the sequence of these rational ratios is the irrational golden ratio.
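As an informal numerical check of the Fibonacci claim (not part of the source text), successive ratios can be computed directly:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2  # the golden ratio, about 1.6180339887
a, b = 1, 1
for _ in range(40):
    a, b = b, a + b      # step to the next pair of consecutive Fibonacci numbers
print(b / a, phi)        # both print roughly 1.618033988749895
```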
Similarly, the silver ratio of a and b is defined by the proportion
(2a + b) : a = a : b,
corresponding to
x = 2 + 1/x, that is, x² − 2x − 1 = 0.
This equation has the positive, irrational solution x = a/b = 1 + √2, so again at least one of the two quantities a and b in the silver ratio must be irrational.
Odds
Odds (as in gambling) are expressed as a ratio. For example, odds of "7 to 3 against" (7:3) mean that there are seven chances that the event will not happen to every three chances that it will happen. The probability of success is 30%. In every ten trials, there are expected to be three wins and seven losses.
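The arithmetic can be captured in a one-line helper (an illustrative sketch, not from the source; the function name is hypothetical):

```python
def probability_from_odds_against(against, for_):
    """Convert odds of '<against> to <for_> against' into a probability of success."""
    return for_ / (against + for_)

print(probability_from_odds_against(7, 3))  # 0.3, i.e. 30%
```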
Units
Ratios may be unitless, as when they relate quantities in units of the same dimension, even if their units of measurement are initially different.
For example, the ratio 1 minute : 40 seconds can be reduced by changing the first value to 60 seconds, so the ratio becomes 60 seconds : 40 seconds. Once the units are the same, they can be omitted, and the ratio can be reduced to 3:2.
On the other hand, there are non-dimensionless quotients, also known as rates (sometimes also as ratios).
In chemistry, mass concentration ratios are usually expressed as weight/volume fractions.
For example, a concentration of 3% w/v usually means 3 g of substance in every 100 mL of solution. This cannot be converted to a dimensionless ratio, as in weight/weight or volume/volume fractions.
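As a small worked example of the w/v convention (added in this rewrite for illustration; the function name and the 250 mL volume are arbitrary), using the 3 g per 100 mL reading given above:

def grams_of_solute(concentration_wv_percent, volume_ml):
    # A c% w/v solution contains c grams of solute in every 100 mL of solution.
    return concentration_wv_percent / 100 * volume_ml

print(grams_of_solute(3, 250))  # 7.5 g of substance in 250 mL of a 3% w/v solution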
Triangular coordinates
The locations of points relative to a triangle with vertices A, B, and C and sides AB, BC, and CA are often expressed in extended ratio form as triangular coordinates.
In barycentric coordinates, a point with coordinates α, β, γ is the point upon which a weightless sheet of metal in the shape and size of the triangle would exactly balance if weights were put on the vertices, with the ratio of the weights at A and B being α : β, the ratio of the weights at B and C being β : γ, and therefore the ratio of weights at A and C being α : γ.
In trilinear coordinates, a point with coordinates x:y:z has perpendicular distances to side BC (across from vertex A) and side CA (across from vertex B) in the ratio x:y, distances to side CA and side AB (across from C) in the ratio y:z, and therefore distances to sides BC and AB in the ratio x:z.
Since all information is expressed in terms of ratios (the individual numbers denoted by α, β, γ, x, y, and z have no meaning by themselves), a triangle analysis using barycentric or trilinear coordinates applies regardless of the size of the triangle.
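The scale-independence described above can be made concrete with a small Python sketch (an illustration added in this rewrite; the function name and the sample triangles are arbitrary):

def barycentric_to_cartesian(A, B, C, alpha, beta, gamma):
    # Weighted average of the vertices; only the ratios alpha : beta : gamma matter,
    # so the weights are normalised by their sum.
    total = alpha + beta + gamma
    return tuple((alpha * a + beta * b + gamma * c) / total
                 for a, b, c in zip(A, B, C))

# Equal weights balance the sheet at the centroid, whatever the size of the triangle.
print(barycentric_to_cartesian((0, 0), (6, 0), (0, 3), 1, 1, 1))    # (2.0, 1.0)
print(barycentric_to_cartesian((0, 0), (60, 0), (0, 30), 1, 1, 1))  # (20.0, 10.0)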
| Mathematics | Basics | null |
87851 | https://en.wikipedia.org/wiki/Stoat | Stoat | The stoat (Mustela erminea), also known as the Eurasian ermine or ermine, is a species of mustelid native to Eurasia and the northern regions of North America. Because of its wide circumpolar distribution, it is listed as Least Concern on the IUCN Red List.
The name ermine () is used especially in its pure white winter coat of the stoat or its fur. Ermine fur was used in the 15th century by Catholic monarchs, who sometimes used it as the mozzetta cape. It has long been used on the ceremonial robes of members of the United Kingdom House of Lords. It was also used in capes on images such as the Infant Jesus of Prague.
The stoat was introduced into New Zealand in the late 19th century to control rabbits, but had a devastating effect on native bird populations and was nominated as one of the world's top 100 "worst invaders".
Etymology
The root word for "stoat" is likely either the Dutch word ("bold") or the Gothic word (, "to push"). According to John Guillim, in his Display of Heraldrie, the word "ermine" is likely derived from Armenia, the nation where it was thought the species originated, though other authors have linked it to the Norman French from the Teutonic (Anglo-Saxon ). This seems to come from the Lithuanian word . In Ireland (where the least weasel does not occur), the stoat is referred to as a weasel, while in North America it is called a short-tailed weasel. A male stoat is called a dog, hob, or jack, while a female is called a jill. The collective noun for stoats is either gang or pack.
Taxonomy
Formerly considered a single species with a very wide circumpolar range, a 2021 study split M. erminea into three species: M. erminea sensu stricto (Eurasia and northern North America), M. richardsonii (most of North America), and M. haidarum (several islands off the Pacific Northwest coast).
Subspecies
Twenty-one subspecies are recognized.
Evolution
The stoat's direct ancestor was Mustela palerminea, a common carnivore in central and eastern Europe during the Middle Pleistocene, that spread to North America during the late Blancan or early Irvingtonian. The stoat is the product of a process that began 5–7 million years ago, when northern forests were replaced by open grassland, thus prompting an explosive evolution of small, burrowing rodents. The stoat's ancestors were larger than the current form, and underwent a reduction in size as they exploited the new food source. The stoat first arose in Eurasia, shortly after the long-tailed weasel, which is in a different genus (Neogale), arose as its mirror image in North America 2 million years ago. The stoat thrived during the Ice Age, as its small size and long body allowed it to easily operate beneath snow, as well as hunt in burrows. The stoat and the long-tailed weasel remained separated until 500,000 years ago, when falling sea levels exposed the Bering land bridge.
Fossilised stoat remains have been recovered from Denisova Cave. Combined phylogenetic analyses indicate the stoat's closest living relatives are the American ermine (M. richardsonii) and Haida ermine (M. haidarum), the latter of which partially descends from M. erminea. It is basal to most other members of Mustela, with only the yellow-bellied (M. kathiah), Malayan (M. nudipes), and back-striped (M. strigidorsa) weasels being more basal. The mountain weasel (Mustela altaica) was formerly considered its closest relative, although more recent analyses have found it to be significantly more derived. It was also previously thought to be allied with members of the genus Neogale such as the long-tailed weasel, but as those species have since been separated into a new genus, this is likely not the case.
Description
Build
The stoat is similar to the least weasel in general proportions, manner of posture, and movement, though the tail is relatively longer, always exceeding a third of the body length, while being shorter than that of the long-tailed weasel. The stoat has an elongated neck, the head being set exceptionally far in front of the shoulders. The trunk is nearly cylindrical, and does not bulge at the abdomen. The greatest circumference of the body is little more than half its length. The skull, although very similar to that of the least weasel, is relatively longer, with a narrower braincase. The projections of the skull and teeth are weakly developed, but stronger than those of the least weasel. The eyes are round, black and protrude slightly. The whiskers are brown or white in colour, and very long. The ears are short, rounded and lie almost flattened against the skull. The claws are not retractable, and are large in proportion to the digits. Each foot has five toes. The male stoat has a curved baculum with a proximal knob that increases in weight as it ages. Fat is deposited primarily along the spine and kidneys, then on gut mesenteries, under the limbs and around the shoulders. The stoat has four pairs of nipples, though they are visible only in females.
The dimensions of the stoat are variable, but not as significantly as the least weasel's. Unusually among the Carnivora, the size of stoats tends to decrease proportionally with latitude, in contradiction to Bergmann's rule. Sexual dimorphism in size is pronounced, with males being roughly 25% larger than females and 1.5–2.0 times their weight. On average, males measure in body length, while females measure . The tail measures in males and in females. In males, the hind foot measures , while in females it is . The height of the ear measures in males and in females. The skulls of males measure in length, while those of females measure . Males average in weight, while females weigh less than .
The stoat has large anal scent glands measuring in males and smaller in females. Scent glands are also present on the cheeks, belly and flanks. Epidermal secretions, which are deposited during body rubbing, are chemically distinct from the products of the anal scent glands, which contain a higher proportion of volatile chemicals. When attacked or being aggressive, the stoat secretes the contents of its anal glands, giving rise to a strong, musky odour produced by several sulphuric compounds. The odour is distinct from that of least weasels.
Fur
The winter fur is very dense and silky, but quite closely lying and short, while the summer fur is rougher, shorter and sparse. In summer, the fur is sandy-brown on the back and head and white below. The division between the dark back and the light belly is usually straight, though this trait is only present in 13.5% of Irish stoats. The stoat moults twice a year. In spring, the moult is slow, starting from the forehead, across the back, toward the belly. In autumn, the moult is quicker, progressing in the reverse direction. The moult, initiated by photoperiod, starts earlier in autumn and later in spring at higher latitudes. In the stoat's northern range, it adopts a completely white coat (save for the black tail-tip) during the winter period. Differences in the winter and summer coats are less apparent in southern forms of the species. In the species' southern range, the coat remains brown, but is denser and sometimes paler than in summer.
Distribution and habitat
The stoat has a circumboreal range throughout North America, Europe, and Asia. The stoat in Europe is found as far south as 41°N in Portugal, and inhabits most islands with the exception of Iceland, Svalbard, the Mediterranean islands and some small North Atlantic islands. In Japan, it is present from the central mountains (northern and central Japanese Alps) to the northern part of Honshu (primarily above 1,200 m) and Hokkaido. Its vertical range is from sea level to . In North America, it is found throughout Alaska and western Yukon to most of Arctic Canada east to Greenland. Throughout the rest of North America, as well as parts of Nunavut, including Baffin Island and some islands in southeast Alaska, it is replaced by M. richardsonii.
Introduction to New Zealand
Stoats were introduced into New Zealand during the late 19th century to control rabbits and hares, but are now a major threat to native bird populations. The introduction of stoats was opposed by scientists in New Zealand and Britain, including the New Zealand ornithologist Walter Buller. The warnings were ignored and stoats began to be introduced from Britain in the 1880s, resulting in a noticeable decline in bird populations within six years. Stoats are a serious threat to ground- and hole-nesting birds, since the latter have very few means of escaping predation. The highest rates of stoat predation occur after seasonal gluts in southern beechmast (beechnuts), which enable the reproduction of rodents on which stoats also feed, enabling stoats to increase their own numbers. For instance, the endangered South Island takahē's wild population dropped by a third between 2006 and 2007, after a stoat plague triggered by the 2005–06 mast wiped out more than half the takahē in untrapped areas.
Behaviour and ecology
Reproduction and development
In the Northern Hemisphere, mating occurs in the April–July period. In spring, the male's testes are enlarged, a process accompanied by an increase of testosterone concentration in the plasma. Spermatogenesis occurs in December, and the males are fertile from May to August, after which the testes regress. Female stoats are usually only in heat for a brief period, which is triggered by changes in day length. Copulation can last as long as 1 hour. Stoats are not monogamous, with litters often being of mixed paternity. Stoats undergo embryonic diapause, meaning that the embryo does not immediately implant in the uterus after fertilization, but rather lies dormant for a period of nine to ten months. The gestation period is therefore variable but typically around 300 days, and after mating in the summer, the offspring will not be born until the following spring – adult female stoats spend almost all their lives either pregnant or in heat. Females can reabsorb embryos and in the event of a severe winter they may reabsorb their entire litter. Males play no part in rearing the young, which are born blind, deaf, toothless and covered in fine white or pinkish down. The milk teeth erupt after three weeks, and solid food is eaten after four weeks. The eyes open after five to six weeks, with the black tail-tip appearing a week later. Lactation ends after 12 weeks. Prior to the age of five to seven weeks, kits have poor thermoregulation, so they huddle for warmth when the mother is absent. Males become sexually mature at 10–11 months, while females are sexually mature at the age of 2–3 weeks whilst still blind, deaf and hairless, and are usually mated with adult males before being weaned.
Territorial and sheltering behaviour
Stoat territoriality has a generally mustelid spacing pattern, with male territories encompassing smaller female territories, which they defend from other males. The size of the territory and the ranging behaviour of its occupants varies seasonally, depending on the abundance of food and mates. During the breeding season, the ranges of females remain unchanged, while males either become roamers, strayers or transients. Dominant older males have territories 50 times larger than those of younger, socially inferior males. Both sexes mark their territories with urine, feces and two types of scent marks; anal drags are meant to convey territorial occupancy, and body rubbing is associated with agonistic encounters.
The stoat does not dig its own burrows, instead using the burrows and nest chambers of the rodents it kills. The skins and underfur of rodent prey are used to line the nest chamber. The nest chamber is sometimes located in seemingly unsuitable places, such as among logs piled against the walls of houses. The stoat also inhabits old and rotting stumps, under tree roots, in heaps of brushwood, haystacks, in bog hummocks, in the cracks of vacant mud buildings, in rock piles, rock clefts, and even in magpie nests. Males and females typically live apart, but close to each other. Each stoat has several dens dispersed within its range. A single den has several galleries, mainly within of the surface.
Diet
As with the least weasel, mouse-like rodents predominate in the stoat's diet. It regularly preys on larger rodent and lagomorph species, and takes individuals far larger than itself. In Russia, its prey includes rodents and lagomorphs such as European water voles, common hamsters, pikas and others, which it overpowers in their burrows. Prey species of secondary importance include small birds, fish, and shrews and, more rarely, amphibians, lizards, and insects. It also preys on lemmings.
In Great Britain, European rabbits are an important food source, with the frequency with which stoats prey on them having increased between the 1960s and the mid-1990s, following the end of the myxomatosis epidemic. Typically, male stoats prey on rabbits more frequently than females do, which depend to a greater extent on smaller rodent species. British stoats rarely kill shrews, rats, squirrels and water voles, though rats may be an important food source locally. In Ireland, shrews and rats are frequently eaten. In mainland Europe, water voles make up a large portion of the stoat's diet. Hares are sometimes taken, but are usually young specimens. In New Zealand, the stoat feeds principally on birds, including the rare kiwi, kaka, mohua, yellow-crowned parakeet, and New Zealand dotterel. Cases are known of stoats preying on young muskrats. The stoat typically eats about of food a day, which is equivalent to 25% of the animal's live weight.
The stoat is an opportunistic predator that moves rapidly and checks every available burrow or crevice for food. Because of their larger size, male stoats are less successful than females in pursuing rodents far into tunnels. Stoats regularly climb trees to gain access to birds' nests, and are common raiders of nest boxes, particularly those of large species. The stoat reputedly mesmerises prey such as rabbits by a "dance" (sometimes called the weasel war dance), though this behaviour could be linked to Skrjabingylus infections. The stoat seeks to immobilize large prey such as rabbits with a bite to the spine at the back of the neck. The stoat may surplus kill when the opportunity arises, though excess prey is usually cached and eaten later to avoid obesity, as overweight stoats tend to be at a disadvantage when pursuing prey into their burrows. Small prey typically die instantly from a bite to the back of the neck, while larger prey, such as rabbits, typically die of shock, as the stoat's canine teeth are too short to reach the spinal column or major arteries.
Communication
The stoat is a usually silent animal, but can produce a range of sounds similar to those of the least weasel. Kits produce a fine chirping noise. Adults trill excitedly before mating, and indicate submission through quiet trilling, whining and squealing. When nervous, the stoat hisses, and will intersperse this with sharp barks or shrieks and prolonged screeching when aggressive.
Aggressive behavior in stoats is categorized in these forms:
Noncontact approach, which is sometimes accompanied by a threat display and vocalization from the approached animal
Forward thrust, accompanied by a sharp shriek, which is usually done by stoats defending a nest or retreat site
Nest occupation, when a stoat appropriates the nesting site of a weaker individual
Kleptoparasitism, in which a dominant stoat appropriates the kill of a weaker one, usually after a fight.
Submissive stoats express their status by avoiding higher-ranking animals, fleeing from them or making whining or squealing sounds.
Predators
Larger mammalian predators such as red foxes (Vulpes vulpes) and sables (Martes zibellina) are known to prey on stoats. Additionally, a wide range of birds of prey can take stoats, from small northern hawk-owls (Surnia ulula) and short-eared owls (Asio flammeus) to various buzzards, kites, goshawks, and even Eurasian eagle-owls (Bubo bubo) and golden eagles (Aquila chrysaetos). Although not classified as birds of prey, grey herons (Ardea cinerea) are known to prey on stoats.
Diseases and parasites
Tuberculosis has been recorded in stoats inhabiting the former Soviet Union and New Zealand. They are largely resistant to tularemia, but are reputed to suffer from canine distemper in captivity. Symptoms of mange have also been recorded.
Stoats are vulnerable to ectoparasites associated with their prey and the nests of other animals on which they do not prey. The louse Trichodectes erminea is recorded in stoats living in Canada, Ireland and New Zealand. In continental Europe, 26 flea species are recorded to infest stoats, including Rhadinospylla pentacantha, Megabothris rectangulatus, Orchopeas howardi, Spilopsyllus ciniculus, Ctenophthalamus nobilis, Dasypsyllus gallinulae, Nosopsyllus fasciatus, Leptospylla segnis, Ceratophyllus gallinae, Parapsyllus n. nestoris, Amphipsylla kuznetzovi and Ctenopsyllus bidentatus. Tick species known to infest stoats are Ixodes canisuga, I. hexagonus, and I. ricinus and Haemaphysalis longicornis. Louse species known to infest stoats include Mysidea picae and Polyplax spinulosa. Mite species known to infest stoats include Neotrombicula autumnalis, Demodex erminae, Eulaelaps stabulans, Gymnolaelaps annectans, Hypoaspis nidicorva, and Listrophorus mustelae.
The nematode Skrjabingylus nasicola is particularly threatening to stoats, as it erodes the bones of the nasal sinuses and decreases fertility. Other nematode species known to infect stoats include Capillaria putorii, Molineus patens and Strongyloides martes. Cestode species known to infect stoats include Taenia tenuicollis, Mesocestoides lineatus and rarely Acanthocephala.
In culture
Folklore and mythology
In Irish mythology, stoats were viewed anthropomorphically as animals with families, which held rituals for their dead. They were also viewed as noxious animals prone to thieving, and their saliva was said to be able to poison a grown man. To encounter a stoat when setting out for a journey was considered bad luck, but one could avert this by greeting the stoat as a neighbour. Stoats were also supposed to hold the souls of infants who died before baptism.
In the folklore of the Komi people of the Urals, stoats are symbolic of beautiful and coveted young women. In the Zoroastrian religion, the stoat is considered a sacred animal, as its white winter coat represented purity. Similarly, Mary Magdalene was depicted as wearing a white stoat pelt as a sign of her reformed character.
One popular European legend had it that a white stoat would die before allowing its pure white coat to be besmirched. When it was being chased by hunters, it would supposedly turn around and give itself up to the hunters rather than risk soiling itself.
The former nation (now region) of Brittany in France uses a stylized ermine-fur pattern in forming the coat of arms and flag of Brittany. Gilles Servat's song La Blanche Hermine ("The White Ermine") became an anthem for Bretons (and is popular among French people in general).
In the 16th century Chinese novel Investiture of the Gods, Erlang Shen transforms into an ermine to demonstrate his shapeshifting abilities.
Fur use
Stoat skins are prized by the fur trade, especially in winter coat, and used to trim coats and stoles. The fur from the winter coat is referred to as ermine and is the traditional ancient symbol of the Duchy of Brittany, forming its earliest flag. There is also a design called ermine inspired by the winter coat of the stoat and painted onto other furs, such as rabbit. In Europe these furs are a symbol of royalty and high status. The ceremonial robes of members of the United Kingdom House of Lords and the academic hoods of the universities of Oxford and Cambridge are traditionally trimmed with ermine. In practice, rabbit or fake fur is now often used due to expense or animal rights concerns. Prelates of the Catholic Church still wear ecclesiastical garments featuring ermine (a sign of their status equal to that of the nobility). Cecilia Gallerani is depicted holding an ermine in her portrait, Lady with an Ermine, by Leonardo da Vinci. Henry Peacham's Emblem 75, which depicts an ermine being pursued by a hunter and two hounds, is entitled "Cui candor morte redemptus" ("Purity Bought with His Own Death"). Peacham goes on to preach that men and women should follow the example of the ermine and keep their minds and consciences as pure as the legendary ermine keeps its fur.
Ermine (both M. erminea and M. richardsonii, both of which inhabited the Tlingit's territory) were also valued by the Tlingit and other indigenous peoples of the Pacific Northwest Coast. They could be attached to traditional regalia and cedar bark hats as status symbols, or they were also made into shirts.
The stoat was a fundamental item in the fur trade of the Soviet Union, with no less than half the global catch coming from within its borders. The Soviet Union also contained the highest grades of stoat pelts, with the best grade North American pelts being comparable only to the 9th grade in the quality criteria of former Soviet stoat standards. Stoat harvesting never became a specialty in any Soviet republic, with most stoats being captured incidentally in traps or near villages. Stoats in the Soviet Union were captured either with dogs or with box-traps or jaw-traps. Guns were rarely used, as they could damage the pelt.
| Biology and health sciences | Carnivora | null |
87872 | https://en.wikipedia.org/wiki/Antiproton | Antiproton | The antiproton, p̄ (pronounced "p-bar"), is the antiparticle of the proton. Antiprotons are stable, but they are typically short-lived, since any collision with a proton will cause both particles to be annihilated in a burst of energy.
The existence of the antiproton with electric charge of −1 e, opposite to the electric charge of +1 e of the proton, was predicted by Paul Dirac in his 1933 Nobel Prize lecture. Dirac received the Nobel Prize for his 1928 publication of the Dirac equation, which predicted the existence of positive and negative solutions to Einstein's energy equation (E = mc²) and the existence of the positron, the antimatter analog of the electron, with opposite charge and spin.
The antiproton was first experimentally confirmed in 1955 at the Bevatron particle accelerator by University of California, Berkeley physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics.
In terms of valence quarks, an antiproton consists of two up antiquarks and one down antiquark (). The properties of the antiproton that have been measured all match the corresponding properties of the proton, with the exception that the antiproton has electric charge and magnetic moment that are the opposites of those in the proton, which is to be expected from the antimatter equivalent of a proton. The questions of how matter is different from antimatter, and the relevance of antimatter in explaining how our universe survived the Big Bang, remain open problems—open, in part, due to the relative scarcity of antimatter in today's universe.
Occurrence in nature
Antiprotons have been detected in cosmic rays beginning in 1979, first by balloon-borne experiments and more recently by satellite-based detectors. The standard picture for their presence in cosmic rays is that they are produced in collisions of cosmic ray protons with atomic nuclei in the interstellar medium, via the reaction, where A represents a nucleus:
p + A → p + p + p̄ + A
The secondary antiprotons (p̄) then propagate through the galaxy, confined by the galactic magnetic fields. Their energy spectrum is modified by collisions with other atoms in the interstellar medium, and antiprotons can also be lost by "leaking out" of the galaxy.
The antiproton cosmic ray energy spectrum is now measured reliably and is consistent with this standard picture of antiproton production by cosmic ray collisions. These experimental measurements set upper limits on the number of antiprotons that could be produced in exotic ways, such as from annihilation of supersymmetric dark matter particles in the galaxy or from the Hawking radiation caused by the evaporation of primordial black holes. This also provides a lower limit on the antiproton lifetime of about 1–10 million years. Since the galactic storage time of antiprotons is about 10 million years, an intrinsic decay lifetime would modify the galactic residence time and distort the spectrum of cosmic ray antiprotons. This is significantly more stringent than the best laboratory measurements of the antiproton lifetime:
LEAR collaboration at CERN:
Antihydrogen Penning trap of Gabrielse et al.:
BASE experiment at CERN:
APEX collaboration at Fermilab: for → + anything
APEX collaboration at Fermilab: for → +
The magnitude of properties of the antiproton are predicted by CPT symmetry to be exactly related to those of the proton. In particular, CPT symmetry predicts the mass and lifetime of the antiproton to be the same as those of the proton, and the electric charge and magnetic moment of the antiproton to be opposite in sign and equal in magnitude to those of the proton. CPT symmetry is a basic consequence of quantum field theory and no violations of it have ever been detected.
List of recent cosmic ray detection experiments
BESS: balloon-borne experiment, flown in 1993, 1995, 1997, 2000, 2002, 2004 (Polar-I) and 2007 (Polar-II).
CAPRICE: balloon-borne experiment, flown in 1994 and 1998.
HEAT: balloon-borne experiment, flown in 2000.
AMS: space-based experiment, prototype flown on the Space Shuttle in 1998, intended for the International Space Station, launched May 2011.
PAMELA: satellite experiment to detect cosmic rays and antimatter from space, launched June 2006. A report from the mission described the detection of 28 antiprotons in the South Atlantic Anomaly.
Modern experiments and applications
Production
Antiprotons were routinely produced at Fermilab for collider physics operations in the Tevatron, where they were collided with protons. The use of antiprotons allows for a higher average energy of collisions between quarks and antiquarks than would be possible in proton–proton collisions. This is because the valence quarks in the proton, and the valence antiquarks in the antiproton, tend to carry the largest fraction of the proton or antiproton's momentum.
Formation of antiprotons requires energy equivalent to a temperature of 10 trillion K (10¹³ K), and this does not tend to happen naturally. However, at CERN, protons are accelerated in the Proton Synchrotron to an energy of 26 GeV and then smashed into an iridium rod. The protons bounce off the iridium nuclei with enough energy for matter to be created. A range of particles and antiparticles are formed, and the antiprotons are separated off using magnets in vacuum.
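As a rough order-of-magnitude check, added in this rewrite rather than taken from the article, the thermal energy k_B × T at 10¹³ K can be compared with the proton rest-mass energy of about 0.94 GeV, the scale that must be reached to create a proton–antiproton pair:

# Rough order-of-magnitude check (illustrative, not from the source text).
k_B = 8.617e-5                      # Boltzmann constant in eV per kelvin
T = 1e13                            # temperature in kelvin
thermal_energy_gev = k_B * T / 1e9  # ~0.86 GeV
proton_rest_energy_gev = 0.938      # proton rest-mass energy in GeV
print(thermal_energy_gev, proton_rest_energy_gev)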
Measurements
In July 2011, the ASACUSA experiment at CERN determined the mass of the antiproton to be times that of the electron. This is the same as the mass of a proton, within the level of certainty of the experiment.
In October 2017, scientists working on the BASE experiment at CERN reported a measurement of the antiproton magnetic moment to a precision of 1.5 parts per billion. It is consistent with the most precise measurement of the proton magnetic moment (also made by BASE in 2014), which supports the hypothesis of CPT symmetry. This measurement represents the first time that a property of antimatter is known more precisely than the equivalent property in matter.
In January 2022, by comparing the charge-to-mass ratios between antiproton and negatively charged hydrogen ion, the BASE experiment has determined the antiproton's charge-to-mass ratio is identical to the proton's, down to 16 parts per trillion.
Possible applications
Antiprotons have been shown within laboratory experiments to have the potential to treat certain cancers, in a similar method currently used for ion (proton) therapy. The primary difference between antiproton therapy and proton therapy is that following ion energy deposition the antiproton annihilates, depositing additional energy in the cancerous region.
| Physical sciences | Antimatter | Physics |
87945 | https://en.wikipedia.org/wiki/Terminator%20%28solar%29 | Terminator (solar) | A terminator or twilight zone is a moving line that divides the daylit side and the dark night side of a planetary body. The terminator is defined as the locus of points on a planet or moon where the line through the center of its parent star is tangent to the body's surface. An observer on the terminator of such an orbiting body with an atmosphere would experience twilight due to light scattering by particles in the gaseous layer.
Earth's terminator
On Earth, the terminator is a circle with a diameter that is approximately that of Earth. The terminator passes through any point on Earth's surface twice a day, at sunrise and at sunset, apart from polar regions where this only occurs when the point is not experiencing midnight sun or polar night. The circle separates the portion of Earth experiencing daylight from that experiencing darkness (night). While a little over one half of Earth is illuminated at any point in time (with exceptions during eclipses), the terminator path varies by time of day due to Earth's rotation on its axis. The terminator path also varies by time of year due to Earth's orbital revolution around the Sun; thus, the plane of the terminator is nearly parallel to planes created by lines of longitude during the equinoxes, and its maximum angle is approximately 23.5° to the pole during the solstices.
Surface transit speed
At the equator, under flat conditions (without obstructions like mountains or at a height above any such obstructions), the terminator moves at approximately . This speed can appear to increase when near obstructions, such as the height of a mountain, as the shadow of the obstruction will be cast over the ground in advance of the terminator along a flat landscape. The speed of the terminator decreases as it approaches the poles, where it can reach a speed of zero (full-day sunlight or darkness).
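A small Python sketch, added here for illustration and not part of the original article, of how the terminator's ground speed falls off with latitude on an idealised spherical Earth (it ignores obstructions and the seasonal tilt of the terminator discussed above):

from math import cos, radians

EQUATOR_CIRCUMFERENCE_KM = 40_075
SOLAR_DAY_S = 86_400

def terminator_speed_m_per_s(latitude_deg):
    # Equatorial speed (~464 m/s) scaled by the cosine of the latitude.
    equatorial_speed = EQUATOR_CIRCUMFERENCE_KM * 1000 / SOLAR_DAY_S
    return equatorial_speed * cos(radians(latitude_deg))

for lat in (0, 45, 60, 80):
    print(lat, round(terminator_speed_m_per_s(lat)))  # 464, 328, 232, 81 m/s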
Supersonic aircraft, such as jet fighters or the Concorde and Tupolev Tu-144 supersonic transports, are the only aircraft able to overtake the maximum speed of the terminator at the equator. However, slower vehicles can overtake the terminator at higher latitudes, and it is possible to walk faster than the terminator at the poles, near the equinoxes. The visual effect is that of seeing the Sun rise in the west, or set in the east.
Grey-line radio propagation
Strength of radio propagation changes between day- and night-side of the ionosphere. This is primarily because the D layer, which absorbs high frequency signals, disappears rapidly on the dark side of the terminator, whereas the E and F layers above the D layer take longer to form. This time-difference puts the ionosphere into a unique intermediate state along the terminator, called the "grey line".
Amateur radio operators take advantage of conditions along the terminator to perform long-distance communications. Called "gray-line" or "grey-line" propagation, this signal path is a type of skywave propagation. Under good conditions, radio waves can travel along the terminator to antipodal points.
Gallery
Lunar terminator
The lunar terminator is the division between the illuminated and dark hemispheres of the Moon. It is the lunar equivalent of the division between night and day on the Earth spheroid, although the Moon's much lower rate of rotation means it takes longer for it to pass across the surface. At the equator, it moves at , as fast as an athletic human can run on Earth.
Due to the angle at which sunlight strikes this portion of the Moon, shadows cast by craters and other geological features are elongated, thereby making such features more apparent to the observer. This phenomenon is similar to the lengthening of shadows on Earth when the Sun is low in the sky. For this reason, much lunar photographic study centers on the illuminated area near the lunar terminator, and the resulting shadows provide accurate descriptions of the lunar terrain.
Lunar terminator illusion
The lunar terminator (or tilt) illusion is an optical illusion arising from the expectation of an observer on Earth that the direction of sunlight illuminating the Moon (i.e. a line perpendicular to the terminator) should correspond with the position of the Sun, but does not appear to do so. The illusion results from misinterpreting the arrangement of objects in the sky according to intuition based on planar geometry.
Scientific significance
Examination of a terminator can yield information about the surface of a planetary body; for example, the presence of an atmosphere can create a fuzzier terminator. As the particles within an atmosphere are at a higher elevation, the light source can remain visible even after it has set at ground level. These particles scatter the light, reflecting some of it to the ground. Hence, the sky can remain illuminated even after the Sun has set. Images showing a planetary terminator can be used to map topography: the position of the tip of a mountain behind the terminator line can be measured while the Sun still (or already) illuminates it and the base of the mountain remains in shadow.
Low Earth orbit satellites take advantage of the fact that certain polar orbits set near the terminator do not suffer from eclipse, therefore their solar cells are continuously lit by sunlight. Such orbits are called dawn-dusk orbits, a type of Sun-synchronous orbit. This prolongs the operational life of a LEO satellite, as onboard battery life is prolonged. It also enables specific experiments that require minimum interference from the Sun, as the designers can opt to install the relevant sensors on the dark side of the satellite.
| Physical sciences | Celestial mechanics | Astronomy |
88003 | https://en.wikipedia.org/wiki/Menstrual%20cycle | Menstrual cycle | The menstrual cycle is a series of natural changes in hormone production and the structures of the uterus and ovaries of the female reproductive system that makes pregnancy possible. The ovarian cycle controls the production and release of eggs and the cyclic release of estrogen and progesterone. The uterine cycle governs the preparation and maintenance of the lining of the uterus (womb) to receive an embryo. These cycles are concurrent and coordinated, normally last between 21 and 35 days, with a median length of 28 days. Menarche (the onset of the first period) usually occurs around the age of 12 years; menstrual cycles continue for about 30–45 years.
Naturally occurring hormones drive the cycles; the cyclical rise and fall of the follicle stimulating hormone prompts the production and growth of oocytes (immature egg cells). The hormone estrogen stimulates the uterus lining (endometrium) to thicken to accommodate an embryo should fertilization occur. The blood supply of the thickened lining provides nutrients to a successfully implanted embryo. If implantation does not occur, the lining breaks down and blood is released. Triggered by falling progesterone levels, menstruation (a "period", in common parlance) is the cyclical shedding of the lining, and is a sign that pregnancy has not occurred.
Each cycle occurs in phases based on events either in the ovary (ovarian cycle) or in the uterus (uterine cycle). The ovarian cycle consists of the follicular phase, ovulation, and the luteal phase; the uterine cycle consists of the menstrual, proliferative and secretory phases. Day one of the menstrual cycle is the first day of the period, which lasts for about five days. Around day fourteen, an egg is usually released from the ovary.
The menstrual cycle can cause some women to experience premenstrual syndrome with symptoms that may include tender breasts and tiredness. More severe symptoms that affect daily living are classed as premenstrual dysphoric disorder and are experienced by 3–8% of women. During the first few days of menstruation, some women experience period pain that can spread from the abdomen to the back and upper thighs. The menstrual cycle can be modified by hormonal birth control.
Cycles and phases
The menstrual cycle encompasses the ovarian and uterine cycles. The ovarian cycle describes changes that occur in the follicles of the ovary, whereas the uterine cycle describes changes in the endometrial lining of the uterus. Both cycles can be divided into phases. The ovarian cycle consists of alternating follicular and luteal phases, and the uterine cycle consists of the menstrual phase, the proliferative phase, and the secretory phase. The menstrual cycle is controlled by the hypothalamus in the brain, and the anterior pituitary gland at the base of the brain. The hypothalamus releases gonadotropin-releasing hormone (GnRH), which causes the nearby anterior pituitary to release follicle-stimulating hormone (FSH) and luteinizing hormone (LH). Before puberty, GnRH is released in low steady quantities and at a steady rate. After puberty, GnRH is released in large pulses, and the frequency and magnitude of these determine how much FSH and LH are produced by the pituitary.
Measured from the first day of one menstruation to the first day of the next, the length of a menstrual cycle varies but has a median length of 28 days. The cycle is often less regular at the beginning and end of a woman's reproductive life. At puberty, a child's body begins to mature into an adult body capable of sexual reproduction; the first period (called menarche) occurs at around 12 years of age and continues for about 30–45 years. Menstrual cycles end at menopause, which is usually between 45 and 55 years of age.
Ovarian cycle
Between menarche and menopause the ovaries regularly alternate between luteal and follicular phases during the monthly menstrual cycle. Stimulated by gradually increasing amounts of estrogen in the follicular phase, discharges of blood flow stop and the uterine lining thickens. Follicles in the ovary begin developing under the influence of a complex interplay of hormones, and after several days one, or occasionally two, become dominant, while non-dominant follicles shrink and die. About mid-cycle, some 10–12 hours after the increase in luteinizing hormone, known as the LH surge, the dominant follicle releases an oocyte, in an event called ovulation.
After ovulation, the oocyte lives for 24 hours or less without fertilization, while the remains of the dominant follicle in the ovary become a corpus luteum – a body with the primary function of producing large amounts of the hormone progesterone. Under the influence of progesterone, the uterine lining changes to prepare for potential implantation of an embryo to establish a pregnancy. The thickness of the endometrium continues to increase in response to mounting levels of estrogen, which is released by the antral follicle (a mature ovarian follicle) into the blood circulation. Peak levels of estrogen are reached at around day thirteen of the cycle and coincide with ovulation. If implantation does not occur within about two weeks, the corpus luteum degenerates into the corpus albicans, which does not produce hormones, causing a sharp drop in levels of both progesterone and estrogen. This drop causes the uterus to lose its lining in menstruation; it is around this time that the lowest levels of estrogen are reached.
In an ovulatory menstrual cycle, the ovarian and uterine cycles are concurrent and coordinated and last between 21 and 35 days, with a population average of 27–29 days. Although the average length of the human menstrual cycle is similar to that of the lunar cycle, there is no causal relation between the two.
Follicular phase
The ovaries contain a finite number of egg stem cells, granulosa cells and theca cells, which together form primordial follicles. At around 20 weeks into gestation some 7 million immature eggs have already formed in an ovary. This decreases to around 2 million by the time a girl is born, and 300,000 by the time she has her first period. On average, one egg matures and is released during ovulation each month after menarche. Beginning at puberty, these mature to primary follicles independently of the menstrual cycle. The development of the egg is called oogenesis and only one cell survives the divisions to await fertilization. The other cells are discarded as polar bodies, which cannot be fertilized. The follicular phase is the first part of the ovarian cycle and it ends with the completion of the antral follicles. Meiosis (cell division) remains incomplete in the egg cells until the antral follicle is formed. During this phase usually only one ovarian follicle fully matures and gets ready to release an egg. The follicular phase shortens significantly with age, lasting around 14 days in women aged 18–24 compared with 10 days in women aged 40–44.
Through the influence of a rise in follicle stimulating hormone (FSH) during the first days of the cycle, a few ovarian follicles are stimulated. These follicles, which have been developing for the better part of a year in a process known as folliculogenesis, compete with each other for dominance. All but one of these follicles will stop growing, while one dominant follicle – the one that has the most FSH receptors – will continue to maturity. The remaining follicles die in a process called follicular atresia. Luteinizing hormone (LH) stimulates further development of the ovarian follicle. The follicle that reaches maturity is called an antral follicle, and it contains the ovum (egg cell).
The theca cells develop receptors that bind LH, and in response secrete large amounts of androstenedione. At the same time the granulosa cells surrounding the maturing follicle develop receptors that bind FSH, and in response start secreting androstenedione, which is converted to estrogen by the enzyme aromatase. The estrogen inhibits further production of FSH and LH by the pituitary gland. This negative feedback regulates levels of FSH and LH. The dominant follicle continues to secrete estrogen, and the rising estrogen levels make the pituitary more responsive to GnRH from the hypothalamus. As estrogen increases this becomes a positive feedback signal, which makes the pituitary secrete more FSH and LH. This surge of FSH and LH usually occurs one to two days before ovulation and is responsible for stimulating the rupture of the antral follicle and release of the oocyte.
Ovulation
Around day fourteen, the egg is released from the ovary. Called ovulation, this occurs when a mature egg is released from the ovarian follicles into the pelvic cavity and enters the fallopian tube, about 10–12 hours after the peak in LH surge. Typically only one of the 15–20 stimulated follicles reaches full maturity, and just one egg is released. Ovulation only occurs in around 10% of cycles during the first two years following menarche, and by the age of 40–50, the number of ovarian follicles is depleted. LH initiates ovulation at around day 14 and stimulates the formation of the corpus luteum. Following further stimulation by LH, the corpus luteum produces and releases estrogen, progesterone, relaxin (which relaxes the uterus by inhibiting contractions of the myometrium), and inhibin (which inhibits further secretion of FSH).
The release of LH matures the egg and weakens the follicle wall in the ovary, causing the fully developed follicle to release its oocyte. If it is fertilized by a sperm, the oocyte promptly matures into an ootid, which blocks the other sperm cells and becomes a mature egg. If it is not fertilized by a sperm, the oocyte degenerates. The mature egg has a diameter of about , and is the largest human cell.
Which of the two ovaries – left or right – ovulates appears random; no left and right coordinating process is known. Occasionally both ovaries release an egg; if both eggs are fertilized, the result is fraternal twins. After release from the ovary into the pelvic cavity, the egg is swept into the fallopian tube by the fimbria – a fringe of tissue at the end of each fallopian tube. After about a day, an unfertilized egg disintegrates or dissolves in the fallopian tube, and a fertilized egg reaches the uterus in three to five days.
Fertilization usually takes place in the ampulla, the widest section of the fallopian tubes. A fertilized egg immediately starts the process of embryonic development. The developing embryo takes about three days to reach the uterus, and another three days to implant into the endometrium. It has reached the blastocyst stage at the time of implantation: this is when pregnancy begins. The loss of the corpus luteum is prevented by fertilization of the egg. The syncytiotrophoblast (the outer layer of the resulting embryo-containing blastocyst that later becomes the outer layer of the placenta) produces human chorionic gonadotropin (hCG), which is very similar to LH and preserves the corpus luteum. During the first few months of pregnancy, the corpus luteum continues to secrete progesterone and estrogens at slightly higher levels than those at ovulation. After this and for the rest of the pregnancy, the placenta secretes high levels of these hormones – along with hCG, which stimulates the corpus luteum to secrete more progesterone and estrogens, blocking the menstrual cycle. These hormones also prepare the mammary glands for milk production.
Luteal phase
Lasting about 14 days, the luteal phase is the final phase of the ovarian cycle and it corresponds to the secretory phase of the uterine cycle. During the luteal phase, the pituitary hormones FSH and LH cause the remaining parts of the dominant follicle to transform into the corpus luteum, which produces progesterone. The increased progesterone starts to induce the production of estrogen. The hormones produced by the corpus luteum also suppress production of the FSH and LH that the corpus luteum needs to maintain itself. The level of FSH and LH fall quickly, and the corpus luteum atrophies. Falling levels of progesterone trigger menstruation and the beginning of the next cycle. For an individual woman, the follicular phase often varies in length from cycle to cycle; by contrast, the length of her luteal phase will be fairly consistent from cycle to cycle at 10 to 16 days (average 14 days).
Uterine cycle
The uterine cycle has three phases: menses, proliferative and secretory.
Menstruation
Menstruation (also called menstrual bleeding, menses or a period) is the first and most evident phase of the uterine cycle and first occurs at puberty. Called menarche, the first period occurs at the age of around twelve or thirteen years. The average age is generally later in the developing world and earlier in the developed world. In precocious puberty, it can occur as early as age eight years, and this can still be normal.
Menstruation is initiated each month by falling levels of estrogen and progesterone and the release of prostaglandins, which constrict the spiral arteries. This causes them to spasm, contract and break up. The blood supply to the endometrium is cut off and the cells of the top layer of the endometrium (the stratum functionalis) become deprived of oxygen and die. Later the whole layer is lost and only the bottom layer, the stratum basalis, is left in place. An enzyme called plasmin breaks up the blood clots in the menstrual fluid, which eases the flow of blood and broken down lining from the uterus. The flow of blood continues for 2–6 days and around 30–60 milliliters of blood is lost, and is a sign that pregnancy has not occurred.
The flow of blood normally serves as a sign that a woman has not become pregnant, but this cannot be taken as certainty, as several factors can cause bleeding during pregnancy. Menstruation occurs on average once a month from menarche to menopause, which corresponds with a woman's fertile years. The average age of menopause in women is 52 years, and it typically occurs between 45 and 55 years of age. Menopause is preceded by a stage of hormonal changes called perimenopause.
Eumenorrhea denotes normal, regular menstruation that lasts for around the first 5 days of the cycle. Women who experience menorrhagia (heavy menstrual bleeding) are more susceptible to iron deficiency than the average person.
Proliferative phase
The proliferative phase is the second phase of the uterine cycle when estrogen causes the lining of the uterus to grow and proliferate. The latter part of the follicular phase overlaps with the proliferative phase of the uterine cycle. As they mature, the ovarian follicles secrete increasing amounts of estradiol, an estrogen. The estrogens initiate the formation of a new layer of endometrium in the uterus with the spiral arterioles.
As estrogen levels increase, cells in the cervix produce a type of cervical mucus that has a higher pH and is less viscous than usual, rendering it more friendly to sperm. This increases the chances of fertilization, which occurs around day 11 to day 14. This cervical mucus can be detected as a vaginal discharge that is copious and resembles raw egg whites. For women who are practicing fertility awareness, it is a sign that ovulation may be about to take place, but it does not mean ovulation will definitely occur.
Secretory phase
The secretory phase is the final phase of the uterine cycle and it corresponds to the luteal phase of the ovarian cycle. During the secretory phase, the corpus luteum produces progesterone, which plays a vital role in making the endometrium receptive to the implantation of a blastocyst (a fertilized egg, which has begun to grow). Glycogen, lipids, and proteins are secreted into the uterus and the cervical mucus thickens. In early pregnancy, progesterone also increases blood flow and reduces the contractility of the smooth muscle in the uterus and raises basal body temperature.
If pregnancy does not occur the ovarian and uterine cycles start over again.
Anovulatory cycles and short luteal phases
Only two-thirds of overtly normal menstrual cycles are ovulatory, that is, cycles in which ovulation occurs. The other third lack ovulation or have a short luteal phase (less than ten days) in which progesterone production is insufficient for normal physiology and fertility. Cycles in which ovulation does not occur (anovulation) are common in girls who have just begun menstruating and in women around menopause. During the first two years following menarche, ovulation is absent in around half of cycles. Five years after menarche, ovulation occurs in around 75% of cycles and this reaches 80% in the following years. Anovulatory cycles are often overtly identical to normally ovulatory cycles. Any alteration to balance of hormones can lead to anovulation. Stress, anxiety and eating disorders can cause a fall in GnRH, and a disruption of the menstrual cycle. Chronic anovulation occurs in 6–15% of women during their reproductive years. Around menopause, hormone feedback dysregulation leads to anovulatory cycles. Although anovulation is not considered a disease, it can be a sign of an underlying condition such as polycystic ovary syndrome. Anovulatory cycles or short luteal phases are normal when women are under stress or athletes increasing the intensity of training. These changes are reversible as the stressors decrease or, in the case of the athlete, as she adapts to the training.
Menstrual health
Although a normal and natural process, some women experience premenstrual syndrome with symptoms that may include acne, tender breasts, and tiredness. More severe symptoms that affect daily living are classed as premenstrual dysphoric disorder and are experienced by 3 to 8% of women. Dysmenorrhea (menstrual cramps or period pain) is felt as painful cramps in the abdomen that can spread to the back and upper thighs during the first few days of menstruation. Debilitating period pain is not normal and can be a sign of something severe such as endometriosis. These issues can significantly affect a woman's health and quality of life and timely interventions can improve the lives of these women.
There are common culturally communicated misbeliefs that the menstrual cycle affects women's moods, causes depression or irritability, or that menstruation is a painful, shameful or unclean experience. Often a woman's normal mood variation is falsely attributed to the menstrual cycle. Much of the research is weak, but there appears to be a very small increase in mood fluctuations during the luteal and menstrual phases, and a corresponding decrease during the rest of the cycle. Changing levels of estrogen and progesterone across the menstrual cycle exert systemic effects on aspects of physiology including the brain, metabolism, and musculoskeletal system. The result can be subtle physiological and observable changes to women's athletic performance including strength, aerobic, and anaerobic performance.
Changes to the brain have also been observed throughout the menstrual cycle but do not translate into measurable changes in intellectual achievement – including academic performance, problem-solving, and memory. Improvements in spatial reasoning ability during the menstruation phase of the cycle are probably caused by decreases in levels of estrogen and progesterone.
In some women, ovulation features a characteristic pain called mittelschmerz (a German term meaning middle pain). The cause of the pain is associated with the ruptured follicle, causing a small amount of blood loss.
Even when normal, the changes in hormone levels during the menstrual cycle can increase the incidence of disorders such as autoimmune diseases, which might be caused by estrogen enhancement of the immune system.
Around 40% of women with epilepsy find that their seizures occur more frequently at certain phases of their menstrual cycle. This catamenial epilepsy may be due to a drop in progesterone if it occurs during the luteal phase or around menstruation, or a surge in estrogen if it occurs at ovulation. Women who have regular periods can take medication just before and during menstruation. Options include progesterone supplements, increasing the dose of their regular anticonvulsant drug, or temporarily adding an anticonvulsant such as clobazam or acetazolamide. If this is ineffective, or when a woman's menstrual cycle is irregular, then treatment is to stop the menstrual cycle occurring. This may be achieved using medroxyprogesterone, triptorelin or goserelin, or by sustained use of oral contraceptives.
Hormonal contraception
Hormonal contraceptives prevent pregnancy by inhibiting the secretion of the hormones, FSH, LH and GnRH. Hormonal contraception that contains estrogen, such as combined oral contraceptive pills (COCPs), stop the development of the dominant follicle and the mid-cycle LH surge and thus ovulation. Sequential dosing and discontinuation of the COCP can mimic the uterine cycle and produce bleeding that resembles a period. In some cases, this bleeding is lighter.
Progestin-only methods of hormonal contraception do not always prevent ovulation but instead work by stopping the cervical mucus from becoming sperm-friendly. Hormonal contraception is available in a variety of forms such as pills, patches, skin implants and hormonal intrauterine devices (IUDs).
Evolution and other species
Most female mammals have an estrous cycle, but only ten primate species, four bat species, the elephant shrews and the Cairo spiny mouse (Acomys cahirinus) have a menstrual cycle. The cycles are the same as in humans apart from the length, which ranges from 9 to 37 days. The lack of immediate relationship between these groups suggests that four distinct evolutionary events have caused menstruation to arise. There are four theories on the evolutionary significance of menstruation:
Control of sperm-borne pathogens. This hypothesis held that menstruation protected the uterus against pathogens introduced by sperm. Hypothesis 1 does not take into account that copulation can take place weeks before menstruation and that potentially infectious semen is not controlled by menstruation in other species.
Energy conservation. This hypothesis claimed that it took less energy to rebuild a uterine lining than to maintain it if pregnancy did not occur. Hypothesis 2 does not explain other species that also do not maintain a uterine lining but do not menstruate.
A theory based on spontaneous decidualization (a process that results in significant changes to cells of the endometrium in preparation for, and during, pregnancy). Decidualization leads to the differentiation of the endometrial stroma, which involves cells of the immune system, the formation of a new blood supply, hormones and tissue differentiation. In non-menstruating mammals, decidualization is driven by the embryo, not the mother. According to this theory, menstruation is an unintended consequence of the decidualization process and the body uses spontaneous decidualization to identify and reject defective embryos early on. This process happens because the decidual cells of the stroma can recognize and respond to defects in a developing embryo by stopping the secretion of cytokines needed for the embryo to implant.
Uterine pre-conditioning. This hypothesis claims that a monthly pre-conditioning of the uterus is needed in species, such as humans, that have deeply invasive (deep-rooted) placentas. In the process leading to the formation of a placenta, maternal tissues are invaded. This hypothesis holds that menstruation did not evolve in its own right, but is the by-product of a coincidental pre-conditioning of the uterus, in which a thicker endometrium develops to protect uterine tissue from the deeply rooting placenta. Hypothesis 4 does not explain menstruation in non-primates.
| Biology and health sciences | Animal reproduction | null |
88042 | https://en.wikipedia.org/wiki/Myopia | Myopia | Myopia, also known as near-sightedness and short-sightedness, is an eye disease where light from distant objects focuses in front of, instead of on, the retina. As a result, distant objects appear blurry while close objects appear normal. Other symptoms may include headaches and eye strain. Severe myopia is associated with an increased risk of macular degeneration, retinal detachment, cataracts, and glaucoma.
Myopia results from the eyeball growing too long or, less commonly, the lens being too strong. It is a type of refractive error. Diagnosis is by eye examination with the use of cycloplegics.
Tentative evidence indicates that the risk of myopia can be decreased by having young children spend more time outside. This decrease in risk may be related to natural light exposure. Myopia can be corrected with eyeglasses, contact lenses, or by refractive surgery. Eyeglasses are the simplest and safest method of correction. Contact lenses can provide a relatively wider corrected field of vision, but are associated with an increased risk of infection. Refractive surgeries like LASIK and PRK permanently change the shape of the cornea. Other procedures include implanting a collamer lens (ICL) in the anterior chamber, in front of the natural lens; the ICL does not affect the cornea.
Myopia is the most common eye problem and is estimated to affect 1.5 billion people (22% of the world population). Rates vary significantly in different areas of the world. Rates among adults are between 15% and 49%. Among children, it affects 1% of rural Nepalese, 4% of South Africans, 12% of people in the US, and 37% in some large Chinese cities. In China, the proportion affected is slightly higher among girls than among boys. Rates have increased since the 1950s. Uncorrected myopia is one of the most common causes of vision impairment globally along with cataracts, macular degeneration, and vitamin A deficiency.
Signs and symptoms
A myopic individual can see clearly out to a certain distance (the far point of the eye), but objects placed beyond this distance appear blurred. If the extent of the myopia is great enough, even standard reading distances can be affected. Upon routine examination of the eyes, the vast majority of myopic eyes appear structurally identical to nonmyopic eyes.
Onset is often in school children, with worsening between the ages of 8 and 15.
Myopic individuals have larger pupils than far-sighted (hypermetropic) and emmetropic individuals, likely due to requiring less accommodation (which results in pupil constriction).
Causes
The underlying cause is believed to be a combination of genetic and environmental factors. Risk factors include doing work that involves focusing on close objects, greater time spent indoors, urbanization, and a family history of the condition. It is also associated with a high socioeconomic class and higher level of education.
A 2012 review could not find strong evidence for any single cause, although many theories have been discredited. Twin studies indicate that at least some genetic factors are involved. Myopia has been increasing rapidly throughout the developed world, suggesting environmental factors are involved.
A single-author literature review in 2021 proposed that myopia is the result of corrective lenses interfering with emmetropization.
Genetics
A risk for myopia may be inherited from one's parents. Genetic linkage studies have identified 18 possible loci on 15 different chromosomes that are associated with myopia, but none of these loci is part of the candidate genes that cause myopia. Instead of a simple one-gene locus controlling the onset of myopia, a complex interaction of many mutated proteins acting in concert may be the cause. Instead of myopia being caused by a defect in a structural protein, defects in the control of these structural proteins might be the actual cause of myopia. A collaboration of all myopia studies worldwide identified 16 new loci for refractive error in individuals of European ancestry, of which 8 were shared with Asians. The new loci include candidate genes with functions in neurotransmission, ion transport, retinoic acid metabolism, extracellular matrix remodeling and eye development. The carriers of the high-risk genes have a tenfold increased risk of myopia. Aberrant genetic recombination and gene splicing in the OPN1LW and OPN1MW genes that code for two retinal cone photopigment proteins can produce high myopia by interfering with refractive development of the eye.
Human population studies suggest that contribution of genetic factors accounts for 60–90% of variance in refraction. However, the currently identified variants account for only a small fraction of myopia cases, suggesting the existence of a large number of yet unidentified low-frequency or small-effect variants, which underlie the majority of myopia cases.
Environmental factors
Environmental factors which increase the risk of myopia include insufficient light exposure, low physical activity, near work, and increased years of education.
One hypothesis is that a lack of normal visual stimuli causes improper development of the eyeball. Under this hypothesis, "normal" refers to the environmental stimuli that the eyeball evolved to respond to. Modern humans who spend most of their time indoors, in dimly or fluorescently lit buildings, may be at risk of developing myopia.
People, and children especially, who spend more time doing physical exercise and outdoor play have lower rates of myopia, suggesting the increased magnitude and complexity of the visual stimuli encountered during these types of activities decrease myopic progression. There is preliminary evidence that the protective effect of outdoor activities on the development of myopia is due, at least in part, to the effect of long hours of exposure to daylight on the production and the release of retinal dopamine.
Myopia can be induced with minus spherical lenses, and overminus in prescription lenses can induce myopia progression. Overminus during refraction can be avoided through various techniques and tests, such as fogging, plus to blur, and the duochrome test.
The near work hypothesis, also referred to as the "use-abuse theory", states that spending time involved in near work strains the intraocular and extraocular muscles. Some studies support the hypothesis, while other studies do not. While an association is present, it is not clearly causal.
Myopia is also more common in children with diabetes, childhood arthritis, uveitis, and systemic lupus erythematosus.
Other factors
Research indicates a relationship between body mass index (BMI) and myopia, with both low and high BMI associated with an increased risk of developing myopia. A nationwide study of 1.3 million Israeli adolescents found that individuals with underweight status had higher chances of mild-to-moderate and high myopia compared to those with low-normal BMI.
Similarly, a study involving Korean young adult men reported that those who were of average or shorter height and lean had a higher prevalence of high myopia.
Mechanism
Because myopia is a refractive error, the physical cause of myopia is comparable to any optical system that is out of focus. Borish and Duke-Elder classified myopia by these physical causes:
Axial myopia is attributed to an increase in the eye's axial length.
Refractive myopia is attributed to the condition of the refractive elements of the eye. Borish further subclassified refractive myopia:
Curvature myopia is attributed to excessive, or increased, curvature of one or more of the refractive surfaces of the eye, especially the cornea. In those with Cohen syndrome, myopia appears to result from high corneal and lenticular power.
Index myopia is attributed to variation in the index of refraction of one or more of the ocular media.
As with any optical system experiencing a defocus aberration, the effect can be exaggerated or masked by changing the aperture size. In the case of the eye, a large pupil emphasizes refractive error and a small pupil masks it. This phenomenon can cause a condition in which an individual has a greater difficulty seeing in low-illumination areas, even though there are no symptoms in bright light, such as daylight.
Under rare conditions, edema of the ciliary body can cause an anterior displacement of the lens, inducing a myopia shift in refractive error.
Diagnosis
A diagnosis of myopia is typically made by an eye care professional, usually an optometrist or ophthalmologist. This is by refracting the eye with the use of cycloplegics such as atropine with responses recorded when accommodation is relaxed. Diagnosis of progressive myopia requires regular eye examination using the same method.
Types
Myopia can be classified into two major types: anatomical and clinical. The types of myopia based on anatomical features are axial, curvature, index and displacement of a refractive element. Congenital, simple and pathological myopia are the clinical types of myopia.
Various forms of myopia have been described by their clinical appearance:
Simple myopia: Myopia in an otherwise normal eye, typically less than 4.00 to 6.00 diopters. This is the most common form of myopia.
Degenerative myopia, also known as malignant, pathological, or progressive myopia, is characterized by marked fundus changes, such as posterior staphyloma, and associated with a high refractive error and subnormal visual acuity after correction. This form of myopia gets progressively worse over time. Degenerative myopia has been reported as one of the main causes of visual impairment.
Pseudomyopia is the blurring of distance vision brought about by spasm of the accommodation system.
Nocturnal myopia: Without adequate stimulus for accurate accommodation, the accommodation system partially engages, pushing distance objects out of focus.
Nearwork-induced transient myopia (NITM): short-term myopic far point shift immediately following a sustained near visual task. Some authors argue for a link between NITM and the development of permanent myopia.
Instrument myopia: over-accommodation when looking into an instrument such as a microscope.
Induced myopia, also known as acquired myopia, results from various medications, increases in glucose levels, nuclear sclerosis, oxygen toxicity (e.g., from underwater diving or from oxygen and hyperbaric therapy) or other anomalous conditions. Sulphonamide therapy can cause ciliary body edema, resulting in anterior displacement of the lens, pushing the eye out of focus. Elevation of blood-glucose levels can also cause edema (swelling) of the crystalline lens as a result of sorbitol accumulating in the lens. This edema often causes temporary myopia. Scleral buckles, used in the repair of retinal detachments, may induce myopia by increasing the axial length of the eye.
Index myopia is attributed to variation in the index of refraction of one or more of the ocular media. Cataracts may lead to index myopia.
Form deprivation myopia occurs when the eyesight is deprived by limited illumination and vision range, or the eye is modified with artificial lenses or deprived of clear form vision. In lower vertebrates, this kind of myopia seems to be reversible within short periods of time. Myopia is often induced this way in various animal models to study the pathogenesis and mechanism of myopia development.
Degree
The degree of myopia is described in terms of the power of the ideal correction, which is measured in diopters (an illustrative classification sketch follows the list below):
Myopia between 0.00 and −0.50 diopters is usually classified as emmetropia.
Low myopia usually describes myopia between −0.50 and −3.00 diopters.
Moderate myopia usually describes myopia between −3.00 and −6.00 diopters. Those with moderate amounts of myopia are more likely to have pigment dispersion syndrome or pigmentary glaucoma.
High myopia usually describes myopia of −6.00 diopters or worse. People with high myopia are more likely to have retinal detachments and primary open angle glaucoma. They are also more likely to experience floaters, shadow-like shapes which appear in the field of vision. In addition to this, high myopia is linked to macular degeneration, cataracts, and significant visual impairment.
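As a rough illustration of the thresholds listed above, the following Python sketch bins a spherical-equivalent refraction into these categories. The function name and the treatment of exact boundary values (for example −3.00 D) are assumptions made for illustration, since sources differ on how boundary cases are assigned.

    def classify_myopia(spherical_equivalent_d: float) -> str:
        """Classify myopia severity from a spherical-equivalent refraction in diopters.

        Thresholds follow the ranges listed above; the handling of exact
        boundary values is an illustrative convention, not a clinical standard.
        """
        se = spherical_equivalent_d
        if se > 0.0:
            return "not myopic (hyperopic)"
        if se >= -0.50:
            return "emmetropia (within normal range)"
        if se > -3.00:
            return "low myopia"
        if se > -6.00:
            return "moderate myopia"
        return "high myopia"

    if __name__ == "__main__":
        for d in (-0.25, -1.75, -4.50, -7.25):
            print(f"{d:+.2f} D -> {classify_myopia(d)}")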
Age at onset
Myopia is sometimes classified by the age at onset:
Congenital myopia, also known as infantile myopia, is present at birth and persists through infancy.
Youth onset myopia occurs in early childhood or the teenage years, and the ocular power can continue to change until the age of 21, before which any form of corrective surgery is usually not recommended by ophthalmic specialists around the world.
School myopia appears during childhood, particularly the school age years. This form of myopia is attributed to the use of the eyes for close work during the school years. A 2004–2015 Singapore–Sydney study of children of Chinese descent found that time spent on outdoor activities was a factor.
Adult onset myopia
Early adult onset myopia occurs between ages 20 and 40.
Late adult onset myopia occurs after age 40.
Prevention and control
Various methods have been employed in an attempt to decrease the progression of myopia, although studies show mixed results. Many myopia treatment studies have a number of design drawbacks: small numbers, lack of adequate control group, and failure to mask examiners from knowledge of treatments used. The best approach is to combine multiple prevention and control methods. Among myopia specialists, mydriatic eyedrops are the most favored approach, applied by almost 75% in North America and more than 80% in Australia.
Spending time outdoors
Some studies have indicated that having children spend time outdoors reduces the incidence of myopia. A 2017 study investigated the leading causal theory of association between greenspace exposure and spectacles use as a proxy for myopia, finding a 28% reduction in the likelihood of spectacles use per interquartile range increase in time spent in greenspace. In Taiwan, government policies that require schools to send all children outdoors for a minimum amount of time have driven down the prevalence of myopia in children.
Glasses and contacts
The use of reading glasses when doing close work may improve vision by reducing or eliminating the need to accommodate. Altering the use of eyeglasses between full-time, part-time, and not at all does not appear to alter myopia progression. The American Optometric Association's Clinical Practice Guidelines found evidence for the effectiveness of bifocal lenses and recommend them as a method of "myopia control". In some studies, bifocal and progressive lenses have not shown differences in altering the progression of myopia compared to placebo.
In the United States, the Food and Drug Administration (FDA) has approved myopia control contact lenses such as CooperVision's MiSight and Johnson & Johnson Vision's Acuvue Abiliti, but has not yet approved any myopia control spectacle lenses.
Medication
Anti-muscarinic topical medications in children under 18 years of age may slow the worsening of myopia. These treatments include pirenzepine gel, cyclopentolate eye drops, and atropine eye drops. While these treatments were shown to be effective in slowing the progression of myopia and reducing eyeball elongation associated with the condition, side effects included light sensitivity and near blur.
Other methods
Scleral reinforcement surgery aims to cover the thinning posterior pole with a supportive material that withstands intraocular pressure and prevents further progression of the posterior staphyloma. The strain is reduced, although damage from the pathological process cannot be reversed. By stopping the progression of the disease, vision may be maintained or improved. The use of ortho-K lenses can also slow axial elongation of the eye.
Treatment
The National Institutes of Health says there is no known way of preventing myopia, and the use of glasses or contact lenses does not affect its progression unless the prescription is too strong. There is no universally accepted method of preventing myopia, and proposed methods need additional study to determine their effectiveness. Optical correction using glasses or contact lenses is the most common treatment; other approaches include orthokeratology and refractive surgery. Medications (mostly atropine) and vision therapy can be effective in addressing the various forms of pseudomyopia.
Glasses and contacts
Corrective lenses bend the light entering the eye in a way that places a focused image accurately onto the retina. The power of any lens system can be expressed in diopters, the reciprocal of its focal length in meters. Corrective lenses for myopia have negative powers because a divergent lens is required to move the far point of focus out to the distance. More severe myopia needs lens powers further from zero (more negative). However, strong eyeglass prescriptions create distortions such as prismatic movement and chromatic aberration. Strongly myopic wearers of contact lenses do not experience these distortions because the lens moves with the cornea, keeping the optic axis in line with the visual axis and because the vertex distance has been reduced to zero.
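To make the diopter relationship concrete, the sketch below estimates the spectacle power a myopic eye would need from the distance of its far point, using the simplified thin-lens relation power ≈ −1 / (far point in meters). The function name is an assumption for illustration, and the model ignores vertex distance, so it approximates rather than reproduces real prescribing practice.

    def required_lens_power_d(far_point_m: float) -> float:
        """Approximate corrective lens power (in diopters) for a myopic eye.

        Simplified thin-lens model: the lens must image distant objects at the
        eye's far point, so its focal length is minus the far-point distance
        and its power is -1 / far_point. Vertex distance (the gap between lens
        and eye) is ignored, so real prescriptions differ slightly, especially
        for strong corrections.
        """
        if far_point_m <= 0:
            raise ValueError("far point must be a positive distance in meters")
        return -1.0 / far_point_m

    if __name__ == "__main__":
        for fp in (2.0, 0.5, 0.25):  # far points in meters
            print(f"far point {fp} m -> {required_lens_power_d(fp):+.2f} D")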
Surgery
Refractive surgery includes procedures that alter the curvature of some structure of the eye, typically the cornea, or that add additional refractive means inside the eye.
Photorefractive keratectomy
Photorefractive keratectomy (PRK) involves ablation of corneal tissue from the corneal surface using an excimer laser. The amount of tissue ablation corresponds to the amount of myopia. While PRK is a relatively safe procedure for up to 6 dioptres of myopia, the recovery phase post-surgery is usually painful.
LASIK
In a LASIK procedure, a corneal flap is first cut into the cornea and lifted to allow the excimer laser beam access to the exposed corneal tissue. The excimer laser then ablates the tissue according to the required correction. When the flap is replaced over the cornea, the change in curvature generated by the laser ablation is transferred to the corneal surface. Though LASIK is usually painless and involves a short rehabilitation period post-surgery, it can potentially result in flap complications and loss of corneal stability (post-LASIK keratectasia).
Phakic intra-ocular lens
Instead of modifying the corneal surface, as in laser vision correction (LVC), this procedure involves implanting an additional lens inside the eye (i.e., in addition to the already existing natural lens). While it usually results in good control of the refractive change, it can induce potentially serious long-term complications such as glaucoma, cataract and endothelial decompensation.
Orthokeratology
Orthokeratology, or simply ortho-K, is a temporary corneal reshaping process using rigid gas permeable (RGP) contact lenses. Overnight wear of specially designed contact lenses temporarily reshapes the cornea, so patients may see clearly without any lenses during the day. Orthokeratology can correct myopia up to about −6.00 diopters. Several studies have shown that ortho-K can also reduce myopia progression. Risks of using ortho-K lenses include microbial keratitis and corneal edema. Other contact-lens-related complications, such as corneal aberration, photophobia, pain, irritation, and redness, are usually temporary and can be minimized by proper lens use.
Intrastromal corneal ring segment
The intrastromal corneal ring segment (ICRS), now commonly used to treat keratoconus, was originally designed to correct mild to moderate myopia. The flattening effect increases with segment thickness and is inversely related to ring diameter: a smaller diameter or a greater thickness produces more corneal flattening and therefore greater myopia correction.
Alternative medicine
A number of alternative therapies have been claimed to improve myopia, including vision therapy, "behavioural optometry", various eye exercises and relaxation techniques, and the Bates method. Scientific reviews have concluded that there was "no clear scientific evidence" that eye exercises are effective in treating myopia and as such they "cannot be advocated".
Epidemiology
Globally, refractive errors have been estimated to affect between 800 million and 2.3 billion people. The incidence of myopia within a sampled population often varies with age, country, sex, race, ethnicity, occupation, environment, and other factors. Variability in testing and data collection methods makes comparisons of prevalence and progression difficult.
The prevalence of myopia has been reported as high as 70–90% in some Asian countries, 30–40% in Europe and the United States, and 10–20% in Africa. Myopia is about twice as common in Jewish people as in people of non-Jewish ethnicity. Myopia is less common in African people and the associated diaspora. In Americans between the ages of 12 and 54, myopia has been found to affect African Americans less than Caucasians.
A 2024 study published in the British Journal of Ophthalmology found that over one-third of children worldwide were nearsighted in 2023, with this figure projected to rise to nearly 40% by 2050. The prevalence of myopia among children and adolescents has increased significantly over the past 30 years, rising from 24% in 1990 to almost 36% in 2023. Researchers noted a sharp spike in cases following the COVID-19 pandemic and highlighted regional differences in myopia rates.
Asia
In some parts of Asia, myopia is very common.
Singapore is believed to have the highest prevalence of myopia in the world; up to 80% of people there have myopia, but the exact figure is unknown.
China's myopia rate is 31%: 400 million of its 1.3 billion people are myopic. The prevalence of myopia in high school in China is 77%, and in college is more than 80%.
In some areas, such as China and Malaysia, up to 41% of the adult population is myopic to at least −1.00 diopter, and up to 80% to at least −0.50 diopter.
A study of Jordanian adults aged 17 to 40 found over half (54%) were myopic.
A study indicated that the prevalence of myopia among urban children in India aged 5 to 15 increased from 4.44% in 1999 to 21.15% in 2019. Projections suggest that by 2050, this figure could reach 48.14%.
Some research suggests the prevalence of myopia in Indian children is less than 15%.
In South Korea among the general population, national data indicates that 70.6% of the adult population has myopia, with 8.0% affected by high myopia. The prevalence decreases with age, from 81.3% in individuals aged 19 to 24 years to 55.2% in those aged 45 to 49 years.
Europe
Among first-year undergraduate students in the United Kingdom, 50% of British whites and 53% of British Asians were myopic.
A recent review found 27% of Western Europeans aged 40 or over have at least −1.00 diopters of myopia and 5% have at least −5.00 diopters.
North America
Myopia is common in the United States, with research suggesting this condition has increased dramatically in recent decades. In 1971–1972, the National Health and Nutrition Examination Survey provided the earliest nationally representative estimates for myopia prevalence in the U.S., and found the prevalence in persons aged 12–54 was 25%. Using the same method, in 1999–2004, myopia prevalence was estimated to have climbed to 42%.
A study of 2,523 children in grades 1 to 8 (age, 5–17 years) found nearly one in 10 (9%) have at least −0.75 diopters of myopia. In this study, 13% had hyperopia (farsightedness) of at least +1.25 D, and 28% had astigmatism of at least a 1.00 D difference between the two principal meridians (measured by cycloplegic autorefraction). For myopia, Asians had the highest prevalence (19%), followed by Hispanics (13%). Caucasian children had the lowest prevalence of myopia (4%), which was not significantly different from African Americans (7%).
A recent review found 25% of Americans aged 40 or over have at least −1.00 diopters of myopia and 5% have at least −5.00 diopters.
Australia
In Australia, the overall prevalence of myopia (worse than −0.50 diopters) has been estimated to be 17%. In one recent study, less than one in 10 (8%) Australian children between the ages of four and 12 were found to have myopia greater than −0.50 diopters. A recent review found 16% of Australians aged 40 or over have at least −1.00 diopters of myopia and 3% have at least −5.00 diopters.
South America
In Brazil, a 2005 study estimated 6% of Brazilians between the ages of 12 and 59 had −1.00 diopter of myopia or more, compared with 3% of the indigenous people in northwestern Brazil. Another study found that nearly 1 in 8 (13%) of the students in the city of Natal were myopic.
History
The difference between near-sighted and far-sighted people was noted as early as Aristotle. Graeco-Roman physician Galen first used the term "myopia" (from Greek words "myein" meaning "to close or shut" and "ops" (gen. opos) meaning "eye") for near-sightedness. The first spectacles for correcting myopia were invented by a German cardinal in the year 1451. Johannes Kepler in his Clarification of Ophthalmic Dioptrics (1604) first demonstrated that myopia was due to the incident light focusing in front of the retina. Kepler also showed that myopia could be corrected by concave lenses. In 1632, Vopiscus Fortunatus Plempius examined a myopic eye and confirmed that myopia was due to a lengthening of its axial diameter.
The idea that myopia was caused by the eye strain involved in reading or doing other work close to the eyes was a consistent theme for several centuries. In Taiwan, faced with a staggering rise in the number of young military recruits needing glasses, the schools were told to give students' eyes a 10-minute break after every half-hour of reading; however, the rate of myopia continued to climb. The policy that reversed the epidemic of myopia was the government ordering all schools to have the children outside for a minimum of 80 minutes every day.
Society and culture
The terms "myopia" and "myopic" (or the common terms "short-sightedness" or "short-sighted", respectively) have been used metaphorically to refer to cognitive thinking and decision making that is narrow in scope or lacking in foresight or in concern for wider interests or for longer-term consequences. It is often used to describe a decision that may be beneficial in the present, but detrimental in the future, or a viewpoint that fails to consider anything outside a very narrow and limited range. Hyperopia, the biological opposite of myopia, may also be used metaphorically for a value system or motivation that exhibits "farsighted" or possibly visionary thinking and behavior; that is, emphasizing long-term interests at the apparent expense of near-term benefit.
Keeping children indoors, whether to promote early academic activities, because urban development choices leave no place for children to play outside, or because sunlight is avoided out of a preference for lighter skin color, increases the risk of myopia. Taiwan has developed an aggressive program to identify pre-school-age children with pre-myopia and treat them with atropine, and to have schools send students outdoors every day. The Tian-tian 120 program ("Every day 120") encourages 120 minutes of outdoor time each day. Compared to the cost of lifelong treatment for myopia with glasses, and in some cases, preventable blindness, the US$13 spent on screening young children for pre-myopia is considered a good investment in public health.
Because myopia can be mitigated through lifestyle choices, it is possible that being myopic will become a marker of an impoverished or neglected childhood, with wealthy families ensuring that their children spend enough time outdoors to prevent or at least reduce it, and poor families, who rely on lower-quality childcare arrangements or lack access to a safe outdoor space, being unable to provide the same benefits to their children.
Correlations
Numerous studies have found correlations between myopia, on the one hand, and intelligence and academic achievement, on the other; it is not clear whether there is a causal relationship. Myopia is also correlated with increased microsaccade amplitude, suggesting that blurred vision from myopia might cause instability in fixational eye movements.
Etymology
The term myopia is of Koine Greek origin, from myōpia ("near-sightedness"), derived from the ancient Greek myōps ("near-sighted person"), from myein ("to close") and ōps (genitive ōpos, "eye"). The opposite of myopia in English is hypermetropia, or far-sightedness.
| Biology and health sciences | Disabilities | Health |
88078 | https://en.wikipedia.org/wiki/Prostate%20cancer | Prostate cancer | Prostate cancer is the uncontrolled growth of cells in the prostate, a gland in the male reproductive system below the bladder. Abnormal growth of prostate tissue is usually detected through screening tests, typically blood tests that check for prostate-specific antigen (PSA) levels. Those with high levels of PSA in their blood are at increased risk for developing prostate cancer. Diagnosis requires a biopsy of the prostate. If cancer is present, the pathologist assigns a Gleason score, and a higher score represents a more dangerous tumor. Medical imaging is performed to look for cancer that has spread outside the prostate. Based on the Gleason score, PSA levels, and imaging results, a cancer case is assigned a stage 1 to 4. A higher stage signifies a more advanced, more dangerous disease.
Most prostate tumors remain small and cause no health problems. These are managed with active surveillance, monitoring the tumor with regular tests to ensure it has not grown. Tumors more likely to be dangerous can be destroyed with radiation therapy or surgically removed by radical prostatectomy. Those whose cancer spreads beyond the prostate are treated with hormone therapy which reduces levels of the androgens (male sex hormones) that prostate cells need to survive. Eventually cancer cells can grow resistant to this treatment. This most-advanced stage of the disease, called castration-resistant prostate cancer, is treated with continued hormone therapy alongside the chemotherapy drug docetaxel. Some tumors metastasize (spread) to other areas of the body, particularly the bones and lymph nodes. There, tumors cause severe bone pain, leg weakness or paralysis, and eventually death. Prostate cancer prognosis depends on how far the cancer has spread at diagnosis. Most men diagnosed have tumors confined to the prostate; 99% of them survive more than 10 years from their diagnoses. Tumors that have metastasized to distant body sites are most dangerous, with five-year survival rates of 30–40%.
The risk of developing prostate cancer increases with age; the average age of diagnosis is 67. Those with a family history of any cancer are more likely to have prostate cancer, particularly those who inherit cancer-associated variants of the BRCA2 gene. Each year 1.2 million cases of prostate cancer are diagnosed, and 350,000 die of the disease, making it the second-leading cause of cancer and cancer death in men. One in eight men is diagnosed with prostate cancer in his lifetime and one in forty dies of the disease. Prostate tumors were first described in the mid-19th century, during surgeries on men with urinary obstructions. Initially, prostatectomy was the primary treatment for prostate cancer. By the mid-20th century, radiation treatments and hormone therapies were developed to improve prostate cancer treatment. The invention of hormone therapies for prostate cancer was recognized with the 1966 Nobel Prize to Charles B. Huggins and the 1977 Prize to Andrzej W. Schally.
Signs and symptoms
Early prostate cancer usually causes no symptoms. As the cancer advances, it may cause erectile dysfunction, blood in the urine or semen, or trouble urinating – commonly including frequent urination and slow or weak urine stream. More than half of men over age 50 experience some form of urination problem, typically due to issues other than prostate cancer such as benign prostatic hyperplasia (non-cancerous enlargement of the prostate).
Advanced prostate tumors can metastasize to nearby lymph nodes and bones, particularly in the pelvis, hips, spine, ribs, head, and neck. There they can cause fatigue, unexplained weight loss, and back or bone pain that does not improve with rest. Metastases can damage the bones around them, and around a quarter of those with metastatic prostate cancer develop a bone fracture. Growing metastases can also compress the spinal cord causing weakness in the legs and feet, or limb paralysis.
Screening
Most cases of prostate cancer are diagnosed through screening tests, when tumors are too small to cause any symptoms. This is done through blood tests to measure levels of the protein prostate-specific antigen (PSA), which are elevated in those with enlarged prostates, whether due to prostate cancer or benign prostatic hyperplasia. The typical man's blood has around 1 nanogram (ng) of PSA per milliliter (mL) of blood tested. Those with PSA levels below average are very unlikely to develop dangerous prostate cancer over the next 8 to 10 years. Men with PSA levels above 4 ng/mL are at increased risk – around 1 in 4 will develop prostate cancer – and are often referred for a prostate biopsy. PSA levels over 10 ng/mL indicate an even higher risk: over half of men in this group develop prostate cancer. Men with high PSA levels are often recommended to repeat the blood test four to six weeks later, as PSA levels can fluctuate unrelated to prostate cancer. Benign prostatic hyperplasia, prostate infection, recent ejaculation, and some urological procedures can increase PSA levels; taking 5α-reductase inhibitors can decrease PSA levels.
Those with elevated PSA may undergo secondary screening blood tests that measure subtypes of PSA and other molecules to better predict the likelihood that a person will develop aggressive prostate cancer. Many measure "free PSA" – the fraction of PSA unbound to other blood proteins, usually around 10% to 30%. Men who have a lower percentage of free PSA are more likely to have prostate cancer. Several common tests more accurately detect prostate cancer cases by also measuring subtypes of free PSA, including the Prostate Health Index (measures a fragment called −2proPSA) and 4K score (measures intact free PSA). Other tests measure blood levels of additional prostate-related proteins such as kallikrein-2 (also measured by 4K score), or urine levels of mRNA molecules common to prostate tumors like PCA3 and TMPRSS2 fused to ERG.
Several large studies have found that men screened for prostate cancer have a reduced risk of dying from the disease; however, detection of cancer cases that would not have otherwise impacted health can cause anxiety, and lead to unneeded biopsies and treatments, both of which can cause unwanted complications. Major national health organizations offer differing recommendations, attempting to balance the benefits of early diagnosis with the potential harms of treating people whose tumors are unlikely to impact health. Most medical guidelines recommend that men at high risk of prostate cancer (due to age, family history, ethnicity, or prior evidence of high blood PSA levels) be counseled on the risks and benefits of PSA testing, and be offered access to screening tests. Medical guidelines generally recommend against screening for men over age 70, or with a life expectancy of less than 10 years, as a newly diagnosed prostate cancer is unlikely to impact their natural lifespan. Uptake of screening varies by geography – more than 80% of men are screened in the US and Western Europe, 20% of men in Japan, and screening is rare in regions with a low Human Development Index.
Diagnosis
Men suspected of having prostate cancer may undergo several tests to assess the prostate. One common procedure is the digital rectal examination, in which a doctor inserts a lubricated finger into the rectum to feel the nearby prostate. Tumors feel like stiff, irregularly shaped lumps against the rest of the prostate. Hardening of the prostate can also be due to benign prostatic hyperplasia; around 20–25% of those with abnormal findings on their rectal exams have prostate cancer. Several urological societies' guidelines recommend magnetic resonance imaging (MRI) to evaluate the prostate for potential tumors in men with high PSA levels. MRI results can help distinguish those who have potentially dangerous tumors from those who do not.
A definitive diagnosis of prostate cancer requires a biopsy of the prostate. Prostate biopsies are typically taken by a needle passing through the rectum or perineum, guided by transrectal ultrasonography, MRI, or a combination of the two. Ten to twelve samples are taken from several regions of the prostate to improve the chances of finding any tumors. Biopsies are sent for a histopathologic diagnosis of prostate cancer, wherein they are examined under a microscope by a pathologist, who determines the type and extent of cancerous cells present. Cancers are first classified based on their appearance under a microscope. Over 95% of prostate cancers are classified as adenocarcinomas (resembling gland tissue), with the rest largely squamous-cell carcinoma (resembling squamous cells, a type of epithelial cell) and transitional cell carcinoma (resembling transitional cells).
Next, tumor samples are graded based on how much the tumor tissue differs from normal prostate tissue; the more different the tumor appears, the faster the tumor is likely to grow. The Gleason grading system is commonly used, where the pathologist assigns numbers ranging from 3 (most similar to healthy prostate tissue) to 5 (least similar) to different regions of the biopsied tissue. They then calculate a "Gleason score" by adding the two numbers that represent the largest areas of the biopsy sample. The lowest possible Gleason score of 6 represents a biopsy most similar to healthy prostate; the highest Gleason score of 10 represents the most severely cancerous. Gleason scores are commonly grouped into "Gleason grade groups", which predict disease prognosis: a Gleason score of 6 is Gleason grade group 1 (best prognosis). A score of 7 (with Gleason scores 4 + 3, or Gleason scores 3 + 4, with the most prominent listed first) can be grade group 2 or 3; it is grade group 2 if the less severe Gleason score (3) covered more area; grade group 3 if the more severe Gleason score (4) covered more area. A score of 8 is grade group 4. A score of 9 or 10 is grade group 5 (worst prognosis).
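The mapping from Gleason patterns to grade groups described above can be written as a short rule set. The following Python sketch is illustrative only: the function name is assumed, and real pathology reports also take tertiary patterns and other findings into account.

    def gleason_grade_group(primary: int, secondary: int) -> int:
        """Map a Gleason score (primary + secondary pattern, each 3-5) to its grade group.

        Follows the grouping described above; illustrative only.
        """
        if not (3 <= primary <= 5 and 3 <= secondary <= 5):
            raise ValueError("Gleason patterns are reported on a 3-5 scale")
        total = primary + secondary
        if total <= 6:
            return 1                          # 3 + 3: best prognosis
        if total == 7:
            return 2 if primary == 3 else 3   # 3 + 4 -> group 2, 4 + 3 -> group 3
        if total == 8:
            return 4
        return 5                              # 9 or 10: worst prognosis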
The extent of cancer spread is assessed by MRI or PSMA scan – a positron emission tomography (PET) imaging technique where a radioactive label that binds the prostate protein prostate-specific membrane antigen is used to detect metastases distant from the prostate. CT scans may also be used, but are less able to detect spread outside the prostate than MRI. Bone scintigraphy is used to test for spread of cancer to bones.
Staging
After diagnosis, the tumor is staged to determine the extent of its growth and spread. Prostate cancer is typically staged using the American Joint Committee on Cancer's (AJCC) three-component TNM system, with scores assigned for the extent of the tumor (T), spread to any lymph nodes (N), and the presence of metastases (M). Scores of T1 and T2 represent tumors that remain within the prostate: T1 is for tumors not detectable by imaging or digital rectal exam; T2 is for tumors detectable by imaging or rectal exam, but still confined within the prostate. T3 is for tumors that grow beyond the prostate – T3a for tumors with any extension outside the prostate; T3b for tumors that invade the adjacent seminal vesicles. T4 is for tumors that have grown into organs beyond the seminal vesicles. The N and M scores are binary (yes or no). N1 represents any spread to the nearby lymph nodes. M1 represents any metastases to other body sites.
The AJCC then combines the TNM scores, Gleason grade group, and results of the PSA blood test to categorize cancer cases into one of four stages, and their subdivisions. Cancer cases with localized tumors (T1 or T2), no spread (N0 and M0), Gleason grade group 1, and PSA less than 10 ng/mL are designated stage I. Those with localized tumors and PSA between 10 and 20 ng/mL are designated stage II – subdivided into IIA for Gleason grade group 1, IIB for grade group 2, and IIC for grade group 3 or 4. Stage III is the designation for any of three higher risk factors: IIIA is for a PSA level above 20 ng/mL; IIIB is for T3 or T4 tumors; IIIC is for a Gleason grade group of 5. Stage IV is for cancers that have spread to lymph nodes (N1, stage IVA) or other organs (M1, stage IVB).
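As a simplified sketch of how these criteria combine, the following Python function assigns a stage from the T category, nodal (N) and metastasis (M) status, Gleason grade group, and PSA level, following only the summary above. The function name and input encoding are assumptions, and the full AJCC system contains further subdivisions not captured here, so this is illustrative rather than a clinical tool.

    def ajcc_stage(t: str, n: int, m: int, grade_group: int, psa: float) -> str:
        """Assign a simplified AJCC prostate-cancer stage (see caveats above)."""
        if m == 1:
            return "IVB"                      # distant metastases
        if n == 1:
            return "IVA"                      # spread to regional lymph nodes
        if grade_group == 5:
            return "IIIC"
        if t.startswith(("T3", "T4")):
            return "IIIB"                     # tumor extends beyond the prostate
        if psa >= 20:
            return "IIIA"
        # Remaining cases: localized tumor (T1/T2), N0 M0, grade group 1-4, PSA < 20
        if psa < 10 and grade_group == 1:
            return "I"
        return {1: "IIA", 2: "IIB", 3: "IIC", 4: "IIC"}[grade_group]

    if __name__ == "__main__":
        print(ajcc_stage("T2", n=0, m=0, grade_group=1, psa=6.0))   # -> I
        print(ajcc_stage("T1", n=0, m=0, grade_group=3, psa=12.0))  # -> IIC
        print(ajcc_stage("T3a", n=0, m=0, grade_group=2, psa=8.0))  # -> IIIB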
The United Kingdom National Institute for Health and Care Excellence recommends a five-stage system based on disease prognosis called the Cambridge Prognostic Group, with prognostic groups CPG 1 to CPG 5. CPG 1 is the same as AJCC stage I. Cases with localized tumors (T1 or T2) and either Gleason grade group 2 or higher PSA levels (10 to 20 ng/mL) are designated CPG 2. CPG 3 represents either Gleason grade group 3, or the combination of the CPG 2 criteria. CPG 4 is similar to AJCC stage 3 – any of Gleason grade group 4, PSA levels above 20 ng/mL, or a tumor that has grown beyond the prostate (T3). CPG 5 is for the highest risk cases: either a T4 tumor, Gleason grade group 5, or any two of the CPG 4 criteria.
Prevention
No drug or vaccine is approved by regulatory agencies for the prevention of prostate cancer. Several studies have shown 5α-reductase inhibitors to reduce the total incidence of prostate cancer; however, it is unclear as of 2022 whether they reduce any cases of dangerous disease.
Finasteride and dutasteride, while used to treat benign prostatic hyperplasia (BPH), have also been shown to reduce the risk of developing prostate cancer. Finasteride blocks only the type 2 form of 5α-reductase, whereas dutasteride blocks both the type 1 and type 2 forms of the enzyme.
Management
Treatment of prostate cancer varies based on how advanced the cancer is, the risk it may spread, and the affected person's health and personal preferences. Those with localized disease at low risk for spread are often more likely to be harmed by the side effects of treatment than the disease itself, and so are regularly tested for a worsening of their disease. Those at higher risk may receive treatment to eliminate the tumor – typically prostatectomy (surgery to remove the prostate) or radiation therapy, sometimes alongside hormone therapy. Those with metastatic disease are treated with chemotherapy, as well as radiation or other agents to alleviate the symptoms of metastatic tumors. Blood PSA levels are monitored every few months to assess the effectiveness of treatments, and whether the disease is recurring or advancing.
Localized disease
Men diagnosed with low-risk cases of prostate cancer often defer treatment and are monitored regularly for cancer progression by active surveillance, which involves testing for tumor growth at fixed intervals by PSA tests (around every six months), digital rectal exam (annually), and MRI or repeat biopsies (every one to three years). This program continues until increases in PSA levels, Gleason grade, or tumor size indicate a higher-risk tumor that may require intervention. At least half of men remain on active surveillance, never requiring more direct treatment for their prostate tumors.
Those who elect to have therapy receive radiation therapy or a prostatectomy; these have similar rates of cancer control, but different side effects. Radiation can be delivered by intensity-modulated radiation therapy (IMRT), which allows for high doses (greater than 80 Gy) to be delivered to the prostate with relatively little radiation to other organs, or by brachytherapy, where a radioactive source is surgically inserted into the prostate. IMRT is given over several sessions, with treatments repeated five days per week for several weeks. Brachytherapy is typically performed in a single session, with the radioactive source permanently implanted into the prostate, where it expends its radioactivity within the next few months. With either technique, radiation damage to nearby organs can increase the risk of subsequent bladder cancer and cause erectile dysfunction, infertility, irreversible lumbar plexopathy and radiation proctitis – damage to the rectum that can cause diarrhea, bloody stools, fecal incontinence, and pain.
Radical prostatectomy aims to surgically remove the cancerous part of the prostate, along with the seminal vesicles, and the end of the vas deferens (the duct that delivers sperm from the testes). In wealthier countries, this is typically done by robot-assisted surgery, where robotic tools inserted through small holes in the abdomen allow a surgeon to make small and exact movements during surgery. This method results in shorter hospital stays, less blood loss, and fewer complications than traditional open surgery. In places where robot-assisted surgery is unavailable, prostatectomy can be performed laparoscopically (using a camera and hand tools through small holes in the abdomen), or through traditional open surgery with an incision above the penis (retropubic approach) or below the scrotum (perineal approach). The four approaches result in similar rates of cancer control. Damage to nearby tissue during surgery can result in erectile dysfunction and urinary incontinence. Erectile dysfunction is more likely in those who are older or had previous erectile issues. Incontinence is more common in those who are older and have shorter urethras. Both for cancer progression outcomes and surgical side effects, the skill and experience of the individual surgeon doing the procedure are among the greatest determinants of success.
After prostatectomy, PSA levels drop rapidly, reaching very low or undetectable levels within two months. Radiotherapy also substantially reduces PSA levels, but more slowly and less completely, with PSA levels reaching their nadir two years after radiotherapy. After either treatment, PSA levels are monitored regularly. Up to half of those treated will eventually have a rise in PSA levels, suggesting the tumor or small metastases are growing again. People with high or rising PSA levels are often offered another round of radiation therapy directed at the former tumor site. This reduces risk for further progression by 75%. Those suspected of metastases can undergo PET scanning with sensitive radiotracers C-11 choline, F-18 fluciclovine, and F-18 or Ga-68 attached to a PSMA-targeting drug, each of which is able to detect small metastases more sensitively than alternative imaging methods.
Metastatic disease
For those with metastatic disease, the standard of care is androgen deprivation therapy (also called "chemical castration"), drugs that reduce levels of androgens (male sex hormones) that prostate cells require to grow. Various drugs are used to lower androgen levels by blocking the synthesis or action of testosterone, the primary androgen. The first line of treatment typically involves GnRH agonists like leuprolide, goserelin, or triptorelin by injection monthly or less frequently as needed. GnRH agonists cause a brief rise in testosterone levels at treatment initiation, which can worsen disease in people with significant symptoms of metastases. In these people, GnRH antagonists like degarelix or relugolix are given instead, and can also rapidly reduce testosterone levels. Reducing testosterone can cause various side effects, including hot flashes, reduction in muscle mass and bone density, reduced sex drive, fatigue, personality changes, and an increased risk of diabetes, cardiovascular disease, and depression. Hormone therapy halts tumor growth in more than 95% of those treated, and PSA levels return to normal in up to 70%.
Despite reduced testosterone levels, metastatic prostate tumors eventually continue to grow – manifested by rising blood PSA levels, and metastases to nearby bones. This is the most advanced stage of the disease, called castration-resistant prostate cancer (CRPC). CRPC tumors continuously evolve resistance to treatments, necessitating several lines of therapy, each used in sequence to extend survival. The standard of care is the chemotherapy docetaxel along with antiandrogen drugs, namely the androgen receptor antagonists enzalutamide, apalutamide, and darolutamide, as well as the testosterone production inhibitor abiraterone acetate. An alternative is the cell therapy procedure Sipuleucel-T, where the affected person's immune cells are removed, treated to more effectively target prostate cancer cells, and re-injected. Tumors that evolve resistance to docetaxel may receive the second-generation taxane drug cabazitaxel.
Some CRPC treatments are used only in men whose tumors have certain characteristics that make the therapy more likely to be effective. Men whose tumors express the protein PSMA may receive the radiopharmaceutical Lu-177 PSMA, which binds to and destroys PSMA-positive cells. Those whose tumors have defective DNA damage repair benefit from treatment with the immune checkpoint inhibitor drug pembrolizumab and PARP inhibitors, namely olaparib, rucaparib, or niraparib.
Supportive care
Bone metastases – present in around 85% of those with metastatic prostate cancer – are the primary cause of symptoms and death from metastatic prostate cancer. Those with constant pain are prescribed nonsteroidal anti-inflammatory drugs. However, people with bone metastases can experience "breakthrough pain", sudden bursts of severe pain that resolve within around 15 minutes, before pain medications can take effect. Single sites of pain can be treated with external beam radiation therapy to shrink nearby tumors. More dispersed bone pain can be treated with radioactive compounds that disproportionately accumulate in bone, like radium-223 and samarium-153-EDTMP, which help reduce the size of bone tumors. Similarly, the systemic chemotherapeutics used for metastatic prostate cancer can reduce pain as they shrink tumors. Other bone modifying agents like zoledronic acid and denosumab can reduce prostate cancer bone pain, even though they have little effect on tumor size. Metastases compress the spinal cord in up to 12% of those with metastatic prostate cancer causing pain, weakness, numbness, and paralysis. Inflammation in the spine can be treated with high-dose steroids, as well as surgery and radiotherapy to shrink spinal tumors and relieve pressure on the spinal cord.
Those with advanced prostate cancer suffer fatigue, lethargy, and a generalized weakness. This is caused in part by gastrointestinal problems, with loss of appetite, weight loss, nausea, and constipation all common. These are typically treated with appetite-increasing drugs – megestrol acetate or corticosteroids – antiemetics, or treatments that focus on underlying gastrointestinal issues. General weakness can also be caused by anemia, itself caused by a combination of the disease itself, poor nutrition, and damage to the bone marrow from cancer treatments or bone metastases. Anemia can be treated in various ways depending on the cause, or can be addressed directly with blood transfusions. Organ damage and metastases in the lymph nodes can lead to uncomfortable accumulation of fluid (called lymphedema) in the genitals or lower limbs. These swellings can be extremely painful, curtailing an affected person's ability to urinate, have sex, or walk normally. Lymphedema can be treated by applying pressure to aid drainage, surgically draining pooled fluid, and cleaning and treating nearby damaged skin.
People with prostate cancer are around twice as likely to experience anxiety or depression compared to those without cancer. When added to normal prostate cancer treatments, psychological interventions such as psychoeducation and cognitive behavioral therapy can help reduce anxiety, depression, and general distress.
As those severely ill with metastatic prostate cancer approach the end of their lives, most experience confusion and may hallucinate or have trouble recognizing loved ones. Confusion is caused by various conditions, including kidney failure, sepsis, dehydration, and as a side effect of various drugs, especially opioids. Most people sleep for long periods, and some feel drowsy when awake. Restlessness is also common, sometimes caused by physical discomfort from constipation or urinary retention, sometimes caused by anxiety. In their last few days, affected men's breathing may become shallow and slow, with long pauses between breaths. Breathing may be accompanied by a rattling noise as fluid lingers in the throat, but this is not uncomfortable for the affected person. Their hands and feet may cool to the touch, and skin become blotchy or blue due to weaker blood circulation. Many stop eating and drinking, resulting in dry-feeling mouth, which can be aided by moistening the mouth and lips. The person becomes less and less responsive, and eventually the heart and breathing stop.
Prognosis
The prognosis of diagnosed prostate cancer varies widely based on the cancer's grade and stage at the time of diagnosis; those with lower stage disease have vastly improved prognoses. Around 80% of prostate cancer diagnoses are in men whose cancer is still confined to the prostate. These men can survive long after diagnosis, with as many as 99% still alive 10 years from diagnosis. Men whose cancer has metastasized to a nearby part of the body (around 15% of diagnoses) have poorer prognoses, with five-year survival rates of 60–80%. Those with metastases in distant body sites (around 5% of diagnoses) have relatively poor prognoses, with five-year survival rates of 30–40%.
Those who have low blood PSA levels at diagnosis, and whose tumors have a low Gleason grade and less-advanced clinical stage tend to have better prognoses. After prostatectomy or radiotherapy, those who have a short time between treatment and a subsequent rise in PSA levels, or quickly rising PSA levels are more likely to die from their cancers. Castration-resistant metastatic prostate cancer is incurable, and kills a majority of those whose disease reaches this stage.
Cause
Prostate cancer is caused by the accumulation of genetic mutations to the DNA of cells in the prostate. These mutations affect genes involved in cell growth, replication, cell death, and DNA damage repair. With these processes dysregulated, some cells replicate abnormally, forming a clump of cells called a tumor. As the tumor grows, its cells accumulate more mutations, allowing it to stimulate the growth of new blood vessels to support further growth. Eventually, a tumor can grow large enough to invade nearby organs such as the seminal vesicles or bladder. In advanced tumors, cells can develop the ability to detach from their original tissue site, and evade the immune system. These cells can spread through the lymphatic system to nearby lymph nodes, or through the bloodstream to the bone marrow and (more rarely) other body sites. At these new sites, the cancer cells disrupt normal body function and continue to grow. Metastases cause most of the discomfort associated with prostate cancer, and can eventually kill the affected person.
Pathophysiology
Most prostate tumors begin in the peripheral zone – the outermost part of the prostate. As cells begin to grow out of control, they form a small clump of dysregulated cells called a prostatic intraepithelial neoplasia (PIN). Some PINs continue to grow, forming layers of tissue that stop expressing genes common to their original tissue location – p63, cytokeratin 5, and cytokeratin 14 – and instead begin expressing genes typical of cells in the innermost lining of the prostatic ducts – cytokeratin 8 and cytokeratin 18. These multilayered PINs also often overexpress the gene AMACR, which is associated with prostate cancer progression.
Some PINs can eventually grow into tumors. This is commonly accompanied by large-scale changes to the genome, with chromosome sequences being rearranged or copied repeatedly. Some genomic alterations are particularly common in early prostate cancer, namely gene fusion between TMPRSS2 and the oncogene ERG (up to 60% of prostate tumors), mutations that disable SPOP (up to 15% of tumors), and mutations that hyperactivate FOXA1 (up to 5% of tumors).
Metastatic prostate cancer tends to have more genetic mutations than localized disease. Many of these mutations are in genes that protect from DNA damage, such as p53 (mutated in 8% of localized tumors, more than 27% of metastatic ones) and RB1 (1% of localized tumors, more than 5% of metastatic ones). Similarly, mutations in the DNA repair-related genes BRCA2 and ATM are rare in localized disease but found in at least 7% and 5% of metastatic disease cases, respectively.
The transition from castrate-sensitive to castrate-resistant prostate cancer is also accompanied by the acquisition of various gene mutations. In castrate-resistant disease, more than 70% of tumors have mutations in the androgen receptor signaling pathway – amplifications and gain-of-function mutations in the receptor gene itself, amplification of its activators (for example, FOXA1), or inactivating mutations in its negative regulators (for example, ZBTB16 and NCOR1). These androgen receptor disruptions are found in only up to 6% of biopsies of castrate-sensitive metastatic disease. Similarly, deletions of the tumor suppressor PTEN are harbored by 12–17% of castrate-sensitive tumors, but over 40% of castrate-resistant tumors. Less commonly, tumors have aberrant activation of the Wnt signaling pathway via disruption of its members APC (9% of tumors) or CTNNB1 (4% of tumors); or dysregulation of the PI3K pathway via mutations in PIK3CA/PIK3CB (6% of tumors) or AKT1 (2% of tumors).
Epidemiology
Prostate cancer is the second-most frequently diagnosed cancer in men, and the second-most frequent cause of cancer death in men (after lung cancer). Around 1.2 million new cases of prostate cancer are diagnosed each year, and over 350,000 people die of the disease annually. One in eight men are diagnosed with prostate cancer in their lifetime, and around one in forty die of the disease. Rates of prostate cancer rise with age; consequently, prostate cancer rates are generally higher in parts of the world with higher life expectancy, which also tend to be areas with higher gross domestic product and higher human development index. Australia, Europe, North America, New Zealand, and parts of South America have the highest incidence. South Asia, Central Asia, and sub-Saharan Africa have the lowest incidence of prostate cancer, though incidence is increasing quickly in these regions. Prostate cancer is the most diagnosed cancer in men in over half of the world's countries, and the leading cause of cancer death in men in around a quarter of countries.
Prostate cancer is rare in those under 40 years old, and most cases occur in those over 60 years, with the average person diagnosed at 67. The average age of those who die from prostate cancer is 77. Only a minority of prostate cancers are ever diagnosed; autopsies of men who died at various ages have shown cancer in the prostates of over 40% of men over age 50. Incidence rises with age, and nearly 70% of men autopsied at age 80–89 had cancer in their prostates.
Genetics
Prostate cancer is more common in families with a history of any cancer. Men with an affected first-degree relative (father or brother) have more than twice the risk of developing prostate cancer, and those with two first-degree relatives have a five-fold greater risk compared with men with no family history. Increased risk also runs in some ethnic groups, with men of African and African-Caribbean ancestry at particularly high risk – having prostate cancer at higher rates, and having more-aggressive prostate cancers that develop at earlier ages. Large genome-wide association studies have identified over 100 gene variants associated with increased prostate cancer risk. The greatest risk increase is associated with variations in BRCA2 (up to an eight-fold increased risk) and HOXB13 (three-fold increased risk), both of which are involved in repairing DNA damage. Variants in other genes involved in DNA damage repair have also been associated with an increased risk of developing prostate cancer – particularly early-onset prostate cancer – including BRCA1, ATM, NBS1, MSH2, MSH6, PMS2, CHEK2, RAD51D, and PALB2. Additionally, variants in the genome near the oncogene MYC are associated with increased risk, as are single-nucleotide polymorphisms in the vitamin D receptor common in African-Americans, and in the androgen receptor, CYP3A4, and CYP17, which are involved in testosterone synthesis and signaling. Together, known gene variants are estimated to cause around 25% of prostate cancer cases, including 40% of early-onset prostate cancers.
Body and lifestyle
Men who are taller are at a slightly increased risk for developing prostate cancer, as are men who are obese. High levels of blood cholesterol are also associated with increased prostate cancer risk; consequently, those who take cholesterol-lowering drugs (statins) have a reduced risk of advanced prostate cancer. Chronic inflammation can cause various cancers. Potential links between infection (or other sources of inflammation) and prostate cancer have been studied, but no link has been definitively established, and one large study found no association between prostate cancer and a history of gonorrhea, syphilis, chlamydia, or infection with various human papillomaviruses.
Regular vigorous exercise may reduce one's chance of developing advanced prostate cancer, as can several dietary interventions. Those with a diet rich in cruciferous vegetables (certain leafy greens, broccoli, and cauliflower), fish, genistein (found in soy), or lycopene (found in tomatoes) are at a reduced risk of symptomatic prostate cancer. Conversely, those who consume high levels of dietary fats, polycyclic aromatic hydrocarbons (from cooking red meats), or calcium may be at an increased risk of developing advanced prostate cancer. Several dietary supplements have been studied and found not to impact prostate cancer risk, including selenium, vitamin C, vitamin D, and vitamin E.
Special populations
Transgender women and gender non-conforming people who have prostates can develop prostate cancer. Those who have undergone gender-affirming hormone therapy or gender-affirming surgery have reduced risk of developing prostate cancer, relative to cisgender men of similar age. Screening tests in this group are complicated, as transgender women may have lower PSA levels than cisgender men due to their reduced testosterone levels. PSA levels greater than 1 ng/mL are generally considered above normal by gender care specialists. Digital rectal exams of the prostate are often impossible in women who have undergone vaginoplasty, as the length and rigidity of the new vagina can obstruct access to the prostate from the rectum.
History
A prostate mass was first described in 1817 by the English surgeon George Langstaff, following the autopsy of a man who had died at age 68 with lower-body pain and urinary issues. In 1853, London Hospital surgeon John Adams described another prostate tumor from a man who had died with urinary issues; Adams had a pathologist examine the tumor, providing the first confirmed case of a cancerous tumor in the prostate. The disease was initially rarely described; an 1893 report found only 50 cases described in the medical literature. Around the turn of the 20th century, prostate surgery to relieve urinary obstruction became more common, allowing surgeons and pathologists to examine the removed prostate tissue. Two studies from around that time found cancer in as many as 10% of surgical specimens, suggesting prostate cancer was a fairly common cause of prostate enlargement.
For much of the 20th century, the primary therapy for prostate cancer was surgery to remove the prostate. Perineal prostatectomy was first performed in 1904 by Hugh H. Young at Johns Hopkins Hospital. Young's method became the widespread standard, initially done primarily to relieve symptoms of urinary blockage. In 1931 a new surgical method, transurethral resection of the prostate, became available, replacing perineal prostatectomy for symptomatic relief of obstruction. In 1945, Terence Millin described a retropubic prostatectomy approach, which provided easier access to pelvic lymph nodes to assist in staging the extent of disease, and was easier for surgeons to learn. This was improved upon by Patrick C. Walsh's 1983 description of a retropubic prostatectomy approach that avoided damage to the nerves near the prostate, preserving erectile function.
Radiation therapy for prostate cancer was used occasionally in the early 20th century, with radium implanted into the urethra or rectum to reduce the tumor size and associated symptoms. In the 1950s the advent of more powerful radiation machines allowed for external beam radiotherapy to reach the prostate. By the 1960s, this was often combined with hormone therapy to improve the potency of therapy. In the 1970s, Willet Whitmore pioneered an open surgery technique where needles of Iodine-125 were placed directly into the prostate. This was improved upon by Henrik H. Holm in 1983 by using transrectal ultrasound to guide the implantation of radioactive material.
The observation that the testicles (and the hormones they secrete) influence prostate size was made as early as the late 18th century via castration experiments in animals. However, occasional experimentation over the next century bore mixed results, likely due to the inability to separate prostate tumors from prostates enlarged due to benign prostatic hyperplasia. In 1941, Charles B. Huggins and Clarence V. Hodges published two studies using surgical castration or oral estrogen to reduce androgen levels and improve prostate cancer symptoms. Huggins was awarded the 1966 Nobel Prize in Physiology or Medicine for this discovery, the first systemic therapy for prostate cancer. In the 1960s, large studies showed estrogen therapy to be as effective as surgical castration at treating prostate cancer, but that those on estrogen therapy were at increased risk of suffering blood clots. Through the 1980s, Andrzej W. Schally's studies of GnRH led to the development of GnRH agonists, which were found to be as effective as estrogen without the increased risk of clotting. Schally had been awarded the 1977 Nobel Prize in Physiology or Medicine for his work on hypothalamic hormones, including GnRH.
Systemic chemotherapy for prostate cancer has been studied since the 1950s, but clinical trials failed to show benefits in most people who received the drugs. In 1996, the US Food and Drug Administration approved the systemic chemotherapy mitoxantrone for those with castration-resistant prostate cancer based on trials showing that it improved symptoms even though it failed to enhance survival. In 2004, docetaxel was approved as the first chemotherapy to increase survival in those with castration-resistant prostate cancer. After additional trials in 2015, docetaxel use was extended to those with castration-sensitive prostate cancer.
Society and culture
Prostate cancer screening and awareness have been widely promoted since the early 2000s by Prostate Cancer Awareness Month in September and Movember in November. However, an analysis of internet searches suggests neither event changes the level of prostate cancer interest or discussion much, in contrast to the more established Breast Cancer Awareness Month.
Research
Prostate cancer is a major topic of ongoing research. From 2016–2020, over $1.26 billion was invested in prostate cancer research, representing around 5% of global cancer research funds. This places prostate cancer 10th among 18 common cancer types in funding per cancer death, and 9th in funding per disability-adjusted life year lost.
Research into prostate cancer relies on a number of laboratory models to test aspects of the disease. Several immortalized prostate cell lines are widely used, namely the classic lines DU145, PC-3, and LNCaP, as well as more recent cell lines 22Rv1, LAPC-4, VCaP, and MDA-PCa-2a and −2b. Research requiring more complex models of the prostate uses organoids – clusters of prostate cells that can be grown from human prostate tumors or stem cells. Modeling tumor growth and metastasis requires a model organism, typically a mouse. Researchers can either surgically implant human prostate tumors into immunocompromised mice (a technique called a patient-derived xenograft), or induce prostate tumors in mice with genetic engineering. These genetically engineered mouse models typically use a Cre recombinase system to disrupt tumor suppressors or activate oncogenes specifically in prostate cells.
| Biology and health sciences | Cancer | null |
88213 | https://en.wikipedia.org/wiki/Apsis | Apsis | An apsis (; ) is the farthest or nearest point in the orbit of a planetary body about its primary body. The line of apsides (also called apse line, or major axis of the orbit) is the line connecting the two extreme values.
Apsides pertaining to orbits around the Sun have distinct names to differentiate themselves from other apsides; these names are aphelion for the farthest and perihelion for the nearest point in the solar orbit. The Moon's two apsides are the farthest point, apogee, and the nearest point, perigee, of its orbit around the host Earth. Earth's two apsides are the farthest point, aphelion, and the nearest point, perihelion, of its orbit around the host Sun. The terms aphelion and perihelion apply in the same way to the orbits of Jupiter and the other planets, the comets, and the asteroids of the Solar System.
General description
There are two apsides in any elliptic orbit. The name for each apsis is created from the prefixes ap-, apo- () for the farthest or peri- () for the closest point to the primary body, with a suffix that describes the primary body. The suffix for Earth is -gee, so the apsides' names are apogee and perigee. For the Sun, the suffix is -helion, so the names are aphelion and perihelion.
According to Newton's laws of motion, all periodic orbits are ellipses. The barycenter of the two bodies may lie well within the bigger body—e.g., the Earth–Moon barycenter is about 75% of the way from Earth's center to its surface. If, compared to the larger mass, the smaller mass is negligible (e.g., for satellites), then the orbital parameters are independent of the smaller mass.
When used as a suffix—that is, -apsis—the term can refer to the two distances from the primary body to the orbiting body when the latter is located: 1) at the periapsis point, or 2) at the apoapsis point (compare both graphics, second figure). The line of apsides denotes the line that joins the nearest and farthest points of an orbit; it also refers simply to the extreme range of an object orbiting a host body (see top figure; see third figure).
In orbital mechanics, the apsides technically refer to the distance measured between the barycenter of the 2-body system and the center of mass of the orbiting body. However, in the case of a spacecraft, the terms are commonly used to refer to the orbital altitude of the spacecraft above the surface of the central body (assuming a constant, standard reference radius).
Terminology
The words "pericenter" and "apocenter" are often seen, although periapsis/apoapsis are preferred in technical usage.
For generic situations where the primary is not specified, the terms pericenter and apocenter are used for naming the extreme points of orbits (see table, top figure); periapsis and apoapsis (or apapsis) are equivalent alternatives, but these terms also frequently refer to distances—that is, the smallest and largest distances between the orbiter and its host body (see second figure).
For a body orbiting the Sun, the point of least distance is the perihelion (), and the point of greatest distance is the aphelion (); when discussing orbits around other stars the terms become periastron and apastron.
When discussing a satellite of Earth, including the Moon, the point of least distance is the perigee (), and of greatest distance, the apogee (from Ancient Greek: Γῆ (Gē), "land" or "earth").
For objects in lunar orbit, the point of least distance is called the pericynthion () and the point of greatest distance the apocynthion (). The terms perilune and apolune, as well as periselene and aposelene, are also used. Since the Moon has no natural satellites, these terms apply only to man-made objects.
Etymology
The words perihelion and aphelion were coined by Johannes Kepler to describe the orbital motions of the planets around the Sun.
The words are formed from the prefixes peri- (Greek: περί, near) and apo- (Greek: ἀπό, away from), affixed to the Greek word for the Sun, (ἥλιος, or hēlíos).
Various related terms are used for other celestial objects. The suffixes -gee, -helion, -astron and -galacticon are frequently used in the astronomical literature when referring to the Earth, Sun, stars, and the Galactic Center respectively. The suffix -jove is occasionally used for Jupiter, but -saturnium has very rarely been used in the last 50 years for Saturn. The -gee form is also used as a generic closest-approach-to "any planet" term—instead of applying it only to Earth.
During the Apollo program, the terms pericynthion and apocynthion were used when referring to orbiting the Moon; they reference Cynthia, an alternative name for the Greek Moon goddess Artemis. More recently, during the Artemis program, the terms perilune and apolune have been used.
Regarding black holes, the term peribothron was first used in a 1976 paper by J. Frank and M. J. Rees, who credit W. R. Stoeger for suggesting a term based on the Greek word for pit: "bothron".
The terms perimelasma and apomelasma (from a Greek root) were used by physicist and science-fiction author Geoffrey A. Landis in a story published in 1998, thus appearing before perinigricon and aponigricon (from Latin) in the scientific literature in 2002.
Terminology summary
The suffixes shown below may be added to prefixes peri- or apo- to form unique names of apsides for the orbiting bodies of the indicated host/(primary) system. However, only for the Earth, Moon and Sun systems are the unique suffixes commonly used. Exoplanet studies commonly use -astron, but typically, for other host systems the generic suffix, -apsis, is used instead.
Perihelion and aphelion
The perihelion (q) and aphelion (Q) are the nearest and farthest points respectively of a body's direct orbit around the Sun.
Comparing osculating elements at a specific epoch to those at a different epoch will generate differences. The time-of-perihelion-passage as one of six osculating elements is not an exact prediction (other than for a generic two-body model) of the actual minimum distance to the Sun using the full dynamical model. Precise predictions of perihelion passage require numerical integration.
Inner planets and outer planets
The two images below show the orbits, orbital nodes, and positions of perihelion (q) and aphelion (Q) for the planets of the Solar System as seen from above the northern pole of Earth's ecliptic plane, which is coplanar with Earth's orbital plane. The planets travel counterclockwise around the Sun and for each planet, the blue part of their orbit travels north of the ecliptic plane, the pink part travels south, and dots mark perihelion (green) and aphelion (orange).
The first image (below-left) features the inner planets, situated outward from the Sun as Mercury, Venus, Earth, and Mars. The reference Earth-orbit is colored yellow and represents the orbital plane of reference. At the time of vernal equinox, the Earth is at the bottom of the figure. The second image (below-right) shows the outer planets, being Jupiter, Saturn, Uranus, and Neptune.
The orbital nodes are the two end points of the "line of nodes" where a planet's tilted orbit intersects the plane of reference; here they may be 'seen' as the points where the blue section of an orbit meets the pink.
Lines of apsides
The chart shows the extreme range—from the closest approach (perihelion) to farthest point (aphelion)—of several orbiting celestial bodies of the Solar System: the planets, the known dwarf planets, including Ceres, and Halley's Comet. The length of each horizontal bar corresponds to the extreme range of the orbit of the indicated body around the Sun. These extreme distances (between perihelion and aphelion) are the lines of apsides of the orbits of various objects around a host body.
Earth perihelion and aphelion
Currently, the Earth reaches perihelion in early January, approximately 14 days after the December solstice. At perihelion, the Earth's center is about 0.983 astronomical units (AU), or roughly 147.1 million kilometers, from the Sun's center. In contrast, the Earth reaches aphelion currently in early July, approximately 14 days after the June solstice. The aphelion distance between the Earth's and Sun's centers is currently about 1.017 AU, or roughly 152.1 million kilometers.
The dates of perihelion and aphelion change over time due to precession and other orbital factors, which follow cyclical patterns known as Milankovitch cycles. In the short term, such dates can vary up to 2 days from one year to another. This significant variation is due to the presence of the Moon: while the Earth–Moon barycenter is moving on a stable orbit around the Sun, the position of the Earth's center, which is on average about 4,700 kilometers from the barycenter, could be shifted in any direction from it—and this affects the timing of the actual closest approach between the Sun's and the Earth's centers (which in turn defines the timing of perihelion in a given year).
Because of the increased distance at aphelion, only 93.55% of the radiation from the Sun falls on a given area of Earth's surface as does at perihelion, but this does not account for the seasons, which result instead from the tilt of Earth's axis of 23.4° away from perpendicular to the plane of Earth's orbit. Indeed, at both perihelion and aphelion it is summer in one hemisphere while it is winter in the other one. Winter falls on the hemisphere where sunlight strikes least directly, and summer falls where sunlight strikes most directly, regardless of the Earth's distance from the Sun.
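As a quick illustration (not part of the original article), the roughly 93.55% figure follows from the inverse-square dependence of solar irradiance on distance; the perihelion and aphelion distances used below are approximate illustrative values in astronomical units:

    # Minimal sketch: irradiance scales as 1/r^2, so the aphelion irradiance
    # relative to perihelion is (r_perihelion / r_aphelion)^2.
    # The distances below are approximate illustrative values, not article data.
    r_perihelion = 0.9833  # AU (approximate)
    r_aphelion = 1.0167    # AU (approximate)

    relative_irradiance = (r_perihelion / r_aphelion) ** 2
    print(f"{relative_irradiance:.4f}")  # ~0.935, i.e. about 93.5% of the perihelion value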
In the northern hemisphere, summer occurs at the same time as aphelion, when solar radiation is lowest. Despite this, summers in the northern hemisphere are on average warmer than in the southern hemisphere, because the northern hemisphere contains larger land masses, which are easier to heat than the seas.
Perihelion and aphelion do however have an indirect effect on the seasons: because Earth's orbital speed is minimum at aphelion and maximum at perihelion, the planet takes longer to orbit from June solstice to September equinox than it does from December solstice to March equinox. Therefore, summer in the northern hemisphere lasts slightly longer (93 days) than summer in the southern hemisphere (89 days).
Astronomers commonly express the timing of perihelion relative to the First Point of Aries not in terms of days and hours, but rather as an angle of orbital displacement, the so-called longitude of the periapsis (also called longitude of the pericenter). For the orbit of the Earth, this is called the longitude of perihelion, and in 2000 it was about 282.895°; by 2010, this had advanced by a small fraction of a degree to about 283.067°, i.e. a mean increase of 62" per year.
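As a minimal check (not part of the original article), the quoted mean rate of advance follows directly from the two longitudes given above and the ten-year interval between them:

    # Minimal sketch: mean advance of Earth's longitude of perihelion between
    # the two epochs quoted above (2000 and 2010), converted to arcseconds/year.
    lon_2000 = 282.895  # degrees
    lon_2010 = 283.067  # degrees

    rate_arcsec_per_year = (lon_2010 - lon_2000) / 10 * 3600
    print(f"{rate_arcsec_per_year:.0f} arcseconds per year")  # ~62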
For the orbit of the Earth around the Sun, the time of apsis is often expressed in terms of a time relative to seasons, since this determines the contribution of the elliptical orbit to seasonal variations. The variation of the seasons is primarily controlled by the annual cycle of the elevation angle of the Sun, which is a result of the tilt of the axis of the Earth measured from the plane of the ecliptic. The Earth's eccentricity and other orbital elements are not constant, but vary slowly due to the perturbing effects of the planets and other objects in the solar system (Milankovitch cycles).
On a very long time scale, the dates of the perihelion and of the aphelion progress through the seasons, making one complete cycle in 22,000 to 26,000 years. There is a corresponding slow rotation of the orbit's line of apsides relative to the stars, known as apsidal precession. (This is closely related to the precession of Earth's axis.) The dates and times of the perihelions and aphelions for several past and future years are listed in the following table:
Other planets
The following table shows the distances of the planets and dwarf planets from the Sun at their perihelion and aphelion.
Mathematical formulae
These formulae characterize the pericenter and apocenter of an orbit:
Pericenter: maximum speed, $v_\text{per} = \sqrt{\tfrac{(1+e)\,\mu}{(1-e)\,a}}$, at minimum (pericenter) distance, $r_\text{per} = (1-e)\,a$.
Apocenter: minimum speed, $v_\text{apo} = \sqrt{\tfrac{(1-e)\,\mu}{(1+e)\,a}}$, at maximum (apocenter) distance, $r_\text{apo} = (1+e)\,a$.
While, in accordance with Kepler's laws of planetary motion (based on the conservation of angular momentum) and the conservation of energy, these two quantities are constant for a given orbit:
Specific relative angular momentum: $h = \sqrt{(1-e^2)\,\mu a}$
Specific orbital energy: $\varepsilon = -\tfrac{\mu}{2a}$
where:
$r_\text{apo}$ is the distance from the apocenter to the primary focus
$r_\text{per}$ is the distance from the pericenter to the primary focus
a is the semi-major axis: $a = \tfrac{r_\text{per} + r_\text{apo}}{2}$
μ is the standard gravitational parameter
e is the eccentricity, defined as $e = \tfrac{r_\text{apo} - r_\text{per}}{r_\text{apo} + r_\text{per}} = 1 - \tfrac{2}{\tfrac{r_\text{apo}}{r_\text{per}} + 1}$
Note that for conversion from heights above the surface to distances between an orbit and its primary, the radius of the central body has to be added, and conversely.
The arithmetic mean of the two limiting distances is the length of the semi-major axis a. The geometric mean of the two distances is the length of the semi-minor axis b.
The geometric mean of the two limiting speeds is $\sqrt{v_\text{per}\,v_\text{apo}} = \sqrt{-2\varepsilon} = \sqrt{\tfrac{\mu}{a}}$,
which is the speed of a body in a circular orbit whose radius is $a$.
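To make these relations concrete, here is a minimal Python sketch (not part of the original article) that evaluates the apsis distances and speeds from a semi-major axis and eccentricity. The gravitational parameter and Earth orbital elements used are approximate illustrative values, and the function names are ad hoc:

    # Minimal sketch of the apsis formulae above, under the stated assumptions.
    import math

    AU = 1.496e11        # m, one astronomical unit (approximate)
    MU_SUN = 1.327e20    # m^3/s^2, Sun's standard gravitational parameter (approximate)
    A_EARTH = 1.0 * AU   # Earth's semi-major axis (approximate)
    E_EARTH = 0.0167     # Earth's orbital eccentricity (approximate)

    def apsis_distances(a, e):
        """Pericenter and apocenter distances from the primary focus."""
        return (1 - e) * a, (1 + e) * a

    def apsis_speeds(a, e, mu):
        """Maximum (pericenter) and minimum (apocenter) orbital speeds."""
        v_per = math.sqrt((1 + e) * mu / ((1 - e) * a))
        v_apo = math.sqrt((1 - e) * mu / ((1 + e) * a))
        return v_per, v_apo

    r_per, r_apo = apsis_distances(A_EARTH, E_EARTH)
    v_per, v_apo = apsis_speeds(A_EARTH, E_EARTH, MU_SUN)

    # The geometric mean of the limiting speeds equals the circular-orbit speed at radius a.
    assert math.isclose(math.sqrt(v_per * v_apo), math.sqrt(MU_SUN / A_EARTH))

    # Converting an altitude above the surface to an apsis distance requires adding
    # the central body's radius (and vice versa), as noted above.
    print(f"r_per ~ {r_per / AU:.4f} AU, r_apo ~ {r_apo / AU:.4f} AU")
    print(f"v_per ~ {v_per / 1000:.2f} km/s, v_apo ~ {v_apo / 1000:.2f} km/s")

With these inputs the sketch returns roughly 0.983 AU and 1.017 AU for the distances and about 30.3 km/s and 29.3 km/s for the speeds, consistent with the Earth perihelion and aphelion figures discussed above.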
Time of perihelion
Orbital elements such as the time of perihelion passage are defined at the epoch chosen using an unperturbed two-body solution that does not account for the n-body problem. An accurate time of perihelion passage therefore requires an epoch chosen close to the perihelion passage. For example, using an epoch of 1996, Comet Hale–Bopp shows perihelion on 1 April 1997. Using an epoch of 2008 shows a less accurate perihelion date of 30 March 1997. Short-period comets can be even more sensitive to the epoch selected. Using an epoch of 2005 shows 101P/Chernykh coming to perihelion on 25 December 2005, but using an epoch of 2012 produces a less accurate unperturbed perihelion date of 20 January 2006.
Numerical integration shows dwarf planet Eris will come to perihelion around December 2257. Using an epoch of 2021, which is 236 years early, less accurately shows Eris coming to perihelion in 2260.
4 Vesta came to perihelion on 26 December 2021, but using a two-body solution at an epoch of July 2021 less accurately shows Vesta came to perihelion on 25 December 2021.
Short arcs
Trans-Neptunian objects discovered when 80+ AU from the Sun need dozens of observations over multiple years to constrain their orbits well, because they move very slowly against the background stars. Due to statistics of small numbers, a trans-Neptunian object with only 8 observations over an observation arc of 1 year, and which has not or will not come to perihelion for roughly 100 years, can have a large 1-sigma uncertainty in the perihelion date.
| Physical sciences | Celestial mechanics | Astronomy |
88295 | https://en.wikipedia.org/wiki/Golden%20eagle | Golden eagle | The golden eagle (Aquila chrysaetos) is a bird of prey living in the Northern Hemisphere. It is the most widely distributed species of eagle. Like all eagles, it belongs to the family Accipitridae. They are one of the best-known birds of prey in the Northern Hemisphere. These birds are dark brown, with lighter golden-brown plumage on their napes. Immature eagles of this species typically have white on the tail and often have white markings on the wings. Golden eagles use their agility and speed combined with powerful feet and large, sharp talons to hunt a variety of prey, mainly hares, rabbits, and marmots and other ground squirrels.
Golden eagles maintain home ranges or territories that may be as large as . They build large nests in cliffs and other high places to which they may return for several breeding years. Most breeding activities take place in the spring; they are monogamous and may remain together for several years or possibly for life. Females lay up to four eggs, and then incubate them for six weeks. Typically, one or two young survive to fledge in about three months. These juvenile golden eagles usually attain full independence in the fall, after which they wander widely until establishing a territory for themselves in four to five years.
Once widespread across the Holarctic, it has disappeared from many areas that are heavily populated by humans. Despite being extirpated from or uncommon in some of its former range, the species is still widespread, being present in sizeable stretches of Eurasia, North America, and parts of North Africa. It is the largest and least populous of the five species of true accipitrid to occur as a breeding species in both the Palearctic and the Nearctic.
For centuries, this species has been one of the most highly regarded birds used in falconry. Because of its hunting prowess, the golden eagle is regarded with great mystic reverence in some ancient, tribal cultures. It is one of the most extensively studied species of raptor in the world in some parts of its range, such as the Western United States and the Western Palearctic.
Taxonomy and systematics
This species was first described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae as Falco chrysaetos. Since birds were grouped largely on superficial characteristics at that time, many species were grouped by Linnaeus into the genus Falco. The type locality was given simply as "Europa"; it was later restricted to Sweden. It was moved to the new genus Aquila by French ornithologist Mathurin Jacques Brisson in 1760. Aquila is Latin for "eagle", possibly derived from aquilus, "dark in colour" and chrysaetos is Ancient Greek for the golden eagle from khrusos, "gold" and aetos, "eagle".
The golden eagle is part of a broad group of raptors called "booted eagles" which are defined by the feature that all species have feathering over their tarsus, unlike many other accipitrids which have bare legs. Included in this group are all species described as "hawk eagles" including the genera Spizaetus and Nisaetus, as well as assorted monotypical genera such as Oroaetus, Lophaetus, Stephanoaetus, Polemaetus, Lophotriorchis and Ictinaetus.
The genus Aquila is distributed across every continent except South America and Antarctica. Up to 20 species have been classified in the genus, but more recently the taxonomic placement of some of the traditional species has been questioned. Traditionally, the Aquila eagles have been grouped superficially as largish, mainly brownish or dark-colored booted eagles that vary little in transition from their juvenile to their adult plumages. Genetic research has recently indicated the golden eagle is included in a clade with Verreaux's eagle in Africa as well as the Gurney's eagle (A. gurneyi) and the wedge-tailed eagle (clearly part of an Australasian radiation of the lineage). The identification of this particular clade had long been suspected based on similar morphological characteristics amongst these large-bodied species. More surprisingly, the smaller, much paler-bellied sister species Bonelli's eagle (A. fasciatus) and African hawk-eagle (A. spilogaster), previously included in the genus Hieraaetus, have been revealed to be genetically much closer to the Verreaux's and golden eagle lineage than to other species traditionally included in the genus Aquila. Other largish Aquila species, the eastern imperial, the Spanish imperial, the tawny and the steppe eagles, are now thought to form a separate, close-knit clade, which attained some similar characteristics to the prior clade via convergent evolution.
Genetically, the "spotted eagles" (A. pomarina, hastata and clanga) have been discovered to be more closely related to the long-crested eagle (Lophaetus occipitalis) and the black eagle (Ictinaetus malayensis), and many generic reassignments have been advocated. The genus Hieraaetus, including the booted eagle (H. pennatus), little eagle (H. morphnoides) and Ayres's hawk-eagle (H. ayresii), consists of much smaller species that are in fact the smallest birds called eagles outside of the unrelated Spilornis serpent-eagle genus. This genus has recently been eliminated by many authorities and is now occasionally also included in Aquila, although not all ornithological unions have followed suit in this re-classification. The small-bodied Wahlberg's eagle (H. wahlbergi) has been traditionally considered an Aquila species due to its lack of change from juvenile to adult plumage and brownish color, but it is actually genetically aligned to the Hieraaetus lineage. Cassin's hawk-eagle (H. africanus) is also probably closely related to the Hieraaetus group rather than the Spizaetus/Nisaetus "hawk-eagle" group (in which it was previously classified), which is not known to have radiated to Africa.
Subspecies and distribution
There are six extant subspecies of golden eagle that differ slightly in size and plumage. Individuals of any of the subspecies are somewhat variable and the differences between the subspecies are clinal, especially in terms of body size. Other than these characteristics, there is little variation across the range of the species. Some recent studies have gone so far as to propose that only two subspecies be recognized based on genetic markers: Aquila chrysaetos chrysaetos (including A. c. homeyeri) and A. c. canadensis (including A. c. japonica, A. c. daphanea and A. c. kamtschatica).
Aquila chrysaetos chrysaetos (Linnaeus, 1758) – sometimes referred to as the European golden eagle. This is the nominate subspecies. This subspecies is found almost throughout Europe, including the British Isles (mainly in Scotland), the majority of Scandinavia, southern and northernmost France, Italy and Austria. In Eastern Europe, it is found from Estonia to Romania, Greece, Serbia and Bulgaria in southeastern Europe. It is also distributed through European Russia, reportedly reaching its eastern limit around the Yenisei River in Russia, also ranging south at a similar longitude into western Kazakhstan and northern Iran. Male wing length is from , averaging , and female wing length is from , averaging . Males weigh from , averaging , and females weigh from , averaging . The male of this subspecies has a wingspan of , with an average of , with the female's typical wingspan range is , with an average of . This is a medium-sized subspecies and is the palest. As opposed to golden eagles found further east in Eurasia, the adults of this subspecies are a tawny golden-brown on the upperside. The nape patch is often gleaming golden in color and the feathers here are exceptionally long.
Aquila chrysaetos homeyeri Severtzov, 1888 – commonly known as the Iberian golden eagle. This subspecies occurs in almost the entirety of the Iberian Peninsula as well as the island of Crete, though it is absent from the rest of continental Europe. It also ranges in North Africa in a narrow sub-coastal strip from Morocco to Tunisia. A completely isolated population of golden eagles is found in Ethiopia's Bale Mountains, at the southern limit of the species' range worldwide. Although this latter population has not been formally assigned to a subspecies, there is a high probability that it belongs with A. c. homeyeri. This subspecies also ranges in much of Asia Minor, mainly Turkey, spottily through the Middle East and the Arabian Peninsula into northern Yemen and Oman to its eastern limits throughout the Caucasus, much of Iran and north to southwestern Kazakhstan. Male wing length is from , averaging , and female wing length is from , averaging . Weight is from with no known reports of average masses. This subspecies is slightly smaller and darker plumaged than the nominate subspecies, but it is not as dark as the golden eagles found further to the east. The forehead and crown are dark brownish, with the nape patch being short-feathered and a relatively light rusty color.
Aquila chrysaetos daphanea Severtzov, 1888 – known variously as the Asian golden eagle, Himalayan golden eagle or berkut. This subspecies is distributed in central Kazakhstan, eastern Iran, and the easternmost Caucasus, distributed to Manchuria and central China and along the Himalayas from northern Pakistan to Bhutan and discontinuing in northeastern Myanmar (rarely ranging over into northernmost India). This subspecies is the largest on average. Male wing length is from , averaging , and female wing length is from , averaging . No range of body weights are known, but males will weigh approximately and females . Although the wingspan of this subspecies reportedly averages , some individuals can have much longer wings. One female berkut had an authenticated wingspan of , although she was a captive specimen. It is generally the second-darkest subspecies, being blackish on the back. The forehead and crown are dark with a blackish cap near the end of the crown. The feathers of the nape and top-neck are rich brown-red. The nape feathers are slightly shorter than in the nominate subspecies and are similar in length to A. c. homeyeri.
Aquila chrysaetos japonica Severtzov, 1888 – commonly known as the Japanese golden eagle. This subspecies is found in northern Japan (the islands of Honshu, Hokkaido and discontinuously in Kyushu) and undefined parts of Korea. Male wing length is from , averaging , and female wing length is from , averaging . No range of body weights are known, but males will weigh approximately and females . This is, by far, the smallest-bodied subspecies. It is also the darkest, with even adults being a slaty-grayish black on the back and crown and juveniles being similar, but with darker black plumage contrasting with brownish color and white scaling on the wings, flank and tail. This subspecies has bright rufous nape feathers that are quite loose and long. Adult Japanese golden eagles often maintain extensive white mottling on the inner-webs of the tail that tend to be more typical of juvenile eagles in other subspecies.
Aquila chrysaetos canadensis (Linnaeus, 1758) – commonly known as the North American golden eagle. Occupies the species' entire range in North America, which comprises the great majority of Alaska, western Canada, Western United States and Mexico. The species is found breeding occasionally in all Canadian provinces except Nova Scotia. It is currently absent in the Eastern United States as a breeding species east of a line from North Dakota down through westernmost Nebraska and Oklahoma to West Texas. The southern limits of its range are in central Mexico, from the Guadalajara area in the west to the Tampico area in the east; it is the "Mexican eagle" featured on the coat of arms of Mexico. It is the subspecies with the largest breeding range and is probably the most numerous subspecies, especially if A. c. kamtschatica is included. Male wing length is from , averaging , and female wing length is from , averaging . The average wingspan in both sexes is about . Males weigh from , averaging , and females typically weigh from , averaging . The subspecies does not appear to follow Bergmann's rule (the rule that widely distributed organisms are larger-bodied further away from the Equator), as specimens of both sexes from Idaho had a mean weight of and were slightly heavier than those from Alaska, with a mean weight of . It is medium-sized, being generally intermediate in size between the nominate and A. c. homeyeri, but with much overlap. It is blackish to dark brown on the back. The long feathers of the nape and top-neck are rusty-reddish and slightly narrower and darker than in the nominate subspecies.
Aquila chrysaetos kamtschatica Severtzov, 1888 – sometimes referred to as the Siberian golden eagle or the Kamchatkan golden eagle. This subspecies ranges from Western Siberia (where overlap with A. c. chrysaetos is probable), across most of Russia, including the Altay (spilling over into Northern Mongolia), to the Kamchatka Peninsula and the Anadyrsky District. This subspecies is often included in A. c. canadensis. Male wing length is from , averaging , and female wing length is from , averaging . No weights are known in this subspecies. The coloration of these eagles is almost exactly the same as in A. c. canadensis. The main difference is that this subspecies is much larger in size, being nearly the equal of A. c. daphanea if going on wing-length.
The larger Middle Pleistocene golden eagles of France (and possibly elsewhere) are referred to a paleosubspecies Aquila chrysaetos bonifacti, and the huge specimens of the Late Pleistocene of Liko Cave (Crete) have been named Aquila chrysaetos simurgh (Weesie, 1988). Similarly, an ancestral golden eagle, with a heavier, broader skull, larger wings and shorter legs when compared to modern birds, has been found in the La Brea Tar Pits of southern California.
Description
Size
The golden eagle is a very large raptor, in length. Its wings are broad and the wingspan is . The wingspan of golden eagles is the fifth largest among living eagle species. Females are larger than males, with a bigger difference in larger subspecies. Females of the large Himalayan golden eagles are about 37% heavier than males and have nearly 9% longer wings, whereas in the smaller Japanese golden eagles, females are only 26% heavier with around 6% longer wings. In the largest subspecies (A. c. daphanea), males and females weigh typically , respectively. In the smallest subspecies, A. c. japonica, males weigh and females . In the species overall, males average around and females around . The maximum size of golden eagles is debated. Large subspecies are the heaviest representatives of the genus Aquila and this species is on average the seventh-heaviest living eagle species. The golden eagle is the second heaviest breeding eagle in North America, Europe and Africa and the fourth heaviest in Asia. For some time, the largest known mass authenticated for a wild female was the specimen from the A. c. chrysaetos subspecies which weighed around and spanned across the wings. American golden eagles are typically somewhat smaller than the large Eurasian species, but a massive female that was banded and released in 2006 around Wyoming's Bridger-Teton National Forest became the heaviest wild golden eagle on record, at . Captive birds have been measured with a wingspan of and a mass of , though this mass was for an eagle bred for falconry, which tend to be unnaturally heavy.
The standard measurements of the species include a wing chord length of , a tail length of and a tarsus length of . The culmen (upper ridge of beak) reportedly averages around , with a range of . The bill length from the gape measures around . The long, straight and powerful hallux-claw (hind claw) can range from , about one centimetre longer than in a bald eagle and a little more than one centimetre less than a harpy eagle.
Colour
Adults of both sexes have similar plumage and are primarily dark brown, with some grey on the inner wing and tail, and a paler, typically golden colour on the back of the crown and nape that gives the species its common name. Unlike other Aquila species, where the tarsal feathers are typically similar in colour to the rest of the plumage, the tarsal feathers of golden eagles tend to be paler, ranging from light golden to white. In addition, some full-grown birds (especially in North America) have white "epaulettes" on the upper part of each scapular feather tract. The bill is dark at the tip, fading to a lighter horn colour, with a yellow cere. As in many accipitrids, the bare portion of the feet is yellow. There are subtle differences in colouration among subspecies, described below.
Juvenile golden eagles are similar to adults but tend to be darker, appearing black on the back especially in East Asia. They have a less faded colour. Young birds are white for about two-thirds of their tail length, ending with a broad, black band. Occasionally, juvenile eagles have white patches on the remiges at the bases of the inner primaries and the outer secondaries, forming a crescent marking on the wings which tends to be divided by darker feathers. Rarely, juvenile birds may have only traces of white on the tail. Compared to the relatively consistently white tail, the white patches on the wing are extremely variable; some juveniles have almost no white visible. Juveniles of less than 12 months of age tend to have the most white in their plumage. By their second summer, the white underwing coverts are usually replaced by a characteristic rusty brown colour. By the third summer, the upper-wing coverts are largely replaced by dark brown feathers, although not all feathers moult at once which leaves many juvenile birds with a grizzled pattern. The tail follows a similar pattern of maturation to the wings. Due to the variability between individuals, juvenile eagles cannot be reliably aged by sight alone. Many golden eagles still have white on the tail during their first attempt at nesting. The final adult plumage is not fully attained until the birds are between and years old.
Moulting
This species moults gradually beginning in March or April until September or October each year. Moulting usually decreases in winter. Moult of the contour feathers begins on the head and neck region and progresses along the feather tracts in a general front-to-back direction. Feathers on head, neck, back and scapulars may be replaced annually. With large feathers of the wing and tail, moult begins with the innermost feathers and proceeds outwards in a straightforward manner known as "descendant" moult.
Vocalisations
While many accipitrids are not known for their strong voices, golden eagles have a particular tendency for silence, even while breeding. That being said, some vocalisation has been recorded, usually centering around the nesting period. The voice of the golden eagle is considered weak, high, and shrill, has been called "quite pathetic" and "puppy-like", and seems incongruous with the formidable size and nature of the species. Most known vocalisations seem to function as contact calls between eagles, sometimes adults to their offspring, occasionally territorial birds to intruders and rarely between a breeding pair. In western Montana, nine distinct calls were noted, among them a chirp, a cluck, a wonk, a honk and a hiss.
Flight
Golden eagles are sometimes considered the best fliers among eagles and perhaps among all raptorial birds. They are equipped with broad, long wings with somewhat finger-like indentations on the tips of the wing. Golden eagles are unique among their genus in that they often fly in a slight dihedral, which means the wings are often held in a slight, upturned V-shape. When they need to flap, golden eagles appear at their most laboured, but this is less common than soaring or gliding. Flapping flight usually consists of 6–8 deep wing-beats, interspersed with 2–3 second glides. While soaring, the wings and tail are held in one plane with the primary tips often spread. A typical, unhurried soaring speed in golden eagles is around . When hunting or displaying, the golden eagle can glide very fast, reaching speeds of up to . When stooping (diving) in the direction of prey or during territorial displays, the eagle holds its legs up against its tail, and holds its wings tight and partially closed against its body. When diving after prey, a golden eagle can reach . Although less agile and manoeuvrable, the golden eagle is apparently quite the equal and possibly even the superior of the peregrine falcon's stooping and gliding speeds. This makes the golden eagle one of the two fastest living animals. Although most flight in golden eagles has a clear purpose (e.g., territoriality, hunting), some flights, such as those by solitary birds or between well-established breeding pairs, seem to be play.
Distinguishing from other species
Size readily distinguishes this species from most other raptors when it is seen well. Most other raptors are considerably smaller. Buteo hawks, which are perhaps most similar to the golden eagle in structure among the species outside of the "booted eagle" group, are often counted among the larger very common raptors. However, a mid-sized Buteo is dwarfed by a golden eagle, as an adult female eagle has about double the wingspan and about five times the weight. Buteos are also usually distinctly paler below, although some species occur in dark morphs which can be darker than a golden eagle. Among raptorial birds that share the golden eagle's range, only some Old World vultures and the California condor are distinctly larger, with longer, broader wings, typically held more evenly in a slower, less forceful flight; they often have dramatically different colour patterns. In North America, the golden eagle may be confused with the turkey vulture from a great distance, as it is a large species that, like the golden eagle, often flies with a pronounced dihedral. The turkey vulture can be distinguished by its less controlled, forceful flying style (they frequently rock back and forth unsteadily in even moderate winds) and its smaller, thinner body, much smaller head and, at closer range, its slaty black-brown colour and silvery wing secondaries. Compared to Haliaeetus eagles, the golden eagle has wings that are only somewhat more slender but are more hawk-like and lack the flat, plank-like wing positioning seen in the other genus. Large northern Haliaeetus species usually have a larger bill and larger head which protrudes more distinctly than a golden eagle's in flight. The tail of the golden eagle is longer on average than those of Haliaeetus eagles, appearing to be two or three times the length of the head in soaring flight, whereas in the other eagles the head is often more than twice the length of the tail. Confusion is most likely between juvenile Haliaeetus and golden eagles, as the adult golden has a more solidly golden-brown coloration and all Haliaeetus eagles have obvious distinctive plumages as adults. Haliaeetus eagles are often heavily streaked in their juvenile phase. Juvenile golden eagles can have large patches of white on their wings and tail that are quite different from the random, sometimes large and splotchy-looking distribution of white typical of juvenile Haliaeetus.
Distinguishing the golden eagle from other Aquila eagles in Eurasia is more difficult. Identification may rely on the golden eagle's relatively long tail and patterns of white or grey on the wings and tail. Unlike golden eagles, other Aquila eagles do not generally fly in a pronounced dihedral. At close range, the golden to rufous nape-shawl of the golden eagle is distinctive from other Aquila. Most other Aquila eagles have darker plumage, although the smaller tawny eagle is often paler than the golden eagle (the overlap in range is verified only in Bale Mountains, Ethiopia). Among Eurasian Aquila, the adult eastern imperial and Spanish imperial eagle come closest to reaching the size of golden eagles, but both are distinguished by their longer necks, flatter wings in flight, white markings on their shoulder forewing-coverts, paler cream-straw coloured nape patch and generally darker colouration. Juvenile imperial eagles are much paler overall (caramel-cream in the Spanish; cream and tawny streaks in the eastern) and are not likely to be confused. Steppe eagles can also approach golden eagles in size but are more compact and smaller headed with little colour variation to their dark earth-brown plumage, apart from juvenile birds which have distinctive cream-coloured bands running through their coverts and secondaries. Verreaux's eagles are most similar in size and body shape to the golden, the body of the Verreaux's eagle being slightly longer overall but marginally less heavy and long-winged than the golden eagle's. The plumage is very distinctly different, however, as Verreaux's eagles are almost entirely jet-black except for some striking, contrasting white on the wing primaries, shoulders and upper-wing. This closely related species is known to co-occur with the golden eagle only in the Bale Mountains of Ethiopia. Other booted eagles in the golden eagle's range are unlikely to be confused due to differences in size and form. The only species in the genus Aquila that exceeds the golden eagle in average wingspan and length is the wedge-tailed eagle of Australasia; however, the wedge-tailed eagle is a slightly less heavy bird.
Habitat and distribution
Golden eagles are fairly adaptable in habitat but often reside in areas with a few shared ecological characteristics. They are best suited to hunting in open or semi-open areas and search them out year-around. Native vegetation seems to be attractive to them and they typically avoid developed areas of any type from urban to agricultural as well as heavily forested regions. In desolate areas (e.g., the southern Yukon), they can occur regularly at roadkills and garbage dumps. The largest numbers of golden eagles are found in mountainous regions today, with many eagles doing a majority of their hunting and nesting on rock formations. However, they are not solely tied to high elevations and can breed in lowlands if the local habitats are suitable. Below are more detailed descriptions of habitats occupied by golden eagles in both continents where they occur.
Eurasia
In the Arctic fringe of Eurasia, golden eagles occur along the edge of the tundra and the taiga from the Kola peninsula to Anadyr in eastern Siberia, nesting in forests and hunting over nearby arctic heathland. Typical vegetation is stunted, fragmented larch woodland merging into low birch-willow scrub and various heathland. In the rocky, wet, windy maritime climate of Scotland, Ireland, and western Scandinavia, the golden eagle dwells in mountains. These areas include upland grasslands, blanket bog, and sub-Arctic heaths but also fragmented woodland and woodland edge, including boreal forests. In Western Europe, golden eagle habitat is dominated by open, rough grassland, heath and bogs, and rocky ridges, spurs, crags, scree, slopes and grand plateaux. In Sweden, Finland, the Baltic States, Belarus and almost the entire distribution in Russia all the way to the Pacific Ocean, golden eagles occur sparsely in lowland taiga forest. These areas are dominated by stands of evergreens such as pine, larch and spruce, occasionally supplemented by birch and alder stands in southern Scandinavia and the Baltic States. This is largely marginal country for golden eagles and they occur where tree cover is thin and abuts open habitat. Golden eagle taiga habitat usually consists of extensive peatland formations caused by poorly drained soils.
In central Europe, golden eagles today occur almost exclusively in the major mountain ranges, such as the Pyrenees, Alps, Carpathians, and the Caucasus. Here, the species nests near the tree line and hunt subalpine and alpine pastures, grassland and heath above. Golden eagles also occur in moderately mountainous habitat along the Mediterranean Sea, from the Iberian Peninsula and the Atlas Mountains in Morocco, to Greece, Turkey and Iraq. This area is characterized by low mountains, Mediterranean maquis vegetation, and sub-temperate open woodland. The local pine-oak vegetation, with a variety of Sclerophyllous shrubs are well-adapted to prolonged summer droughts. From Turkey and the southern Caspian Sea to the foothills of the Hindu Kush Mountains in Afghanistan, the typical golden eagle habitat is temperate desert-like mountain ranges surrounded by steppe landscapes interspersed with forest. Here the climate is colder and more continental than around the Mediterranean.
Golden eagles occupy the alpine ranges from the Altai Mountains and the Pamir Mountains to Tibet, in the great Himalayan massif, and Xinjiang, China, where they occupy the Tien Shan range. In these mountain ranges, the species often lives at very high elevations, living above tree line at more than , often nesting in rocky scree and hunting in adjacent meadows. In Tibet, golden eagles inhabit high ridges and passes in the Lhasa River watershed, where they regularly join groups of soaring Himalayan vultures (Gyps himalayensis). One golden eagle was recorded circling at above sea-level in Khumbu in May 1975. In the mountains of Japan and Korea, the golden eagle occupies deciduous scrub woodland and carpet-like stands of Siberian dwarf pine (Pinus pumila) that merge into grasslands and alpine heathland.
The golden eagle occurs in mountains from the Adrar Plateau in Mauritania to northern Yemen and Oman, where the desert habitat is largely bereft of vegetation but offers many rocky plateaus to support both the eagles and their prey. In Israel, their habitat is mainly rocky slopes and wide wadi areas, chiefly in desert and to a lesser extent in semi-desert and Mediterranean climates, extending to open areas. In Northeastern Africa, the habitat is often of a sparse, desert-like character and is quite similar to the habitat in the Middle East and on the Arabian Peninsula. In Ethiopia's Bale Mountains, where the vegetation is more lush and the climate is clearly less arid than in Northeastern Africa, the golden eagle occupies verdant mountains.
North America
The biomes occupied by golden eagles in North America are broadly equivalent to those occupied in Eurasia. In western and northern Alaska and northern Canada to the Ungava Peninsula in Quebec, the eagles occupy the Arctic fringe of North America (the species does not range into the true high Arctic tundra), where open canopy gives way to dwarf-shrub heathland with cottongrass and tussock tundra. In land-locked areas of the sub-Arctic, golden eagles are by far the largest raptor. From the Alaska Range to Washington and Oregon, the species is often found in high mountains above the tree line or on bluffs and cliffs along river valleys below the tree line. In Washington state, golden eagles can be found in clear-cut sections of otherwise dense coniferous forest zones with relatively little annual precipitation. From east of the Canadian Rocky Mountains to the mountains of Labrador, the golden eagle is found in small numbers in boreal forest peatlands and similar mixed woodland areas. In the foothills of the Rocky Mountains in the United States are plains and prairies where golden eagles are widespread, especially where there is a low human presence. Here, grassland on low rolling hills and flat plains is typical, interrupted only by cottonwood stands around river valleys and wetlands where the eagles may build their nests.
Golden eagles also occupy the desert-like Great Basin from southern Idaho to northern Arizona and New Mexico. In this habitat, trees are generally absent other than junipers, with vegetation dominated by sagebrush (Artemisia) and other low shrub species. Although the vegetation varies a bit more, similar habitat is occupied by golden eagles in Mexico. However, golden eagles are typically absent in North America from true deserts, like the Sonoran Desert, where annual precipitation is less than . Golden eagles occupy the mountains and coastal areas of California and Baja California in Mexico, where hot, dry summers and moist winters are typical. The golden eagles here often nest in chaparral and oak woodland, oak savanna and grassland amongst low rolling hills typified by diverse vegetation. In the Eastern United States, the species once bred widely in the Appalachian Plateau near burns, open marshes, meadows, bogs and lakes. In Eastern North America, the species still breeds on the Gaspé Peninsula, Quebec. Until 1999, a pair of golden eagles was still known to nest in Maine, but the species is now believed to be absent as a breeding bird from the Eastern United States. The golden eagles that breed in eastern Canada winter on montane grass and heath fields in the Appalachian Plateau region, especially in Pennsylvania, New York, West Virginia, Maryland and Virginia. Most recent sightings in the Eastern United States are concentrated within or along the southwestern border of the Appalachian Plateau (30% of records) and within the Coastal Plain physiographic region (33% of records).
Though they do regularly nest in the marsh-like peatland of the boreal forest, golden eagles are not generally associated with wetlands and, in fact, they can be found near some of the most arid spots on earth. In the wintering population of Eastern United States, however, they are often associated with steep river valleys, reservoirs, and marshes in inland areas as well as estuarine marshlands, barrier islands, managed wetlands, sounds, and mouths of major river systems in coastal areas. These wetlands are attractive due to a dominance of open vegetation, large concentrations of prey, and the general absence of human disturbance. In the midwestern United States, they are not uncommon during winter near reservoirs and wildlife refuges that provide foraging opportunities at waterfowl concentrations.
Feeding
Golden eagles usually hunt during daylight hours, but were recorded hunting from one hour before sunrise to one hour after sunset during the breeding season in southwestern Idaho. The hunting success rate of golden eagles was calculated in Idaho, showing that, out of 115 hunting attempts, 20% were successful in procuring prey. A fully-grown golden eagle requires about of food per day but in the life of most eagles there are cycles of feast and famine, and eagles have been known to go without food for up to a week and then gorge on up to at one sitting.
The diet of golden eagles is composed primarily of small mammals such as rabbits, hares, ground squirrels, prairie dogs, and marmots. They also eat other birds (usually of medium size, such as gamebirds), reptiles, and fish in smaller numbers. Golden eagles occasionally capture large prey, including seals, ungulates, coyotes, and badgers. They have also been known to capture large flying birds such as geese or cranes, and to prey on other raptors, including owls and falcons.
Activity and movements
Despite the dramatic ways in which they attain food and interact with raptors of their own and other species, the daily life of golden eagles is often rather uneventful. In Idaho, adult male golden eagles were observed to sit awake on a perch for an average of 78% of daylight, whereas adult females sat on the nest or perched for an average of 85% of the day. During the peak of summer in Utah, hunting and territorial flights occurred mostly between 9:00 and 11:00 am and 4:00 and 6:00 pm, with the remaining 15 or so hours of daylight spent perching or resting. Golden eagles visit water sources for drinking, bathing, and preening, particularly during summer months. When conditions are heavily anticyclonic, there is less soaring during the day. During winter in Scotland, golden eagles soar frequently in order to scan the environment for carrion. In the more wooded environments of Norway during autumn and winter, much less aerial activity is reported, since the eagles tend to avoid detection by actively contour-hunting rather than looking for carrion. Golden eagles are believed to sleep through much of the night. Although usually highly solitary outside of the bond between breeding pairs, exceptionally cold weather in winter may cause eagles to let their usual guard down and perch together. The largest known congregation of golden eagles was observed on an extremely cold winter's night in eastern Idaho, when 124 individuals were observed perched closely along a line of 85 power poles.
Migration
Most populations of golden eagles are sedentary, but the species is actually a partial migrant. Golden eagles are a very hardy species, well adapted to cold climates, but they cannot abide declining available food sources in the northern stretches of their range. Eagles raised at latitudes greater than 60° N are usually migratory, though a short migration may be undertaken by those who breed or hatch at about 50° N. During migration, they often use soaring-gliding flight, rather than powered flight. In Finland, most banded juveniles move between due south, whereas adults stay locally through winter. Further east, conditions are too harsh for even wintering territorial adults. Golden eagles that breed from the Kola peninsula to Anadyr in the Russian Far East migrate south to winter on the Russian and Mongolian steppes and the North China Plain. The flat, relatively open landscapes in these regions hold relatively few resident breeding golden eagles. Similarly, the entire population of golden eagles from northern and central Alaska and northern Canada migrates south. At Mount Lorette in Alberta, approximately 4,000 golden eagles may pass during the fall, the largest recorded migration of golden eagles on Earth. Here the mountain ranges are relatively moderate and consistent, providing reliable thermals and updrafts that make long-distance migration feasible. Birds hatched in Denali National Park in Alaska traveled from to their winter ranges in western North America. These western migrants may winter anywhere from southern Alberta and Montana to New Mexico and Arizona and from inland California to Nebraska. Adults that bred in the northeastern Hudson Bay area of Canada reached their wintering grounds, which range from central Michigan to southern Pennsylvania to northeastern Alabama, in 26 to 40 days, with arrival dates from November to early December. The departure dates from wintering grounds are variable. In southwestern Canada, they leave their wintering grounds by 6 April to 8 May (the mean being 21 April); in southwestern Idaho, wintering birds leave from 20 March to 13 April (mean of 29 March); and in the Southwestern United States, wintering birds may depart by early March. Elsewhere in the species' breeding range, golden eagles (i.e., those that breed in the contiguous Western United States, all of Europe but for Northern Scandinavia, North Africa and all of Asia but for Northern Russia) are non-migratory and tend to remain within striking distance of their breeding territories throughout the year. In Scotland, among all recovered, banded golden eagles (36 out of 1000, the rest mostly died or disappeared) the average distance between ringing and recovery was , averaging in juveniles and in older birds. In the dry Southwestern United States, golden eagles tend to move to higher elevations once the breeding season is complete. In North Africa, populations breeding at lower latitudes, like Morocco, are mostly sedentary, although some occasionally disperse after breeding to areas outside of the normal breeding range.
Territoriality
Territoriality is believed to be the primary cause of interactions and confrontations between non-paired golden eagles. Golden eagles maintain some of the largest known home ranges (or territories) of any bird species, but there is much variation of home range size across the range, possibly dictated by food abundance and habitat preference. Home ranges in most of the range can vary from . In San Diego County in California, the home ranges varied from , with an average of . However, some home ranges have been much smaller, such as in southwestern Idaho where, possibly due to an abundance of jackrabbits, home ranges as small as are maintained. The smallest known home ranges on record for golden eagles are in the Bale Mountains of Ethiopia, where they range from . In Montana, 46% of undulating displays occurred shortly after the juvenile eagles left their parents' range, suggesting that some residents defend and maintain territories year-round. Elsewhere it is stated that home ranges are less strictly maintained during winter but hunting grounds are basically exclusive. In Israel and Scotland, aggressive encounters peaked from winter until just before egg-laying and were less common during the nesting season. Threat displays include undulating flight and aggressive direct flapping flight with exaggerated downstrokes. Most displays by mature golden eagles (67% for males and 76% for females) occur at the edge of their home ranges rather than around the nest. In Western Norway, most recorded undulating flight displays occur during the pre-laying period in late winter/early spring. Display flights seem to be triggered by the presence of other golden eagles. The use of display flights has a clear benefit in that it lessens the need for physical confrontations, which can be fatal. Usually, non-breeding birds are treated aggressively by the golden eagle maintaining the home range, normally being chased to the apparent limit of the range but with no actual physical contact. The territorial flight of the adult golden eagle is sometimes preceded or followed by intense bouts of undulating displays. The invader often responds by rolling over and presenting talons to the aggressor. Rarely, the two eagles will lock talons and tumble through the air, sometimes falling several revolutions and in some cases even tumbling to the ground before releasing their grip. In some parts of the Alps, the golden eagle population has reached the saturation point in appropriate habitat, and violent confrontations are apparently more common than in other parts of the range. Golden eagles may express their aggression via body language while perched, typically by the adult female when confronted by an intruding eagle: the head and body are upright, the feathers on the head and neck are erect, the wings may be slightly spread and the beak open, often accompanied by an intense gaze. They then often engage in a similar posture with wings spread wide and oriented toward the threat, sometimes rocking back on the tail and even flopping over onto the back with talons extended upward in defense. Such behavior may be accompanied by a wing slap against the threatening intruder. When approached by an intruder, the defending eagle turns away, partially spreads its tail, lowers its head, and remains still; adults on the nest may lower the head and "freeze" when approached by a person or a helicopter. Females in Israel displayed more than males and mostly against interspecific intruders; males apparently displayed primarily as part of courtship.
Five of seven aggressive encounters at carcasses during winter in Norway were won by females; in 15 of 21 conflicts, the younger bird dominated an older conspecific. However, obvious juvenile eagles (apparent to the adult eagles due to the amount of white on their wings and tail) are sometimes allowed to penetrate deeply into a pair's home range, and all parties commonly ignore each other. In North Dakota, it was verified that parent eagles were not aggressive towards their own young after the nesting period, and some juveniles stayed on their parents' territory until their second spring and then left of their own accord.
Reproduction
Golden eagles usually mate for life. A breeding pair is formed in a courtship display. This courtship includes undulating displays by both members of the pair, with the male bird picking up a piece of rock or a small stick, and dropping it only to enter into a steep dive and catch it in mid-air, repeating the maneuver three or more times. The female takes a clump of earth and drops and catches it in the same fashion. Golden eagles typically build several eyries within their territory (preferring cliffs) and use them alternately for several years. Their nesting areas are characterized by the extreme regularity of the nest spacing. Mating and egg-laying timing for the golden eagle is variable depending on the locality. Copulation normally lasts 10–20 seconds. Mating seems to occur around 40–46 days before the initial egg-laying. The golden eagle chick may be heard from within the egg 15 hours before it begins hatching. After the first chip is broken off the egg, there is no activity for around 27 hours. Hatching activity accelerates and the shell is broken apart in 35 hours. The chick is completely free in 37 hours.
In the first 10 days, chicks mainly lie down on the nest substrate. They are capable of preening on their second day but their parents keep them warm until around 20 days. They grow considerably, weighing around . They also start sitting up more. Around 20 days of age, the chicks generally start standing, which becomes the main position over the course of the next 40 days. The whitish down continues until around 25 days of age, at which point it is gradually replaced by dark contour feathers that eclipse the down and the birds attain a general piebald appearance. After hatching, 80% of food items and 90% of food biomass is captured and brought to the nest by the adult male. Fledging occurs at 66 to 75 days of age in Idaho and 70 to 81 days in Scotland. The first attempted flight departure after fledging can be abrupt, with the young jumping off and using a series of short, stiff wing-beats to glide downward or being blown out of nest while wing-flapping. 18 to 20 days after first fledging, the young eagles will take their first circling flight, but they cannot gain height as efficiently as their parents until approximately 60 days after fledging. In Cumbria, young golden eagles were first seen hunting large prey 59 days after fledging. 75 to 85 days after fledging, the young were largely independent of parents. Generally, breeding success seems to be greatest where prey is available in abundance.
Longevity
Golden eagles are fairly long-lived birds in natural conditions if they survive their first few years. The survival rate of raptorial birds tends to increase with larger body size, with a 30–50% annual rate of population loss in small falcons and accipiters, a 15–25% rate in medium-sized hawks (e.g., buteos or kites) and a rate of 5% or less in eagles and vultures. The oldest known wild golden eagle was a bird banded in Sweden which was recovered 32 years later. The longest-lived known wild golden eagle in North America was 31 years and 8 months old. The longest-lived known captive golden eagle, a specimen in Europe, survived to 46 years of age. The estimated adult annual survival rate on the Isle of Skye in Scotland is around 97.5%. When this is extrapolated into an estimated lifespan, this results in years as the average for adult golden eagles in this area, which is probably far too high an estimate. Survival rates are usually much lower in juvenile eagles than in adult eagles. In the western Rocky Mountains, 50% of golden eagles banded in the nest died by the time they were years and an estimated 75% died by the time they were 5 years old. Near a wind turbine facility in west-central California, estimated survival rates, based on conventional telemetry of 257 individuals, were 84% for first-year eagles, 79% for 1- to 3-year-olds and adult floaters and 91% for breeders, with no difference in survival rates between sexes. Survival rates may be lower for migrating populations of golden eagles. A 19–34% survival rate was estimated for juvenile eagles from Denali National Park in their first 11 months. The average life expectancy of golden eagles in Germany is 13 years, extrapolated from a reported survival rate of a mere 92.5%.
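The conversion from an annual survival rate to an average adult lifespan is presumably the standard extrapolation for a constant year-to-year survival probability; the sketch below states that arithmetic under this assumption, and is not a method quoted from the sources above.

    E[adult years] ≈ 1 / (1 − s),   where s is the annual adult survival rate

Applied to the reported German figure, 1 / (1 − 0.925) ≈ 13.3 years, which is consistent with the 13-year life expectancy quoted above; the higher Isle of Skye rate would extrapolate to a correspondingly longer, and likely overestimated, average lifespan.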
Natural mortality
Natural sources of mortality are largely reported in anecdotes. On rare occasions, golden eagles have been killed by competing predators or by hunting mammalian carnivores, including attacks by the aforementioned wolverine, snow leopard, cougar, brown bear and white-tailed eagle. Most competitive attacks resulting in death probably occur at the talons of other golden eagles. Nestlings and fledglings are more likely to be killed by another predator than are free-flying juveniles and adults. It has been suspected that golden eagle nests may be predated more frequently by other predators (especially birds, which are often the only other large animals that can access a golden eagle nest without the assistance of man-made climbing equipment) in areas where golden eagles are regularly disturbed at the nest by humans. Jeff Watson believed that the common raven occasionally eats golden eagle eggs, but only in situations where the parent eagles have abandoned their nesting attempt. However, there are no confirmed accounts of predation by other bird species on golden eagle nests. Occasionally, golden eagles may be killed by their prey in self-defense. There is an account of a golden eagle dying from the quills of a North American porcupine (Erethizon dorsatum) it had attempted to hunt. On the Isle of Rùm in Scotland, there are a few cases of red deer trampling golden eagles to death, probably the result of a doe having intercepted a bird that was trying to kill a fawn. Although usually well out-matched by the predator, occasionally other large birds can put up a formidable fight against a golden eagle. An attempted capture of a great blue heron by a golden eagle resulted in the death of both birds from wounds sustained in the ensuing fight. There is at least one case in Scotland of a golden eagle dying after being "oiled" by a northern fulmar, a bird whose primary defense against predators is to disgorge an oily secretion which may inhibit the predator's ability to fly. Of natural sources of death, starvation is probably under-reported. Eleven of 16 dead juvenile eagles which had hatched in Denali National Park had died of starvation. Of 36 deaths of golden eagles in Idaho, 55% were possibly attributable to natural causes, specifically 8 (26%) from unknown trauma, 3 (10%) from disease and 6 (19%) from unknown causes. Of 266 golden eagle deaths in Spain, only 6% were from unknown causes that could not be directly attributed to human activities. Avian cholera caused by bacteria (Pasteurella multocida) infects eagles that eat waterfowl that have died from the disease. The protozoan Trichomonas sp. caused the deaths of four fledglings in a study of wild golden eagles in Idaho. Several further diseases that contribute to golden eagle deaths have been examined in Japan. A captive eagle died from two malignant tumors – one in the liver and one in the kidney.
Killing permits
In December 2016, the US Fish and Wildlife Service proposed allowing wind-turbine electric generation companies to kill golden eagles without penalty, so long as "companies take steps to minimize the losses". If issued, the permits would last 30 years, six times the current 5-year permits.
In human culture
Human beings have been fascinated by the golden eagle since as early as the beginning of recorded history. Most early-recorded cultures regarded the golden eagle with reverence. In pre-Hispanic Mesoamerica, the eagle was a major Mexica (Aztec) symbol: the tribal and sun god, Huitzilopochtli, had told his people that when they saw the sun (i.e., Huitzilopochtli) in the form of an eagle perched on a cactus whose fruit was red and shaped like a human heart, there they should build their city, Tenochtitlan. The scene—shown on a well-known sculpture, in early manuscripts, and on the present-day Mexican flag—surely had astronomical and geomantic, as well as mythological, meaning.
It was only after the Industrial Revolution, when sport-hunting became widespread and commercial stock farming became internationally common, that humans started to widely regard golden eagles as a threat to their livelihoods. This period also brought about the firearm and industrialized poisons, which made it easy for humans to kill the evasive and powerful birds.
In 2017 the French Army trained golden eagles to catch drones. The golden eagle is officially Utah's state bird of prey.
Status and conservation
At one time, the golden eagle lived across a great majority of temperate Europe, North Asia, North America, North Africa, and Japan. Although widespread and quite secure in some areas, in many parts of the range golden eagles have experienced sharp population declines and have even been extirpated from some areas. The number of golden eagles across the range is estimated to be between 170,000 and 250,000, while the estimates of breeding pairs are from 60,000 to 100,000. It has the largest known range of any member of its family, with a range estimated at 140 million square kilometers. Within its taxonomic order, it is the second most wide-ranging species, after only the osprey (Pandion haliaetus). Few other eagle species are as numerous, though some species like the tawny eagle, wedge-tailed eagle and bald eagle have total estimated populations of a similar size to the golden eagle's despite their more restricted distributions. The world's most populous eagle may be the African fish eagle (Haliaeetus vocifer), which has a stable total population estimated at 300,000 and is found solely in Africa. On a global scale, the golden eagle is not considered threatened by the IUCN.
| Biology and health sciences | Accipitriformes and Falconiformes | null |
11442363 | https://en.wikipedia.org/wiki/Boverisuchus | Boverisuchus | Boverisuchus is an extinct genus of planocraniid crocodyliforms known from the early to middle Eocene (Ypresian to Lutetian stages) of Germany and western North America. It was a relatively small crocodyliform with an estimated total length of approximately .
History
The type species Boverisuchus magnifrons was first named by paleontologist Oskar Kuhn in 1938, from the Lutetian of Germany alongside Weigeltisuchus geiseltalensis. Most paleontologists have considered both species to represent junior synonyms of the type species of Pristichampsus, P. rollinatii. Following a revision of the genus Pristichampsus by Brochu (2013), P. rollinati was found to be based on insufficiently diagnostic material and therefore is a nomen dubium while Boverisuchus was reinstated as a valid genus. Brochu (2013) also assigned Crocodylus vorax, which has been referred to as Pristichampsus vorax since Langston (1975), as the second species of Boverisuchus. According to Brochu (2013), material from the middle Eocene of Italy and Texas may represent another yet unnamed species. The two Asian species of Planocrania were found to be most closely related to Boverisuchus using a phylogenetic analysis. The name Planocraniidae was reinstated to contain these genera and replace Pristichampsidae.
Phylogeny
Phylogenetic analyses based purely on morphological data have generally placed Planocraniidae in a basal position within the crocodilian crown group. Some of these analyses have found that planocraniids lie just outside Brevirostres, a group that includes alligators, caimans, and crocodiles but not gharials. However, molecular studies using DNA sequencing have found the group Brevirostres to be invalid, as crocodiles and gavialids are more closely related to each other than either is to alligators.
A 2018 tip dating study by Lee & Yates using molecular, morphological and stratigraphic data instead recovered the planocraniids outside the crown group Crocodylia. Below is a cladogram from that study:
In 2021, Rio & Mannion conducted a new phylogenetic study using a heavily modified morphological data set, and also noted the lack of consensus and difficulty in placing Planocraniidae. In their study, they recovered Planocraniidae within Crocodylia, as the sister group to Longirostres, as shown in the cladogram below:
Description and habits
Based on other planocraniids, Boverisuchus is assumed to have had heavily armoured skin, and long limbs suggesting a cursorial (i.e. running) habitus. It also had hoof-like toes, suggesting that it lived more on land than in the water, and that it therefore probably hunted terrestrial mammals. The teeth of Boverisuchus were ziphodont; i.e., laterally compressed, sharp, and with serrated edges (characteristic of terrestrial crocodyliforms that are unable to dispatch their prey by drowning them). Due to their similarity to those of certain theropod dinosaurs they were initially mistaken for theropod teeth, leading paleontologists to believe that some non-avian dinosaurs survived the Cretaceous–Paleogene extinction event.
Some material referred to Pristichampsus rollinatii shows further features adapting the animal to this lifestyle. The tail was more reminiscent of a dinosaur's, being round in cross-section and lacking the osteoderm crest observed in extant crocodile species. It would also have been capable of galloping.
| Biology and health sciences | Prehistoric crocodiles | Animals |
19048968 | https://en.wikipedia.org/wiki/Spider%20web | Spider web | A spider web, spiderweb, spider's web, or cobweb (from the archaic word coppe, meaning 'spider') is a structure created by a spider out of proteinaceous spider silk extruded from its spinnerets, generally meant to catch its prey.
Spider webs have existed for at least 100 million years, as witnessed in a rare find of Early Cretaceous amber from Sussex, in southern England.
Many spiders build webs specifically to trap and catch insects to eat. However, not all spiders catch their prey in webs, and some do not build webs at all. The term "spider web" is typically used to refer to a web that is apparently still in use (i.e., clean), whereas "cobweb" refers to a seemingly abandoned (i.e., dusty) web. However, the word "cobweb" is also used by biologists to describe the tangled three-dimensional web of some spiders of the family Theridiidae. While this large family is known as the cobweb spiders, they actually have a huge range of web architectures; other names for this spider family include tangle-web spiders and comb-footed spiders.
Silk production
When spiders moved from the water to the land in the Early Devonian period, they started making silk to protect their bodies and their eggs. Most spiders have appendages called spinnerets. These are organs that produce silk with which the spiders spin webs (although some use the silk to catch their prey in other ways).
Spiders gradually started using silk for hunting purposes, first as guide lines and signal lines, then as ground or bush webs, and eventually as the aerial webs that are currently familiar.
Spiders produce silk from their spinneret glands located at the tip of their abdomen. Each gland produces a thread for a special purpose – for example a trailed safety line, sticky silk for trapping prey or fine silk for wrapping it. Spiders use different gland types to produce different silks, and some spiders are capable of producing up to eight different silks during their lifetime.
Most spiders have three pairs of spinnerets, each having its own function – there are also spiders with just one pair and others with as many as four pairs.
Webs allow a spider to catch prey without having to expend energy by running it down, making it an efficient method of gathering food. The hair and claws on spiders' legs allow them to cling to their webs. The oils on their bodies keep them from sticking to their own webs. However these energy savings are somewhat offset by the fact that constructing the web is in itself energetically costly, due to the large amount of protein required in the form of silk. In addition, after a time the silk will lose its stickiness and thus become inefficient at capturing prey. It is common for spiders to eat their own web daily to recoup some of the energy used in spinning. Through ingestion and digestion, the silk proteins are thus recycled. Due to the incredible strength of spider silk, scientists are currently studying it in the hope of creating a super-tough material with the same abilities.
Types
There are a few types of spider webs found in the wild, and many spiders are classified by the webs they weave. Different types of spider webs include:
Spiral orb webs, associated primarily with the family Araneidae, as well as Tetragnathidae and Uloboridae
Tangle webs or cobwebs, associated with the family Theridiidae
Funnel webs, with associations divided into primitive and modern
Tubular webs, which run up the bases of trees or along the ground
Sheet webs
Several different types of silk may be used in web construction, including a "sticky" capture silk and "fluffy" capture silk, depending on the type of spider. Webs may be in a vertical plane (most orb webs), a horizontal plane (sheet webs), or at any angle in between. It is hypothesized that these types of aerial webs co-evolved with the evolution of winged insects. As insects are spiders' main prey, it is likely that they would impose strong selectional forces on the foraging behavior of spiders. Most commonly found in the sheet-web spider families, some webs will have loose, irregular tangles of silk above them. These tangled obstacle courses serve to disorient and knock down flying insects, making them more vulnerable to being trapped on the web below. They may also help to protect the spider from predators such as birds and wasps. It is reported that several Nephila pilipes individuals can collectively construct an aggregated web system to counter bird predation from all directions.
Orb web construction
Most orb weavers construct webs in a vertical plane, although there are exceptions, such as Uloborus diversus, which builds a horizontal web. During the process of making an orb web, the spider will use its own body for measurements. There is variation in web construction among orb-weaving spiders, in particular, the species Zygiella x-notata is known for its characteristic missing sector web crossed by a single signal thread.
Many webs span gaps between objects which the spider could not cross by crawling. This is done by first producing a fine adhesive thread to drift on a faint breeze across a gap. When it sticks to a surface at the far end, the spider feels the change in the vibration. The spider reels in and tightens the first strand, then carefully walks along it and strengthens it with a second thread. This process is repeated until the thread is strong enough to support the rest of the web.
After strengthening the first thread, the spider continues to make a Y-shaped netting. The first three radials of the web are now constructed. More radials are added, making sure that the distance between each radial and the next is small enough to cross. This means that the number of radials in a web directly depends on the size of the spider plus the size of the web. It is common for a web to be about 20 times the size of the spider building it.
After the radials are complete, the spider fortifies the center of the web with about five circular threads. It makes a spiral of non-sticky, widely spaced threads to enable it to move easily around its own web during construction, working from the inside outward. Then, beginning from the outside and moving inward, the spider methodically replaces this spiral with a more closely spaced one made of adhesive threads. It uses the initial radiating lines as well as the non-sticky spirals as guide lines. The spaces between each spiral and the next are directly proportional to the distance from the tip of its back legs to its spinners. This is one way the spider uses its own body as a measuring/spacing device. While the sticky spirals are formed, the non-adhesive spirals are removed as there is no need for them any more.
After the spider has completed its web, it chews off the initial three center spiral threads then sits and waits, usually with the head facing downwards. If the web is broken without any structural damage during the construction, the spider does not make any initial attempts to rectify the problem.
The spider, after spinning its web, then waits on or near the web for a prey animal to become trapped. The spider senses the impact and struggle of a prey animal by vibrations transmitted through the web. A trap line is constructed by some species specifically to transmit this vibration. A spider positioned in the middle of the web makes for a highly visible prey for birds and other predators, even without web decorations; many day-hunting orb-web spinners reduce this risk by hiding at the edge of the web with one foot on a signal line from the hub or by appearing to be inedible or unappetizing.
Spiders do not usually adhere to their own webs, because they are able to spin both sticky and non-sticky types of silk, and are careful to travel across only non-sticky portions of the web. However, they are not immune to their own glue. Some of the strands of the web are sticky, and others are not. For example, if a spider has chosen to wait along the outer edges of its web, it may spin a non-sticky prey or signal line to the web hub to monitor web movement. However, in the course of spinning sticky strands, spiders have to touch these sticky strands. They do this without sticking by using careful movements, dense hairs and nonstick coatings on their feet to prevent adhesion.
Uses
Some spiders use their webs for hearing, where the giant webs function as extended and reconfigurable auditory sensors.
Not all spiders use their webs for capturing prey directly, some instead pouncing from concealment (e.g. trapdoor spiders) or running prey down in open chase (e.g. wolf spiders). The net-casting spider balances the two methods of running and web spinning in its feeding habits. This spider weaves a small net which it attaches to its front legs. It then lurks in wait for potential prey and, when such prey arrives, lunges forward to wrap its victim in the net, bite and paralyze it. Hence, this spider expends less energy catching prey than a primitive hunter such as the wolf spider. It also avoids the energy loss of weaving a large orb web.
Many species also spin threads of silk to catch the wind and then sail on the wind to a new location.
Others manage to use the signaling-snare technique of a web without spinning a web at all. Several types of water-dwelling spiders rest their feet on the water's surface in much the same manner as an orb-web user. When an insect falls onto the water and is ensnared by surface tension, the spider can detect the vibrations and run out to capture the prey.
The diving bell spider and Desis marina, an intertidal species, use their webs to trap air under water, where they can stay submerged for long periods of time.
Human use
Cobweb paintings, which began during the 16th century in a remote valley of the Austrian Tyrolean Alps, were created on fabrics consisting of layered and wound cobwebs, stretched over cardboard to make a mat, and strengthened by brushing with milk diluted in water. A small brush was then used to apply watercolor to the cobwebs, or custom tools to create engravings. Fewer than a hundred cobweb paintings survive today, most of which are held in private collections.
In traditional European medicine, cobwebs were used on wounds and cuts to reduce bleeding and aid healing. This use was recorded in ancient Greece and Rome, and was mentioned in Shakespeare's A Midsummer Night's Dream. Spider webs have been shown to significantly reduce wound healing times. They are rich in vitamin K, which is essential in blood clotting, and their large surface area is also thought to help coagulation. During the 1st century BC, the Roman army used spider webs as field dressings, which also served as a fungicide.
The effects of some drugs can be measured by examining their effects on a spider's web-building.
In northeastern Nigeria, cow horn resonators in traditional xylophones often have holes covered with spider webs to create a buzzing sound.
Spider web strands have been used for crosshairs or reticles in telescopes.
Development of technologies to mass-produce spider silk has led to the manufacturing of prototype military protection, wound dressings and other medical devices, and consumer goods.
Spider webs can be used as a single step catalyst to make nanoparticles.
Physical and chemical properties
The stickiness of spiders' webs is due to droplets of glue suspended on the silk threads. Orb-weaver spiders, e.g. Larinioides cornutus, coat their threads with a hygroscopic aggregate. The glue's moisture absorbing properties use environmental humidity to keep the capture silk soft and tacky. The glue balls are multifunctional – that is, their behavior depends on how quickly something touching a glue ball attempts to withdraw. At high velocities, they function as an elastic solid, resembling rubber; at lower velocities, they simply act as a sticky glue. This allows them to retain a grip on attached food particles.
The web is electrically conductive which causes the silk threads to spring out to trap their quarry, as flying insects tend to gain a static charge which attracts the silk.
Neurotoxins have been detected in the glue balls of some spider webs. Presumably these toxins help immobilize prey, but their function could also be antimicrobial, or protection from ants or other animals that steal from the webs or might attack the spider.
Spider silk has greater tensile strength than the same weight of steel and much greater elasticity. Its microstructure is under investigation for potential applications in industry, including bullet-proof vests and artificial tendons. Researchers have used genetically modified mammals and bacteria to produce the proteins needed to make this material.
Communal spider webs
Occasionally, a group of spiders may build webs together in the same area.
Massive flooding in Pakistan during the 2010 monsoon drove spiders above the waterline, into trees. The result was trees covered with spider webs.
One such web, reported in 2007 at Lake Tawakoni State Park in Texas, measured across. Entomologists believe it may be the result of social cobweb spiders or of spiders building webs to spread out from one another. There is no consensus on how common this occurrence is.
In Brazil, there have been two instances of a phenomenon that became known as "raining spiders"; communal webs made by "social" spiders that cover such wide gaps and which strings are so difficult to see that hundreds of spiders seem to be floating in the air. The first occurred in Santo Antônio da Platina, Paraná, in 2013, and involved Anelosimus eximius individuals; the second was registered in Espírito Santo do Dourado, Minas Gerais, in January 2019, and involved Parawixia bistriata individuals.
Low gravity
Being in Earth orbit has been observed to affect the structure of the webs that spiders spin in space.
Spider webs were spun in low Earth orbit in 1973 aboard Skylab, involving two female European garden spiders (cross spiders) called Arabella and Anita, as part of an experiment on the Skylab 3 mission. The aim of the experiment was to test whether the two spiders would spin webs in space, and, if so, whether these webs would be the same as those that spiders produced on Earth. The experiment was a student project of Judy Miles of Lexington, Massachusetts.
After the launch on July 28, 1973, and entering Skylab, the spiders were released by astronaut Owen Garriott into a box that resembled a window frame. The spiders proceeded to construct their web while a camera took photographs, so that the spiders' behavior in a zero-gravity environment could be examined. Both spiders took a long time to adapt to their weightless existence. However, after a day, Arabella spun the first web in the experimental cage, although it was initially incomplete.
The web was completed the following day. The crew members were prompted to expand the initial protocol. They fed and watered the spiders, giving them a house fly. The first web was removed on August 13 to allow the spider to construct a second web. At first, the spider failed to construct a new web. When given more water, it built a second web. This time, it was more elaborate than the first. Both spiders died during the mission, possibly from dehydration.
When scientists were given the opportunity to study the webs, they discovered that the space webs were finer than normal Earth webs, and although the patterns of the web were not totally dissimilar, variations were spotted, and there was a definite difference in the characteristics of the web. Additionally, while the webs were finer overall, the space web had variations in thickness in places: some places were slightly thinner, and others slightly thicker. This was unusual, because Earth webs have been observed to have uniform thickness.
Later experiments indicated that having access to a light source could orient the spiders and enable them to build their normal asymmetric webs when gravity was not a factor.
In culture
Spider webs play a crucial role in the 1952 children's novel Charlotte's Web. Webs are also featured in many other cultural depictions of spiders. In films, illustration, and other visual arts, spider webs may be used to readily suggest a "spooky" atmosphere, or imply neglect or the passage of time. Artificial "spider webs" are a common element of Halloween decorations. Spider webs are a common image in tattoo art, often symbolizing long periods of time spent in prison, or used simply to fill gaps between other images.
Some observers believe that a small spider is depicted on the United States one-dollar bill, in the upper-right corner of the front side (obverse), perched on the shield surrounding the number "1". This perception is enhanced by the resemblance of the background image of intertwining fine lines to a stylized spider web. However, other observers believe the figure is an owl.
The World Wide Web is thus named because of its tangled and interlaced structure, said to resemble that of a spider web.
Artificial spider webs are used by the superhero Spider-Man to restrain enemies and to make ropes on which to swing between buildings as quick transportation. Some incarnations of the character, such as the version in the Sam Raimi film trilogy and Spider-Man 2099, are shown to be able to produce organic webs.
The notable tensile strength of spider webs is often exaggerated in science fiction, often as a plot device to justify the presence of artificially giant spiders.
Posters used by the women at Greenham Common Women's Peace Camp often featured the symbol of a spider web, meant to symbolise the fragility as well as the perseverance of the protesters.
The Quran uses the fragility of the spider's web as a parable, comparing it to the faith of idolators.
| Biology and health sciences | Shelters and structures | Animals |
7735022 | https://en.wikipedia.org/wiki/Cage%20%28graph%20theory%29 | Cage (graph theory) | In the mathematical field of graph theory, a cage is a regular graph that has as few vertices as possible for its girth.
Formally, an (r,g)-graph is defined to be a graph in which each vertex has exactly r neighbors, and in which the shortest cycle has a length of exactly g.
An (r,g)-cage is an (r,g)-graph with the smallest possible number of vertices, among all (r,g)-graphs. A (3,g)-cage is often called a g-cage.
It is known that an (r,g)-graph exists for any combination of r ≥ 2 and g ≥ 3. It follows that all (r,g)-cages exist.
If a Moore graph exists with degree r and girth g, it must be a cage. Moreover, the bounds on the sizes of Moore graphs generalize to cages: any cage with odd girth g must have at least
1 + r(1 + (r − 1) + (r − 1)^2 + ... + (r − 1)^((g − 3)/2))
vertices, and any cage with even girth g must have at least
2(1 + (r − 1) + (r − 1)^2 + ... + (r − 1)^((g − 2)/2))
vertices. Any (r,g)-graph with exactly this many vertices is by definition a Moore graph and therefore automatically a cage.
There may exist multiple cages for a given combination of r and g. For instance there are three non-isomorphic (3,10)-cages, each with 70 vertices: the Balaban 10-cage, the Harries graph and the Harries–Wong graph. But there is only one (3,11)-cage: the Balaban 11-cage (with 112 vertices).
Known cages
A 1-regular graph has no cycle, and a connected 2-regular graph has girth equal to its number of vertices, so cages are only of interest for r ≥ 3. The (r,3)-cage is a complete graph K_{r+1} on r + 1 vertices, and the (r,4)-cage is a complete bipartite graph K_{r,r} on 2r vertices.
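As a concrete illustration of the bound discussed above, the following minimal Python sketch (the function name and the checks are illustrative, not taken from the article) computes the Moore lower bound for an (r,g)-graph and verifies it against the small cases just described and a few of the notable cages listed below.

    def moore_bound(r: int, g: int) -> int:
        """Moore lower bound on the number of vertices of an r-regular graph
        with girth g, obtained by counting the vertices reachable from a
        single vertex (odd g) or a single edge (even g) without repeats."""
        if g % 2 == 1:
            # odd girth: a root vertex plus (g - 1) / 2 tree layers below it
            n, layer = 1, r
            for _ in range((g - 1) // 2):
                n += layer
                layer *= r - 1
        else:
            # even girth: grow the tree from both endpoints of a single edge
            n, layer = 0, 2
            for _ in range(g // 2):
                n += layer
                layer *= r - 1
        return n

    # The (r,3)-cage K_{r+1} and the (r,4)-cage K_{r,r} meet the bound exactly.
    for r in range(3, 8):
        assert moore_bound(r, 3) == r + 1
        assert moore_bound(r, 4) == 2 * r

    # Cages from the list below that are Moore graphs also attain the bound.
    assert moore_bound(3, 5) == 10   # Petersen graph
    assert moore_bound(3, 6) == 14   # Heawood graph
    assert moore_bound(7, 5) == 50   # Hoffman–Singleton graph

The bound is not always attained: moore_bound(3, 7) returns 22, whereas the (3,7)-cage listed below, the McGee graph, has 24 vertices.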
Notable cages include:
(3,5)-cage: the Petersen graph, 10 vertices
(3,6)-cage: the Heawood graph, 14 vertices
(3,7)-cage: the McGee graph, 24 vertices
(3,8)-cage: the Tutte–Coxeter graph, 30 vertices
(3,10)-cage: the Balaban 10-cage, 70 vertices
(3,11)-cage: the Balaban 11-cage, 112 vertices
(4,5)-cage: the Robertson graph, 19 vertices
(7,5)-cage: the Hoffman–Singleton graph, 50 vertices
When r − 1 is a prime power, the (r,6) cages are the incidence graphs of projective planes.
When r − 1 is a prime power, the (r,8) and (r,12) cages are generalized polygons.
The numbers of vertices in the known (r,g) cages, for values of r > 2 and g > 2, other than projective planes and generalized polygons, are:
Asymptotics
For large values of g, the Moore bound implies that the number n of vertices must grow at least singly exponentially as a function of g. Equivalently, g can be at most proportional to the logarithm of n. More precisely,
It is believed that this bound is tight or close to tight . The best known lower bounds on g are also logarithmic, but with a smaller constant factor (implying that n grows singly exponentially but at a higher rate than the Moore bound). Specifically, the construction of Ramanujan graphs defined by satisfy the bound
This bound was improved slightly by .
It is unlikely that these graphs are themselves cages, but their existence gives an upper bound to the number of vertices needed in a cage.
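Stated a little more explicitly, in a standard form of these asymptotic results (the cited works may give slightly different constants and error terms), the Moore bound forces

    g ≤ 2 log_{r−1}(n) + O(1),

while the Ramanujan graph constructions mentioned above provide r-regular graphs with

    g ≥ (4/3) log_{r−1}(n) − O(1),

so the best achievable girth for a given degree and number of vertices is known up to a constant factor of roughly 3/2.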
| Mathematics | Graph theory | null |
15105978 | https://en.wikipedia.org/wiki/Well | Well | A well is an excavation or structure created in the ground by digging, driving, or drilling to access liquid resources, usually water. The oldest and most common kind of well is a water well, to access groundwater in underground aquifers. The well water is drawn up by a pump, or using containers, such as buckets that are raised mechanically or by hand. Water can also be injected back into the aquifer through the well. Wells were first constructed at least eight thousand years ago and historically vary in construction from a simple scoop in the sediment of a dry watercourse to the qanats of Iran, and the stepwells and sakiehs of India. Placing a lining in the well shaft helps create stability, and linings of wood or wickerwork date back at least as far as the Iron Age.
Wells have traditionally been sunk by hand digging, as is still the case in rural areas of the developing world. These wells are inexpensive and low-tech as they use mostly manual labour, and the structure can be lined with brick or stone as the excavation proceeds. A more modern method called caissoning uses pre-cast reinforced concrete well rings that are lowered into the hole. Driven wells can be created in unconsolidated material with a well hole structure, which consists of a hardened drive point and a screen of perforated pipe, after which a pump is installed to collect the water. Deeper wells can be excavated by hand drilling methods or machine drilling, using a bit in a borehole. Drilled wells are usually cased with a factory-made pipe composed of steel or plastic. Drilled wells can access water at much greater depths than dug wells.
Two broad classes of well are shallow or unconfined wells completed within the uppermost saturated aquifer at that location, and deep or confined wells, sunk through an impermeable stratum into an aquifer beneath. A collector well can be constructed adjacent to a freshwater lake or stream with water percolating through the intervening material. The site of a well can be selected by a hydrogeologist, or groundwater surveyor. Water may be pumped or hand drawn. Impurities from the surface can easily reach shallow sources and contamination of the supply by pathogens or chemical contaminants needs to be avoided. Well water typically contains more minerals in solution than surface water and may require treatment before being potable. Soil salination can occur as the water table falls and the surrounding soil begins to dry out. Another environmental problem is the potential for methane to seep into the water.
History
Very early Neolithic wells are known from the Eastern Mediterranean. The oldest reliably dated well is from the pre-pottery neolithic (PPN) site of Kissonerga-Mylouthkia on Cyprus. At around 8400 BC a shaft (well 116) of circular diameter was driven through limestone to reach an aquifer at a depth of . Well 2070 from Kissonerga-Mylouthkia, dating to the late PPN, reaches a depth of . Other slightly younger wells are known from this site and from neighbouring Parekklisha-Shillourokambos. A first stone lined well of depth is documented from a drowned final PPN (c. 7000 BC) site at ‘Atlit-Yam off the coast near modern Haifa in Israel.
Wood-lined wells are known from the early Neolithic Linear Pottery culture, for example in Ostrov, Czech Republic, dated 5265 BC, Kückhoven (an outlying centre of Erkelenz), dated 5300 BC, and Eythra in Schletz (an outlying centre of Asparn an der Zaya) in Austria, dated 5200 BC.
The Neolithic Chinese discovered and made extensive use of deep drilled groundwater for drinking. The Chinese text The Book of Changes, originally a divination text of the Western Zhou dynasty (1046–771 BC), contains an entry describing how the ancient Chinese maintained their wells and protected their sources of water. A well excavated at the Hemudu excavation site was believed to have been built during the Neolithic era. The well was cased by four rows of logs with a square frame attached to them at the top of the well. Sixty additional tile wells southwest of Beijing are also believed to have been built around 600 BC for drinking and irrigation.
In Egypt, shadoofs and sakias are used. The sakia is much more efficient, as it can bring up water from a depth of 10 metres (versus the 3 metres of the shadoof). The sakia is the Egyptian version of the noria. Some of the world's oldest known wells, located in Cyprus, date to 7000–8500 BC. Two wells from the Neolithic period, around 6500 BC, have been discovered in Israel. One is in Atlit, on the northern coast of Israel, and the other is in the Jezreel Valley.
Wells for other purposes came along much later, historically. The first recorded salt well was dug in the Sichuan province of China around 2,250 years ago. This was the first time that ancient water well technology was applied successfully for the exploitation of salt, and marked the beginning of Sichuan's salt drilling industry. The earliest known oil wells were also drilled in China, in 347 CE. These wells had depths of up to about and were drilled using bits attached to bamboo poles. The oil was burned to evaporate brine and produce salt. By the 10th century, extensive bamboo pipelines connected oil wells with salt springs. The ancient records of China and Japan are said to contain many allusions to the use of natural gas for lighting and heating. Petroleum was known as Burning water in Japan in the 7th century.
Types
Dug wells
Until recent centuries, all artificial wells were pumpless hand-dug wells of varying degrees of sophistication, and they remain a very important source of potable water in some rural developing areas, where they are routinely dug and used today. Their indispensability has produced a number of literary references, literal and figurative, including the reference to the incident of Jesus meeting a woman at Jacob's well (John 4:6) in the Bible and the "Ding Dong Bell" nursery rhyme about a cat in a well.
Hand-dug wells are excavations with diameters large enough to accommodate one or more people with shovels digging down to below the water table. The excavation is braced horizontally to avoid landslide or erosion endangering the people digging. They can be lined with stone or brick; extending this lining upwards above the ground surface to form a wall around the well serves to reduce both contamination and accidental falls into the well.
A more modern method called caissoning uses reinforced concrete or plain concrete pre-cast well rings that are lowered into the hole. A well-digging team digs under a cutting ring and the well column slowly sinks into the aquifer, whilst protecting the team from collapse of the well bore.
Hand-dug wells are inexpensive and low tech (compared to drilling) and they use mostly manual labour to access groundwater in rural locations of developing countries. They may be built with a high degree of community participation, or by local entrepreneurs who specialize in hand-dug wells. They have been successfully excavated to . They have low operational and maintenance costs, in part because water can be extracted by hand, without a pump. The water often comes from an aquifer or groundwater, and can be easily deepened, which may be necessary if the ground water level drops, by telescoping the lining further down into the aquifer. The yield of existing hand dug wells may be improved by deepening or introducing vertical tunnels or perforated pipes.
Drawbacks to hand-dug wells are numerous. It can be impractical to hand dig wells in areas where hard rock is present, and they can be time-consuming to dig and line even in favourable areas. Because they exploit shallow aquifers, the well may be susceptible to yield fluctuations and possible contamination from surface water, including sewage. Hand dug well construction generally requires the use of a well trained construction team, and the capital investment for equipment such as concrete ring moulds, heavy lifting equipment, well shaft formwork, motorized de-watering pumps, and fuel can be large for people in developing countries. Construction of hand dug wells can be dangerous due to collapse of the well bore, falling objects and asphyxiation, including from dewatering pump exhaust fumes.
The Woodingdean Water Well, hand-dug between 1858 and 1862, is the deepest hand-dug well at . The Big Well in Greensburg, Kansas, is billed as the world's largest hand-dug well, at deep and in diameter. However, the Well of Joseph in the Cairo Citadel at deep and the Pozzo di San Patrizio (St. Patrick's Well) built in 1527 in Orvieto, Italy, at deep by wide are both larger by volume.
Driven wells
Driven wells may be very simply created in unconsolidated material with a well hole structure, which consists of a hardened drive point and a screen (perforated pipe). The point is simply hammered into the ground, usually with a tripod and driver, with pipe sections added as needed. A driver is a weighted pipe that slides over the pipe being driven and is repeatedly dropped on it. When groundwater is encountered, the well is washed of sediment and a pump installed.
Drilled wells
Drilled wells are constructed using various types of drilling machines, such as top-head rotary, table rotary, or cable tool, which all use drilling stems that rotate to cut into the formation, thus the term "drilling."
Drilled wells can be excavated by simple hand drilling methods (augering, sludging, jetting, driving, hand percussion) or machine drilling (auger, rotary, percussion, down-the-hole hammer). The deep rock rotary drilling method is the most common. Rotary drilling can be used in 90% of (consolidated) formation types.
Drilled wells can get water from a much deeper level than dug wells can – often down to several hundred metres.
Drilled wells with electric pumps are used throughout the world, typically in rural or sparsely populated areas, though many urban areas are supplied partly by municipal wells. Most shallow well drilling machines are mounted on large trucks, trailers, or tracked vehicle carriages. Water wells typically range from deep, but in some areas they can go deeper than .
Rotary drilling machines use a segmented steel drilling string, typically made up of 3 m (10 ft) to 8 m (26 ft) sections of steel tubing that are threaded together, with a bit or other drilling device at the bottom end. Some rotary drilling machines are designed to install (by driving or drilling) a steel casing into the well in conjunction with the drilling of the actual bore hole. Air and/or water is used as a circulation fluid to displace cuttings and cool bits during the drilling. Another form of rotary-style drilling, termed mud rotary, makes use of a specially made mud, or drilling fluid, which is constantly being altered during the drill so that it can consistently create enough hydraulic pressure to hold the side walls of the bore hole open, regardless of the presence of a casing in the well. Typically, boreholes drilled into solid rock are not cased until after the drilling process is completed, regardless of the machinery used.
The oldest form of drilling machinery is the cable tool, still used today. Specifically designed to raise and lower a bit into the bore hole, the spudding of the drill causes the bit to be raised and dropped onto the bottom of the hole, and the design of the cable causes the bit to twist at approximately revolution per drop, thereby creating a drilling action. Unlike rotary drilling, cable tool drilling requires the drilling action to be stopped so that the bore hole can be bailed or emptied of drilled cuttings. Cable tool drilling rigs are rare as they tend to be 10x slower to drill through materials compared to similar diameter rotary air or rotary mud equipped rigs.
Drilled wells are usually cased with a factory-made pipe, typically steel (in air rotary or cable tool drilling) or plastic/PVC (in mud rotary wells, also present in wells drilled into solid rock). The casing is constructed by welding, either chemically or thermally, segments of casing together. If the casing is installed during the drilling, most drills will drive the casing into the ground as the bore hole advances, while some newer machines will actually allow for the casing to be rotated and drilled into the formation in a similar manner as the bit advancing just below. PVC or plastic is typically solvent welded and then lowered into the drilled well, vertically stacked with their ends nested and either glued or splined together. The sections of casing are usually or more in length, and in diameter, depending on the intended use of the well and local groundwater conditions.
Surface contamination of wells in the United States is typically controlled by the use of a surface seal. A large hole is drilled to a predetermined depth or to a confining formation (clay or bedrock, for example), and then a smaller hole for the well is completed from that point forward. The well is typically cased from the surface down into the smaller hole with a casing that is the same diameter as that hole. The annular space between the large bore hole and the smaller casing is filled with bentonite clay, concrete, or other sealant material. This creates an impermeable seal from the surface to the next confining layer that keeps contaminants from traveling down the outer sidewalls of the casing or borehole and into the aquifer. In addition, wells are typically capped with either an engineered well cap or seal that vents air through a screen into the well, but keeps insects, small animals, and unauthorized persons from accessing the well.
At the bottom of wells, based on formation, a screening device, filter pack, slotted casing, or open bore hole is left to allow the flow of water into the well. Constructed screens are typically used in unconsolidated formations (sands, gravels, etc.), allowing water and a percentage of the formation to pass through the screen. Allowing some material to pass through creates a large area filter out of the rest of the formation, as the amount of material present to pass into the well slowly decreases and is removed from the well. Rock wells are typically cased with a PVC liner/casing and screen or slotted casing at the bottom; this is mostly present just to keep rocks from entering the pump assembly. Some wells utilize a filter pack method, where an undersized screen or slotted casing is placed inside the well and a filter medium is packed around the screen, between the screen and the borehole or casing. This allows the water to be filtered of unwanted materials before entering the well and pumping zone.
Classification
There are two broad classes of drilled-well types, based on the type of aquifer the well is in:
Shallow or unconfined wells are completed in the uppermost saturated aquifer at that location (the upper unconfined aquifer).
Deep or confined wells are sunk through an impermeable stratum into an aquifer that is sandwiched between two impermeable strata (aquitards or aquicludes). The majority of deep aquifers are classified as artesian because the hydraulic head in a confined well is higher than the level of the top of the aquifer. If the hydraulic head in a confined well is higher than the land surface it is a "flowing" artesian well (named after Artois in France).
A special type of water well may be constructed adjacent to freshwater lakes or streams. Commonly called a collector well but sometimes referred to by the trade name Ranney well or Ranney collector, this type of well involves sinking a caisson vertically below the top of the aquifer and then advancing lateral collectors out of the caisson and beneath the surface water body. Pumping from within the caisson induces infiltration of water from the surface water body into the aquifer, where it is collected by the collector well laterals and conveyed into the caisson where it can be pumped to the ground surface.
Two additional broad classes of well types may be distinguished, based on the use of the well:
Production or pumping wells are large-diameter (greater than 15 cm in diameter) cased (metal, plastic, or concrete) water wells, constructed for extracting water from the aquifer by a pump (if the well is not artesian).
Monitoring wells or piezometers are often smaller-diameter wells used to monitor the hydraulic head or sample the groundwater for chemical constituents. Piezometers are monitoring wells completed over a very short section of aquifer. Monitoring wells can also be completed at multiple levels, allowing discrete samples or measurements to be made at different vertical elevations at the same map location.
A water well constructed for pumping groundwater can be used passively as a monitoring well, and a small-diameter well can be pumped, but this distinction by use is nevertheless common.
Siting
Before excavation, information about the geology, water table depth, seasonal fluctuations, recharge area and rate should be found if possible. This work can be done by a hydrogeologist, or a groundwater surveyor using a variety of tools including electro-seismic surveying, any available information from nearby wells, geologic maps, and sometimes geophysical imaging. These professionals provide advice that is almost as accurate as that of a driller who has experience and knowledge of nearby wells/bores and of the most suitable drilling technique for the expected target depth.
Contamination
Shallow pumping wells can often supply drinking water at a very low cost. However, impurities from the surface easily reach shallow sources, which leads to a greater risk of contamination for these wells compared to deeper wells. Contaminated wells can lead to the spread of various waterborne diseases. Dug and driven wells are relatively easy to contaminate; for instance, most dug wells are unreliable in the majority of the United States. Some research has found that, in cold regions, changes in river flow and flooding caused by extreme rainfall or snowmelt can degrade well water quality.
Pathogens
Most of the bacteria, viruses, parasites, and fungi that contaminate well water come from fecal material from humans and other animals. Common bacterial contaminants include E. coli, Salmonella, Shigella, and Campylobacter jejuni. Common viral contaminants include norovirus, sapovirus, rotavirus, enteroviruses, and hepatitis A and E. Parasites include Giardia lamblia, Cryptosporidium, Cyclospora cayetanensis, and microsporidia.
Chemical contamination
Chemical contamination is a common problem with groundwater. Nitrates from sewage, sewage sludge or fertilizer are a particular problem for babies and young children. Pollutant chemicals include pesticides and volatile organic compounds from gasoline, dry-cleaning, the fuel additive methyl tert-butyl ether (MTBE), and perchlorate from rocket fuel, airbag inflators, and other artificial and natural sources.
Several minerals are also contaminants, including lead leached from brass fittings or old lead pipes, chromium VI from electroplating and other sources, naturally occurring arsenic, radon, and uranium—all of which can cause cancer—and naturally occurring fluoride, which is desirable in low quantities to prevent tooth decay, but can cause dental fluorosis in higher concentrations.
Some chemicals are commonly present in water wells at levels that are not toxic, but can cause other problems. Calcium and magnesium cause what is known as hard water, which can precipitate and clog pipes or burn out water heaters. Iron and manganese can appear as dark flecks that stain clothing and plumbing, and can promote the growth of iron and manganese bacteria that can form slimy black colonies that clog pipes.
Prevention
The quality of the well water can be significantly improved by lining the well, sealing the well head, fitting a self-priming hand pump, constructing an apron, keeping the area clean and free from stagnant water and animals, moving sources of contamination (pit latrines, garbage pits, on-site sewer systems) away from the well, and carrying out hygiene education. The well should be cleaned with a 1% chlorine solution after construction and every six months thereafter.
Well holes should be covered to prevent loose debris, animals, animal excrement, and wind-blown foreign matter from falling into the hole and decomposing. The cover should remain in place at all times, including when drawing water from the well. A suspended roof over an open hole helps to some degree, but ideally the cover should be tight-fitting and fully enclosing, with only a screened air vent.
Minimum distances and soil percolation requirements between sewage disposal sites and water wells need to be observed. Rules regarding the design and installation of private and municipal septic systems take all these factors into account so that nearby drinking water sources are protected.
Education of the general population in society also plays an important role in protecting drinking water.
Mitigation
Cleanup of contaminated groundwater tends to be very costly. Effective remediation of groundwater is generally very difficult. Contamination of groundwater from surface and subsurface sources can usually be dramatically reduced by correctly centering the casing during construction and filling the casing annulus with an appropriate sealing material. The sealing material (grout) should be placed from immediately above the production zone back to surface, because, in the absence of a correctly constructed casing seal, contaminated fluid can travel into the well through the casing annulus. Centering devices are important (usually one per length of casing or at maximum intervals of 9 m) to ensure that the grouted annular space is of even thickness.
Upon the construction of a new test well, it is considered best practice to invest in a complete battery of chemical and biological tests on the well water in question. Point-of-use treatment is available for individual properties and treatment plants are often constructed for municipal water supplies that suffer from contamination. Most of these treatment methods involve the filtration of the contaminants of concern, and additional protection may be garnered by installing well-casing screens only at depths where contamination is not present.
Well water for personal use is often filtered with reverse osmosis water processors; this process can remove very small particles. A simple, effective way of killing microorganisms is to bring the water to a full boil for one to three minutes, depending on location. A household well contaminated by microorganisms can initially be treated by shock chlorination using bleach, generating concentrations hundreds of times greater than found in community water systems; however, this will not fix any structural problems that led to the contamination and generally requires some expertise and testing for effective application.
After the filtration process, it is common to implement an ultraviolet (UV) system to kill pathogens in the water. UV-C photons penetrate the cell and damage the pathogen's DNA, preventing it from replicating. UV disinfection has gained popularity in recent decades as a chemical-free method of water treatment.
Environmental problems
One risk associated with the placement of water wells is soil salination, which occurs when the water table drops and salt accumulates as the soil dries out. Another environmental problem that is very prevalent in water well drilling is the potential for methane to seep into the water supply.
Soil salination
The potential for soil salination is a large risk when choosing the placement of water wells. Soil salination is caused when the water table of the soil drops over time and salt begins to accumulate. In turn, the increased amount of salt begins to dry the soil out. The increased level of salt in the soil can result in the degradation of soil and can be very harmful to vegetation.
Methane
Methane, an asphyxiant, is a chemical compound that is the main component of natural gas. When methane is introduced into a confined space, it displaces oxygen, reducing oxygen concentration to a level low enough to pose a threat to humans and other aerobic organisms but still high enough for a risk of spontaneous or externally caused explosion. This potential for explosion is what poses such a danger with regard to the drilling and placement of water wells.
Low levels of methane in drinking water are not considered toxic. When methane seeps into a water supply, it is commonly referred to as "methane migration". This can be caused by old natural gas wells near water well systems becoming abandoned and no longer monitored.
Lately, however, the wells and pumps described above are often no longer very efficient and can be replaced by either hand pumps or treadle pumps. Other alternatives are self-dug wells and electric deep-well pumps (for greater depths). Appropriate technology organizations such as Practical Action supply information on how to build and set up (DIY) hand pumps and treadle pumps in practice.
PFAS/PFOS firefighting foam
Per- and polyfluoroalkyl substances (PFAS or PFASs) are a group of synthetic organofluorine chemical compounds that have multiple fluorine atoms attached to an alkyl chain. These "forever chemicals" spread very quickly and very far in groundwater, polluting it permanently. Water wells near certain airports where any foam firefighting or training activities occurred up to 2010 are likely to be contaminated by PFAS.
Water security
A study of roughly 39 million groundwater wells concluded that 6–20% of them are at high risk of running dry if local groundwater levels decline by less than five meters, or if groundwater levels continue to decline, as they are in many areas and possibly in more than half of major aquifers.
Society and culture
Springs and wells have had cultural significance since prehistoric times, leading to the foundation of towns such as Wells and Bath in Somerset. Interest in health benefits led to the growth of spa towns including many with wells in their name, examples being Llandrindod Wells and Royal Tunbridge Wells.
Eratosthenes is sometimes claimed to have used a well in his calculation of the Earth's circumference; however, this is just a simplification used in a shorter explanation of Cleomedes, since Eratosthenes had used a more elaborate and precise method.
Many incidents in the Bible take place around wells, such as the finding of a wife for Isaac in Genesis and Jesus's talk with the Samaritan woman in the Gospels.
A simple model for water well recovery
For a well with impermeable walls, the water in the well is resupplied from the bottom of the well. The rate at which water flows into the well will depend on the pressure difference between the ground water at the well bottom and the well water at the well bottom. The pressure of a column of water of height z will be equal to the weight of the water in the column divided by the cross-sectional area of the column, so the pressure of the ground water a distance zT below the top of the water table will be Pg = ρ g zT,
where ρ is the mass density of the water and g is the acceleration due to gravity. When the water in the well is below the water table level, the pressure at the bottom of the well due to the water in the well will be less than Pg and water will be forced into the well. If z is the distance from the bottom of the well to the well water level and zT is the distance from the bottom of the well to the top of the water table, the pressure difference will be ΔP = ρ g (zT - z).
Applying Darcy's Law, the volume rate (F) at which water is forced into the well will be proportional to this pressure difference: F = ΔP/R = ρ g (zT - z)/R,
where R is the resistance to the flow, which depends on the well cross section, the pressure gradient at the bottom of the well, and the characteristics of the substrate at the well bottom (e.g., porosity). The volume flow rate into the well can also be written in terms of the rate of change of the well water level as F = A dz/dt, where A is the cross-sectional area of the well.
Combining the above three equations yields a simple differential equation in z: dz/dt = (ρ g / R A)(zT - z),
which may be solved to give z = zT - (zT - z0) e^(-t/τ),
where z0 is the well water level at time t=0 and τ is the well time constant: τ = R A/(ρ g).
Note that if dz/dt for a depleted well can be measured, it will be equal to zT/τ and the time constant τ can be calculated. According to the above model, it will take an infinite amount of time for a well to fully recover, but if we consider a well that is 99% recovered to be "practically" recovered, the time for a well to practically recover from a level at z will be t = τ ln[(zT - z)/(0.01 zT)].
For a well that is fully depleted (z=0) it would take a time of about 4.6 τ to practically recover.
The above model does not take into account the depletion of the aquifer due to the pumping which lowered the well water level (See aquifer test and groundwater flow equation). Also, practical wells may have impermeable walls only up to, but not including the bedrock, which will give a larger surface area for water to enter the well.
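The behaviour of this model is easy to check numerically. The short Python sketch below is an illustration only; the parameter values chosen for R, A and zT (and hence τ) are arbitrary assumptions, not figures from the text. It evaluates the recovery curve z(t) = zT - (zT - z0) e^(-t/τ) and confirms that a fully depleted well reaches 99% of the water-table level after roughly 4.6 time constants.
import math
# Illustrative parameters only; these specific values are assumptions, not from the text.
rho = 1000.0   # density of water, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
R = 1.0e7      # flow resistance of the substrate, Pa·s/m^3 (arbitrary)
A = 1.0        # cross-sectional area of the well, m^2 (arbitrary)
zT = 10.0      # depth of the well bottom below the water table, m (arbitrary)
tau = R * A / (rho * g)          # well time constant, s
def level(t, z0=0.0):
    """Well water level at time t for a starting level z0, per the model above."""
    return zT - (zT - z0) * math.exp(-t / tau)
# Time for a fully depleted well (z0 = 0) to be 99% recovered:
t99 = tau * math.log(zT / (0.01 * zT))
print(f"tau = {tau:.1f} s")
print(f"t99 = {t99:.1f} s (= {t99 / tau:.2f} tau)")   # about 4.61 tau
print(f"level(t99) = {level(t99):.2f} m of zT = {zT} m")
With these assumed numbers the well would be practically recovered after a few thousand seconds, but only the ratio t99/τ ≈ 4.6 is independent of the assumed parameters.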
Similar and related water structures
Types of ancient wells
Brick-lined well
Castle well, for use in the castle
Cistern, ancient Greek
Stepwell, ancient India
Modern construction techniques
Baptist well drilling, simple technique
Rodriguez well, for harvesting drinking water in polar regions
Spring supply, piped water supply from the well
Uses
Holy well, sacred wells in various religions
Abraham's well, sacred well in Israel
Ghat, sacred in Hinduism and Buddhism
Drainage and irrigation
Drainage by wells
Shadoof, an irrigation tool that is used to lift water from a water source onto land or into another waterway or basin
Washing
Lavoir, public place for washing clothes.
| Technology | Food, water and health | null |
16829895 | https://en.wikipedia.org/wiki/Smallpox | Smallpox | Smallpox was an infectious disease caused by variola virus (often called smallpox virus), which belongs to the genus Orthopoxvirus. The last naturally occurring case was diagnosed in October 1977, and the World Health Organization (WHO) certified the global eradication of the disease in 1980, making smallpox the only human disease to have been eradicated to date.
The initial symptoms of the disease included fever and vomiting. This was followed by formation of ulcers in the mouth and a skin rash. Over a number of days, the skin rash turned into the characteristic fluid-filled blisters with a dent in the center. The bumps then scabbed over and fell off, leaving scars. The disease was transmitted from one person to another primarily through prolonged face-to-face contact with an infected person or rarely via contaminated objects. Prevention was achieved mainly through the smallpox vaccine. Once the disease had developed, certain antiviral medications could potentially have helped, but such medications did not become available until after the disease was eradicated. The risk of death was about 30%, with higher rates among babies. Often, those who survived had extensive scarring of their skin, and some were left blind.
The earliest evidence of the disease dates to around 1500 BC in Egyptian mummies. The disease historically occurred in outbreaks. It was one of several diseases introduced by the Columbian exchange to the New World, resulting in large swathes of Native Americans dying. In 18th-century Europe, it is estimated that 400,000 people died from the disease per year, and that one-third of all cases of blindness were due to smallpox. Smallpox is estimated to have killed up to 300 million people in the 20th century and around 500 million people in the last 100 years of its existence. Earlier deaths included six European monarchs, including Louis XV of France in 1774. As recently as 1967, 15 million cases occurred a year. The final known fatal case occurred in the United Kingdom in 1978.
Inoculation for smallpox appears to have started in China around the 1500s. Europe adopted this practice from Asia in the first half of the 18th century. In 1796, Edward Jenner introduced the modern smallpox vaccine. In 1967, the WHO intensified efforts to eliminate the disease. Smallpox is one of two infectious diseases to have been eradicated, the other being rinderpest (a disease of even-toed ungulates) in 2011. The term "smallpox" was first used in England in the 16th century to distinguish the disease from syphilis, which was then known as the "great pox". Other historical names for the disease include pox, speckled monster, and red plague.
The United States and Russia retain samples of variola virus in laboratories, which has sparked debates over safety.
Classification
There are two forms of smallpox. Variola major is the severe and most common form, with a more extensive rash and higher fever. Variola minor is a less common presentation, causing less severe disease, typically discrete smallpox, with historical death rates of 1% or less. Subclinical (asymptomatic) infections with variola virus were noted but were not common. In addition, a form called variola sine eruptione (smallpox without rash) was seen generally in vaccinated persons. This form was marked by a fever that occurred after the usual incubation period and could be confirmed only by antibody studies or, rarely, by viral culture. There were also two very rare and fulminating types of smallpox, the malignant (flat) and hemorrhagic forms, which were usually fatal.
Signs and symptoms
The initial symptoms were similar to other viral diseases that are still extant, such as influenza and the common cold: fever of at least , muscle pain, malaise, headache and fatigue. As the digestive tract was commonly involved, nausea, vomiting, and backache often occurred. The early prodromal stage usually lasted 2–4 days. By days 12–15, the first visible lesions – small reddish spots called enanthem – appeared on mucous membranes of the mouth, tongue, palate, and throat, and the temperature fell to near-normal. These lesions rapidly enlarged and ruptured, releasing large amounts of virus into the saliva.
Variola virus tended to attack skin cells, causing the characteristic pimples, or macules, associated with the disease. A rash developed on the skin 24 to 48 hours after lesions on the mucous membranes appeared. Typically the macules first appeared on the forehead, then rapidly spread to the whole face, proximal portions of extremities, the trunk, and lastly to distal portions of extremities. The process took no more than 24 to 36 hours, after which no new lesions appeared. At this point, variola major disease could take several very different courses, which resulted in four types of smallpox disease based on the Rao classification: ordinary, modified, malignant (or flat), and hemorrhagic smallpox. Historically, ordinary smallpox had an overall fatality rate of about 30%, and the malignant and hemorrhagic forms were usually fatal. The modified form was almost never fatal. In early hemorrhagic cases, hemorrhages occurred before any skin lesions developed. The incubation period between contraction and the first obvious symptoms of the disease was 7–14 days.
Ordinary
At least 90% of smallpox cases among unvaccinated persons were of the ordinary type. In this form of the disease, by the second day of the rash the macules had become raised papules. By the third or fourth day, the papules had filled with an opalescent fluid to become vesicles. This fluid became opaque and turbid within 24–48 hours, resulting in pustules.
By the sixth or seventh day, all the skin lesions had become pustules. Between seven and ten days the pustules had matured and reached their maximum size. The pustules were sharply raised, typically round, tense, and firm to the touch. The pustules were deeply embedded in the dermis, giving them the feel of a small bead in the skin. Fluid slowly leaked from the pustules, and by the end of the second week, the pustules had deflated and began to dry up, forming crusts or scabs. By day 16–20 scabs had formed over all of the lesions, which had started to flake off, leaving depigmented scars.
Ordinary smallpox generally produced a discrete rash, in which the pustules stood out on the skin separately. The distribution of the rash was most dense on the face, denser on the extremities than on the trunk, and denser on the distal parts of the extremities than on the proximal. The palms of the hands and soles of the feet were involved in most cases.
Confluent
Sometimes, the blisters merged into sheets, forming a confluent rash, which began to detach the outer layers of skin from the underlying flesh. Patients with confluent smallpox often remained ill even after scabs had formed over all the lesions. In one case series, the case-fatality rate in confluent smallpox was 62%.
Modified
Referring to the character of the eruption and the rapidity of its development, modified smallpox occurred mostly in previously vaccinated people. It was rare in unvaccinated people; one case study found that modified smallpox accounted for 1–2% of cases in unvaccinated people compared with around 25% in vaccinated people. In this form, the prodromal illness still occurred but may have been less severe than in the ordinary type. There was usually no fever during the evolution of the rash. The skin lesions tended to be fewer and evolved more quickly, were more superficial, and may not have shown the uniform characteristic of more typical smallpox. Modified smallpox was rarely, if ever, fatal. This form of variola major was more easily confused with chickenpox.
Malignant
In malignant-type smallpox (also called flat smallpox) the lesions remained almost flush with the skin at the time when raised vesicles would have formed in the ordinary type. It is unknown why some people developed this type. Historically, it accounted for 5–10% of cases, and most (72%) were children. Malignant smallpox was accompanied by a severe prodromal phase that lasted 3–4 days, prolonged high fever, and severe symptoms of viremia. The prodromal symptoms continued even after the onset of the rash. The rash on the mucous membranes (enanthem) was extensive. Skin lesions matured slowly, were typically confluent or semi-confluent, and by the seventh or eighth day, they were flat and appeared to be buried in the skin. Unlike ordinary-type smallpox, the vesicles contained little fluid, were soft and velvety to the touch, and may have contained hemorrhages. Malignant smallpox was nearly always fatal and death usually occurred between the 8th and 12th day of illness. Often, a day or two before death, the lesions turned ashen gray, which, along with abdominal distension, was a bad prognostic sign. This form is thought to be caused by deficient cell-mediated immunity to smallpox. If the person recovered, the lesions gradually faded and did not form scars or scabs.
Hemorrhagic
Hemorrhagic smallpox is a severe form accompanied by extensive bleeding into the skin, mucous membranes, gastrointestinal tract, and viscera. This form develops in approximately 2% of infections and occurs mostly in adults. Pustules do not typically form in hemorrhagic smallpox. Instead, bleeding occurs under the skin, making it look charred and black, hence this form of the disease is also referred to as variola nigra or "black pox". Hemorrhagic smallpox has very rarely been caused by variola minor virus. While bleeding may occur in mild cases and not affect outcomes, hemorrhagic smallpox is typically fatal. Vaccination does not appear to provide any immunity to either form of hemorrhagic smallpox and some cases even occurred among people that were revaccinated shortly before. It has two forms.
Early
The early or fulminant form of hemorrhagic smallpox (referred to as purpura variolosa) begins with a prodromal phase characterized by a high fever, severe headache, and abdominal pain. The skin becomes dusky and erythematous, and this is rapidly followed by the development of petechiae and bleeding in the skin, conjunctiva and mucous membranes. Death often occurs suddenly between the fifth and seventh days of illness, when only a few insignificant skin lesions are present. Some people survive a few days longer, during which time the skin detaches and fluid accumulates under it, rupturing at the slightest injury. People are usually conscious until death or shortly before. Autopsy reveals petechiae and bleeding in the spleen, kidney, serous membranes, skeletal muscles, pericardium, liver, gonads and bladder. Historically, this condition was frequently misdiagnosed, with the correct diagnosis made only at autopsy. This form is more likely to occur in pregnant women than in the general population (approximately 16% of cases in unvaccinated pregnant women were early hemorrhagic smallpox, versus roughly 1% in nonpregnant women and adult males). The case fatality rate of early hemorrhagic smallpox approaches 100%.
Late
There is also a later form of hemorrhagic smallpox (referred to as late hemorrhagic smallpox, or variolosa pustula hemorrhagica). The prodrome is severe and similar to that observed in early hemorrhagic smallpox, and the fever persists throughout the course of the disease. Bleeding appears in the early eruptive period (but later than that seen in purpura variolosa), and the rash is often flat and does not progress beyond the vesicular stage. Hemorrhages in the mucous membranes appear to occur less often than in the early hemorrhagic form. Sometimes the rash forms pustules which bleed at the base and then undergo the same process as in ordinary smallpox. This form of the disease is characterized by a decrease in all of the elements of the coagulation cascade and an increase in circulating antithrombin. This form of smallpox occurs in anywhere from 3% to 25% of fatal cases, depending on the virulence of the smallpox strain. Most people with the late-stage form die within eight to 10 days of illness. Among the few who recover, the hemorrhagic lesions gradually disappear after a long period of convalescence. The case fatality rate for late hemorrhagic smallpox is around 90–95%. Pregnant women are slightly more likely to experience this form of the disease, though not as much as early hemorrhagic smallpox.
Cause
Smallpox is caused by infection with variola virus, which belongs to the family Poxviridae, subfamily Chordopoxvirinae, genus Orthopoxvirus.
Evolution
The date of the appearance of smallpox is not settled. It most probably evolved from a terrestrial African rodent virus between 68,000 and 16,000 years ago. The wide range of dates is due to the different records used to calibrate the molecular clock. One clade was the variola major strains (the more clinically severe form of smallpox) which spread from Asia between 400 and 1,600 years ago. A second clade included both alastrim (a phenotypically mild smallpox) described from the American continents and isolates from West Africa which diverged from an ancestral strain between 1,400 and 6,300 years before present. This clade further diverged into two subclades at least 800 years ago.
A second estimate has placed the separation of variola virus from Taterapox (an Orthopoxvirus of some African rodents including gerbils) at 3,000 to 4,000 years ago. This is consistent with archaeological and historical evidence regarding the appearance of smallpox as a human disease which suggests a relatively recent origin. If the mutation rate is assumed to be similar to that of the herpesviruses, the divergence date of variola virus from Taterapox has been estimated to be 50,000 years ago. While this is consistent with the other published estimates, it suggests that the archaeological and historical evidence is very incomplete. Better estimates of mutation rates in these viruses are needed.
Examination of a strain that dates from found that this strain was basal to the other presently sequenced strains. The mutation rate of this virus is well modeled by a molecular clock. Diversification of strains only occurred in the 18th and 19th centuries.
Virology
Variola virus is large and brick-shaped and is approximately 302 to 350 nanometers by 244 to 270 nm, with a single linear double stranded DNA genome 186 kilobase pairs (kbp) in size and containing a hairpin loop at each end.
Four orthopoxviruses cause infection in humans: variola, vaccinia, cowpox, and monkeypox. Variola virus infects only humans in nature, although primates and other animals have been infected in an experimental setting. Vaccinia, cowpox, and monkeypox viruses can infect both humans and other animals in nature.
The life cycle of poxviruses is complicated by having multiple infectious forms, with differing mechanisms of cell entry. Poxviruses are unique among human DNA viruses in that they replicate in the cytoplasm of the cell rather than in the nucleus. To replicate, poxviruses produce a variety of specialized proteins not produced by other DNA viruses, the most important of which is a viral-associated DNA-dependent RNA polymerase.
Both enveloped and unenveloped virions are infectious. The viral envelope is made of modified Golgi membranes containing viral-specific polypeptides, including hemagglutinin. Infection with either variola major virus or variola minor virus confers immunity against the other.
Variola major
The more common, infectious form of the disease was caused by the variola major virus strain, known for its significantly higher mortality rate compared to its counterpart, variola minor. Variola major had a fatality rate of around 30%, while variola minor’s mortality rate was about 1%. Throughout the 18th century, variola major was responsible for around 400,000 deaths annually in Europe alone. Survivors of the disease often faced lifelong consequences, such as blindness and severe scarring, which were nearly universal among those who recovered.
In the first half of the 20th century, variola major was the primary cause of smallpox outbreaks across Asia and most of Africa. Meanwhile, variola minor was more commonly found in regions of Europe, North America, South America, and certain parts of Africa.
Variola minor
Variola minor virus, also called alastrim, was a less common form of the virus, and much less deadly. Although variola minor had the same incubation period and pathogenetic stages as smallpox, it is believed to have had a mortality rate of less than 1%, as compared to variola major's 30%. Like variola major, variola minor was spread through inhalation of the virus in the air, which could occur through face-to-face contact or through fomites. Infection with variola minor virus conferred immunity against the more dangerous variola major virus.
Because variola minor was a less debilitating disease than smallpox, people were more frequently ambulant and thus able to infect others more rapidly. As such, variola minor swept through the United States, Great Britain, and South Africa in the early 20th century, becoming the dominant form of the disease in those areas and thus rapidly decreasing mortality rates. Along with variola major, the minor form has now been totally eradicated from the globe. The last case of indigenous variola minor was reported in a Somali cook, Ali Maow Maalin, in October 1977, and smallpox was officially declared eradicated worldwide in May 1980. Variola minor was also called white pox, kaffir pox, Cuban itch, West Indian pox, milk pox, and pseudovariola.
Genome composition
The genome of variola major virus is about 186,000 base pairs in length. It is made from linear double stranded DNA and contains the coding sequence for about 200 genes. The genes are usually not overlapping and typically occur in blocks that point towards the closer terminal region of the genome. The coding sequence of the central region of the genome is highly consistent across orthopoxviruses, and the arrangement of genes is consistent across chordopoxviruses.
The center of the variola virus genome contains the majority of the essential viral genes, including the genes for structural proteins, DNA replication, transcription, and mRNA synthesis. The ends of the genome vary more across strains and species of orthopoxviruses. These regions contain proteins that modulate the hosts' immune systems, and are primarily responsible for the variability in virulence across the orthopoxvirus family. These terminal regions in poxviruses are inverted terminal repetition (ITR) sequences. These sequences are identical but oppositely oriented on either end of the genome, leading to the genome being a continuous loop of DNA. Components of the ITR sequences include an incompletely base-paired A/T-rich hairpin loop, a region of roughly 100 base pairs necessary for resolving concatemeric DNA (a stretch of DNA containing multiple copies of the same sequence), a few open reading frames, and short tandemly repeating sequences of varying number and length. The ITRs of poxviridae vary in length across strains and species. The coding sequences for most of the viral proteins in variola major virus have at least 90% similarity with the genome of vaccinia, a related virus used for vaccination against smallpox.
Gene expression
Gene expression of variola virus occurs entirely within the cytoplasm of the host cell, and follows a distinct progression during infection. After entry of an infectious virion into a host cell, synthesis of viral mRNA can be detected within 20 minutes. About half of the viral genome is transcribed prior to the replication of viral DNA. The first set of expressed genes are transcribed by pre-existing viral machinery packaged within the infecting virion. These genes encode the factors necessary for viral DNA synthesis and for transcription of the next set of expressed genes. Unlike most DNA viruses, DNA replication in variola virus and other poxviruses takes place within the cytoplasm of the infected cell. The exact timing of DNA replication after infection of a host cell varies across the poxviridae. Recombination of the genome occurs within actively infected cells. Following the onset of viral DNA replication, an intermediate set of genes codes for transcription factors of late gene expression. The products of the later genes include transcription factors necessary for transcribing the early genes for new virions, as well as viral RNA polymerase and other essential enzymes for new viral particles. These proteins are then packaged into new infectious virions capable of infecting other cells.
Research
Two live samples of variola major virus remain, one in the United States at the CDC in Atlanta, and one at the Vector Institute in Koltsovo, Russia. Research with the remaining virus samples is tightly controlled, and each research proposal must be approved by the WHO and the World Health Assembly (WHA). Most research on poxviruses is performed using the closely related Vaccinia virus as a model organism. Vaccinia virus, which is used to vaccinate for smallpox, is also under research as a viral vector for vaccines for unrelated diseases.
The genome of variola major virus was first sequenced in its entirety in the 1990s. The complete coding sequence is publicly available online. The current reference sequence for variola major virus was sequenced from a strain that circulated in India in 1967. In addition, there are sequences for samples of other strains that were collected during the WHO eradication campaign. A genome browser for a complete database of annotated sequences of variola virus and other poxviruses is publicly available through the Viral Bioinformatics Resource Center.
Genetic engineering
The WHO currently bans genetic engineering of the variola virus. However, in 2004, a committee advisory to the WHO voted in favor of allowing editing of the genome of the two remaining samples of variola major virus to add a marker gene. This gene, called GFP, or green fluorescent protein, would cause live samples of the virus to glow green under fluorescent light. The insertion of this gene, which would not influence the virulence of the virus, would be the only allowed modification of the genome. The committee stated the proposed modification would aid in research of treatments by making it easier to assess whether a potential treatment was effective in killing viral samples. The recommendation could only take effect if approved by the WHA. When the WHA discussed the proposal in 2005, it refrained from taking a formal vote on the proposal, stating that it would review individual research proposals one at a time. Addition of the GFP gene to the Vaccinia genome is routinely performed during research on the closely related Vaccinia virus.
Controversies
The public availability of the variola virus complete sequence has raised concerns about the possibility of illicit synthesis of infectious virus. Vaccinia, a cousin of the variola virus, was artificially synthesized in 2002 by NIH scientists. They used a previously established method that involved using a recombinant viral genome to create a self-replicating bacterial plasmid that produced viral particles.
In 2016, another group synthesized the horsepox virus using publicly available sequence data for horsepox. The researchers argued that their work would be beneficial to creating a safer and more effective vaccine for smallpox, although an effective vaccine is already available. The horsepox virus had previously seemed to have gone extinct, raising concern about potential revival of variola major and causing other scientists to question their motives. Critics found it especially concerning that the group was able to recreate viable virus in a short time frame with relatively little cost or effort. Although the WHO bans individual laboratories from synthesizing more than 20% of the genome at a time, and purchases of smallpox genome fragments are monitored and regulated, a group with malicious intentions could compile, from multiple sources, the full synthetic genome necessary to produce viable virus.
Transmission
Smallpox was highly contagious, but generally spread more slowly and less widely than some other viral diseases, perhaps because transmission required close contact and occurred after the onset of the rash. The overall rate of infection was also affected by the short duration of the infectious stage. In temperate areas, the number of smallpox infections was highest during the winter and spring. In tropical areas, seasonal variation was less evident and the disease was present throughout the year. Age distribution of smallpox infections depended on acquired immunity. Vaccination immunity declined over time and was probably lost within thirty years. Smallpox was not known to be transmitted by insects or animals and there was no asymptomatic carrier state.
Transmission occurred through inhalation of airborne variola virus, usually droplets expressed from the oral, nasal, or pharyngeal mucosa of an infected person. It was transmitted from one person to another primarily through prolonged face-to-face contact with an infected person.
Some infections of laundry workers with smallpox after handling contaminated bedding suggested that smallpox could be spread through direct contact with contaminated objects (fomites), but this was found to be rare. Also rarely, smallpox was spread by virus carried in the air in enclosed settings such as buildings, buses, and trains. The virus can cross the placenta, but the incidence of congenital smallpox was relatively low. Smallpox was not notably infectious in the prodromal period and viral shedding was usually delayed until the appearance of the rash, which was often accompanied by lesions in the mouth and pharynx. The virus can be transmitted throughout the course of the illness, but this happened most frequently during the first week of the rash when most of the skin lesions were intact. Infectivity waned in 7 to 10 days when scabs formed over the lesions, but the infected person was contagious until the last smallpox scab fell off.
Concern about possible use of smallpox for biological warfare led in 2002 to Donald K. Milton's detailed review of existing research on its transmission and of then-current recommendations for controlling its spread. He agreed, citing Rao, Fenner and others, that “careful epidemiologic investigation rarely implicated fomites as a source of infection”; noted that “Current recommendations for control of secondary smallpox infections emphasize transmission ‘by expelled droplets to close contacts (those within 6–7 feet)’”; but warned that the “emphasis on spread via large droplets may reduce the vigilance with which more difficult airborne precautions [i.e. against finer droplets capable of traveling longer distances and penetrating deeply into the lower respiratory tract] are maintained”.
Mechanism
Once inhaled, the variola virus invaded the mucous membranes of the mouth, throat, and respiratory tract. From there, it migrated to regional lymph nodes and began to multiply. In the initial growth phase, the virus seemed to move from cell to cell, but by around the 12th day, widespread lysis of infected cells occurred and the virus could be found in the bloodstream in large numbers, a condition known as viremia. This resulted in the second wave of multiplication in the spleen, bone marrow, and lymph nodes.
Diagnosis
The clinical definition of ordinary smallpox is an illness with acute onset of fever equal to or greater than followed by a rash characterized by firm, deep-seated vesicles or pustules in the same stage of development without other apparent cause. When a clinical case was observed, smallpox was confirmed using laboratory tests.
Microscopically, poxviruses produce characteristic cytoplasmic inclusion bodies, the most important of which are known as Guarnieri bodies, and are the sites of viral replication. Guarnieri bodies are readily identified in skin biopsies stained with hematoxylin and eosin, and appear as pink blobs. They are found in virtually all poxvirus infections but the absence of Guarnieri bodies could not be used to rule out smallpox. The diagnosis of an orthopoxvirus infection can also be made rapidly by electron microscopic examination of pustular fluid or scabs. All orthopoxviruses exhibit identical brick-shaped virions by electron microscopy. If particles with the characteristic morphology of herpesviruses are seen this will eliminate smallpox and other orthopoxvirus infections.
Definitive laboratory identification of variola virus involved growing the virus on chorioallantoic membrane (part of a chicken embryo) and examining the resulting pock lesions under defined temperature conditions. Strains were characterized by polymerase chain reaction (PCR) and restriction fragment length polymorphism (RFLP) analysis. Serologic tests and enzyme linked immunosorbent assays (ELISA), which measured variola virus-specific immunoglobulin and antigen were also developed to assist in the diagnosis of infection.
Chickenpox was commonly confused with smallpox in the immediate post-eradication era. Chickenpox and smallpox could be distinguished by several methods. Unlike smallpox, chickenpox does not usually affect the palms and soles. Additionally, chickenpox pustules are of varying size due to variations in the timing of pustule eruption: smallpox pustules are all very nearly the same size since the viral effect progresses more uniformly. A variety of laboratory methods were available for detecting chickenpox in the evaluation of suspected smallpox cases.
Prevention
The earliest procedure used to prevent smallpox was inoculation with variola minor virus (a method later known as variolation after the introduction of smallpox vaccine to avoid possible confusion), which likely occurred in India, Africa, and China well before the practice arrived in Europe. The idea that inoculation originated in India has been challenged, as few of the ancient Sanskrit medical texts described the process of inoculation. Accounts of inoculation against smallpox in China can be found as early as the late 10th century, and the procedure was widely practiced by the 16th century, during the Ming dynasty. If successful, inoculation produced lasting immunity to smallpox. Because the person was infected with variola virus, a severe infection could result, and the person could transmit smallpox to others. Variolation had a 0.5–2 percent mortality rate, considerably less than the 20–30 percent mortality rate of smallpox. Two reports on the Chinese practice of inoculation were received by the Royal Society in London in 1700; one by Dr. Martin Lister who received a report by an employee of the East India Company stationed in China and another by Clopton Havers.
Lady Mary Wortley Montagu observed smallpox inoculation during her stay in the Ottoman Empire, writing detailed accounts of the practice in her letters, and enthusiastically promoted the procedure in England upon her return in 1718. According to Voltaire (1742), the Turks derived their use of inoculation from neighbouring Circassia. Voltaire does not speculate on where the Circassians derived their technique from, though he reports that the Chinese have practiced it "these hundred years". In 1721, Cotton Mather and colleagues provoked controversy in Boston by inoculating hundreds. After publishing The present method of inoculating for the small-pox in 1767, Dr Thomas Dimsdale was invited to Russia to variolate the Empress Catherine the Great of Russia and her son, Grand Duke Paul, which he successfully did in 1768. In 1796, Edward Jenner, a doctor in Berkeley, Gloucestershire, rural England, discovered that immunity to smallpox could be produced by inoculating a person with material from a cowpox lesion. Cowpox is a poxvirus in the same family as variola. Jenner called the material used for inoculation vaccine from the root word vacca, which is Latin for cow. The procedure was much safer than variolation and did not involve a risk of smallpox transmission. Vaccination to prevent smallpox was soon practiced all over the world. During the 19th century, the cowpox virus used for smallpox vaccination was replaced by the vaccinia virus. Vaccinia is in the same family as cowpox and variola virus but is genetically distinct from both. The origin of the vaccinia virus and how it came to be in the vaccine are not known.
The current formulation of the smallpox vaccine is a live virus preparation of the infectious vaccinia virus. The vaccine is given using a bifurcated (two-pronged) needle that is dipped into the vaccine solution. The needle is used to prick the skin (usually the upper arm) several times in a few seconds. If successful, a red and itchy bump develops at the vaccine site in three or four days. In the first week, the bump becomes a large blister (called a "Jennerian vesicle") which fills with pus and begins to drain. During the second week, the blister begins to dry up, and a scab forms. The scab falls off in the third week, leaving a small scar.
The antibodies induced by the vaccinia vaccine are cross-protective for other orthopoxviruses, such as monkeypox, cowpox, and variola (smallpox) viruses. Neutralizing antibodies are detectable 10 days after first-time vaccination and seven days after revaccination. Historically, the vaccine has been effective in preventing smallpox infection in 95 percent of those vaccinated. Smallpox vaccination provides a high level of immunity for three to five years and decreasing immunity thereafter. If a person is vaccinated again later, the immunity lasts even longer. Studies of smallpox cases in Europe in the 1950s and 1960s demonstrated that the fatality rate among persons vaccinated less than 10 years before exposure was 1.3 percent; it was 7 percent among those vaccinated 11 to 20 years prior, and 11 percent among those vaccinated 20 or more years before infection. By contrast, 52 percent of unvaccinated persons died.
There are side effects and risks associated with the smallpox vaccine. In the past, about 1 out of 1,000 people vaccinated for the first time experienced serious, but non-life-threatening, reactions, including toxic or allergic reaction at the site of the vaccination (erythema multiforme), spread of the vaccinia virus to other parts of the body, and spread to other individuals. Potentially life-threatening reactions occurred in 14 to 500 people out of every 1 million people vaccinated for the first time. Based on past experience, it is estimated that 1 or 2 people in 1 million (0.000198 percent) who receive the vaccine may die as a result, most often the result of postvaccinial encephalitis or severe necrosis in the area of vaccination (called progressive vaccinia).
Given these risks, as smallpox became effectively eradicated and the number of naturally occurring cases fell below the number of vaccine-induced illnesses and deaths, routine childhood vaccination was discontinued in the United States in 1972 and was abandoned in most European countries in the early 1970s. Routine vaccination of health care workers was discontinued in the U.S. in 1976, and among military recruits in 1990 (although military personnel deploying to the Middle East and Korea still receive the vaccination). By 1986, routine vaccination had ceased in all countries. It is now primarily recommended for laboratory workers at risk for occupational exposure. However, the possibility of variola virus being used as a biological weapon has rekindled interest in the development of newer vaccines. The smallpox vaccine is also effective in, and therefore administered for, the prevention of mpox.
ACAM2000 is a smallpox vaccine developed by Acambis, approved for use in the United States by the U.S. FDA on August 31, 2007. It contains live vaccinia virus, cloned from the same strain used in an earlier vaccine, Dryvax. While the Dryvax virus was cultured in the skin of calves and freeze-dried, ACAM2000's virus is cultured in kidney epithelial cells (Vero cells) from an African green monkey. Efficacy and adverse reaction incidence are similar to Dryvax. The vaccine is not routinely available to the US public; it is, however, used in the military and maintained in the Strategic National Stockpile.
Treatment
Smallpox vaccination within three days of exposure will prevent or significantly lessen the severity of smallpox symptoms in the vast majority of people. Vaccination four to seven days after exposure can offer some protection from disease or may modify the severity of the disease. Other than vaccination, treatment of smallpox is primarily supportive, such as wound care and infection control, fluid therapy, and possible ventilator assistance. Flat and hemorrhagic types of smallpox are treated with the same therapies used to treat shock, such as fluid resuscitation. People with semi-confluent and confluent types of smallpox may have therapeutic issues similar to patients with extensive skin burns.
Antiviral treatments have improved since the last large smallpox epidemics, and as of 2004, studies suggested that the antiviral drug cidofovir might be useful as a therapeutic agent. The drug must be administered intravenously and may cause serious kidney toxicity. In July 2018, the Food and Drug Administration approved tecovirimat, the first drug approved for treatment of smallpox. However, viral mutations causing resistance have been known to occur during treatment, particularly since the drug's use in the 2022–2023 mpox outbreak, which jeopardizes its effectiveness for smallpox biothreat preparedness.
In June 2021, brincidofovir was approved for medical use in the United States for the treatment of human smallpox disease caused by variola virus.
Prognosis
The mortality rate from variola minor is approximately 1%, while the mortality rate from variola major is approximately 30%.
Ordinary-type confluent smallpox is fatal about 50–75% of the time, and ordinary-type semi-confluent about 25–50% of the time; in cases where the rash is discrete, the case-fatality rate is less than 10%. The overall fatality rate for children younger than 1 year of age is 40–50%. Hemorrhagic and flat types have the highest fatality rates. The fatality rate for flat or late hemorrhagic-type smallpox is 90% or greater, and nearly 100% in cases of early hemorrhagic smallpox. The case-fatality rate for variola minor is 1% or less. There is no evidence of chronic or recurrent infection with variola virus. Flat smallpox in vaccinated people was extremely rare but less lethal, with one case series showing a 67% death rate.
In fatal cases of ordinary smallpox, death usually occurs between days 10 and 16 of the illness. The cause of death from smallpox is not clear, but the infection is now known to involve multiple organs. Circulating immune complexes, overwhelming viremia, or an uncontrolled immune response may be contributing factors. In early hemorrhagic smallpox, death occurs suddenly about six days after the fever develops. The cause of death in early hemorrhagic cases is commonly heart failure and pulmonary edema. In late hemorrhagic cases, high and sustained viremia, severe platelet loss, and a poor immune response were often cited as causes of death. In flat smallpox, modes of death are similar to those in burns, with loss of fluid, protein, and electrolytes, and fulminating sepsis.
Complications
Complications of smallpox arise most commonly in the respiratory system and range from simple bronchitis to fatal pneumonia. Respiratory complications tend to develop on about the eighth day of the illness and can be either viral or bacterial in origin. Secondary bacterial infection of the skin is a relatively uncommon complication of smallpox. When this occurs, the fever usually remains elevated.
Other complications include encephalitis (1 in 500 patients), which is more common in adults and may cause temporary disability; permanent pitted scars, most notably on the face; and complications involving the eyes (2% of all cases). Pustules can form on the eyelid, conjunctiva, and cornea, leading to complications such as conjunctivitis, keratitis, corneal ulcer, iritis, iridocyclitis, and atrophy of the optic nerve. Blindness results in approximately 35–40% of eyes affected with keratitis and corneal ulcer. Hemorrhagic smallpox can cause subconjunctival and retinal hemorrhages. In 2–5% of young children with smallpox, virions reach the joints and bone, causing osteomyelitis variolosa. Bony lesions are symmetrical, most common in the elbows and legs, and characteristically cause separation of the epiphysis and marked periosteal reactions. Swollen joints limit movement, and arthritis may lead to limb deformities, ankylosis, malformed bones, flail joints, and stubby fingers.
Between 65 and 80% of survivors are marked with deep pitted scars (pockmarks), most prominent on the face.
History
Disease emergence
The earliest credible clinical evidence of smallpox is found in the descriptions of smallpox-like disease in medical writings from ancient India (as early as 1500 BCE), and China (1122 BCE), as well as a study of the Egyptian mummy of Ramses V (died 1145 BCE). It has been speculated that Egyptian traders brought smallpox to India during the 1st millennium BCE, where it remained as an endemic human disease for at least 2000 years. Smallpox was probably introduced into China during the 1st century CE from the southwest, and in the 6th century was carried from China to Japan. In Japan, the epidemic of 735–737 is believed to have killed as much as one-third of the population. At least seven religious deities have been specifically dedicated to smallpox, such as the god Sopona in the Yoruba religion in West Africa. In India, the Hindu goddess of smallpox, Shitala, was worshipped in temples throughout the country.
A different viewpoint is that smallpox emerged in 1588 CE and that the earlier reported cases were incorrectly identified as smallpox.
The timing of the arrival of smallpox in Europe and south-western Asia is less clear. Smallpox is not clearly described in either the Old or New Testaments of the Bible or in the literature of the Greeks or Romans. While some have identified the Plague of Athens – which was said to have originated in "Ethiopia" and Egypt – or the plague that lifted Carthage's 396 BCE siege of Syracuse – with smallpox, many scholars agree it is very unlikely such a serious disease as variola major would have escaped being described by Hippocrates if it had existed in the Mediterranean region during his lifetime.
While the Antonine Plague that swept through the Roman Empire in 165–180 CE may have been caused by smallpox, Saint Nicasius of Rheims became the patron saint of smallpox victims for having supposedly survived a bout in 450, and Saint Gregory of Tours recorded a similar outbreak in France and Italy in 580, the first use of the term variola. Other historians speculate that Arab armies first carried smallpox from Africa into Southwestern Europe during the 7th and 8th centuries. In the 9th century, the Persian physician Rhazes provided one of the most definitive descriptions of smallpox and was the first to differentiate smallpox from measles and chickenpox in his Kitab fi al-jadari wa-al-hasbah (The Book of Smallpox and Measles). During the Middle Ages several smallpox outbreaks occurred in Europe. However, smallpox had not become established there until the population growth and mobility marked by the Crusades allowed it to do so. By the 16th century, smallpox had become entrenched across most of Europe, where it had a mortality rate as high as 30 percent. This endemic occurrence of smallpox in Europe is of particular historical importance, as successive exploration and colonization by Europeans tended to spread the disease to other nations. By the 16th century, smallpox had become a predominant cause of morbidity and mortality throughout much of the world.
There were no credible descriptions of smallpox-like disease in the Americas before the westward exploration by Europeans in the 15th century CE. Smallpox was introduced into the Caribbean island of Hispaniola in 1507, and into the mainland in 1520, when Spanish settlers from Hispaniola arrived in Mexico, inadvertently carrying smallpox with them. Because the native Amerindian population had no acquired immunity to this new disease, their peoples were decimated by epidemics. Such disruption and population losses were an important factor in the Spanish conquest of the Aztecs and the Incas. Similarly, English settlement of the east coast of North America in 1633 in Plymouth, Massachusetts, was accompanied by devastating outbreaks of smallpox among Native American populations, and subsequently among the native-born colonists. Case fatality rates during outbreaks in Native American populations were as high as 90%. Smallpox was introduced into Australia in 1789 and again in 1829, though colonial surgeons, who by 1829 were attempting to distinguish between smallpox and chickenpox (which could be almost equally fatal to Aborigines), were divided as to whether the 1829–1830 epidemic was chickenpox or smallpox. Although smallpox was never endemic on the continent, it has been described as the principal cause of death in Aboriginal populations between 1780 and 1870.
By the mid-18th century, smallpox was a major endemic disease everywhere in the world except in Australia and small islands untouched by outside exploration. In 18th century Europe, smallpox was a leading cause of death, killing an estimated 400,000 Europeans each year. Up to 10 percent of Swedish infants died of smallpox each year, and the death rate of infants in Russia might have been even higher. The widespread use of variolation in a few countries, notably Great Britain, its North American colonies, and China, somewhat reduced the impact of smallpox among the wealthy classes during the latter part of the 18th century, but a real reduction in its incidence did not occur until vaccination became a common practice toward the end of the 19th century. Improved vaccines and the practice of re-vaccination led to a substantial reduction in cases in Europe and North America, but smallpox remained almost unchecked everywhere else in the world. By the mid-20th century, variola minor occurred along with variola major, in varying proportions, in many parts of Africa. Patients with variola minor experience only a mild systemic illness, are often ambulant throughout the course of the disease, and are therefore able to more easily spread disease. Infection with variola minor virus induces immunity against the more deadly variola major form. Thus, as variola minor spread all over the US, into Canada, the South American countries, and Great Britain, it became the dominant form of smallpox, further reducing mortality rates.
Eradication
The first clear reference to smallpox inoculation was made by the Chinese author Wan Quan (1499–1582) in his Douzhen xinfa ("Pox Rash Teachings"), published in 1549, with the earliest hints of the practice in China during the 10th century. In China, powdered smallpox scabs were blown up the noses of the healthy. People would then develop a mild case of the disease and from then on were immune to it. The technique did have a 0.5–2.0% mortality rate, but that was considerably less than the 20–30% mortality rate of the disease itself. Two reports on the Chinese practice of inoculation were received by the Royal Society in London in 1700: one by Dr. Martin Lister, who received a report from an employee of the East India Company stationed in China, and another by Clopton Havers. Voltaire (1742) reports that the Chinese had practiced smallpox inoculation "these hundred years". Variolation had also been witnessed in Turkey by Lady Mary Wortley Montagu, who later introduced it in the UK.
An early mention of the possibility of smallpox's eradication was made in reference to the work of Johnnie Notions, a self-taught inoculator from Shetland, Scotland. Notions found success in treating people from at least the late 1780s through a method devised by himself despite having no formal medical background. His method involved exposing smallpox pus to peat smoke, burying it in the ground with camphor for up to eight years, then inserting the matter into a person's skin using a knife and covering the incision with a cabbage leaf. He was reputed not to have lost a single patient. Arthur Edmondston, in writings on Notions' technique that were published in 1809, stated, "Had every practitioner been as uniformly successful in the disease as he was, the small-pox might have been banished from the face of the earth, without injuring the system, or leaving any doubt as to the fact."
The English physician Edward Jenner demonstrated the effectiveness of cowpox to protect humans from smallpox in 1796, after which various attempts were made to eliminate smallpox on a regional scale. In Russia in 1796, the first child to receive this treatment was bestowed the name "Vaccinov" by Catherine the Great, and was educated at the expense of the nation.
The introduction of the vaccine to the New World took place in Trinity, Newfoundland in 1800 by Dr. John Clinch, boyhood friend and medical colleague of Jenner. As early as 1803, the Spanish Crown organized the Balmis expedition to transport the vaccine to the Spanish colonies in the Americas and the Philippines, and establish mass vaccination programs there. The U.S. Congress passed the Vaccine Act of 1813 to ensure that safe smallpox vaccine would be available to the American public. By about 1817, a robust state vaccination program existed in the Dutch East Indies.
On March 26, 1806, the Swiss canton of Thurgau became the first state in the world to introduce compulsory smallpox vaccinations, by order of the cantonal councillor Jakob Christoph Scherb. Half a year later, Elisa Bonaparte issued a corresponding order for her Principality of Lucca and Piombino. Baden followed in 1809, Prussia in 1815, Sweden in 1816, Württemberg in 1818, and the German Empire in 1874 through the Reich Vaccination Act. In Lutheran Sweden, the Protestant clergy played a pioneering role in voluntary smallpox vaccination as early as 1800. The first vaccination was carried out in Liechtenstein in 1801, and from 1812 it was mandatory to vaccinate.
In British India a program was launched to propagate smallpox vaccination, through Indian vaccinators, under the supervision of European officials. Nevertheless, British vaccination efforts in India, and in Burma in particular, were hampered by indigenous preference for inoculation and distrust of vaccination, despite tough legislation, improvements in the local efficacy of the vaccine and vaccine preservative, and education efforts. By 1832, the federal government of the United States established a smallpox vaccination program for Native Americans. In 1842, the United Kingdom banned inoculation, later progressing to mandatory vaccination. The British government introduced compulsory smallpox vaccination by an Act of Parliament in 1853.
In the United States, from 1843 to 1855, first Massachusetts and then other states required smallpox vaccination. Although some disliked these measures, coordinated efforts against smallpox went on, and the disease continued to diminish in the wealthy countries. In Northern Europe a number of countries had eliminated smallpox by 1900, and by 1914, the incidence in most industrialized countries had decreased to comparatively low levels.
Vaccination continued in industrialized countries as protection against reintroduction until the mid-to-late 1970s. Australia and New Zealand are two notable exceptions; neither experienced endemic smallpox, and neither vaccinated widely, relying instead on protection by distance and strict quarantines.
The first hemisphere-wide effort to eradicate smallpox was made in 1950 by the Pan American Health Organization. The campaign was successful in eliminating smallpox from all countries of the Americas except Argentina, Brazil, Colombia, and Ecuador. In 1958, Professor Viktor Zhdanov, Deputy Minister of Health for the USSR, called on the World Health Assembly to undertake a global initiative to eradicate smallpox. The proposal (Resolution WHA11.54) was accepted in 1959. At this point, 2 million people were dying from smallpox every year. Overall, the progress towards eradication was disappointing, especially in Africa and in the Indian subcontinent. In 1966, an international team, the Smallpox Eradication Unit, was formed under the leadership of an American, Donald Henderson. In 1967, the World Health Organization intensified the global smallpox eradication campaign by contributing $2.4 million annually to the effort, and adopted the new disease surveillance method promoted by Czech epidemiologist Karel Raška.
In the early 1950s, an estimated 50 million cases of smallpox occurred in the world each year. To eradicate smallpox, each outbreak had to be stopped from spreading, by isolation of cases and vaccination of everyone who lived close by. This process is known as "ring vaccination". The key to this strategy was the monitoring of cases in a community (known as surveillance) and containment.
The initial problem the WHO team faced was inadequate reporting of smallpox cases, as many cases did not come to the attention of the authorities. The fact that humans are the only reservoir for smallpox infection (the virus only infected humans and not other animals) and that carriers did not exist played a significant role in the eradication of smallpox. The WHO established a network of consultants who assisted countries in setting up surveillance and containment activities. Early on, donations of vaccine were provided primarily by the Soviet Union and the United States, but by 1973, more than 80 percent of all vaccine was produced in developing countries. The Soviet Union provided one and a half billion doses between 1958 and 1979, as well as the medical staff.
The last major European outbreak of smallpox was in 1972 in Yugoslavia, after a pilgrim from Kosovo returned from the Middle East, where he had contracted the virus. The epidemic infected 175 people, causing 35 deaths. Authorities declared martial law, enforced quarantine, and undertook widespread re-vaccination of the population, enlisting the help of the WHO. In two months, the outbreak was over. Prior to this, there had been a smallpox outbreak in May–July 1963 in Stockholm, Sweden, brought from the Far East by a Swedish sailor; this had been dealt with by quarantine measures and vaccination of the local population.
By the end of 1975, smallpox persisted only in the Horn of Africa. Conditions were very difficult in Ethiopia and Somalia, where there were few roads. Civil war, famine, and refugees made the task even more difficult. An intensive surveillance, containment, and vaccination program was undertaken in these countries in early and mid-1977, under the direction of Australian microbiologist Frank Fenner. As the campaign neared its goal, Fenner and his team played an important role in verifying eradication. The last naturally occurring case of indigenous smallpox (Variola minor) was diagnosed in Ali Maow Maalin, a hospital cook in Merca, Somalia, on 26 October 1977. The last naturally occurring case of the more deadly Variola major had been detected in October 1975 in a three-year-old Bangladeshi girl, Rahima Banu.
The global eradication of smallpox was certified, based on intense verification activities, by a commission of eminent scientists on 9 December 1979 and subsequently endorsed by the World Health Assembly on 8 May 1980. The first two sentences of the resolution read:
Costs and benefits
The cost of the eradication effort, from 1967 to 1979, was roughly US$300 million. Roughly a third came from the developed world, which had largely eradicated smallpox decades earlier. The United States, the largest contributor to the program, has reportedly recouped its investment every 26 days since then, through savings on vaccinations and on the costs associated with smallpox cases.
Since eradication
The last case of smallpox in the world occurred in an outbreak in the United Kingdom in 1978. A medical photographer, Janet Parker, contracted the disease at the University of Birmingham Medical School and died on 11 September 1978. Although it has remained unclear how Parker became infected, the source of the infection was established to be the variola virus grown for research purposes at the Medical School laboratory. All known stocks of smallpox worldwide were subsequently destroyed or transferred to two WHO-designated reference laboratories with BSL-4 facilities – the United States' Centers for Disease Control and Prevention (CDC) and the Soviet Union's (now Russia's) State Research Center of Virology and Biotechnology VECTOR.
WHO first recommended destruction of the virus in 1986 and later set the date of destruction to be 30 December 1993. This was postponed to 30 June 1999. Due to resistance from the U.S. and Russia, in 2002 the World Health Assembly agreed to permit the temporary retention of the virus stocks for specific research purposes. Destroying existing stocks would reduce the risk involved with ongoing smallpox research; the stocks are not needed to respond to a smallpox outbreak. Some scientists have argued that the stocks may be useful in developing new vaccines, antiviral drugs, and diagnostic tests; a 2010 review by a team of public health experts appointed by WHO concluded that no essential public health purpose is served by the U.S. and Russia continuing to retain virus stocks. The latter view is frequently supported in the scientific community, particularly among veterans of the WHO Smallpox Eradication Program.
On March 31, 2003, smallpox scabs were found inside an envelope in an 1888 book on Civil War medicine in Santa Fe, New Mexico. The envelope was labeled as containing scabs from a vaccination and gave scientists at the CDC an opportunity to study the history of smallpox vaccination in the United States.
On July 1, 2014, six sealed glass vials of smallpox dated 1954, along with sample vials of other pathogens, were discovered in a cold storage room in an FDA laboratory at the National Institutes of Health location in Bethesda, Maryland. The smallpox vials were subsequently transferred to the custody of the CDC in Atlanta, where virus taken from at least two vials proved viable in culture. After studies were conducted, the CDC destroyed the virus under WHO observation on February 24, 2015.
In 2017, scientists at the University of Alberta recreated the extinct horsepox virus to demonstrate that the variola virus could be recreated in a small lab, at a cost of about $100,000, by a team of scientists without specialist knowledge. This weakens the rationale of the retention controversy, since the virus could be recreated even if all samples were destroyed. Although the scientists performed the research to help the development of new vaccines as well as to trace smallpox's history, the possibility of the techniques being used for nefarious purposes was immediately recognized, raising questions about dual-use research and regulations.
In September 2019, the Russian lab housing smallpox samples experienced a gas explosion that injured one worker. It did not occur near the virus storage area, and no samples were compromised, but the incident prompted a review of risks to containment.
Society and culture
Biological warfare
In 1763, Pontiac's War broke out as a Native American confederacy led by Pontiac attempted to counter British control over the Great Lakes region. A group of Native American warriors laid siege to British-held Fort Pitt on June 22. In response, Henry Bouquet, the commander of the fort, ordered his subordinate Simeon Ecuyer to give smallpox-infested blankets from the infirmary to a Delaware delegation outside the fort. Bouquet had discussed this with his superior, Sir Jeffrey Amherst, who wrote to Bouquet stating: "Could it not be contrived to send the small pox among the disaffected tribes of Indians? We must on this occasion use every stratagem in our power to reduce them." Bouquet agreed with the proposal, writing back that "I will try to inocculate the Indians by means of Blankets that may fall in their hands". On 24 June 1763, William Trent, a local trader and commander of the Fort Pitt militia, wrote, "Out of our regard for them, we gave them two Blankets and an Handkerchief out of the Small Pox Hospital. I hope it will have the desired effect." The effectiveness of this effort to broadcast the disease is unknown. There are also accounts that smallpox was used as a weapon during the American Revolutionary War (1775–1783).
According to a theory put forward in Journal of Australian Studies (JAS) by independent researcher Christopher Warren, Royal Marines used smallpox in 1789 against indigenous tribes in New South Wales. This theory was also considered earlier in Bulletin of the History of Medicine and by David Day. However, it is disputed by some medical academics, including Professor Jack Carmody, who in 2010 claimed that the rapid spread of the outbreak in question was more likely indicative of chickenpox – a more infectious disease which, at the time, was often confused, even by surgeons, with smallpox, and may have been comparably deadly to Aborigines and other peoples without natural immunity to it. Carmody noted that in the 8-month voyage of the First Fleet and the following 14 months there were no reports of smallpox amongst the colonists and that, since smallpox has an incubation period of 10–12 days, it is unlikely it was present in the First Fleet; however, Warren argued in the JAS article that the likely source was bottles of variola virus possessed by First Fleet surgeons. Ian and Jennifer Glynn, in The life and death of smallpox, confirm that bottles of "variolous matter" were carried to Australia for use as a vaccine, but think it unlikely the virus could have survived till 1789. In 2007, Christopher Warren offered evidence that the British smallpox may have been still viable. However, the only non-Aborigine reported to have died in this outbreak was a seaman called Joseph Jeffries, who was recorded as being of "American Indian" origin.
W. S. Carus, an expert in biological weapons, has written that there is circumstantial evidence that smallpox was deliberately introduced to the Aboriginal population. However, Carmody and the Australian National University's Boyd Hunter continue to support the chickenpox hypothesis. In a 2013 lecture at the Australian National University, Carmody pointed out that chickenpox, unlike smallpox, was known to be present in the Sydney Cove colony. He also suggested that all (and earlier) identifications of smallpox outbreaks were dubious because "surgeons … would have been unaware of the distinction between smallpox and chickenpox – the latter having traditionally been considered a milder form of smallpox."
During World War II, scientists from the United Kingdom, United States, and Japan (Unit 731 of the Imperial Japanese Army) were involved in research into producing a biological weapon from smallpox. Plans for large-scale production were never carried through, as it was considered that the weapon would not be very effective due to the wide-scale availability of a vaccine.
In 1947, the Soviet Union established a smallpox weapons factory in the Scientific Research Institute of Medicine of the Ministry of Defense in Zagorsk (now Sergiyev Posad), 75 km to the northeast of Moscow. An outbreak of weaponized smallpox occurred during testing at a facility on an island in the Aral Sea in 1971. General Prof. Peter Burgasov, former Chief Sanitary Physician of the Soviet Army and a senior researcher within the Soviet program of biological weapons, described the incident:
Others contend that the first patient may have contracted the disease while visiting Uyaly or Komsomolsk-on-Ustyurt, two cities where the boat docked.
Responding to international pressures, in 1991 the Soviet government allowed a joint U.S.–British inspection team to tour four of its main weapons facilities at Biopreparat. The inspectors were met with evasion and denials from the Soviet scientists and were eventually ordered out of the facility. In 1992, Soviet defector Ken Alibek alleged that the Soviet bioweapons program at Zagorsk had produced a large stockpile – as much as twenty tons – of weaponized smallpox (possibly engineered to resist vaccines, Alibek further alleged), along with refrigerated warheads to deliver it. Alibek's stories about the former Soviet program's smallpox activities have never been independently verified.
In 1997, the Russian government announced that all of its remaining smallpox samples would be moved to the Vector Institute in Koltsovo. With the breakup of the Soviet Union and the unemployment of many of the weapons program's scientists, U.S. government officials have expressed concern that smallpox and the expertise to weaponize it may have become available to other governments or terrorist groups who might wish to use the virus as a means of biological warfare. Specific allegations made against Iraq in this respect proved to be false.
Notable cases
Famous historical figures who contracted smallpox include Lakota Chief Sitting Bull, Ramses V, the Kangxi Emperor (survived), Shunzhi Emperor and Tongzhi Emperor of China, Emperor Komei of Japan (died of smallpox in 1867), and Date Masamune of Japan (who lost an eye to the disease). Cuitláhuac, the 10th tlatoani (ruler) of the Aztec city of Tenochtitlan, died of smallpox in 1520, shortly after its introduction to the Americas, and the Incan emperor Huayna Capac died of it in 1527 (causing a civil war of succession in the Inca empire and the eventual conquest by the Spaniards). More recent public figures include Guru Har Krishan, 8th Guru of the Sikhs, in 1664, Louis I of Spain in 1724 (died), Peter II of Russia in 1730 (died), George Washington (survived), Louis XV of France in 1774 (died) and Maximilian III Joseph of Bavaria in 1777 (died).
Prominent families throughout the world often had several people infected by and/or perish from the disease. For example, several relatives of Henry VIII of England survived the disease but were scarred by it. These include his sister Margaret, his wife Anne of Cleves, and his two daughters: Mary I in 1527 and Elizabeth I in 1562. Elizabeth tried to disguise the pockmarks with heavy makeup. Mary, Queen of Scots, contracted the disease as a child but had no visible scarring.
In Europe, deaths from smallpox often changed dynastic succession. Louis XV of France succeeded his great-grandfather Louis XIV through a series of deaths of smallpox or measles among those higher in the succession line. He himself died of the disease in 1774. Peter II of Russia died of the disease at 14 years of age. Also, before becoming emperor, Peter III of Russia caught the virus and suffered greatly from it. He was left scarred and disfigured. His wife, Catherine the Great, was spared but fear of the virus clearly had its effects on her. She feared for the safety of her son, Paul, so much that she made sure that large crowds were kept at bay and sought to isolate him. Eventually, she decided to have herself inoculated by a British doctor, Thomas Dimsdale. While this was considered a controversial method at the time, she succeeded. Paul was later inoculated as well. Catherine then sought to have inoculations throughout her empire stating: "My objective was, through my example, to save from death the multitude of my subjects who, not knowing the value of this technique, and frightened of it, were left in danger." By 1800, approximately two million inoculations had been administered in the Russian Empire.
In China, the Qing dynasty had extensive protocols to protect Manchus from Peking's endemic smallpox.
U.S. Presidents George Washington, Andrew Jackson, and Abraham Lincoln all contracted and recovered from the disease. Washington became infected with smallpox on a visit to Barbados in 1751. Jackson developed the illness after being taken prisoner by the British during the American Revolution, and though he recovered, his brother Robert did not. Lincoln contracted the disease during his presidency, possibly from his son Tad, and was quarantined shortly after giving the Gettysburg Address in 1863.
The famous theologian Jonathan Edwards died of smallpox in 1758 following an inoculation.
Soviet leader Joseph Stalin fell ill with smallpox at the age of seven. His face was badly scarred by the disease. He later had photographs retouched to make his pockmarks less apparent.
Hungarian poet Ferenc Kölcsey, who wrote the Hungarian national anthem, lost his right eye to smallpox.
Tradition and religion
In the face of the devastation of smallpox, various smallpox gods and goddesses have been worshipped throughout parts of the Old World, for example in China and India. In China, the smallpox goddess was referred to as T'ou-Shen Niang-Niang. Chinese believers actively worked to appease the goddess and pray for her mercy, for example by referring to smallpox pustules as "beautiful flowers" as a euphemism intended to avert offending the goddess (the Chinese word for smallpox, 天花, literally means "heaven flower"). In a related New Year's Eve custom, it was prescribed that the children of the house wear ugly masks while sleeping, so as to conceal any beauty and thereby avoid attracting the goddess, who would be passing through sometime that night. If a case of smallpox did occur, shrines would be set up in the homes of the victims, to be worshipped and offered to as the disease ran its course. If the victim recovered, the shrines were removed and carried away in a special paper chair or boat for burning. If the patient did not recover, the shrine was destroyed and cursed, to expel the goddess from the house.
In the Yoruba language smallpox is known as ṣọpọná, but it was also written as shakpanna, shopona, ṣhapana, and ṣọpọnọ. The word is a combination of three words: the verb ṣán, meaning to cover or plaster (referring to the pustules characteristic of smallpox), kpa or pa, meaning to kill, and enia, meaning human. Roughly translated, it means "one who kills a person by covering them with pustules". Among the Yorùbá people of West Africa, and also in Dahomean religion, in Trinidad, and in Brazil, the deity Sopona, also known as Obaluaye, is the deity of smallpox and other deadly diseases (such as leprosy, HIV/AIDS, and fevers). Shopona was one of the most feared deities of the orisha pantheon, and smallpox was seen as a form of punishment from him. Worship of Shopona was highly controlled by his priests, and it was believed that priests could also spread smallpox when angered. However, Shopona was also seen as a healer who could cure the diseases he inflicted, and he was often called upon by his victims to heal them. The British government banned the worship of the god because it was believed his priests were purposely spreading smallpox to their opponents.
India's first records of smallpox can be found in a medical book that dates back to 400 CE. This book describes a disease that sounds exceptionally like smallpox. India, like China and the Yorùbá, created a goddess in response to its exposure to smallpox. The Hindu goddess Shitala was both worshipped and feared during her reign. It was believed that this goddess was both evil and kind and had the ability to afflict victims when angered, as well as to calm the fevers of those already affected. Portraits of the goddess show her holding a broom in her right hand, to move the disease along, and a pot of cool water in the other hand, in an attempt to soothe patients. Shrines were created where many Indians, both healthy and sick, went to worship and attempt to protect themselves from this disease. Some Indian women, in an attempt to ward off Shitala, placed plates of cooling foods and pots of water on the roofs of their homes.
In cultures that did not recognize a smallpox deity, there was often nonetheless a belief in smallpox demons, who were accordingly blamed for the disease. Such beliefs were prominent in Japan, Europe, Africa, and other parts of the world. Nearly all cultures that believed in the demon also believed that it was afraid of the color red. This led to the invention of the so-called red treatment, where patients and their rooms would be decorated in red. The practice spread to Europe in the 12th century and was practiced by (among others) Charles V of France and Elizabeth I of England. Afforded scientific credibility by the studies of Niels Ryberg Finsen showing that red light reduced scarring, this belief persisted even into the 1930s.
| Biology and health sciences | Illness and injury | null |
16831059 | https://en.wikipedia.org/wiki/Suicide | Suicide | Suicide is the act of intentionally causing one's own death. Mental disorders, physical disorders, and substance abuse are common risk factors.
Some suicides are impulsive acts driven by stress (such as from financial or academic difficulties), relationship problems (such as breakups or divorces), or harassment and bullying. Those who have previously attempted suicide are at a higher risk for future attempts. Effective suicide prevention efforts include limiting access to methods of suicide such as firearms, drugs, and poisons; treating mental disorders and substance abuse; careful media reporting about suicide; improving economic conditions; and dialectical behaviour therapy (DBT). Although crisis hotlines, like 988 in North America and 13 11 14 in Australia, are common resources, their effectiveness has not been well studied.
Suicide is the 10th leading cause of death worldwide, accounting for approximately 1.5% of total deaths. In a given year, this is roughly 12 per 100,000 people. Though suicides resulted in 828,000 deaths globally in 2015, an increase from 712,000 deaths in 1990, the age-standardized death rate decreased by 23.3%. By gender, suicide rates are generally higher among men than women, ranging from 1.5 times higher in the developing world to 3.5 times higher in the developed world; in the Western world, non-fatal suicide attempts are more common among young people and women. Suicide is generally most common among those over the age of 70; however, in certain countries, those aged between 15 and 30 are at the highest risk. Europe had the highest rates of suicide by region in 2015. There are an estimated 10 to 20 million non-fatal attempted suicides every year. Non-fatal suicide attempts may lead to injury and long-term disabilities. The most commonly adopted method of suicide varies from country to country and is partly related to the availability of effective means. Assisted suicide, sometimes done when a person is in severe pain or facing an imminent death, is legal in many countries and increasing in numbers.
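As a rough consistency check (assuming, purely for illustration, a 2015 world population of about 7.3 billion and roughly 56 million total deaths that year, figures not given in the text), the reported death toll lines up with both the per-capita rate and the share of all deaths quoted above:

\[
\frac{828{,}000}{7.3 \times 10^{9}} \times 100{,}000 \approx 11 \text{ per } 100{,}000,
\qquad
\frac{828{,}000}{56 \times 10^{6}} \approx 1.5\%.
\]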
Views on suicide have been influenced by broad existential themes such as religion, honor, and the meaning of life. The Abrahamic religions traditionally consider suicide as an offense towards God due to belief in the sanctity of life. During the samurai era in Japan, a form of suicide known as seppuku (, ) was respected as a means of making up for failure or as a form of protest. Similarly, a ritual fast unto death, known as Vatakkiruttal (, 'fasting facing north'), was a Tamil ritual suicide in ancient India during the Sangam age. Suicide and attempted suicide, while previously illegal, are no longer so in most Western countries. It remains a criminal offense in some countries. In the 20th and 21st centuries, suicide has been used on rare occasions as a form of protest; it has also been committed while or after murdering others, a tactic that has been used both militarily and by terrorists. Suicide is often seen as a major catastrophe, causing significant grief to the deceased's relatives, friends and community members, and it is viewed negatively almost everywhere around the world.
Definitions
Suicide, derived from the Latin suicidium, is "the act of taking one's own life". Attempted suicide, or non-fatal suicidal behavior, amounts to self-injury with at least some desire to end one's life that does not result in death. Assisted suicide occurs when one individual helps another bring about their own death indirectly by providing either advice or the means to the end. Euthanasia, more specifically voluntary euthanasia, is where another person takes a more active role in bringing about a person's death.
Suicidal ideation is thoughts of ending one's life but not taking any active efforts to do so. It may or may not involve exact planning or intent. Suicidality is defined as "the risk of suicide, usually indicated by suicidal ideation or intent, especially as evident in the presence of a well-elaborated suicidal plan."
In a murder–suicide (or homicide–suicide), the individual aims at taking the lives of others at the same time. A special case of this is extended suicide, where the murder is motivated by seeing the murdered persons as an extension of their self. Suicide in which the reason is that the person feels that they are not part of society is known as egoistic suicide.
The Centre for Suicide Prevention in Canada found that the normal verb in scholarly research and journalism for the act of suicide was commit, and argued for destigmatizing terminology related to suicide; in 2011, they published an article calling for changing the language used around suicide entitled "Suicide and language: Why we shouldn't use the 'C' word". The American Psychological Association lists "committed suicide" as a term to avoid because it "frame[s] suicide as a crime." Some advocacy groups recommend using the terms took his/her own life, died by suicide, or killed him/herself instead of committed suicide. The Associated Press Stylebook recommends avoiding "committed suicide" except in direct quotes from authorities. The Guardian and Observer style guides deprecate the use of "committed", as does CNN. Opponents of commit argue that it implies that suicide is criminal, sinful, or morally wrong.
Pathophysiology
There is no known unifying underlying pathophysiology for suicide; it is believed to result from an interplay of behavioral, socio-economic and psychological factors.
Low levels of brain-derived neurotrophic factor (BDNF) are both directly associated with suicide and indirectly associated through its role in major depression, post-traumatic stress disorder, schizophrenia and obsessive–compulsive disorder. Post-mortem studies have found reduced levels of BDNF in the hippocampus and prefrontal cortex, in those with and without psychiatric conditions. Serotonin, a brain neurotransmitter, is believed to be low in those who die by suicide. This is partly based on evidence of increased levels of 5-HT2A receptors found after death. Other evidence includes reduced levels of a breakdown product of serotonin, 5-hydroxyindoleacetic acid, in the cerebrospinal fluid. However, direct evidence is hard to obtain. Epigenetics, the study of changes in genetic expression in response to environmental factors which do not alter the underlying DNA, is also believed to play a role in determining suicide risk.
Risk factors
Factors that affect the risk of suicide include mental disorders, drug misuse, psychological states, cultural, family and social situations, genetics, experiences of trauma or loss, and nihilism. Mental disorders and substance misuse frequently co-exist. Other risk factors include having previously attempted suicide, the ready availability of a means to take one's life, a family history of suicide, or the presence of traumatic brain injury. For example, suicide rates have been found to be greater in households with firearms than those without them.
Socio-economic problems such as unemployment, poverty, homelessness, and discrimination may trigger suicidal thoughts. Suicide might be rarer in societies with high social cohesion and moral objections against suicide. Genetics appears to account for between 38% and 55% of suicidal behaviors. Suicides may also occur as a local cluster of cases.
Most research does not distinguish between risk factors that lead to thinking about suicide and risk factors that lead to suicide attempts. Risks for suicide attempt, rather than just thoughts of suicide, include a high pain tolerance and a reduced fear of death.
Autism
Those with autism attempt and consider suicide more frequently than the general population. People with autism have been found to be up to seven times more likely to attempt suicide than non-autistic people.
Environmental exposures
Some environmental exposures, including air pollution, intense sunlight, sunlight duration, hot weather, and high altitude, are associated with suicide. There is a possible association between short-term PM10 exposure and suicide. These factors might affect certain high-risk individuals more than others.
The time of year may also affect suicide rates. There appears to be a decrease around Christmas, but an increase in rates during spring and summer, which might be related to exposure to sunshine. Another study found that the risk may be greater for males on their birthday.
Genetics
Genetics might influence rates of suicide. A family history of suicide, especially in the mother, affects children more than adolescents or adults. Adoption studies have shown that this is the case for biological relatives, but not adopted relatives. This makes familial risk factors unlikely to be due to imitation. Once mental disorders are accounted for, the estimated heritability rate is 36% for suicidal ideation and 17% for suicide attempts. An evolutionary explanation for suicide is that it may improve inclusive fitness. This may occur if the person dying by suicide cannot have more children and takes resources away from relatives by staying alive. An objection to this explanation is that deaths of healthy adolescents likely do not increase inclusive fitness. Adaptation to a very different ancestral environment may be maladaptive in the current one.
Media
The media, including the Internet, plays an important role. Certain depictions of suicide may increase its occurrence, with high-volume, prominent, repetitive coverage glorifying or romanticizing suicide having the most impact. For example, about 15–40% of people leave a suicide note, and media are discouraged from reporting the contents of that message. When detailed descriptions of how to kill oneself by a specific means are portrayed, this method of suicide can be imitated in vulnerable people. This phenomenon has been observed in several cases after press coverage. One effective way to reduce the adverse effect of media portrayals of suicide is to educate journalists on how to report suicide in a manner that reduces the possibility of imitation and encourages those at risk to seek help. When journalists follow certain reporting guidelines, the risk of suicides can be decreased. Getting buy-in from the media industry, however, can be difficult, especially in the long term.
This trigger of suicide contagion or copycat suicide is known as the "Werther effect", named after the protagonist in Goethe's The Sorrows of Young Werther who killed himself and then was emulated by many admirers of the book. This risk is greater in adolescents who may romanticize death. It appears that while news media has a significant effect, that of the entertainment media is equivocal. It is unclear if searching for information about suicide on the Internet relates to the risk of suicide. The opposite of the Werther effect is the proposed "Papageno effect", in which coverage of effective coping mechanisms may have a protective effect. The term is based upon a character in Mozart's opera The Magic Flute—fearing the loss of a loved one, he had planned to kill himself until his friends helped him out. As a consequence, fictional portrayals of suicide, showing alternative consequences or negative consequences, might have a preventive effect, for instance fiction might normalize mental health problems and encourage help-seeking.
Medical conditions
There is an association between suicidality and physical health problems such as chronic pain, traumatic brain injury, cancer, chronic fatigue syndrome, kidney failure (requiring hemodialysis), HIV, and systemic lupus erythematosus. The diagnosis of cancer approximately doubles the subsequent frequency of suicide. The prevalence of increased suicidality persisted after adjusting for depressive illness and alcohol abuse. Among people with more than one medical condition the frequency was particularly high. In Japan, health problems are listed as the primary justification for suicide.
Sleep disturbances, such as insomnia and sleep apnea, are risk factors for depression and suicide. In some instances, the sleep disturbances may be a risk factor independent of depression. A number of other medical conditions may present with symptoms similar to mood disorders, including hypothyroidism, Alzheimer's, brain tumors, systemic lupus erythematosus, and adverse effects from a number of medications (such as beta blockers and steroids).
Mental illness
Mental illness is present at the time of suicide 27% to more than 90% of the time. Of those who have been hospitalized for suicidal behavior, the lifetime risk of suicide is 8.6%. Comparatively, non-suicidal people hospitalized for affective disorders have a 4% lifetime risk of suicide. Half of all people who die by suicide may have major depressive disorder; having this or one of the other mood disorders such as bipolar disorder increases the risk of suicide 20-fold. Other conditions implicated include schizophrenia (14%), personality disorders (8%), obsessive–compulsive disorder, and post-traumatic stress disorder.
Others estimate that about half of people who die by suicide could be diagnosed with a personality disorder, with borderline personality disorder being the most common. About 5% of people with schizophrenia die of suicide. Eating disorders are another high risk condition. Around 22% to 50% of people with gender dysphoria have attempted suicide, however this greatly varies by region.
In approximately 80% of suicides, the individual had seen a physician within the year before their death, including 45% within the prior month. Approximately 25–40% of those who died by suicide had contact with mental health services in the prior year. Antidepressants of the SSRI class appear to increase the frequency of suicide among children and young persons. An unwillingness to get help for mental health problems also increases the risk.
Occupational factors
Certain occupations carry an elevated risk of self-harm and suicide, such as military careers. Research in several countries has found that the rate of suicide among former armed forces personnel in particular, and young veterans especially, is markedly higher than that found in the general population. War veterans have a higher risk of suicide due in part to higher rates of mental illness, such as post-traumatic stress disorder, and physical health problems related to war.
Previous attempts
A 2002 review analyzing about 90 suicide-related studies concluded that the risk of suicide following a previous attempt or self-harm is hundreds of times larger than in the general population. A more recent study estimated that individuals with a history of suicide attempts are approximately 25 times more likely to die by suicide than the general population. These findings make a suicide attempt one of the strongest predictors of eventual death by suicide.
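As a back-of-the-envelope illustration only (assuming the global baseline of roughly 12 suicide deaths per 100,000 people per year quoted earlier, and treating the 25-fold figure as a simple multiplier, simplifications not made in the cited studies), the implied annual risk for someone with a prior attempt would be on the order of:

\[
25 \times \frac{12}{100{,}000} = \frac{300}{100{,}000} \approx 0.3\% \text{ per year.}
\]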
Among those who have died by suicide, it is estimated that between 25% (after one year) and 40% had attempted suicide before. The likelihood of dying in a subsequent attempt depends on the means used, the age of the person, and their gender. Other risk factors, such as substance use and mental health, also affect the likelihood of a completed suicide after an attempt. High suicidal intent during previous attempts is another strong predictor.
The time elapsed since an attempt also plays a critical role: the first and second years carry the highest risk of a completed suicide, and it is estimated that 1% of people die by suicide within a year of a first attempt. It is also estimated that about 90% of those who survive a suicide attempt will not go on to die by suicide.
Psychosocial factors
A number of psychological factors increase the risk of suicide including: hopelessness, loss of pleasure in life, depression, anxiousness, agitation, rigid thinking, rumination, thought suppression, and poor coping skills. A poor ability to solve problems, the loss of abilities one used to have, and poor impulse control also play a role. In older adults, the perception of being a burden to others is important. Those who have never married are also at greater risk. Recent life stresses, such as a loss of a family member or friend or the loss of a job, might be a contributing factor.
Certain personality factors, especially high levels of neuroticism and introversion, have been associated with suicide; people who are isolated and sensitive to distress may be more likely to attempt suicide. On the other hand, optimism has been shown to have a protective effect. Other psychological risk factors include having few reasons for living and feeling trapped in a stressful situation. The stress response system in the brain might be altered during suicidal states, specifically through changes in the polyamine system and the hypothalamic–pituitary–adrenal axis.
Social isolation and the lack of social support has been associated with an increased risk of suicide. Poverty is also a factor, with heightened relative poverty compared to those around a person increasing suicide risk. Over 200,000 farmers in India have died by suicide since 1997, partly due to issues of debt. In China, suicide is three times as likely in rural regions as urban ones, partly, it is believed, due to financial difficulties in this area of the country.
Being religious may reduce one's risk of suicide while beliefs that suicide is noble may increase it. This has been attributed to the negative stance many religions take against suicide and to the greater connectedness religion may give. Muslims, among religious people, appear to have a lower rate of suicide; however, the data supporting this is not strong. There does not appear to be a difference in rates of attempted suicide. Young women in the Middle East may have higher rates.
Rational
Rational suicide is the reasoned taking of one's own life. However, some consider suicide as never being rational.
Euthanasia and assisted suicide are accepted practices in a number of countries among those who have a poor quality of life without the possibility of getting better. They are supported by the legal arguments for a right to die.
The act of taking one's life for the benefit of others is known as altruistic suicide. An example of this is an elder ending his or her life to leave greater amounts of food for the younger people in the community. Suicide in some Inuit cultures has been seen as an act of respect, courage, or wisdom.
A suicide attack is a political or religious action where an attacker carries out violence against others which they understand will result in their own death. Some suicide bombers are motivated by a desire to obtain martyrdom or by religious conviction. Kamikaze missions were carried out as a duty to a higher cause or moral obligation. Murder–suicide is an act of homicide followed within a week by the suicide of the person who carried out the act.
Mass suicides are often performed under social pressure where members give up autonomy to a leader (see Notable cases below). Mass suicides can take place with as few as two people, often referred to as a suicide pact. In extenuating situations where continuing to live would be intolerable, some people use suicide as a means of escape. Some inmates in Nazi concentration camps are known to have killed themselves during the Holocaust by deliberately touching the electrified fences.
Self-harm
Non-suicidal self-harm is common with 18% of people engaging in self-harm over the course of their life. Acts of self-harm are not usually suicide attempts and most who self-harm are not at high risk of suicide. Some who self-harm, however, do still end their life by suicide, and risk for self-harm and suicide may overlap. Individuals who have been identified as self-harming after being admitted to hospital are more likely to die by suicide.
Substance misuse
Substance misuse is the second most common risk factor for suicide after major depression and bipolar disorder. Both chronic substance misuse and acute intoxication are associated with increased risk. When combined with personal grief, such as bereavement, the risk is further increased. Substance misuse is also associated with mental health disorders.
Most people are under the influence of sedative-hypnotic drugs (such as alcohol or benzodiazepines) when they die by suicide, with alcoholism present in between 15% and 61% of cases. Use of prescribed benzodiazepines is associated with an increased rate of suicide and attempted suicide. The pro-suicidal effects of benzodiazepines are suspected to be due to a psychiatric disturbance caused by side effects, such as disinhibition, or withdrawal symptoms. Countries that have higher rates of alcohol use and a greater density of bars generally also have higher rates of suicide. About 2.2–3.4% of those who have been treated for alcoholism at some point in their life die by suicide. Alcoholics who attempt suicide are usually male, older, and have tried to take their own lives in the past. Between 3 and 35% of deaths among those who use heroin are due to suicide (approximately fourteenfold greater than those who do not use). In adolescents who misuse alcohol, neurological and psychological dysfunctions may contribute to the increased risk of suicide.
The misuse of cocaine and methamphetamine has a high correlation with suicide. In those who use cocaine, the risk is greatest during the withdrawal phase. Those who used inhalants are also at significant risk with around 20% attempting suicide at some point and more than 65% considering it. Smoking cigarettes is associated with risk of suicide. There is little evidence as to why this association exists; however, it has been hypothesized that those who are predisposed to smoking are also predisposed to suicide, that smoking causes health problems which subsequently make people want to end their life, and that smoking affects brain chemistry causing a propensity for suicide. Cannabis, however, does not appear to independently increase the risk.
Other factors
Trauma is a risk factor for suicidality in both children and adults. Some may take their own lives to escape bullying or prejudice. A history of childhood sexual abuse and time spent in foster care are also risk factors. Sexual abuse is believed to contribute to approximately 20% of the overall risk. Significant adversity early in life has a negative effect on problem-solving skills and memory, both of which are implicated in suicidality. According to a 2022 study, adverse childhood experiences may be "associated with a two-fold higher odds" of anxiety disorders, depression, and suicidality.
Problem gambling is associated with increased suicidal ideation and attempts compared to the general population. Between 12 and 24% of pathological gamblers attempt suicide. The rate of suicide among their spouses is three times greater than that of the general population. Other factors that increase the risk in problem gamblers include concomitant mental illness, alcohol, and drug misuse.
Infection by the parasite Toxoplasma gondii, more commonly known as toxoplasmosis, has been linked with suicide risk. One explanation states that this is caused by altered neurotransmitter activity due to the immunological response.
Prevention
Suicide prevention is a term used for the collective efforts to reduce the incidence of suicide through preventive measures. Protective factors for suicide include support, and access to therapy. About 60% of people with suicidal thoughts do not seek help. Reasons for not doing so include low perceived need, and wanting to deal with the problem alone. Despite these high rates, there are few established treatments available for suicidal behavior.
Reducing access to certain methods, such as access to firearms or toxins such as opioids and pesticides, can reduce risk of suicide by that method. Reducing access to easily-accessible methods of suicide may make impulsive attempts less likely to succeed. Other measures include reducing access to charcoal (for burning) and adding barriers on bridges and subway platforms. Treatment of drug and alcohol addiction, depression, and those who have attempted suicide in the past, may also be effective. Some have proposed reducing access to alcohol as a preventive strategy (such as reducing the number of bars).
In young adults who have recently thought about suicide, cognitive behavioral therapy appears to improve outcomes. School-based programs that increase mental health literacy and train staff have shown mixed results on suicide rates. Economic development through its ability to reduce poverty may be able to decrease suicide rates. Efforts to increase social connection, especially in elderly males, may be effective. In people who have attempted suicide, following up on them might prevent repeat attempts. Although crisis hotlines are common, there is little evidence to support or refute their effectiveness. Preventing childhood trauma provides an opportunity for suicide prevention. The World Suicide Prevention Day is observed annually on 10 September with the support of the International Association for Suicide Prevention and the World Health Organization.
Diet
About 50% of people who die of suicide have a mood disorder such as major depression. Sleep and diet may play a role in depression (major depressive disorder), and interventions in these areas may be an effective add-on to conventional methods. Vitamin B2, B6 and B12 deficiency may cause depression in females.
Risk of depression may be reduced with a healthy diet "high in fruits, vegetables, nuts, and legumes; moderate amounts of poultry, eggs, and dairy products; and only occasional red meat". A balanced diet and adequate water intake are essential for mental health. Consuming oily fish may also help, as they contain omega-3 fats. Consuming too many refined carbohydrates (e.g., snack foods) may increase the risk of depression symptoms. The mechanism by which diet improves or worsens mental health is still not fully understood. Alterations in blood glucose levels, inflammation, or effects on the gut microbiome have been suggested.
Screening
There is little data on the effects of screening the general population on the ultimate rate of suicide. Screening those who come to emergency departments with injuries from self-harm has been shown to help identify suicidal ideation and suicidal intent. Psychometric tests such as the Beck Depression Inventory or, for older people, the Geriatric Depression Scale are used. As a high proportion of people who test positive via these tools are not at risk of suicide, there are concerns that screening may significantly increase mental health care resource utilization. Assessing those at high risk, however, is recommended. Asking about suicidality does not appear to increase the risk.
Treatment of mental illness
In those with mental health problems, a number of treatments may reduce the risk of suicide. Those who are actively suicidal may be admitted to psychiatric care either voluntarily or involuntarily. Possessions that may be used to harm oneself are typically removed. Some clinicians get patients to sign suicide prevention contracts where they agree to not harm themselves if released. However, evidence does not support a significant effect from this practice. If a person is at low risk, outpatient mental health treatment may be arranged. Short-term hospitalization has not been found to be more effective than community care for improving outcomes in those with borderline personality disorder who are chronically suicidal.
There is tentative evidence that psychotherapy, specifically dialectical behaviour therapy, reduces suicidality in adolescents as well as in those with borderline personality disorder. It may also be useful in decreasing suicide attempts in adults at high risk.
There is controversy around the benefit-versus-harm of antidepressants. In young persons, some antidepressants, such as SSRIs, appear to increase the risk of suicidality from 25 per 1000 to 40 per 1000. In older persons, however, they may decrease the risk. Lithium appears effective at lowering the risk in those with bipolar disorder and major depression to nearly the same levels as that of the general population. Clozapine may decrease the thoughts of suicide in some people with schizophrenia. Ketamine, which is a dissociative anaesthetic, seems to lower the rate of suicidal ideation. In the United States, health professionals are legally required to take reasonable steps to try to prevent suicide.
Caring letters
The "Caring Letters" model of suicide prevention involved mailing short letters that expressed the researchers' interest in the recipients without pressuring them to take any action. The intervention reduced deaths by suicide, as proven through a randomized controlled trial. The technique involves letters sent from a researcher who had spoken at length with the recipient during a suicidal crisis. The typewritten form letters were brief – sometimes as short as two sentences – personally signed by the researcher, and expressed interest in the recipient without making any demands. They were initially sent monthly, eventually decreasing in frequency to quarterly letters; if the recipient wrote back, then an additional personal letter was mailed.
Caring letters are inexpensive and either the only, or one of very few, approaches to suicide prevention that has been scientifically proven to work during the first years after a suicide attempt that resulted in hospitalization.
Methods
The leading method of suicide varies among countries. The leading methods in different regions include hanging, pesticide poisoning, and firearms. These differences are believed to be in part due to availability of the different methods. A review of 56 countries found that hanging was the most common method in most of the countries, accounting for 53% of male suicides and 39% of female suicides.
Worldwide, 30% of suicides are estimated to occur from pesticide poisoning, most of which occur in the developing world. The use of this method varies markedly from 4% in Europe to more than 50% in the Pacific region. It is also common in Latin America due to the ease of access within the farming populations. In many countries, drug overdoses account for approximately 60% of suicides among women and 30% among men. Many are unplanned and occur during an acute period of ambivalence. The death rate varies by method: firearms 80–90%, drowning 65–80%, hanging 60–85%, jumping 35–60%, charcoal burning 40–50%, pesticides 60–75%, and medication overdose 1.5–4.0%. The most common attempted methods of suicide differ from the most common methods of completion; up to 85% of attempts are via drug overdose in the developed world.
In China, the consumption of pesticides is the most common method. In Japan, self-disembowelment known as seppuku (harakiri) still occurs; however, hanging and jumping are the most common methods. Jumping to one's death is common in both Hong Kong and Singapore, at 50% and 80% respectively. In Switzerland, firearms are the most frequent suicide method in young males, although this method has decreased since guns have become less common. In the United States, 50% of suicides involve the use of firearms, with this method being somewhat more common in men (56%) than women (31%). The next most common method was hanging in males (28%) and self-poisoning in females (31%). Together, hanging and poisoning constituted about 42% of U.S. suicides.
Epidemiology
Approximately 1.4% of people die by suicide, a mortality rate of 11.6 per 100,000 persons per year. Suicide resulted in 842,000 deaths in 2013, up from 712,000 deaths in 1990. Rates of suicide increased by 60% from the 1960s to 2012, with these increases seen primarily in the developing world. Globally, as of 2009, suicide is the tenth leading cause of death. For every suicide that results in death there are between 10 and 40 attempted suicides.
Suicide rates differ significantly between countries and over time. As a percentage of deaths in 2008 it was: Africa 0.5%, South-East Asia 1.9%, Americas 1.2% and Europe 1.4%. Rates per 100,000 were: Australia 8.6, Canada 11.1, China 12.7, India 23.2, United Kingdom 7.6, United States 11.4 and South Korea 28.9. It was ranked as the 10th leading cause of death in the United States in 2016 with about 45,000 cases that year. Rates have increased in the United States in the last few years, with about 49,500 people dying by suicide in 2022, the highest number ever recorded. In the United States, about 650,000 people are seen in emergency departments yearly due to attempting suicide. The United States rate among men in their 50s rose by nearly half in the decade 1999–2010. Greenland, Lithuania, Japan, and Hungary have the highest rates of suicide. Around 75% of suicides occur in the developing world. The countries with the greatest absolute numbers of suicides are China and India, partly due to their large population size, accounting for over half the total. In China, suicide is the 5th leading cause of death.
An unofficial report estimated 5,000 suicides in Iran in 2022.
Sex and gender
Globally, death by suicide occurs about 1.8 times more often in males than females. In the Western world, males die three to four times more often by means of suicide than do females. This difference is even more pronounced in those over the age of 65, with tenfold more males than females dying by suicide. Suicide attempts and self-harm are between two and four times more frequent among females. Researchers have attributed the difference between suicide and attempted suicide among the sexes to males using more lethal means to end their lives. However, separating intentional suicide attempts from non-suicidal self-harm is not currently done in places like the United States when gathering statistics at the national level.
China has one of the highest female suicide rates in the world and is the only country where it is higher than that of men (ratio of 0.9). In the Eastern Mediterranean, suicide rates are nearly equivalent between males and females. The highest rate of female suicide is found in South Korea at 22 per 100,000, with high rates in South-East Asia and the Western Pacific generally.
A number of reviews have found an increased risk of suicide among lesbian, gay, bisexual, and transgender people. Among transgender persons, rates of attempted suicide are about 40% compared to a general population rate of 5%. This is believed to in part be due to social stigmatisation.
Age
In many countries, the rate of suicide is highest in the middle-aged or elderly. The absolute number of suicides, however, is greatest in those between 15 and 29 years old, due to the number of people in this age group. Worldwide, the average age of suicide is between age 30 and 49 for both men and women. Suicidality is rare in children, but increases during the transition to adolescence.
In the United States, the suicide death rate is greatest in Caucasian men older than 80 years, even though younger people more frequently attempt suicide. It is the second most common cause of death in adolescents and in young males is second only to accidental death. In young males in the developed world, it is the cause of nearly 30% of mortality. In the developing world rates are similar, but it makes up a smaller proportion of overall deaths due to higher rates of death from other types of trauma. In South-East Asia, in contrast to other areas of the world, deaths from suicide occur at a greater rate in young females than elderly females.
History
In ancient Athens, a person who died by suicide without the approval of the state was denied the honors of a normal burial. The person would be buried alone, on the outskirts of the city, without a headstone or marker. It was also common for the hand to be cut off the body and buried separately - the hand (and the instrument used) being considered the perpetrator. However, it was deemed to be an acceptable method to deal with military defeat. In Ancient Rome, while suicide was initially permitted, it was later deemed a crime against the state due to its economic costs. Aristotle condemned all forms of suicide while Plato was ambivalent. In Rome, some reasons for suicide included volunteering death in a gladiator combat, guilt over murdering someone, to save the life of another, as a result of mourning, from shame from being raped, and as an escape from intolerable situations like physical suffering, military defeat, or criminal pursuit.
Suicide came to be regarded as a sin in Christian Europe and was condemned at the Council of Arles (452) as the work of the Devil. In the Middle Ages, the Church had drawn-out discussions as to when the desire for martyrdom was suicidal, as in the case of martyrs of Córdoba. Despite these disputes and occasional official rulings, Catholic doctrine was not entirely settled on the subject of suicide until the later 17th century. A criminal ordinance issued by Louis XIV of France in 1670 was extremely severe, even for the times: the dead person's body was drawn through the streets, face down, and then hung or thrown on a garbage heap. Additionally, all of the person's property was confiscated.
Attitudes towards suicide slowly began to shift during the Renaissance. John Donne's work Biathanatos contained one of the first modern defences of suicide, bringing proof from the conduct of Biblical figures, such as Jesus, Samson and Saul, and presenting arguments on grounds of reason and nature to sanction suicide in certain circumstances.
The secularization of society that began during the Enlightenment questioned traditional religious attitudes (such as Christian views on suicide) toward suicide and brought a more modern perspective to the issue. David Hume denied that suicide was a crime as it affected no one and was potentially to the advantage of the individual. In his 1777 Essays on Suicide and the Immortality of the Soul he rhetorically asked, "Why should I prolong a miserable existence, because of some frivolous advantage which the public may perhaps receive from me?" Hume's analysis was criticized by philosopher Philip Reed as being "uncharacteristically (for him) bad", since Hume took an unusually narrow conception of duty and his conclusion depended upon the suicide producing no harm to others – including causing no grief, feelings of guilt, or emotional pain to any surviving friends and family – which is almost never the case. A shift in public opinion at large can also be discerned; The Times in 1786 initiated a spirited debate on the motion "Is suicide an act of courage?".
By the 19th century, the act of suicide had shifted from being viewed as caused by sin to being caused by insanity in Europe. Although suicide remained illegal during this period, it increasingly became the target of satirical comments, such as the Gilbert and Sullivan comic opera The Mikado, which satirized the idea of executing someone who had already killed himself.
By 1879, English law began to distinguish between suicide and homicide, although suicide still resulted in forfeiture of estate. In 1882, the deceased were permitted daylight burial in England and by the middle of the 20th century, suicide had become legal in much of the Western world. The term suicide first emerged shortly before 1700 to replace expressions on self-death which were often characterized as a form of self-murder in the West.
Social and culture
Legislation
Suicide is a crime in some parts of the world. No country in Europe currently considers suicide or attempted suicide to be a crime. It was, however, in most Western European countries from the Middle Ages until at least the 19th century. The Netherlands was the first country to legalize both physician-assisted suicide and euthanasia, which took effect in 2002, although only doctors are allowed to assist in either of them, and they must follow a protocol prescribed by Dutch law. If such protocol is not followed, it is an offence punishable by law. In Germany, active euthanasia is illegal and anyone present during suicide may be prosecuted for failure to render aid in an emergency. Switzerland has taken steps to legalize assisted suicide for the chronically mentally ill. The high court in Lausanne, Switzerland, in a 2006 ruling, granted an anonymous individual with longstanding psychiatric difficulties the right to end his own life. England and Wales decriminalized suicide via the Suicide Act 1961 and the Republic of Ireland did so in 1993. The word "commit" was used in reference to suicide's having been illegal, but many organisations have stopped using it because of the negative connotation.
In the United States, suicide is not illegal, but may be associated with penalties for those who attempt it. Physician-assisted suicide is legal in the state of Washington for people with terminal diseases. In Oregon, people with terminal diseases may request medications to help end their life. Canadians who have attempted suicide may be barred from entering the United States. U.S. laws allow border guards to deny access to people who have a mental illness, including those with previous suicide attempts.
In Australia, suicide is not a crime. However, it is a crime to counsel, incite, or aid and abet another in attempting to die by suicide, and the law explicitly allows any person to use "such force as may reasonably be necessary" to prevent another from taking their own life. The Northern Territory of Australia briefly had legal physician-assisted suicide from 1996 to 1997.
In India, suicide was illegal until 2014, and surviving family members used to face legal difficulties. It remains a criminal offense in most Muslim-majority nations.
In Malaysia, suicide per se is not a crime; however, attempted suicide is. Under Section 309 of the Penal Code, a person convicted of attempting suicide can be punished with imprisonment of up to one year, fined, or both. There are ongoing efforts to decriminalise attempted suicide, although rights groups and non-governmental organisations such as the local chapter of Befrienders say that progress has been slow. Proponents of decriminalisation argue that suicide legislation may deter people from seeking help, and may even strengthen the resolve of would-be suicides to end their lives to avoid prosecution. The first reading of a bill to repeal Section 309 of the Penal Code was tabled in Parliament in April 2023, bringing Malaysia one step closer towards decriminalising attempted suicide.
Suicide became a trending crisis in North Korea in 2023; a secret order criminalized suicide as treason against the socialist state.
Religious views
Christianity
Most forms of Christianity consider suicide sinful, based mainly on the writings of influential Christian thinkers of the Middle Ages, such as St. Augustine and St. Thomas Aquinas, but suicide was not considered a sin under the Byzantine Christian code of Justinian, for instance. In Catholic and Orthodox doctrine, suicide is considered to be murder, violating the commandment "Thou shalt not kill," and historically neither church would even hold a burial service for a member that died by suicide, deeming it an act that condemned the person to hell, since they died in a state of mortal sin. The basic idea is that life is a gift given by God which should not be spurned, and that suicide is against the "natural order" and thus interferes with God's master plan for the world. However, it is believed that mental illness or grave fear of suffering diminishes the responsibility of the one completing suicide.
Judaism
Judaism focuses on the importance of valuing this life, and as such, suicide is tantamount to denying God's goodness in the world. Despite this, under extreme circumstances when there has seemed no choice but to either be killed or forced to betray their religion, there are several accounts of Jews having died by suicide, either individually or in groups (see Holocaust, Masada, First French persecution of the Jews and York Castle for examples), and as a grim reminder there is even a prayer in the Jewish liturgy for "when the knife is at the throat", for those dying "to sanctify God's Name" (see Martyrdom). These acts have received mixed responses by Jewish authorities, regarded by some as examples of heroic martyrdom, while others state that it was wrong for them to take their own lives in anticipation of martyrdom.
Islam
Islamic religious views condemn suicide and consider it haram. Hadith manuscripts state that suicide is unlawful and a sin, and the Quran explicitly forbids it. In Islamic countries, suicide is often stigmatized; it is believed that those that successfully die by suicide are forbidden from entering Jannah.
Hinduism and Jainism
In Hinduism, suicide is generally disdained and is considered equally sinful as murdering another in contemporary Hindu society. Hindu Scriptures state that one who dies by suicide will become part of the spirit world, wandering earth until the time one would have otherwise died, had one not taken one's own life. However, Hinduism accepts a man's right to end one's life through the non-violent practice of fasting to death, termed Prayopavesa; but Prayopavesa is strictly restricted to people who have no desire or ambition left, and no responsibilities remaining in this life.
Jainism has a similar practice named Santhara. Sati, or self-immolation by widows, is a rare and illegal practice in Hindu society.
Ainu
Within the Ainu religion, someone who dies by suicide is believed to become a ghost (tukap) who haunts the living, seeking the fulfillment from which they were excluded during life. Also, someone who insults another so that they kill themselves is regarded as co-responsible for their death. According to Norbert Richard Adami, this ethic exists because solidarity within the community is much more important to Ainu culture than it is to the Western world.
Philosophy
A number of questions are raised within the philosophy of suicide, including what constitutes suicide, whether or not suicide can be a rational choice, and the moral permissibility of suicide. Arguments as to acceptability of suicide in moral or social terms range from the position that the act is inherently immoral and unacceptable under any circumstances, to a regard for suicide as a sacrosanct right of anyone who believes they have rationally and conscientiously come to the decision to end their own lives, even if they are young and healthy.
Opponents to suicide include philosophers such as Augustine of Hippo, Thomas Aquinas, Immanuel Kant and, arguably, John Stuart Mill – Mill's focus on the importance of liberty and autonomy meant that he rejected choices which would prevent a person from making future autonomous decisions. Others view suicide as a legitimate matter of personal choice. Supporters of this position maintain that no one should be forced to suffer against their will, particularly from conditions such as incurable disease, mental illness, and old age, with no possibility of improvement. They reject the belief that suicide is always irrational, arguing instead that it can be a valid last resort for those enduring major pain or trauma. A stronger stance would argue that people should be allowed to autonomously choose to die regardless of whether they are suffering. Notable supporters of this school of thought include Scottish empiricist David Hume, who accepted suicide so long as it did not harm or violate a duty to God, other people, or the self, and American bioethicist Jacob Appel.
Adverse attitudes
Society may have negative attitudes towards suicide, which can lead to suicidal people experiencing discrimination, stigmatization, exclusion, pathologization, and incarceration. They may be hospitalized and/or drugged without their consent, and have difficulties in finding jobs or housing, and have their parental rights revoked. Suicide is not seen as a positive human right, and/or a logical decision given circumstances. Suicidal people are not seen as having potentially valuable messages to convey.
Advocacy
Advocacy of suicide has occurred in many cultures and subcultures. The Japanese military during World War II encouraged and glorified kamikaze attacks, which were suicide attacks by military aviators from the Empire of Japan against Allied naval vessels in the closing stages of the Pacific Theater of World War II. Japanese society as a whole has been described as "suicide-tolerant" (see Suicide in Japan).
Internet searches for information on suicide return webpages that, in a 2008 study, about 50% of the time provide information on suicide methods. A similar study found that 11% of sites encouraged suicide attempts. There is some concern that such sites may push those already predisposed to attempt suicide. Some people form suicide pacts online, either with pre-existing friends or people they have recently encountered in chat rooms or message boards. The Internet, however, may also help prevent suicide by providing a social group for those who are isolated.
Locations
Some landmarks have become known for high levels of suicide attempts. These include China's Nanjing Yangtze River Bridge, San Francisco's Golden Gate Bridge, Japan's Aokigahara Forest, England's Beachy Head, and Toronto's Bloor Street Viaduct. The Golden Gate Bridge has had more than 1,300 suicides by jumping since its construction in 1937. Many locations where suicide is common have constructed barriers to prevent it; this includes the Luminous Veil in Toronto, the Eiffel Tower in Paris, the West Gate Bridge in Melbourne, and Empire State Building in New York City. They generally appear to be effective.
Notable cases
An example of mass suicide is the 1978 Jonestown mass murder/suicide in which 909 members of the Peoples Temple, an American new religious movement led by Jim Jones, ended their lives by drinking grape Flavor Aid laced with cyanide and various prescription drugs.
Thousands of Japanese civilians took their own lives in the last days of the Battle of Saipan in 1944, some jumping from "Suicide Cliff" and "Banzai Cliff". The 1981 Irish hunger strikes, led by Bobby Sands, resulted in 10 deaths. The cause of death was recorded by the coroner as "starvation, self-imposed" rather than suicide; this was modified to simply "starvation" on the death certificates after protest from the dead strikers' families. During World War II, Erwin Rommel was found to have foreknowledge of the 20 July plot on Hitler's life; he was threatened with public trial, execution, and reprisals on his family unless he killed himself.
Other species
As suicide requires a wilful attempt to die, some feel it therefore cannot be said to occur in non-human animals. Suicidal behavior has been observed in Salmonella seeking to overcome competing bacteria by triggering an immune system response against them. Suicidal defenses by workers are also seen in the Brazilian ant Forelius pusillus, where a small group of ants leaves the security of the nest after sealing the entrance from the outside each evening.
Pea aphids, when threatened by a ladybug, can explode themselves, scattering and protecting their brethren and sometimes even killing the ladybug; this form of suicidal altruism is known as autothysis. Some species of termites (for example Globitermes sulphureus) have soldiers that explode, covering their enemies with sticky goo.
There have been anecdotal reports of dogs, horses, and dolphins killing themselves, but little scientific study has been done regarding animal suicide. Animal suicide is usually put down to romantic human interpretation and is not generally thought to be intentional. Some of the reasons animals are thought to unintentionally kill themselves include: psychological stress, infection by certain parasites or fungi, or disruption of a long-held social tie, such as the ending of a long association with an owner and thus not accepting food from another individual.
| Biology and health sciences | Biology | null |
3290705 | https://en.wikipedia.org/wiki/Twelvefold%20way | Twelvefold way | In combinatorics, the twelvefold way is a systematic classification of 12 related enumerative problems concerning two finite sets, which include the classical problems of counting permutations, combinations, multisets, and partitions either of a set or of a number. The idea of the classification is credited to Gian-Carlo Rota, and the name was suggested by Joel Spencer.
Overview
Let N and X be finite sets. Let n = |N| and x = |X| be the cardinalities of the sets. Thus N is a set with n elements, and X is a set with x elements.
The general problem we consider is the enumeration of equivalence classes of functions f : N → X.
The functions f are subject to one of the three following restrictions:
No condition: each a in N may be sent by f to any b in X, and each b may occur multiple times.
f is injective: each value f(a) for a in N must be distinct from every other, and so each b in X may occur at most once in the image of f.
f is surjective: for each b in X there must be at least one a in N such that f(a) = b, thus each b will occur at least once in the image of f.
(The condition "f is bijective" is only an option when n = x; but then it is equivalent to both "f is injective" and "f is surjective".)
There are four different equivalence relations which may be defined on the set of functions from N to X:
equality;
equality up to a permutation of N;
equality up to a permutation of X;
equality up to permutations of N and X.
The three conditions on the functions and the four equivalence relations can be paired in 3 × 4 = 12 ways.
The twelve problems of counting equivalence classes of functions do not involve the same difficulties, and there is not one systematic method for solving them. Two of the problems are trivial (the number of equivalence classes is 0 or 1), five problems have an answer in terms of a multiplicative formula of and , and the remaining five problems have an answer in terms of combinatorial functions (Stirling numbers and the partition function for a given number of parts).
The incorporation of classical enumeration problems into this setting is as follows.
Counting n-permutations (i.e., partial permutations or sequences without repetition) of X is equivalent to counting injective functions N → X.
Counting n-combinations of X is equivalent to counting injective functions N → X up to permutations of N.
Counting permutations of the set X is equivalent to counting injective functions N → X when n = x, and also to counting surjective functions N → X when n = x.
Counting multisets of size n (also known as n-combinations with repetitions) of elements in X is equivalent to counting all functions N → X up to permutations of N.
Counting partitions of the set N into x subsets is equivalent to counting all surjective functions N → X up to permutations of X.
Counting compositions of the number n into x parts is equivalent to counting all surjective functions N → X up to permutations of N.
Viewpoints
The various problems in the twelvefold way may be considered from different points of view.
Balls and boxes
Traditionally many of the problems in the twelvefold way have been formulated in terms of placing balls in boxes (or some similar visualization) instead of defining functions. The set can be identified with a set of balls, and with a set of boxes; the function then describes a way to distribute the balls into the boxes, namely by putting each ball into box . A function ascribes a unique image to each value in its domain; this property is reflected by the property that any ball can go into only one box (together with the requirement that no ball should remain outside of the boxes), whereas any box can accommodate an arbitrary number of balls. Requiring in addition to be injective means to forbid putting more than one ball in any one box, while requiring to be surjective means insisting that every box contain at least one ball.
Counting modulo permutations of or is reflected by calling the balls or the boxes, respectively, "indistinguishable". This is an imprecise formulation, intended to indicate that different configurations are not to be counted separately if one can be transformed into the other by some interchange of balls or of boxes. This possibility of transformation is formalized by the action by permutations.
Sampling
Another way to think of some of the cases is in terms of sampling, in statistics. Imagine a population of items (or people), of which we choose . Two different schemes are normally described, known as "sampling with replacement" and "sampling without replacement". In the former case (sampling with replacement), once we've chosen an item, we put it back in the population, so that we might choose it again. The result is that each choice is independent of all the other choices, and the set of samples is technically referred to as independent identically distributed. In the latter case, however, once we have chosen an item, we put it aside so that we can not choose it again. This means that the act of choosing an item has an effect on all the following choices (the particular item can not be seen again), so our choices are dependent on one another.
A second distinction among sampling schemes is whether ordering matters. For example, if we have ten items, of which we choose two, then the choice (4, 7) is different from (7, 4) if ordering matters; on the other hand, if ordering does not matter, then the choices (4, 7) and (7, 4) are equivalent.
The first two rows and columns of the table below correspond to sampling with and without replacement, with and without consideration of order. The cases of sampling with replacement are found in the column labeled "Any ", while the cases of sampling without replacement are found in the column labeled "Injective ". The cases where ordering matters are found in the row labeled "Distinct," and the cases where ordering does not matter are found in the row labeled " orbits". Each table entry indicates how many different sets of choices there are, in a particular sampling scheme. Three of these table entries also correspond to probability distributions. Sampling with replacement where ordering matters is comparable to describing the joint distribution of separate random variables, each with an -fold categorical distribution. Sampling with replacement where ordering does not matter, however, is comparable to describing a single multinomial distribution of draws from an -fold category, where only the number seen of each category matters. Sampling without replacement where ordering does not matter is comparable to a single multivariate hypergeometric distribution. Sampling without replacement where order does matter does not seem to correspond to a probability distribution. In all the injective cases (sampling without replacement), the number of sets of choices is zero unless . ("Comparable" in the above cases means that each element of the sample space of the corresponding distribution corresponds to a separate set of choices, and hence the number in the appropriate box indicates the size of the sample space for the given distribution.)
From the perspective of sampling, the column labeled "Surjective " is somewhat strange: Essentially, we keep sampling with replacement until we have chosen each item at least once. Then, we count how many choices we have made, and if it is not equal to , throw out the entire set and repeat. This is vaguely comparable to the coupon collector's problem, where the process involves "collecting" (by sampling with replacement) a set of coupons until each coupon has been seen at least once. In all surjective cases, the number of sets of choices is zero unless .
Labelling, selection, grouping
A function can be considered from the perspective of or of . This leads to different views:
the function labels each element of by an element of .
the function selects (chooses) an element of the set for each element of , a total of choices.
the function groups the elements of together that are mapped to the same element of .
These points of view are not equally suited to all cases. The labelling and selection points of view are not well compatible with permutation of the elements of , since this changes the labels or the selection; on the other hand the grouping point of view does not give complete information about the configuration unless the elements of may be freely permuted. The labelling and selection points of view are more or less equivalent when is not permuted, but when it is, the selection point of view is more suited. The selection can then be viewed as an unordered selection: a single choice of a (multi-)set of elements from is made.
Labelling and selection with or without repetition
When viewing as a labelling of the elements of , the latter may be thought of as arranged in a sequence, and the labels from as being successively assigned to them. A requirement that be injective means that no label can be used a second time; the result is a sequence of labels without repetition. In the absence of such a requirement, the terminology "sequences with repetition" is used, meaning that labels may be used more than once (although sequences that happen to be without repetition are also allowed).
When viewing as an unordered selection of the elements of , the same kind of distinction applies. If must be injective, then the selection must involve distinct elements of , so it is a subset of of size , also called an -combination. Without the requirement, one and the same element of may occur multiple times in the selection, and the result is a multiset of size of elements from , also called an -multicombination or -combination with repetition.
The requirement that be surjective, from the viewpoint of labelling elements of , means that every label is to be used at least once; from the viewpoint of selection from , it means that every element of must be included in the selection at least once. Labelling with surjection is equivalent to a grouping of elements of followed by labeling each group by an element of , and is accordingly somewhat more complicated to describe mathematically.
Partitions of sets and numbers
When viewing as a grouping of the elements of (which assumes one identifies under permutations of ), requiring to be surjective means the number of groups must be exactly . Without this requirement the number of groups can be at most . The requirement of injective means each element of must be a group in itself, which leaves at most one valid grouping and therefore gives a rather uninteresting counting problem.
When in addition one identifies under permutations of , this amounts to forgetting the groups themselves but retaining only their sizes. These sizes moreover do not come in any definite order, while the same size may occur more than once; one may choose to arrange them into a weakly decreasing list of numbers, whose sum is the number . This gives the combinatorial notion of a partition of the number , into exactly (for surjective ) or at most (for arbitrary ) parts.
Formulas
Formulas for the different cases of the twelvefold way are summarized in the following table; each table entry links to a subsection below explaining the formula.
The particular notations used are:
the falling factorial power x^{\underline{n}} = x(x - 1)...(x - n + 1),
the rising factorial power x^{\overline{n}} = x(x + 1)...(x + n - 1),
the factorial n!
the Stirling number of the second kind S(n, x), denoting the number of ways to partition a set of n elements into x non-empty subsets
the binomial coefficient \binom{x}{n}
the Iverson bracket [·], encoding a truth value as 0 or 1
the number p_x(n) of partitions of n into x parts
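For reference, the twelve counts worked out case by case in the subsections below can be collected into a single plain-text table (rows give the equivalence relation, columns the condition on f); this is a condensed restatement of those results in the notation above, not an independent derivation:

                         Any f                 Injective f         Surjective f
    Distinct             x^n                   x^{\underline{n}}   x! S(n,x)
    S_n orbits           \binom{x+n-1}{n}      \binom{x}{n}        \binom{n-1}{n-x}
    S_x orbits           sum_{k=0..x} S(n,k)   [n <= x]            S(n,x)
    S_n × S_x orbits     p_x(n+x)              [n <= x]            p_x(n)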
Intuitive meaning of the rows and columns
This is a quick summary of what the different cases mean. The cases are described in detail below.
Think of a set of x numbered items (numbered from 1 to x), from which we choose n, yielding an ordered list of the items: e.g. if there are x = 10 items of which we choose n = 3, the result might be the list (5, 2, 10). We then count how many different such lists exist, sometimes first transforming the lists in ways that reduce the number of distinct possibilities.
Then the columns mean:
Any f After we choose an item, we put it back, so we might choose it again.
Injective f After we choose an item, we set it aside, so we can't choose it again; hence we'll end up with n distinct items. Necessarily, then, unless n ≤ x, no lists can be chosen at all.
Surjective f After we choose an item, we put it back, so we might choose it again — but at the end, we have to end up having chosen each item at least once. Necessarily, then, unless n ≥ x, no lists can be chosen at all.
And the rows mean:
Distinct Leave the lists alone; count them directly.
S_n orbits Before counting, sort the lists by the item number of the items chosen, so that order doesn't matter, e.g., (5, 2, 10), (10, 2, 5), (2, 10, 5) → (2, 5, 10).
S_x orbits Before counting, renumber the items seen so that the first item seen has number 1, the second 2, etc. Numbers may repeat if an item was seen more than once, e.g., (3, 5, 3), (5, 2, 5), (4, 9, 4) → (1, 2, 1) while (3, 3, 5), (5, 5, 3), (2, 2, 9) → (1, 1, 2).
S_n × S_x orbits Two lists count as the same if it is possible to both reorder and relabel them as above and produce the same result. For example, (3, 5, 3) and (2, 9, 9) count as the same because they can be reordered as (3, 3, 5) and (9, 9, 2) and then relabeling both produces the same list (1, 1, 2). These reductions are sketched in code below.
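These row and column reductions can be checked by brute force for small n and x. The following Python sketch is illustrative only (the helper names and the encoding of lists are choices made here, not part of the original exposition): it reduces each list of choices to a canonical representative of its orbit and counts the distinct results.

    from collections import Counter
    from itertools import product

    def canonical(seq, forget_order=False, forget_labels=False):
        # Reduce a list of choices to a representative of its orbit.
        s = tuple(seq)
        if forget_order and forget_labels:
            # Only the multiset of multiplicities (a partition shape) survives,
            # e.g. (3, 5, 3) and (2, 9, 9) both reduce to (2, 1).
            return tuple(sorted(Counter(s).values(), reverse=True))
        if forget_labels:
            # Renumber by order of first appearance: (3, 5, 3) -> (1, 2, 1).
            first = {}
            for v in s:
                first.setdefault(v, len(first) + 1)
            s = tuple(first[v] for v in s)
        if forget_order:
            # Sort so that the order of choice no longer matters.
            s = tuple(sorted(s))
        return s

    def count(n, x, restrict=None, forget_order=False, forget_labels=False):
        # Enumerate all x**n lists of n choices from x items and count orbits.
        seqs = product(range(1, x + 1), repeat=n)
        if restrict == "injective":
            seqs = (s for s in seqs if len(set(s)) == n)
        elif restrict == "surjective":
            seqs = (s for s in seqs if len(set(s)) == x)
        return len({canonical(s, forget_order, forget_labels) for s in seqs})

    print(count(3, 5))                                    # 125 = 5**3 lists
    print(count(3, 5, forget_order=True))                 # 35 multisets of size 3 from 5 items
    print(count(5, 3, restrict="surjective",
                forget_order=True, forget_labels=True))   # 2 partitions of 5 into 3 parts

Comparing such brute-force counts against the closed formulas gives a quick sanity check for small cases.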
Intuitive meaning of the chart using balls and boxes scenario
The chart below is similar to the chart above, but instead of showing the formulas, it gives an intuitive understanding of their meaning using the familiar balls and boxes example. The rows represent the distinctness of the balls and boxes. The columns represent if multi-packs (more than one ball in one box), or empty boxes are allowed. The cells in the chart show the question that is answered by solving the formula given in the formula chart above.
Details of the different cases
The cases below are ordered in such a way as to group those cases for which the arguments used in counting are related, which is not the ordering in the table given.
Functions from N to X
This case is equivalent to counting sequences of n elements of X with no restriction: a function f : N → X is determined by the images of the n elements of N, which can each be independently chosen among the x elements of X. This gives a total of x^n possibilities.
Example:
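As a small numerical illustration (values chosen arbitrarily), the count can be checked in a couple of lines of Python:

    # 3-element domain N, 5-element codomain X: each of the 3 elements of N
    # can be sent independently to any of the 5 elements of X.
    n, x = 3, 5
    print(x ** n)  # 125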
Injective functions from N to X
This case is equivalent to counting sequences of n distinct elements of X, also called n-permutations of X, or sequences without repetitions; again this sequence is formed by the images of the n elements of N. This case differs from the one of unrestricted sequences in that there is one choice fewer for the second element, two fewer for the third element, and so on. Therefore instead of by an ordinary power of x, the value is given by a falling factorial power of x, in which each successive factor is one fewer than the previous one. The formula is x^{\underline{n}} = x(x - 1)(x - 2)...(x - n + 1).
Note that if n > x then one obtains a factor zero, so in this case there are no injective functions N → X at all; this is just a restatement of the pigeonhole principle.
Example:
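A quick check of the falling factorial count, sketched with arbitrary small values:

    from math import perm

    n, x = 3, 5
    print(perm(x, n))             # 60 injective functions: 5 * 4 * 3
    print(x * (x - 1) * (x - 2))  # the same falling factorial, written out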
Injective functions from N to X, up to a permutation of N
This case is equivalent to counting subsets with n elements of X, also called n-combinations of X: among the sequences of n distinct elements of X, those that differ only in the order of their terms are identified by permutations of N. Since in all cases this groups together exactly n! different sequences, we can divide the number of such sequences by n! to get the number of n-combinations of X. This number is known as the binomial coefficient \binom{x}{n}, which is therefore given by \binom{x}{n} = x^{\underline{n}} / n!.
Example:
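Again as an illustrative check, dividing out the n! orderings recovers the binomial coefficient:

    from math import comb, factorial, perm

    n, x = 3, 5
    print(perm(x, n) // factorial(n))  # 60 / 6 = 10
    print(comb(x, n))                  # 10 three-element subsets of a 5-element set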
Functions from N to X, up to a permutation of N
This case is equivalent to counting multisets with n elements from X (also called n-multicombinations). The reason is that for each element of X it is determined how many elements of N are mapped to it by f, while two functions that give the same such "multiplicities" to each element of X can always be transformed into one another by a permutation of N. The formula counting all functions N → X is not useful here, because the number of them grouped together by permutations of N varies from one function to another. Rather, as explained under combinations, the number of n-multicombinations from a set with x elements can be seen to be the same as the number of n-combinations from a set with x + n - 1 elements. This reduces the problem to another one in the twelvefold way, and gives as result \binom{x+n-1}{n}.
Example:
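A small check of the multiset count (arbitrary values):

    from math import comb

    n, x = 3, 5
    print(comb(x + n - 1, n))  # C(7, 3) = 35 multisets of size 3 drawn from 5 elements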
Surjective functions from N to X, up to a permutation of N
This case is equivalent to counting multisets with n elements from X, for which each element of X occurs at least once. This is also equivalent to counting the compositions of n with x (non-zero) terms, by listing the multiplicities of the elements of X in order. The correspondence between functions and multisets is the same as in the previous case, and the surjectivity requirement means that all multiplicities are at least one. By decreasing all multiplicities by 1, this reduces to the previous case; since the change decreases the value of n by x, the result is \binom{n-1}{n-x}.
Note that when n < x there are no surjective functions N → X at all (a kind of "empty pigeonhole" principle); this is taken into account in the formula, by the convention that binomial coefficients are always 0 if the lower index is negative. The same value is also given by the expression \binom{n-1}{x-1},
except in the extreme case n = x = 0, where the former expression correctly gives 1, while the latter incorrectly gives 0.
The form of the result suggests looking for a manner to associate a class of surjective functions directly to a subset of n - x elements chosen from a total of n - 1, which can be done as follows. First choose a total ordering of the sets N and X, and note that by applying a suitable permutation of N, every surjective function N → X can be transformed into a unique weakly increasing (and of course still surjective) function. If one connects the n elements of N in order by n - 1 arcs into a linear graph, then choosing any subset of n - x arcs and removing the rest, one obtains a graph with x connected components, and by sending these to the successive elements of X, one obtains a weakly increasing surjective function N → X; also the sizes of the connected components give a composition of n into x parts. This argument is basically the one given at stars and bars, except that there the complementary choice of "separations" is made.
Example:
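As an illustrative check, the binomial count agrees with a direct enumeration of compositions (a sketch; the helper name is arbitrary):

    from math import comb
    from itertools import product

    def compositions(n, x):
        # All ways to write n as an ordered sum of x positive parts.
        return [c for c in product(range(1, n + 1), repeat=x) if sum(c) == n]

    n, x = 5, 3
    print(len(compositions(n, x)))  # 6
    print(comb(n - 1, n - x))       # C(4, 2) = 6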
Injective functions from N to X, up to a permutation of X
In this case we consider sequences of n distinct elements from X, but identify those obtained from one another by applying to each element a permutation of X. It is easy to see that two different such sequences can always be identified: the permutation must map term i of the first sequence to term i of the second sequence, and since no value occurs twice in either sequence these requirements do not contradict each other; it remains to map the elements of X not occurring in the first sequence bijectively to those not occurring in the second sequence in an arbitrary way. The only fact that makes the result depend on n and x at all is that the existence of any such sequences to begin with requires n ≤ x, by the pigeonhole principle. The number is therefore expressed as [n ≤ x], using the Iverson bracket.
Injective functions from N to X, up to permutations of N and X
This case is reduced to the previous one: since all sequences of n distinct elements from X can already be transformed into each other by applying a permutation of X to each of their terms, also allowing reordering of the terms does not give any new identifications; the number remains [n ≤ x].
Surjective functions from N to X, up to a permutation of X
This case is equivalent to counting partitions of N into x (non-empty) subsets, or counting equivalence relations on N with exactly x classes. Indeed, for any surjective function f : N → X, the relation of having the same image under f is such an equivalence relation, and it does not change when a permutation of X is subsequently applied; conversely one can turn such an equivalence relation into a surjective function by assigning the elements of X in some manner to the x equivalence classes. The number of such partitions or equivalence relations is by definition the Stirling number of the second kind, written S(n, x). Its value can be described using a recursion relation or using generating functions, but unlike binomial coefficients there is no closed formula for these numbers that does not involve a summation.
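The recursion just mentioned can be sketched directly in Python (a minimal memoized implementation; the base-case conventions chosen here follow the usual ones for S(n, k)):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def stirling2(n, k):
        # S(n, k): partitions of an n-element set into k non-empty subsets.
        # The last element either forms a block of its own (S(n-1, k-1))
        # or joins one of the k blocks of a partition of the remaining
        # n-1 elements (k * S(n-1, k)).
        if n == k:
            return 1          # includes S(0, 0) = 1
        if k == 0 or k > n:
            return 0
        return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

    print(stirling2(5, 3))    # 25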
Surjective functions from N to X
For each surjective function f : N → X, its orbit under permutations of X has x! elements, since composition (on the left) with two distinct permutations of X never gives the same function on N (the permutations must differ at some element of X, which can always be written as f(i) for some i ∈ N, and the compositions will then differ at i). It follows that the number for this case is x! times the number for the previous case, that is x! S(n, x).
Example:
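A brute-force check for small values; the second helper uses inclusion–exclusion, which is a standard way to compute the surjection count directly although it is not derived in this article (both function names are illustrative):

    from itertools import product
    from math import comb

    def surjections_brute(n, x):
        # Count functions from an n-set onto an x-set by enumeration.
        return sum(1 for f in product(range(x), repeat=n) if len(set(f)) == x)

    def surjections_ie(n, x):
        # Inclusion-exclusion over the k elements of X that are missed.
        return sum((-1) ** k * comb(x, k) * (x - k) ** n for k in range(x + 1))

    print(surjections_brute(5, 3), surjections_ie(5, 3))  # 150 150, i.e. 3! * S(5, 3)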
Functions from N to X, up to a permutation of X
This case is like the corresponding one for surjective functions, but some elements of X might not correspond to any equivalence class at all (since one considers functions up to a permutation of X, it does not matter which elements are concerned, just how many). As a consequence one is counting equivalence relations on N with at most x classes, and the result is obtained from the mentioned case by summation over values of k up to x, giving \sum_{k=0}^{x} S(n, k). In case x ≥ n, the size of X poses no restriction at all, and one is counting all equivalence relations on a set of n elements (equivalently all partitions of such a set); therefore \sum_{k=0}^{n} S(n, k) gives an expression for the Bell number B_n.
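A short self-contained sketch of this summation (the Stirling recursion is repeated here so the snippet runs on its own):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def stirling2(n, k):
        if n == k:
            return 1
        if k == 0 or k > n:
            return 0
        return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

    def bell(n):
        # Number of equivalence relations (partitions) of an n-element set.
        return sum(stirling2(n, k) for k in range(n + 1))

    print([bell(i) for i in range(6)])  # [1, 1, 2, 5, 15, 52]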
Surjective functions from N to X, up to permutations of N and X
This case is equivalent to counting partitions of the number n into x non-zero parts. Compared to the case of counting surjective functions up to permutations of X only (S(n, x)), one only retains the sizes of the equivalence classes that the function partitions N into (including the multiplicity of each size), since two equivalence relations can be transformed into one another by a permutation of N if and only if the sizes of their classes match. This is precisely what distinguishes the notion of partition of n from that of partition of N, so as a result one gets by definition the number p_x(n) of partitions of n into x non-zero parts.
Functions from N to X, up to permutations of N and X
This case is equivalent to counting partitions of the number n into at most x parts. The association is the same as for the previous case, except that now some parts of the partition may be equal to 0. (Specifically, they correspond to elements of X not in the image of the function.) Each partition of n into at most x non-zero parts can be extended to such a partition by adding the required number of zeroes, and this accounts for all possibilities exactly once, so the result is given by \sum_{k=0}^{x} p_k(n). By adding 1 to each of the x parts, one obtains a partition of n + x into x nonzero parts, and this correspondence is bijective; hence the expression given can be simplified by writing it as p_x(n + x).
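Both of the last two counts can be sketched with the standard recursion for partitions into exactly k positive parts (the helper name is chosen here for illustration; the recursion either uses a smallest part equal to 1 or subtracts 1 from every part):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def p_exact(n, k):
        # Partitions of n into exactly k positive parts.
        if n == 0 and k == 0:
            return 1
        if n <= 0 or k <= 0:
            return 0
        return p_exact(n - 1, k - 1) + p_exact(n - k, k)

    n, x = 5, 3
    print(p_exact(n, x))                             # 2: 3+1+1 and 2+2+1
    print(sum(p_exact(n, k) for k in range(x + 1)))  # 5 partitions of 5 into at most 3 parts
    print(p_exact(n + x, x))                         # 5 = p_3(8), matching p_x(n + x)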
Extremal cases
The above formulas give the proper values for all finite sets and . In some cases there are alternative formulas which are almost equivalent, but which do not give the correct result in some extremal cases, such as when or are empty. The following considerations apply to such cases.
For every set there is exactly one function from the empty set to (there are no values of this function to specify), which is always injective, but never surjective unless is (also) empty.
For every non-empty set there are no functions from to the empty set (there is at least one value of the function that must be specified, but it cannot).
When n > x there are no injective functions N → X, and if n < x there are no surjective functions N → X.
The expressions used in the formulas have as particular values
(the first three are instances of an empty product, and the value is given by the conventional extension of binomial coefficients to arbitrary values of the upper index), while
In particular in the case of counting multisets with elements taken from , the given expression is equivalent in most cases to , but the latter expression would give 0 for the case (by the usual convention that binomial coefficients with a negative lower index are always 0). Similarly, for the case of counting compositions of with non-zero parts, the given expression is almost equivalent to the expression given by the stars and bars argument, but the latter gives incorrect values for and all values of . For the cases where the result involves a summation, namely those of counting partitions of into at most non-empty subsets or partitions of into at most non-zero parts, the summation index is taken to start at 0; although the corresponding term is zero whenever , it is the unique non-zero term when , and the result would be wrong for those cases if the summation were taken to start at 1.
Generalizations
We can generalize further by allowing other groups of permutations to act on $N$ and $X$. If $G$ is a group of permutations of $N$, and $H$ is a group of permutations of $X$, then we count equivalence classes of functions $f: N \to X$. Two functions $f$ and $F$ are considered equivalent if, and only if, there exist $g \in G$ and $h \in H$ so that $F = h \circ f \circ g$. This extension leads to notions such as cyclic and dihedral permutations, as well as cyclic and dihedral partitions of numbers and sets.
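For small cases the equivalence classes under such group actions can be enumerated by brute force. The sketch below is purely illustrative (the choice of a cyclic group acting on the domain and the full symmetric group acting on the codomain is an arbitrary example): it lists all functions from a 3-element set to a 3-element set and counts the orbits under the combined action.

```python
from itertools import product, permutations

n, x = 3, 3
domain, codomain = range(n), range(x)

# Example groups: G = cyclic rotations of the domain, H = all permutations of the codomain.
G = [tuple((i + s) % n for i in domain) for s in range(n)]
H = list(permutations(codomain))

# Represent a function f as the tuple (f(0), ..., f(n-1)).
functions = list(product(codomain, repeat=n))

def transform(f, g, h):
    """Return the function h o f o g as a tuple."""
    return tuple(h[f[g[i]]] for i in domain)

seen, orbits = set(), 0
for f in functions:
    if f in seen:
        continue
    orbits += 1                       # f starts a new equivalence class
    for g, h in product(G, H):
        seen.add(transform(f, g, h))  # mark every function equivalent to f

print(orbits)  # number of equivalence classes of functions under (G, H)
```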
The twentyfold way
Another generalization called the twentyfold way was developed by Kenneth P. Bogart in his book "Combinatorics Through Guided Discovery". In the problem of distributing objects to boxes both the objects and the boxes may be identical or distinct. Bogart identifies 20 cases. Robert A. Proctor has constructed the thirtyfold way.
| Mathematics | Combinatorics | null |
3291706 | https://en.wikipedia.org/wiki/Motor%20ship | Motor ship | A motor ship or motor vessel is a ship propelled by an internal combustion engine, usually a diesel engine. The names of motor ships are often prefixed with MS, M/S, MV or M/V.
Engines for motor ships were developed during the 1890s, and by the early 20th century, motor ships had begun to enter service.
History
The first diesel-powered motor ships were launched in 1903: a Russian vessel (the first equipped with diesel-electric transmission) and the French Petite-Pierre. There is disagreement over which of the two was the first.
| Technology | Naval transport | null |
3292675 | https://en.wikipedia.org/wiki/Reservoir | Reservoir | A reservoir is an enlarged lake behind a dam, usually built to store fresh water, often doubling for hydroelectric power generation.
Reservoirs are created by controlling a watercourse that drains an existing body of water, interrupting a watercourse to form an embayment within it, excavating, or building any number of retaining walls or levees to enclose any area to store water.
The term is also used technically to refer to certain forms of liquid storage, such as the "coolant reservoir" that captures overflow of coolant in an automobile's cooling system.
Types
Dammed valleys
Dammed reservoirs are artificial lakes created and controlled by a dam constructed across a valley and rely on the natural topography to provide most of the basin of the reservoir. These reservoirs can either be on-stream reservoirs, which are located on the original streambed of the downstream river and are filled by creeks, rivers or rainwater that runs off the surrounding forested catchments, or off-stream reservoirs, which receive diverted water from a nearby stream or aqueduct or pipeline water from other on-stream reservoirs.
Dams are typically located at a narrow part of a valley downstream of a natural basin. The valley sides act as natural walls, with the dam located at the narrowest practical point to provide strength and the lowest cost of construction. In many reservoir construction projects, people have to be moved and re-housed, historical artifacts moved or rare environments relocated. Examples include the temples of Abu Simbel (which were moved before the construction of the Aswan Dam to create Lake Nasser from the Nile in Egypt), the relocation of the village of Capel Celyn during the construction of Llyn Celyn, and the relocation of Borgo San Pietro of Petrella Salto during the construction of Lake Salto.
Construction of a dammed reservoir will usually require the river to be diverted during part of the build, often through a temporary tunnel or by-pass channel.
In hilly regions, reservoirs are often constructed by enlarging existing lakes. Sometimes in such reservoirs, the new top water level exceeds the watershed height on one or more of the feeder streams such as at Llyn Clywedog in Mid Wales. In such cases additional side dams are required to contain the reservoir.
Where the topography is poorly suited to forming a single large reservoir, a number of smaller reservoirs may be constructed in a chain, as in the River Taff valley where the Llwyn-on, Cantref and Beacons Reservoirs form a chain up the valley.
Coastal
Coastal reservoirs are fresh water storage reservoirs located on the sea coast near a river mouth to store the flood water of a river. Because land-based reservoir construction involves substantial land submergence, coastal reservoirs can be economically and technically preferable, since they do not use scarce land area. Many coastal reservoirs have been constructed in Asia and Europe. Saemanguem in South Korea, Marina Barrage in Singapore, Qingcaosha in China, and Plover Cove in Hong Kong are a few such coastal reservoirs.
Bank-side
Where water is pumped or siphoned from a river of variable quality or size, bank-side reservoirs may be built to store the water. Such reservoirs are usually formed partly by excavation and partly by building a complete encircling bund or embankment, which may exceed 6 km (4 miles) in circumference. Both the floor of the reservoir and the bund must have an impermeable lining or core: initially these were often made of puddled clay, but this has generally been superseded by the modern use of rolled clay. The water stored in such reservoirs may stay there for several months, during which time normal biological processes may substantially reduce many contaminants and reduce turbidity. The use of bank-side reservoirs also allows water abstraction to be stopped for some time, for instance when the river is unacceptably polluted or when flow conditions are very low due to drought. The London water supply system exhibits one example of the use of bank-side storage: here water is taken from the River Thames and River Lee into several large Thames-side reservoirs, such as Queen Mary Reservoir that can be seen along the approach to London Heathrow Airport.
Service
Service reservoirs store fully treated potable water close to the point of distribution. Many service reservoirs are constructed as water towers, often as elevated structures on concrete pillars where the landscape is relatively flat. Other service reservoirs can be storage pools, water tanks or sometimes entirely underground cisterns, especially in more hilly or mountainous country. Modern reservoirs will often use geomembrane liners on their base to limit seepage and/or as floating covers to limit evaporation, particularly in arid climates. In the United Kingdom, Thames Water has many underground reservoirs built in the 1800s, most of which are lined with brick. A good example is the Honor Oak Reservoir in London, constructed between 1901 and 1909. When it was completed it was said to be the largest brick-built underground reservoir in the world and it is still one of the largest in Europe. This reservoir now forms part of the southern extension of the Thames Water Ring Main. The top of the reservoir has been grassed over and is now used by the Aquarius Golf Club.
Service reservoirs perform several functions, including ensuring sufficient head of water in the water distribution system and providing water capacity to even-out peak demand from consumers, enabling the treatment plant to run at optimum efficiency. Large service reservoirs can also be managed to reduce the cost of pumping by refilling the reservoir at times of day when energy costs are low.
Irrigation reservoir
An irrigation reservoir is a water reservoir for agricultural use. They are filled using pumped groundwater, pumped river water or water runoff and are typically used during the local dry season.
This type of infrastructure has sparked an opposition movement in France, with numerous disputes and, for some projects, protests, especially in the former Poitou-Charentes region where violent demonstrations took place in 2022 and 2023. In Spain, there is greater acceptance because all beneficiary users are involved in the implementation of the system.
The specific debate about substitution reservoirs is part of a broader discussion related to reservoirs used for agricultural irrigation, regardless of their type, and a certain model of intensive agriculture. Opponents view these reservoirs as a monopolization of resources benefiting only a few, representing an outdated model of productive agriculture. They argue that these reservoirs lead to a loss in both quantity and quality of water necessary for maintaining ecological balance and pose a risk of increasing severity and duration of droughts due to climate change. In summary, they consider it a misadaptation to climate change.
Proponents of reservoirs or substitution reserves, on the other hand, see them as a solution for sustainable agriculture while waiting for a truly durable agricultural model. Without such reserves, they fear that unsustainable imported irrigation will be inevitable. They believe that these reservoirs should be accompanied by a territorial project that unites all water stakeholders with the goal of preserving and enhancing natural environments.
Two main types of reservoirs can be distinguished based on their mode of supply.
History
Circa 3000 BC, the craters of extinct volcanoes in Arabia were used as reservoirs by farmers for their irrigation water.
Dry climate and water scarcity in India led to early development of stepwells and other water resource management techniques, including the building of a reservoir at Girnar in 3000 BC. Artificial lakes dating to the 5th century BC have been found in ancient Greece. The artificial Bhojsagar lake in present-day Madhya Pradesh state of India, constructed in the 11th century, covered .
The Kingdom of Kush invented the Hafir, a type of reservoir, during the Meroitic period. 800 ancient and modern hafirs have been registered in the Meroitic town of Butana. The Hafirs catch the water during rainy seasons in order to ensure water is available for several months during dry seasons to supply drinking water, irrigate fields and water cattle. The Great Reservoir near the Lion Temple in Musawwarat es-Sufra is a notable hafir in Kush.
In Sri Lanka, large reservoirs were created by ancient Sinhalese kings in order to store water for irrigation. The famous Sri Lankan king Parākramabāhu I of Sri Lanka said "Do not let a drop of water seep into the ocean without benefiting mankind." He created the reservoir named Parakrama Samudra ("sea of King Parakrama"). Vast artificial reservoirs were also built by various ancient kingdoms in Bengal, Assam, and Cambodia.
Uses
Direct water supply
Many dammed river reservoirs and most bank-side reservoirs are used to provide the raw water feed to a water treatment plant which delivers drinking water through water mains. The reservoir does not merely hold water until it is needed: it can also be the first part of the water treatment process. The time the water is held before it is released is known as the retention time. This is a design feature that allows particles and silts to settle out, as well as time for natural biological treatment using algae, bacteria and zooplankton that naturally live in the water. However natural limnological processes in temperate climate lakes produce temperature stratification in the water, which tends to partition some elements such as manganese and phosphorus into deep, cold anoxic water during the summer months. In the autumn and winter the lake becomes fully mixed again. During drought conditions, it is sometimes necessary to draw down the cold bottom water, and the elevated levels of manganese in particular can cause problems in water treatment plants.
Hydroelectricity
In 2005, about 25% of the world's 33,105 large dams (over 15 metres in height) were used for hydroelectricity. The U.S. produces 3% of its electricity from 80,000 dams of all sizes. An initiative is underway to retrofit more dams as a good use of existing infrastructure to provide many smaller communities with a reliable source of energy. A reservoir generating hydroelectricity includes turbines connected to the retained water body by large-diameter pipes. These generating sets may be at the base of the dam or some distance away. In a flat river valley a reservoir needs to be deep enough to create a head of water at the turbines; and if there are periods of drought the reservoir needs to hold enough water to average out the river's flow throughout the year(s). Run-of-the-river hydro in a steep valley with constant flow needs no reservoir.
Some reservoirs generating hydroelectricity use pumped recharge: a high-level reservoir is filled with water using high-performance electric pumps at times when electricity demand is low, and then uses this stored water to generate electricity by releasing the stored water into a low-level reservoir when electricity demand is high. Such systems are called pumped-storage schemes.
Controlling water sources
Reservoirs can be used in a number of ways to control how water flows through downstream waterways:
Downstream water supply: water may be released from an upland reservoir so that it can be abstracted for drinking water lower down the system, sometimes hundreds of miles further downstream.
Irrigation: water in an irrigation reservoir may be released into networks of canals for use in farmlands or secondary water systems. Irrigation may also be supported by reservoirs which maintain river flows, allowing water to be abstracted for irrigation lower down the river.
Flood control: also known as "attenuation" or "balancing" reservoirs, flood control reservoirs collect water at times of very high rainfall, then release it slowly during the following weeks or months. Some of these reservoirs are constructed across the river line, with the onward flow controlled by an orifice plate. When river flow exceeds the capacity of the orifice plate, water builds up behind the dam; but as soon as the flow rate reduces, the water behind the dam is slowly released until the reservoir is empty again. In some cases, such reservoirs only function a few times in a decade, and the land behind the reservoir may be developed as community or recreational land. A new generation of balancing dams is being developed to combat the possible consequences of climate change. They are called "Flood Detention Reservoirs". Because these reservoirs will remain dry for long periods, there may be a risk of the clay core drying out, reducing its structural stability. Recent developments include the use of composite core fill made from recycled materials as an alternative to clay.
Canals: where a natural watercourse's water is not available to be diverted into a canal, a reservoir may be built to guarantee the water level in the canal: for example, where a canal climbs through locks to cross a range of hills. Another use is to reduce costs or construction time when the canal must be dug through rock, as used on the Rideau Canal with The Narrows locks dividing the two Rideau lakes and essentially turning the upper Rideau into an enlarged reservoir, albeit only by two or three feet.
Recreation: water may be released from a reservoir to create or supplement white water conditions for kayaking and other white-water sports. On salmonid rivers special releases (in Britain called freshets) are made to encourage natural migration behaviours in fish and to provide a variety of fishing conditions for anglers.
Flow balancing
Reservoirs can be used to balance the flow in highly managed systems, taking in water during high flows and releasing it again during low flows. Doing this without pumping requires careful control of water levels using spillways.
When a major storm approaches, the dam operators calculate the volume of water that the storm will add to the reservoir. If forecast storm water will overfill the reservoir, water is slowly let out of the reservoir prior to, and during, the storm. If done with sufficient lead time, the major storm will not fill the reservoir and areas downstream will not experience damaging flows.
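The pre-release decision described above amounts to a simple volume balance. The sketch below is schematic only (the function name, variable names and figures are invented for the example and are not taken from any real operating rule): it estimates how much water must be released ahead of a forecast storm so that the reservoir does not exceed its full supply volume.

```python
def required_prerelease(current_volume, full_volume, forecast_inflow, expected_outflow):
    """Volume (same units throughout) to release before the storm so that
    current_volume + forecast_inflow - expected_outflow - prerelease <= full_volume."""
    surplus = current_volume + forecast_inflow - expected_outflow - full_volume
    return max(0.0, surplus)

# Hypothetical figures in millions of cubic metres:
print(required_prerelease(current_volume=850.0, full_volume=1000.0,
                          forecast_inflow=400.0, expected_outflow=120.0))
# -> 130.0, i.e. 130 million cubic metres would need to be released before and during the event.
```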
Accurate weather forecasts are essential so that dam operators can correctly plan drawdowns prior to a high rainfall event. Dam operators blamed a faulty weather forecast for the 2010–2011 Queensland floods.
Examples of highly managed reservoirs are Burrendong Dam in Australia and Bala Lake (Llyn Tegid) in North Wales. Bala Lake is a natural lake whose level was raised by a low dam and into which the River Dee flows or discharges depending upon flow conditions, as part of the River Dee regulation system. This mode of operation is a form of hydraulic capacitance in the river system.
Recreation
Many reservoirs allow some recreational uses, such as fishing and boating. Special rules may apply for the safety of the public and to protect the quality of the water and the ecology of the surrounding area. Many reservoirs now support and encourage less formal and less structured recreation such as natural history, bird watching, landscape painting, walking and hiking, and often provide information boards and interpretation material to encourage responsible use.
Operation
Water falling as rain upstream of the reservoir, together with any groundwater emerging as springs, is stored in the reservoir. Any excess water can be spilled via a specifically designed spillway. Stored water may be piped by gravity for use as drinking water, to generate hydro-electricity or to maintain river flows to support downstream uses. Occasionally reservoirs can be managed to retain water during high rainfall events to prevent or reduce downstream flooding. Some reservoirs support several uses, and the operating rules may be complex.
Most modern reservoirs have a specially designed draw-off tower that can discharge water from the reservoir at different levels, both to access water as the water level falls, and to allow water of a specific quality to be discharged into the downstream river as "compensation water": the operators of many upland or in-river reservoirs have obligations to release water into the downstream river to maintain river quality, support fisheries, to maintain downstream industrial and recreational uses or for a range of other purposes. Such releases are known as compensation water.
Terminology
The units used for measuring reservoir areas and volumes vary from country to country. In most of the world, reservoir areas are expressed in square kilometers; in the United States, acres are commonly used. For volume, either cubic meters or cubic kilometers are widely used, with acre-feet used in the US.
The capacity, volume, or storage of a reservoir is usually divided into distinguishable areas. Dead or inactive storage refers to water in a reservoir that cannot be drained by gravity through a dam's outlet works, spillway, or power plant intake and can only be pumped out. Dead storage allows sediments to settle, which improves water quality and also creates an area for fish during low levels. Active or live storage is the portion of the reservoir that can be used for flood control, power production, navigation, and downstream releases. In addition, a reservoir's "flood control capacity" is the amount of water it can regulate during flooding. The "surcharge capacity" is the capacity of the reservoir above the spillway crest that cannot be regulated.
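These storage categories can be pictured as a partition of the total capacity by elevation. The sketch below is only an illustration of the bookkeeping (the zone boundaries, elevations and rating function are hypothetical, and the mapping of zones to levels is a simplification of the definitions above):

```python
def storage_breakdown(volume_at, dead_pool_level, normal_pool_level, spillway_crest_level):
    """Partition reservoir capacity into zones using a level-to-volume rating curve.

    volume_at: function mapping water surface elevation to stored volume.
    Returns a dict of zone capacities (same units as volume_at)."""
    dead = volume_at(dead_pool_level)
    active = volume_at(normal_pool_level) - dead
    flood_control = volume_at(spillway_crest_level) - volume_at(normal_pool_level)
    return {"dead": dead, "active": active, "flood_control": flood_control}

# Hypothetical linear rating curve: 2 million m3 of storage per metre of elevation above 100 m.
rating = lambda level: 2.0e6 * max(0.0, level - 100.0)
print(storage_breakdown(rating, dead_pool_level=110.0,
                        normal_pool_level=130.0, spillway_crest_level=135.0))
# -> {'dead': 20000000.0, 'active': 40000000.0, 'flood_control': 10000000.0}
```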
In the United States, the water below the normal maximum level of a reservoir is called the "conservation pool".
In the United Kingdom, "top water level" describes the reservoir full state, while "fully drawn down" describes the minimum retained volume.
Modelling reservoir management
There is a wide variety of software for modelling reservoirs, from the specialist Dam Safety Program Management Tools (DSPMT) to the relatively simple WAFLEX, to integrated models like the Water Evaluation And Planning system (WEAP) that place reservoir operations in the context of system-wide demands and supplies.
Safety
In many countries large reservoirs are closely regulated to try to prevent or minimize failures of containment.
While much of the effort is directed at the dam and its associated structures as the weakest part of the overall structure, the aim of such controls is to prevent an uncontrolled release of water from the reservoir. Reservoir failures can generate huge increases in flow down a river valley, with the potential to wash away towns and villages and cause considerable loss of life, such as the devastation following the failure of containment at Llyn Eigiau, which killed 17 people (see also List of dam failures).
A notable case of reservoirs being used as an instrument of war involved the British Royal Air Force Dambusters raid on Germany in World War II (codenamed "Operation Chastise"), in which three German reservoir dams were selected to be breached in order to damage German infrastructure and manufacturing and power capabilities deriving from the Ruhr and Eder rivers. The economic and social impact was derived from the enormous volumes of previously stored water that swept down the valleys, wreaking destruction. This raid later became the basis for several films.
Environmental impact
Whole life environmental impact
All reservoirs will have a monetary cost/benefit assessment made before construction to see if the project is worth proceeding with. However, such analysis can often omit the environmental impacts of dams and the reservoirs that they contain. Some impacts, such as the greenhouse gas production associated with concrete manufacture, are relatively easy to estimate. Other impacts on the natural environment and social and cultural effects can be more difficult to assess and to weigh in the balance, but identification and quantification of these issues is now commonly required in major construction projects in the developed world.
Climate change
Reservoir greenhouse gas emissions
Naturally occurring lakes receive organic sediments which decay in an anaerobic environment releasing methane and carbon dioxide. The methane released is approximately 8 times more potent as a greenhouse gas than carbon dioxide.
As a human-made reservoir fills, existing plants are submerged, and during the years it takes for this matter to decay, the reservoir will give off considerably more greenhouse gases than a natural lake does. A reservoir in a narrow valley or canyon may cover relatively little vegetation, while one situated on a plain may flood a great deal of vegetation. The site may be cleared of vegetation first or simply flooded. Tropical flooding can produce far more greenhouse gases than flooding in temperate regions.
The following table indicates reservoir emissions in milligrams per square meter per day for different bodies of water.
Hydroelectricity and climate change
Depending upon the area flooded versus power produced, a reservoir built for hydro-electricity generation can either reduce or increase the net production of greenhouse gases when compared to other sources of power.
A study for the National Institute for Research in the Amazon found that hydroelectric reservoirs release a large pulse of carbon dioxide from decay of trees left standing in the reservoirs, especially during the first decade after flooding. This elevates the global warming impact of the dams to levels much higher than would occur by generating the same power from fossil fuels. According to the World Commission on Dams report (Dams And Development), when the reservoir is relatively large and no prior clearing of forest in the flooded area was undertaken, greenhouse gas emissions from the reservoir could be higher than those of a conventional oil-fired thermal generation plant. For instance, in 1990, the impoundment behind the Balbina Dam in Brazil (inaugurated in 1987) had over 20 times the impact on global warming that generating the same power from fossil fuels would have had, due to the large area flooded per unit of electricity generated. Another study published in Global Biogeochemical Cycles also found that newly flooded reservoirs released more carbon dioxide and methane than the pre-flooded landscape, noting that forest lands, wetlands, and preexisting water features all released differing amounts of carbon dioxide and methane both pre- and post-flooding.
The Tucuruí Dam in Brazil (completed in 1984) had only 0.4 times the impact on global warming that generating the same power from fossil fuels would have had.
A two-year study of carbon dioxide and methane releases in Canada concluded that while the hydroelectric reservoirs there do emit greenhouse gases, it is on a much smaller scale than thermal power plants of similar capacity. Hydropower typically emits 35 to 70 times less greenhouse gases per TWh of electricity than thermal power plants.
A decrease in air pollution occurs when a dam is used in place of thermal power generation, since electricity produced from hydroelectric generation does not give rise to any flue gas emissions from fossil fuel combustion (including sulfur dioxide, nitric oxide and carbon monoxide from coal).
Biology
Dams can block migrating fish, trapping them in one area, while producing food and a habitat for various water-birds. They can also flood various ecosystems on land and may cause extinctions.
Creating reservoirs can alter the natural biogeochemical cycle of mercury. After a reservoir's initial formation, there is a large increase in the production of toxic methylmercury (MeHg) via microbial methylation in flooded soils and peat. MeHg levels have also been found to increase in zooplankton and in fish.
Human impact
Dams can severely reduce the amount of water reaching countries downstream of them, causing water stress between the countries, e.g. the Sudan and Egypt, which damages farming businesses in the downstream countries, and reduces drinking water.
Farms and villages, e.g. Ashopton can be flooded by the creation of reservoirs, ruining many livelihoods. For this very reason, worldwide 80 million people (figure is as of 2009, from the Edexcel GCSE Geography textbook) have had to be forcibly relocated due to dam construction.
Limnology
The limnology of reservoirs has many similarities to that of lakes of equivalent size. There are however significant differences. Many reservoirs experience considerable variations in level, producing significant areas that are intermittently underwater or dried out. This greatly limits the productivity of the water margins and also limits the number of species able to survive in these conditions.
Upland reservoirs tend to have a much shorter residence time than natural lakes and this can lead to more rapid cycling of nutrients through the water body so that they are more quickly lost to the system. This may be seen as a mismatch between water chemistry and water biology with a tendency for the biological component to be more oligotrophic than the chemistry would suggest.
Conversely, lowland reservoirs drawing water from nutrient-rich rivers may show exaggerated eutrophic characteristics because the residence time in the reservoir is much greater than in the river and the biological systems have a much greater opportunity to utilise the available nutrients.
Deep reservoirs with multiple level draw off towers can discharge deep cold water into the downstream river greatly reducing the size of any hypolimnion. This in turn can reduce the concentrations of phosphorus released during any annual mixing event and may therefore reduce productivity.
The dams in front of reservoirs act as knickpoints: the energy of the water falling from them is reduced, and deposition occurs below the dams.
Seismicity
The filling (impounding) of reservoirs has often been linked to reservoir-triggered seismicity (RTS), as seismic events have occurred near large dams or within their reservoirs in the past. These events may have been triggered by the filling or operation of the reservoir and are rare when compared to the number of reservoirs worldwide. Of over 100 recorded events, some early examples include the Marathon Dam in Greece (1929) and the Hoover Dam in the U.S. (1935). Most events involve large dams and small amounts of seismicity. The only four recorded events above magnitude 6.0 (Mw) are at the Koyna Dam in India and the Kremasta Dam in Greece, which both registered 6.3 Mw, the Kariba Dam in Zambia at 6.25 Mw and the Xinfengjiang Dam in China at 6.1 Mw. Disputes have occurred regarding when RTS has occurred due to a lack of hydrogeological knowledge at the time of the event. It is accepted, though, that the infiltration of water into pores and the weight of the reservoir do contribute to RTS patterns. For RTS to occur, there must be a seismic structure near the dam or its reservoir and the seismic structure must be close to failure. Additionally, water must be able to infiltrate the deep rock stratum, as the weight of a deep reservoir will have little impact when compared to the deadweight of rock on a crustal stress field, which may be located at great depth.
Climate
Reservoirs may change the local climate, increasing humidity and reducing extremes of temperature, especially in dry areas. Some South Australian wineries claim that such effects increase the quality of their wine production.
List of reservoirs
In 2005, there were 33,105 large dams (≥15 m height) listed by the International Commission on Large Dams (ICOLD).
List of reservoirs by area
List of reservoirs by volume
| Technology | Hydraulic infrastructure | null |
3293019 | https://en.wikipedia.org/wiki/Cecidomyiidae | Cecidomyiidae | Cecidomyiidae is a family of flies known as gall midges or gall gnats. As the name implies, the larvae of most gall midges feed within plant tissue, creating abnormal plant growths called galls. Cecidomyiidae are very fragile small insects usually only in length; many are less than long. They are characterised by hairy wings, unusual in the order Diptera, and have long antennae. Some Cecidomyiids are also known for the strange phenomenon of paedogenesis in which the larval stage reproduces without maturing first. In some species, the daughter larvae consume the mother, while in others, reproduction occurs later on in the egg or pupa.
More than 6,650 species and 830 genera are described worldwide, though this is certainly an underestimate of the actual diversity of this family. A DNA metabarcoding study published in 2016 estimated the fauna of Canada alone to be in excess of 16,000 species, hinting at a staggering global count of over 1 million cecidomyiid species that have yet to be described, which would make it the most speciose single family in the entire animal kingdom. A second similar metabarcoding study performed in Costa Rica also found Cecidomyiidae to be the most diverse family of flies, supporting this assertion. A third metabarcoding study in 2023 concluded that Cecidomyiidae are the single most diverse family collected from malaise traps all around the world and are a dominant component of insect diversity, comprising about 20% of all species collected.
Description
Cecidomyiidae are minute to small (0.5–3.0 mm), rarely larger (up to 8 mm, wing length 15 mm) flies with a delicate appearance. Except for a few genera with reduced wings, the eyes are holoptic. The mouthparts are reduced. Cecidomyiid antennae are notably long, with 12–14 segments (sometimes fewer, and up to 40 in some genera). The antennal segments either consist of a basal thickening and petiole or they are binodal, with a proximal node, an intermediate petiole and a distal node. Basal, medial, and apical whorls of hairs occur on the antennal segments. In some species, whorls of loop-shaped sensory filaments are also found, the basal or medial one sometimes being reduced. Some gall flies have only one (basal) whorl of hairs on the antennal segments, and the sensoria (transparent sensory appendages) differ in size and shape. The filaments are thread-like in the Porricondylinae and in all the Cecidomyiinae and take the form of long loops in the supertribe Cecidomyiidi. Ocelli are present only in the Lestremiinae. The wings are usually clear, rarely patterned. The wing bears microtrichia, often as scales, and some species have macrotrichia. The number of longitudinal veins is reduced. Only veins R1, R4+5, M3+4 and Cu1 are well developed in most species. The medial veins M1 and M2 are developed only in primitive groups, and the costa usually has a break just beyond vein R5. The legs are long and slender, without apical bristles. Gall midge larvae, and many adults, are orange or yellow in color due to carotenoids. Cecidomyiidae are among the very few animals which can synthesize carotenoids, but it is unknown to what degree de novo biosynthesis of carotenoids accounts for their characteristic color as opposed to dietary sequestration or endosymbionts. The genes responsible for carotenoid synthesis likely originate from horizontal gene transfer from a fungal donor.
The genitalia of males consist of gonocoxites, gonostyles, aedeagus, and tergites 9 and 10. Lower (in the evolutionary sense) gall flies often have sclerotized parameres and a more or less transparent plate, the tegmen, located above the aedeagus. In higher gall flies, the parameres and tegmen are not developed. In these, instead, close to the aedeagus, is a triangular basal outgrowth of the gonocoxites called the gonosterna. Supporting structures called apodema are located near the base of the genitalia in males; these are often equipped with two outgrowths. The ovipositor is short, lamelliform, or long, mobile, and in some species, acicular.
The larva is peripneustic. The head is tiny, cone-shaped, and has two posterolateral extensions. The mouthparts are reduced, with minute styliform mandibles. The relatively prominent antennae are two-segmented. Integumental setae or papillae are important in taxonomy since they are constant in number within groups. The prothorax has a sclerotized sternal spatula (in most). The anus is terminal in the Lestremiinae and in the paedogenetic Porricondylinae, and ventral in other groups. The pupa is exarate (in a few species it is enclosed within the last instar larval integument). The anterior spiracle and anterior angle of the antennal bases are prominent (in most).
As a pest or biological control
Many species are economically significant, especially the Hessian fly, a wheat pest, as the galls cause severe damage. Other important pests of this family are the wheat blossom midge Sitodiplosis mosellana, the Asian rice gall midge (Orseolia oryzae) and the African rice gall midge O. oryzivora. The millet grain midge (Geromyia penniseti), sorghum midge (Contarinia sorghicola), and African rice gall midge (Orseolia oryzivora) attack grain crops such as pearl millet in Mali and other countries of the Sahel in West Africa.
Other pests are the coffee flower midge (Dasyneura coffeae), Soybean pod gall midge, (Asphondylia yushimai) pine needle gall midge (Thecodiplosis japonensis), the lentil flower midge (Contarinia lentis), the lucerne flower midge (C. medicaginis), and the alfalfa sprout midge (Dasineura ignorata) on the Leguminosae; the black locust tree gall midge (Obolodiplosis robiniae), the swede midge (Contarinia nasturtii), and the brassica pod midge (Dasineura brassicae) on the Cruciferae; the pear midge (Contarinia pyrivora) and the raspberry cane midge (Resseliella theobaldi) on fruit crops; Horidiplosis ficifolii on ornamental figs, and the rosette gall midge (Rhopalomyia solidaginis) on goldenrod stalks, Porricondylini spp. on Citrus, Lestremia spp. on sweet potato, yam, ginger, garlic, onions, taro tubers, and potato, Lestodiplosis spp., Acaroletes spp., and Aphidoletes spp. on oranges, and Arthrocnodax spp. on limes.
In South Africa, Dasineura rubiformis has been deployed against the invasive Australian Acacia mearnsii; it lays its eggs in the flowers, which then develop into galls, thus reducing seed production.
Parasitoids hosted by Cecidomyiidae include Braconidae (Opiinae, Euphorinae), Eurytomidae, Eulophidae, Torymidae, Pteromalidae, Eupelmidae, Trichogrammatidae, and Aphelinidae. All contain species which are actual or potential biological agents.
A large number of gall midge species are natural enemies of other crop pests. Their larvae are predatory, and some are reported as parasitic. The most common prey are aphids and spider mites, followed by scale insects, then other small prey such as whiteflies and thrips, which eat the eggs of other insects or mites. As the larvae are incapable of moving considerable distances, a substantial population of prey must be present before the adults lay eggs, and the Cecidiomyiidae are most frequently seen during pest outbreaks. One species, Aphidoletes aphidimyza, is an important component of biological control programs for greenhouse crops and is widely sold in the United States.
| Biology and health sciences | Flies (Diptera) | Animals |
12395897 | https://en.wikipedia.org/wiki/Alligator%20pepper | Alligator pepper | Alligator pepper (also known as Ata Ire or Ose Oji or mbongo spice or hepper pepper) is a West African spice made from the seeds and seed pods of Aframomum danielli, A. citratum or A. exscapum. It is a close relative of grains obtained from the closely related species, Aframomum melegueta or "grains of paradise". Unlike grains of paradise, which are generally sold as only the seeds of the plant, alligator pepper is sold as the entire pod containing the seeds (in the same manner to another close relative, black cardamom).
The plants which provide alligator pepper are herbaceous, perennial, flowering plants of the ginger family (Zingiberaceae), native to swampy habitats along the West African coast. Once the pod is open and the seeds are revealed, the reason for this spice's common English name becomes apparent as the seeds have a papery skin enclosing them and the bumps of the seeds within this skin is reminiscent of an alligator's back.
As mbongo spice, the seeds of alligator pepper are often sold as the grains isolated from the pod and with the outer skin removed. Mbongo spice is most commonly either A. danielli or A. citratum, and has a more floral aroma than A. exscapum (which is the commonest source of the entire pod).
It is a common ingredient in West African cuisine, where it imparts both pungency and a spicy aroma to soups and stews.
Use in cuisine
Even in West Africa, alligator pepper is an expensive spice, so it is used sparingly. Often, a single whole pod is pounded in a pestle and mortar before half of it is added (along with black pepper) as a flavouring to West African soups or boiled rice. The spice can also be substituted in any recipe using grains of paradise or black cardamom to provide a hotter and more pungent flavour.
When babies are born in Yoruba culture, they are given a small taste of alligator pepper (atare) shortly after birth as part of the routine baby-welcoming process, and it is also used as an ingredient at traditional meet-and-greets.
In Igboland, alligator pepper, ósè ọ́jị́ with kola nuts are used in naming ceremonies, as presentation to visiting guests, and for other social events with the kola nut rite. The Igbo present and eat the alligator pepper together with kola nuts. In virtually every Igbo ceremony, alligator peppers and kola nuts are presented to guests at the top of the agenda and prior to any other food or entertainment. Prayers and libations are made together with kola nuts and alligator pepper.
During the Covid-19 pandemic, it was used in medicine.
| Biology and health sciences | Herbs and spices | Plants |
1129026 | https://en.wikipedia.org/wiki/Hyperon | Hyperon | In particle physics, a hyperon is any baryon containing one or more strange quarks, but no charm, bottom, or top quarks. This form of matter may exist in a stable form within the core of some neutron stars. Hyperons are sometimes generically represented by the symbol Y.
History and research
The first research into hyperons happened in the 1950s and spurred physicists on to the creation of an organized classification of particles.
The term was coined by French physicist Louis Leprince-Ringuet in 1953, and announced for the first time at the cosmic ray conference at Bagnères de Bigorre in July of that year, agreed upon by Leprince-Ringuet, Bruno Rossi, C.F. Powell, William B. Fretter and Bernard Peters.
Today, research in this area is carried out on data taken at many facilities around the world, including CERN, Fermilab, SLAC, JLAB, Brookhaven National Laboratory, KEK, GSI and others. Physics topics include searches for CP violation, measurements of spin, studies of excited states (commonly referred to as spectroscopy), and hunts for exotic forms such as pentaquarks and dibaryons.
Properties and behavior
Being baryons, all hyperons are fermions. That is, they have half-integer spin and obey Fermi–Dirac statistics. Hyperons all interact via the strong nuclear force, making them types of hadron. They are composed of three light quarks, at least one of which is a strange quark, which makes them strange baryons.
Excited hyperon resonances and ground-state hyperons with a '*' included in their notation decay via the strong interaction. For Ω⁻ as well as the lighter hyperons this decay mode is not possible given the particle masses and the conservation of flavor and isospin necessary in strong interactions. Instead, these decay weakly with non-conserved parity. An exception to this is the Σ⁰ which decays electromagnetically into Λ on account of carrying the same flavor quantum numbers. The type of interaction through which these decays occur determine the average lifetime, which is why weakly decaying hyperons are significantly more long-lived than those that decay through strong or electromagnetic interactions.
List
| Physical sciences | Fermions | Physics |
1129074 | https://en.wikipedia.org/wiki/Grand%20canonical%20ensemble | Grand canonical ensemble | In statistical mechanics, the grand canonical ensemble (also known as the macrocanonical ensemble) is the statistical ensemble that is used to represent the possible states of a mechanical system of particles that are in thermodynamic equilibrium (thermal and chemical) with a reservoir. The system is said to be open in the sense that the system can exchange energy and particles with a reservoir, so that various possible states of the system can differ in both their total energy and total number of particles. The system's volume, shape, and other external coordinates are kept the same in all possible states of the system.
The thermodynamic variables of the grand canonical ensemble are chemical potential (symbol: $\mu$) and absolute temperature (symbol: $T$). The ensemble is also dependent on mechanical variables such as volume (symbol: $V$), which influence the nature of the system's internal states. This ensemble is therefore sometimes called the $\mu VT$ ensemble, as each of these three quantities is a constant of the ensemble.
Basics
In simple terms, the grand canonical ensemble assigns a probability $P$ to each distinct microstate given by the following exponential:

$P = e^{(\Omega + \mu N - E)/(k_B T)},$

where $N$ is the number of particles in the microstate and $E$ is the total energy of the microstate. $k_B$ is the Boltzmann constant.
The number $\Omega$ is known as the grand potential and is constant for the ensemble. However, the probabilities and $\Omega$ will vary if different $\mu, V, T$ are selected. The grand potential $\Omega$ serves two roles: to provide a normalization factor for the probability distribution (the probabilities, over the complete set of microstates, must add up to one); and, many important ensemble averages can be directly calculated from the function $\Omega(\mu, V, T)$.
In the case where more than one kind of particle is allowed to vary in number, the probability expression generalizes to

$P = e^{(\Omega + \mu_1 N_1 + \mu_2 N_2 + \cdots + \mu_s N_s - E)/(k_B T)},$

where $\mu_1$ is the chemical potential for the first kind of particles, $N_1$ is the number of that kind of particle in the microstate, $\mu_2$ is the chemical potential for the second kind of particles and so on ($s$ is the number of distinct kinds of particles). However, these particle numbers should be defined carefully (see the note on particle number conservation below).
The distribution of the grand canonical ensemble is called generalized Boltzmann distribution by some authors.
Grand ensembles are apt for use when describing systems such as the electrons in a conductor, or the photons in a cavity, where the shape is fixed but the energy and number of particles can easily fluctuate due to contact with a reservoir (e.g., an electrical ground or a dark surface, in these cases). The grand canonical ensemble provides a natural setting for an exact derivation of the Fermi–Dirac statistics or Bose–Einstein statistics for a system of non-interacting quantum particles (see examples below).
Note on formulation
An alternative formulation for the same concept writes the probability as $P = \tfrac{1}{\mathcal{Z}}\, e^{(\mu N - E)/(k_B T)}$, using the grand partition function $\mathcal{Z} = \sum_{\text{microstates}} e^{(\mu N - E)/(k_B T)} = e^{-\Omega/(k_B T)}$ rather than the grand potential. The equations in this article (in terms of grand potential) may be restated in terms of the grand partition function by simple mathematical manipulations.
Applicability
The grand canonical ensemble is the ensemble that describes the possible states of an isolated system that is in thermal and chemical equilibrium with a reservoir (the derivation proceeds along lines analogous to the heat bath derivation of the normal canonical ensemble, and can be found in Reif). The grand canonical ensemble applies to systems of any size, small or large; it is only necessary to assume that the reservoir with which it is in contact is much larger (i.e., to take the macroscopic limit).
The condition that the system is isolated is necessary in order to ensure it has well-defined thermodynamic quantities and evolution. In practice, however, it is desirable to apply the grand canonical ensemble to describe systems that are in direct contact with the reservoir, since it is that contact that ensures the equilibrium. The use of the grand canonical ensemble in these cases is usually justified either 1) by assuming that the contact is weak, or 2) by incorporating a part of the reservoir connection into the system under analysis, so that the connection's influence on the region of interest is correctly modeled. Alternatively, theoretical approaches can be used to model the influence of the connection, yielding an open statistical ensemble.
Another case in which the grand canonical ensemble appears is when considering a system that is large and thermodynamic (a system that is "in equilibrium with itself"). Even if the exact conditions of the system do not actually allow for variations in energy or particle number, the grand canonical ensemble can be used to simplify calculations of some thermodynamic properties. The reason for this is that various thermodynamic ensembles (microcanonical, canonical) become equivalent in some aspects to the grand canonical ensemble, once the system is very large. Of course, for small systems, the different ensembles are no longer equivalent even in the mean. As a result, the grand canonical ensemble can be highly inaccurate when applied to small systems of fixed particle number, such as atomic nuclei.
Properties
Grand potential, ensemble averages, and exact differentials
The partial derivatives of the function $\Omega(\mu, V, T)$ give important grand canonical ensemble average quantities (standard forms of these and the following relations are summarized in the sketch after this list):
Exact differential: From the above expressions, it can be seen that the function has the exact differential
First law of thermodynamics: Substituting the above relationship for into the exact differential of , an equation similar to the first law of thermodynamics is found, except with average signs on some of the quantities:
Thermodynamic fluctuations: The variances in energy and particle numbers are
Correlations in fluctuations: The covariances of particle numbers and energy are
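For reference, the relations referred to in the list above have the following standard textbook forms (written here with $\langle N\rangle$ the average particle number, $S$ the entropy and $\langle P\rangle$ the average pressure; these expressions are supplied as well-known results rather than reproduced from the article's original layout):

```latex
\begin{gather*}
\langle N\rangle = -\left(\frac{\partial\Omega}{\partial\mu}\right)_{T,V},\qquad
S = -\left(\frac{\partial\Omega}{\partial T}\right)_{\mu,V},\qquad
\langle P\rangle = -\left(\frac{\partial\Omega}{\partial V}\right)_{\mu,T},\\
d\Omega = -S\,dT - \langle N\rangle\,d\mu - \langle P\rangle\,dV,\\
\langle(\Delta N)^2\rangle = k_B T\left(\frac{\partial\langle N\rangle}{\partial\mu}\right)_{T,V},\qquad
\langle\Delta N\,\Delta E\rangle = k_B T\left(\frac{\partial\langle E\rangle}{\partial\mu}\right)_{T,V},\\
\langle(\Delta E)^2\rangle = k_B T^{2}\left(\frac{\partial\langle E\rangle}{\partial T}\right)_{\mu,V}
  + k_B T\,\mu\left(\frac{\partial\langle E\rangle}{\partial\mu}\right)_{T,V}.
\end{gather*}
```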
Example ensembles
The usefulness of the grand canonical ensemble is illustrated in the examples below. In each case the grand potential is calculated on the basis of the relationship

$\Omega = -k_B T \ln\!\left(\sum_{\text{microstates}} e^{(\mu N - E)/(k_B T)}\right),$

which is required for the microstates' probabilities to add up to 1.
Statistics of noninteracting particles
Bosons and fermions (quantum)
In the special case of a quantum system of many non-interacting particles, the thermodynamics are simple to compute.
Since the particles are non-interacting, one can compute a series of single-particle stationary states, each of which represent a separable part that can be included into the total quantum state of the system.
For now let us refer to these single-particle stationary states as orbitals (to avoid confusing these "states" with the total many-body state), with the provision that each possible internal particle property (spin or polarization) counts as a separate orbital.
Each orbital may be occupied by a particle (or particles), or may be empty.
Since the particles are non-interacting, we may take the viewpoint that each orbital forms a separate thermodynamic system.
Thus each orbital is a grand canonical ensemble unto itself, one so simple that its statistics can be immediately derived here. Focusing on just one orbital labelled $i$, the total energy for a microstate of $N$ particles in this orbital will be $N\epsilon_i$, where $\epsilon_i$ is the characteristic energy level of that orbital. The grand potential for the orbital is given by one of two forms, depending on whether the orbital is bosonic or fermionic:

$\Omega_i = k_B T \ln\!\left(1 - e^{(\mu - \epsilon_i)/(k_B T)}\right)$ for bosons ($N$ may be any non-negative integer, which requires $\mu < \epsilon_i$), and

$\Omega_i = -k_B T \ln\!\left(1 + e^{(\mu - \epsilon_i)/(k_B T)}\right)$ for fermions ($N$ can only be 0 or 1).

In each case the value $\langle N_i\rangle = -\partial\Omega_i/\partial\mu$ gives the thermodynamic average number of particles on the orbital: the Fermi–Dirac distribution for fermions, and the Bose–Einstein distribution for bosons.
Considering again the entire system, the total grand potential is found by adding up the $\Omega_i$ for all orbitals.
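As a numerical illustration of this orbital-by-orbital treatment (a sketch with made-up orbital energies and chemical potential, not a calculation from the article), the following code evaluates the per-orbital grand potential and the resulting Fermi–Dirac or Bose–Einstein occupation numbers, then sums over the orbitals:

```python
import math

kB = 8.617333262e-5  # Boltzmann constant in eV/K

def orbital_grand_potential(eps, mu, T, fermion=True):
    """Grand potential of a single orbital with energy eps (eV)."""
    z = math.exp((mu - eps) / (kB * T))
    if fermion:
        return -kB * T * math.log(1.0 + z)   # occupation 0 or 1
    if z >= 1.0:
        raise ValueError("Bose orbital requires mu < eps")
    return kB * T * math.log(1.0 - z)        # occupation 0, 1, 2, ...

def occupation(eps, mu, T, fermion=True):
    """Average particle number on the orbital (Fermi-Dirac or Bose-Einstein)."""
    x = (eps - mu) / (kB * T)
    return 1.0 / (math.exp(x) + (1.0 if fermion else -1.0))

# Hypothetical orbital energies (eV); the total grand potential is the sum over orbitals.
orbitals = [0.0, 0.05, 0.10, 0.20]
mu, T = 0.08, 300.0
total_Omega = sum(orbital_grand_potential(e, mu, T, fermion=True) for e in orbitals)
total_N = sum(occupation(e, mu, T, fermion=True) for e in orbitals)
print(total_Omega, total_N)
```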
Indistinguishable classical particles
In classical mechanics it is also possible to consider indistinguishable particles (in fact, indistinguishability is a prerequisite for defining a chemical potential in a consistent manner; all particles of a given kind must be interchangeable). We can consider a region of the single-particle phase space with approximately uniform energy to be an "orbital" labelled $i$.
Two complications arise since this orbital actually encompasses many (infinite) distinct states. Briefly:
An overcounting correction of $N!$ is needed since the many-particle phase space contains $N!$ copies of the same actual state (formed by the permutation of the particles' different exact states).
The chosen width of the orbital is arbitrary, thus there is a further proportionality factor that is independent of .
Due to the overcounting correction, the summation now takes the form of an exponential power series,

$\Omega_i = -k_B T \ln\!\left(\sum_{N=0}^{\infty} \frac{1}{N!}\, e^{N(\mu - \epsilon_i)/(k_B T)}\right) = -k_B T\, e^{(\mu - \epsilon_i)/(k_B T)},$

the average occupation $\langle N_i\rangle = -\partial\Omega_i/\partial\mu = e^{(\mu - \epsilon_i)/(k_B T)}$ being the value corresponding to Maxwell–Boltzmann statistics.
Ionization of an isolated atom
The grand canonical ensemble can be used to predict whether an atom prefers to be in a neutral state or ionized state.
An atom is able to exist in ionized states with more or fewer electrons compared to neutral. As shown below, ionized states may be thermodynamically preferred depending on the environment.
Consider a simplified model where the atom can be in a neutral state or in one of two ionized states (a detailed calculation also includes the degeneracy factors of the states):
charge neutral state, with $n_0$ electrons and energy $E_0$.
an oxidized state ($n_0 - 1$ electrons) with energy $E_0 + \Delta E_{\mathrm{I}} + q\phi$
a reduced state ($n_0 + 1$ electrons) with energy $E_0 - \Delta E_{\mathrm{A}} - q\phi$
Here $\Delta E_{\mathrm{I}}$ and $\Delta E_{\mathrm{A}}$ are the atom's ionization energy and electron affinity, respectively; $\phi$ is the local electrostatic potential in the vacuum nearby the atom, and $-q$ is the electron charge.
The grand potential in this case is thus determined by
The quantity $-q\phi - \mu$ is critical in this case, for determining the balance between the various states. This value is determined by the environment around the atom.
If one of these atoms is placed into a vacuum box, then $-q\phi - \mu = W$, the work function of the box lining material. Comparing the tables of work function for various solid materials with the tables of electron affinity and ionization energy for atomic species, it is clear that many combinations would result in a neutral atom, however some specific combinations would result in the atom preferring an ionized state: e.g., a halogen atom in a ytterbium box, or a cesium atom in a tungsten box. At room temperature this situation is not stable since the atom tends to adsorb to the exposed lining of the box instead of floating freely. At high temperatures, however, the atoms are evaporated from the surface in ionic form; this spontaneous surface ionization effect has been used as a cesium ion source.
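The balance between the three charge states can be made quantitative with a short calculation. The sketch below is illustrative only (the function name and the numerical inputs are invented examples, not values from the article): it compares the relative statistical weights $e^{(\mu N - E)/(k_B T)}$ of the neutral, oxidized and reduced states when $-q\phi - \mu$ equals a given work function $W$.

```python
import math

kB = 8.617333262e-5  # Boltzmann constant in eV/K

def charge_state_probabilities(E_ionization, E_affinity, W, T):
    """Relative probabilities of neutral / oxidized / reduced states of an atom.

    Relative to the neutral state, losing an electron costs E_ionization - W
    and gaining one costs W - E_affinity, where W = -q*phi - mu (all in eV)."""
    kT = kB * T
    weights = {
        "neutral": 1.0,
        "oxidized": math.exp(-(E_ionization - W) / kT),
        "reduced": math.exp(-(W - E_affinity) / kT),
    }
    total = sum(weights.values())
    return {state: w / total for state, w in weights.items()}

# Hypothetical example: an atom with ionization energy 3.9 eV and electron affinity
# 0.5 eV inside a box whose lining has a work function of 4.5 eV, at 1200 K.
print(charge_state_probabilities(E_ionization=3.9, E_affinity=0.5, W=4.5, T=1200.0))
# The oxidized (ionized) state dominates, since E_ionization - W is negative here.
```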
At room temperature, this example finds application in semiconductors, where the ionization of a dopant atom is well described by this ensemble. In the semiconductor, the conduction band edge plays the role of the vacuum energy level (replacing ), and is known as the Fermi level. Of course, the ionization energy and electron affinity of the dopant atom are strongly modified relative to their vacuum values. A typical donor dopant in silicon, phosphorus, has ;
the value of in the intrinsic silicon is initially about , guaranteeing the ionization of the dopant.
The value of depends strongly on electrostatics, however, so under some circumstances it is possible to de-ionize the dopant.
Meaning of chemical potential, generalized "particle number"
In order for a particle number to have an associated chemical potential, it must be conserved during the internal dynamics of the system, and only able to change when the system exchanges particles with an external reservoir.
If the particles can be created out of energy during the dynamics of the system, then an associated $\mu N$ term must not appear in the probability expression for the grand canonical ensemble. In effect, this is the same as requiring that $\mu = 0$ for that kind of particle. Such is the case for photons in a black cavity, whose number regularly changes due to absorption and emission on the cavity walls. (On the other hand, photons in a highly reflective cavity can be conserved and caused to have a nonzero $\mu$.)
In some cases the number of particles is not conserved and the $N$ represents a more abstract conserved quantity:
Chemical reactions: Chemical reactions can convert one type of molecule to another; if reactions occur then the particle numbers $N_i$ must be defined such that they do not change during the chemical reaction.
High energy particle physics: Ordinary particles can be spawned out of pure energy, if a corresponding antiparticle is created. If this sort of process is allowed, then neither the number of particles nor antiparticles are conserved. Instead, the number of particles minus the number of antiparticles is conserved. As particle energies increase, there are more possibilities to convert between particle types, and so there are fewer numbers that are truly conserved. At the very highest energies, the only conserved numbers are electric charge, weak isospin, and baryon–lepton number difference.
On the other hand, in some cases a single kind of particle may have multiple conserved numbers:
Closed compartments: In a system composed of multiple compartments that share energy but do not share particles, it is possible to set the chemical potentials separately for each compartment. For example, a capacitor is composed of two isolated conductors and is charged by applying a difference in electron chemical potential.
Slow equilibration: In some quasi-equilibrium situations it is possible to have two distinct populations of the same kind of particle in the same location, which are each equilibrated internally but not with each other. Though not strictly in equilibrium, it may be useful to name quasi-equilibrium chemical potentials which can differ among the different populations. Examples: (semiconductor physics) distinct quasi-Fermi levels (electron chemical potentials) in the conduction band and valence band; (spintronics) distinct spin-up and spin-down chemical potentials; (cryogenics) distinct parahydrogen and orthohydrogen chemical potentials.
Precise expressions for the ensemble
The precise mathematical expression for statistical ensembles has a distinct form depending on the type of mechanics under consideration (quantum or classical), as the notion of a "microstate" is considerably different. In quantum mechanics, the grand canonical ensemble affords a simple description since diagonalization provides a set of distinct microstates of a system, each with well-defined energy and particle number. The classical mechanical case is more complex as it involves not stationary states but instead an integral over canonical phase space.
Quantum mechanical
A statistical ensemble in quantum mechanics is represented by a density matrix, denoted by $\hat\rho$. The grand canonical ensemble is the density matrix

$\hat\rho = \exp\!\left(\tfrac{1}{k_B T}\bigl(\Omega + \mu_1 \hat N_1 + \mu_2 \hat N_2 + \cdots - \hat H\bigr)\right),$

where $\hat H$ is the system's total energy operator (Hamiltonian), $\hat N_1$ is the system's total particle number operator for particles of type 1, $\hat N_2$ is the total particle number operator for particles of type 2, and so on. $\exp$ is the matrix exponential operator. The grand potential $\Omega$ is determined by the probability normalization condition that the density matrix has a trace of one, $\operatorname{Tr}\hat\rho = 1$:

$\Omega = -k_B T \ln \operatorname{Tr}\exp\!\left(\tfrac{1}{k_B T}\bigl(\mu_1 \hat N_1 + \mu_2 \hat N_2 + \cdots - \hat H\bigr)\right).$
Note that for the grand ensemble, the basis states of the operators , , etc. are all states with multiple particles in Fock space, and the density matrix is defined on the same basis. Since the energy and particle numbers are all separately conserved, these operators are mutually commuting.
The grand canonical ensemble can alternatively be written in a simple form using bra–ket notation, since it is possible (given the mutually commuting nature of the energy and particle number operators) to find a complete basis of simultaneous eigenstates |ψᵢ⟩, indexed by i, where Ĥ|ψᵢ⟩ = Eᵢ|ψᵢ⟩, N̂₁|ψᵢ⟩ = N₁,ᵢ|ψᵢ⟩, and so on. Given such an eigenbasis, the grand canonical ensemble is simply
ρ̂ = Σᵢ exp[(Ω + μ₁N₁,ᵢ + μ₂N₂,ᵢ + ⋯ − Eᵢ)/(kT)] |ψᵢ⟩⟨ψᵢ|
where the sum is over the complete set of states, with state i having total energy Eᵢ, N₁,ᵢ particles of type 1, N₂,ᵢ particles of type 2, and so on.
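As a minimal numerical illustration of this sum-over-states form, the following Python sketch evaluates grand canonical probabilities for a hypothetical toy system whose microstates are labelled only by a particle number N, each particle adding a fixed energy; the energy, chemical potential and temperature values are assumed for illustration only.

```python
import numpy as np

# Toy system (assumed for illustration): a single mode holding N = 0..N_max
# identical particles, each adding an energy eps, so the microstate with N
# particles has E = N*eps.  The grand canonical weight of each microstate is
# exp((mu*N - E)/(k*T)); Omega follows from requiring the probabilities sum to 1.

k     = 1.380649e-23   # Boltzmann constant, J/K
T     = 300.0          # assumed temperature, K
eps   = 1.0e-21        # assumed energy added per particle, J
mu    = -2.0e-21       # assumed chemical potential, J
N_max = 50             # truncation of the particle-number ladder

N = np.arange(N_max + 1)
E = N * eps
weights = np.exp((mu * N - E) / (k * T))

Omega = -k * T * np.log(weights.sum())            # grand potential from normalization
P = np.exp((Omega + mu * N - E) / (k * T))        # grand canonical probabilities

print("sum of probabilities:", P.sum())           # ~1.0
print("average particle number:", (N * P).sum())  # ensemble average of N
```

Because every weight depends on the microstate only through exp((μN − E)/kT), the value of Ω recovered from the normalization plays exactly the role of the grand potential in the expressions above.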
Classical mechanical
In classical mechanics, a grand ensemble is instead represented by a joint probability density function ρ(N₁, N₂, …, p, q) defined over multiple phase spaces of varying dimensions, where the p and q are the canonical coordinates (generalized momenta and generalized coordinates) of the system's internal degrees of freedom. The expression for the grand canonical ensemble is somewhat more delicate than the canonical ensemble since:
The number of particles and thus the number of coordinates varies between the different phase spaces, and,
it is vital to consider whether permuting similar particles counts as a distinct state or not.
In a system of N particles, the number of degrees of freedom n depends on the number of particles in a way that depends on the physical situation. For example, in a three-dimensional gas of monatomic particles n = 3N; in molecular gases there will also be rotational and vibrational degrees of freedom.
The probability density function for the grand canonical ensemble is:
ρ = (1/(hⁿ C)) exp[(Ω + μ₁N₁ + μ₂N₂ + ⋯ − E)/(kT)],
where
E is the energy of the system, a function of the phase (N₁, N₂, …, p, q),
h is an arbitrary but predetermined constant with the units of energy×time, setting the extent of one microstate and providing correct dimensions to ρ,
C is an overcounting correction factor (see below), a function of N₁, N₂, ….
Again, the value of Ω is determined by demanding that ρ is a normalized probability density function:
1 = Σ over N₁, N₂, … of ∫ ρ dp dq.
This integral is taken over the entire available phase space for the given numbers of particles.
Overcounting correction
A well-known problem in the statistical mechanics of fluids (gases, liquids, plasmas) is how to treat particles that are similar or identical in nature: should they be regarded as distinguishable or not? In the system's equation of motion each particle is forever tracked as a distinguishable entity, and yet there are also valid states of the system where the positions of each particle have simply been swapped: these states are represented at different places in phase space, yet would seem to be equivalent.
If the permutations of similar particles are regarded to count as distinct states, then the factor C above is simply C = 1. From this point of view, ensembles include every permuted state as a separate microstate. Although appearing benign at first, this leads to a problem of severely non-extensive entropy in the canonical ensemble, known today as the Gibbs paradox. In the grand canonical ensemble a further logical inconsistency occurs: the number of distinguishable permutations depends not only on how many particles are in the system, but also on how many particles are in the reservoir (since the system may exchange particles with a reservoir). In this case the entropy and chemical potential are not only non-extensive but also badly defined, depending on a parameter (reservoir size) that should be irrelevant.
To solve these issues it is necessary that the exchange of two similar particles (within the system, or between the system and reservoir) not be regarded as giving a distinct state of the system. In order to incorporate this fact, integrals are still carried out over full phase space but the result is divided by
C = N₁! N₂! ⋯,
which is the number of different permutations possible. The division by C neatly corrects the overcounting that occurs in the integral over all phase space.
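As a small numerical illustration, the following Python sketch uses an assumed, illustrative single-particle weight z to show that including the 1/N! correction in the classical ideal-gas grand partition function makes its logarithm (and hence the grand potential) scale linearly with volume, which is the extensive behaviour the correction is meant to restore.

```python
import numpy as np
from math import lgamma

# Assumed toy model: a classical ideal gas, for which the grand partition
# function is Xi = sum over N of z**N / N! = exp(z), where z is a
# single-particle weight proportional to the volume.  The 1/N! overcounting
# correction is what makes log(Xi) scale linearly with volume; without it,
# the sum over N would not even converge once z exceeds 1.

def log_xi(z, n_terms=1000):
    """log of sum_N z**N / N!, evaluated with a log-sum-exp for stability."""
    log_terms = np.array([N * np.log(z) - lgamma(N + 1) for N in range(n_terms)])
    m = log_terms.max()
    return m + np.log(np.exp(log_terms - m).sum())

for volume_factor in (1, 2, 4):        # z is taken proportional to the volume
    z = 50.0 * volume_factor           # assumed single-particle weight
    print(f"z = {z:6.1f}:  log Xi = {log_xi(z):8.2f}  (analytic value: {z})")
```

Doubling the assumed volume doubles z and, with the 1/N! factor included, doubles log Xi, as the extensivity argument requires.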
It is of course possible to include distinguishable types of particles in the grand canonical ensemble: each distinguishable type i is tracked by a separate particle counter Nᵢ and chemical potential μᵢ. As a result, the only consistent way to include "fully distinguishable" particles in the grand canonical ensemble is to consider every possible distinguishable type of those particles, and to track each and every possible type with a separate particle counter and separate chemical potential.
| Physical sciences | Statistical mechanics | Physics |
1129256 | https://en.wikipedia.org/wiki/Cupressus%20sempervirens | Cupressus sempervirens | Cupressus sempervirens, the Mediterranean cypress (also known as Italian cypress, Tuscan cypress, Persian cypress, or pencil pine), is a species of cypress native to the eastern Mediterranean region and Iran. While some studies show it has modern medicinal properties, it is most noted for uses in folk medicine, where the dried leaves of the plant are used to treat various ailments. It is well-adapted to the environmental conditions that it lives in due to its ability to survive in both acidic and alkaline soils and withstand drought. Cupressus sempervirens is widely present in culture, most notably in Iran, where it is both a sacred tree and a metaphor for "the graceful figure of the beloved".
Description
Cupressus sempervirens is a medium-sized coniferous evergreen tree growing up to 35 m (115 ft) tall, with a conic crown with level branches and variably loosely hanging branchlets. It is very long-lived, with some trees reported to be over 1,000 years old.
Cupressus sempervirens produces lateral shoots, or branches, which often grow upwards towards a light source. The foliage grows in dense, dark green sprays. The leaves are scale-like, 2–5 mm long, and produced on rounded (not flattened) shoots. The seed cones are ovoid or oblong and 25–40 mm long. The cones have 10–14 scales, which are green at first and mature to brown about 20–24 months after pollination. The male cones are 3–5 mm long and release highly allergenic pollen in late winter. The cones of C. sempervirens can withstand years of being sealed and are known to exhibit serotiny. The tree is moderately susceptible to cypress canker, caused by the fungus Seiridium cardinale, and can suffer extensive dieback where this disease is common. The species name sempervirens comes from the Latin for 'evergreen'.
Uses
C. sempervirens has been widely cultivated as an ornamental tree for millennia outside of its native range, mainly throughout the Mediterranean region and in other areas with similar hot, dry summers and mild, rainy winters, including California, southwest South Africa, and southern Australia. It can also be grown successfully in areas with cooler, moister summers, such as the British Isles, New Zealand, and the Pacific Northwest. It is also planted in Florida and parts of the coastal southern United States as an ornamental tree. In some areas, particularly the United States, it is known as "Italian" or "Tuscan cypress". Within its native range, C. sempervirens has historically been planted in gardens and cemeteries and used as a windbreak alongside roads. The tree can also prevent damage to land caused by violent weather.
The vast majority of the trees in cultivation are selected cultivars with a fastigiate crown, with erect branches forming a narrow to very narrow crown often less than a tenth as wide as the tree is tall. The dark green "exclamation mark" shape of these trees is a highly characteristic signature of Mediterranean town and village landscapes.
In cosmetics, it is used as an astringent, for firming, as an anti-seborrheic and anti-dandruff agent, for anti-aging, and as a fragrance. It is also the traditional wood used for Italian harpsichords.
Dried seeds of Cupressus sempervirens are sometimes used to help people control skin conditions such as acne and to heal cuts or scrapes. The oil from the leaves of the plant can aid in recovery from minor ailments like nasal congestion.
Habitats
Cupressus sempervirens grows primarily in places with wet winters and hot summers; in the spring and autumn, the tree grows out its roots, stems, and leaves. Like most plants, Cupressus sempervirens requires light for such growth. Because the tree must survive wet winters and hot, dry summers, its roots are adapted to be stout and shallow for easier gathering of the nutrients in the soil. The roots of Cupressus sempervirens are adapted to function in both acidic and alkaline soils.
In culture
Iran
In Persian, C. sempervirens is called the "Graceful Cypress" (sarv-e nāz), and has a strong presence in culture, poetry and gardens. It bears several metaphors, including the "graceful figure and stately gait of [the] beloved". Iranians considered cypress to be a relic of Zoroaster. A Zoroastrian tradition recorded by Daqiqi maintains that King Vishtaspa, after converting to Zoroastrianism, ordered a cypress brought from paradise by Zoroaster to be planted near the first fire temple.
The oldest living cypress is the Sarv-e-Abarkooh in Iran's Yazd Province. Its age is estimated to be approximately 4,000 years.
Symbolism
In classical antiquity, the cypress was a symbol of mourning, and in the modern era, it remains the principal cemetery tree in both the Muslim world and Europe. In the classical tradition, the cypress was associated with death and the underworld because it failed to regenerate when cut back too severely. Athenian households in mourning were garlanded with boughs of cypress. Cypress was used to fumigate the air during cremations. It was among the plants that were suitable for making wreaths to adorn statues of Pluto, the classical ruler of the underworld.
The poet Ovid, who wrote during the reign of Augustus, records the best-known myth that explains the association of the cypress with grief. The handsome boy Cyparissus, a favorite of Apollo, accidentally killed a beloved tame stag. His grief and remorse were so inconsolable that he asked to weep forever. He was transformed into a cypress tree, with the tree's sap as his tears. In another version of the story, it was the woodland god Silvanus who was the divine companion of Cyparissus and accidentally killed the stag. When the boy was consumed by grief, Silvanus turned him into a tree and thereafter carried a branch of cypress as a symbol of mourning.
In Jewish tradition, cypress is held to be the wood used to build Noah's Ark and Solomon's Temple, and is mentioned as an idiom or metaphor in biblical passages, either referencing the tree's shape as an example of uprightness or its evergreen nature as an example of eternal beauty or health. The tree features in classical Aramaic writings.
In popular culture, C. sempervirens is often stereotypically associated with vacation destinations in the Mediterranean region, especially Italy. The tree has been seen on travel posters for decades.
Other characteristics
In July 2012, a forest fire lasting five days burned 20,000 hectares of forest in the Valencian village of Andilla. However, a group of 946 cypress trees about 22 years old was virtually unharmed, and only 12 cypresses were burned. The Andilla cypresses had been planted by the CypFire European project to study various aspects of the cypresses, including fire resistance.
| Biology and health sciences | Cupressaceae | Plants |
1129919 | https://en.wikipedia.org/wiki/Metallicity | Metallicity | In astronomy, metallicity is the abundance of elements present in an object that are heavier than hydrogen and helium. Most of the normal currently detectable (i.e. non-dark) matter in the universe is either hydrogen or helium, and astronomers use the word "metals" as convenient shorthand for "all elements except hydrogen and helium". This word-use is distinct from the conventional chemical or physical definition of a metal as an electrically conducting solid. Stars and nebulae with relatively high abundances of heavier elements are called "metal-rich" when discussing metallicity, even though many of those elements are called nonmetals in chemistry.
Metals in early spectroscopy
In 1802, William Hyde Wollaston noted the appearance of a number of dark features in the solar spectrum. In 1814, Joseph von Fraunhofer independently rediscovered the lines and began to systematically study and measure their wavelengths, and they are now called Fraunhofer lines. He mapped over 570 lines, designating the most prominent with the letters A through K and weaker lines with other letters.
About 45 years later, Gustav Kirchhoff and Robert Bunsen noticed that several Fraunhofer lines coincide with characteristic emission lines identified in the spectra of heated chemical elements. They inferred that the dark lines in the solar spectrum are caused by absorption by chemical elements in the solar atmosphere. Their observations were in the visible range, where the strongest lines come from metals such as sodium, potassium, and iron. In the early work on the chemical composition of the Sun, the only elements detected in spectra were hydrogen and various metals, with the term "metallic" frequently used when describing them. In contemporary astronomical usage, all elements beyond hydrogen and helium are termed "metals".
Origin of metallic elements
The presence of heavier elements results from stellar nucleosynthesis, where the majority of elements heavier than hydrogen and helium in the Universe (metals, hereafter) are formed in the cores of stars as they evolve. Over time, stellar winds and supernovae deposit the metals into the surrounding environment, enriching the interstellar medium and providing recycling materials for the birth of new stars. It follows that older generations of stars, which formed in the metal-poor early Universe, generally have lower metallicities than those of younger generations, which formed in a more metal-rich Universe.
Stellar populations
Observed changes in the chemical abundances of different types of stars, based on the spectral peculiarities that were later attributed to metallicity, led astronomer Walter Baade in 1944 to propose the existence of two different populations of stars.
These became commonly known as population I (metal-rich) and population II (metal-poor) stars. A third, earliest stellar population was hypothesized in 1978, known as population III stars. These "extremely metal-poor" (XMP) stars are theorized to have been the "first-born" stars created in the Universe.
Common methods of calculation
Astronomers use several different methods to describe and approximate metal abundances, depending on the available tools and the object of interest. Some methods include determining the fraction of mass that is attributed to gas versus metals, or measuring the ratios of the number of atoms of two different elements as compared to the ratios found in the Sun.
Mass fraction
Stellar composition is often simply defined by the parameters X, Y, and Z. Here X represents the mass fraction of hydrogen, Y is the mass fraction of helium, and Z is the mass fraction of all the remaining chemical elements. Thus
X + Y + Z = 1.
In most stars, nebulae, H II regions, and other astronomical sources, hydrogen and helium are the two dominant elements. The hydrogen mass fraction is generally expressed as X ≡ m_H/M, where M is the total mass of the system, and m_H is the mass of the hydrogen it contains. Similarly, the helium mass fraction is denoted as Y ≡ m_He/M. The remainder of the elements are collectively referred to as "metals", and the mass fraction of metals is calculated as
Z = 1 − X − Y.
For the surface of the Sun (symbol ⊙), these parameters are measured to have approximately the following values: X⊙ ≈ 0.74, Y⊙ ≈ 0.25, and Z⊙ ≈ 0.013.
Due to the effects of stellar evolution, neither the initial composition nor the present day bulk composition of the Sun is the same as its present-day surface composition.
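As a simple worked illustration of this bookkeeping, the following Python sketch computes the metal mass fraction from assumed, rounded hydrogen and helium fractions; the solar numbers used are approximate and serve only as an example.

```python
# Illustrative sketch: the mass fractions X, Y, Z must sum to one, so the metal
# fraction follows from the hydrogen and helium fractions.  The solar values
# below are rounded, approximate figures used only as an example.

X_sun = 0.74          # approximate hydrogen mass fraction at the solar surface
Y_sun = 0.25          # approximate helium mass fraction
Z_sun = 1.0 - X_sun - Y_sun
print(f"Z_sun ~ {Z_sun:.3f}")   # roughly 0.01: about 1% of the surface mass is "metals"

# The same bookkeeping works for any star or gas cloud, given component masses
# in arbitrary but consistent units:
def metal_fraction(m_hydrogen, m_helium, m_total):
    """Z = 1 - X - Y computed from component masses."""
    return 1.0 - m_hydrogen / m_total - m_helium / m_total

print(metal_fraction(m_hydrogen=7.4, m_helium=2.5, m_total=10.0))   # -> ~0.01
```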
Chemical abundance ratios
The overall stellar metallicity is conventionally defined using the total hydrogen content, since its abundance is considered to be relatively constant in the Universe, or the iron content of the star, which has an abundance that is generally linearly increasing in time in the Universe.
Hence, iron can be used as a chronological indicator of nucleosynthesis. Iron is relatively easy to measure with spectroscopic observations given the large number of iron lines in a star's spectrum (even though oxygen is the most abundant heavy element – see metallicities in H II regions below). The abundance ratio [Fe/H] is the common logarithm of the ratio of a star's iron abundance compared to that of the Sun and is calculated thus:
[Fe/H] = log₁₀(N_Fe/N_H)★ − log₁₀(N_Fe/N_H)⊙
where N_Fe and N_H are the number of iron and hydrogen atoms per unit of volume respectively, ⊙ is the standard symbol for the Sun, and ★ for a star (often omitted below). The unit often used for metallicity is the dex, a contraction of "decimal exponent". By this formulation, stars with a higher metallicity than the Sun have a positive common logarithm, whereas those more dominated by hydrogen have a corresponding negative value. For example, stars with a [Fe/H] value of +1 have 10 times the metallicity of the Sun (10¹); conversely, those with a value of −1 have one tenth (10⁻¹), while those with a value of 0 have the same metallicity as the Sun, and so on.
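The following Python sketch illustrates the arithmetic of the [Fe/H] ratio using made-up number densities; the assumed solar iron-to-hydrogen ratio is only a placeholder for the calibrated solar value.

```python
import numpy as np

# Illustrative sketch: computing [Fe/H] from assumed iron and hydrogen number
# densities for a star and for the Sun.  The densities are made-up values
# chosen only to show the arithmetic.

def fe_h(n_fe_star, n_h_star, n_fe_sun, n_h_sun):
    """[Fe/H] = log10(N_Fe/N_H)_star - log10(N_Fe/N_H)_sun, in dex."""
    return np.log10(n_fe_star / n_h_star) - np.log10(n_fe_sun / n_h_sun)

ratio_sun = 3.2e-5   # hypothetical (N_Fe/N_H) ratio taken for the Sun
print(fe_h(0.1 * ratio_sun, 1.0, ratio_sun, 1.0))   # -> -1.0 dex (metal-poor star)
print(fe_h(10  * ratio_sun, 1.0, ratio_sun, 1.0))   # -> +1.0 dex (metal-rich star)
```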
Young population I stars have significantly higher iron-to-hydrogen ratios than older population II stars. Primordial population III stars are estimated to have metallicity less than −6, a millionth of the abundance of iron in the Sun.
The same notation is used to express variations in abundances between other individual elements as compared to solar proportions. For example, the notation [O/Fe] represents the difference in the logarithm of the star's oxygen abundance versus its iron content compared to that of the Sun. In general, a given stellar nucleosynthetic process alters the proportions of only a few elements or isotopes, so a star or gas sample with certain abundance ratios may well be indicative of an associated, studied nuclear process.
Photometric colors
Astronomers can estimate metallicities through measured and calibrated systems that correlate photometric measurements and spectroscopic measurements (see also Spectrophotometry). For example, the Johnson UBV filters can be used to detect an ultraviolet (UV) excess in stars,
where a smaller UV excess indicates a larger presence of metals that absorb the UV radiation, thereby making the star appear "redder".
The UV excess, δ(U−B), is defined as the difference between a star's U and B band magnitudes, compared to the difference between the U and B band magnitudes of metal-rich stars in the Hyades cluster.
Unfortunately, δ(U−B) is sensitive to both metallicity and temperature: if two stars are equally metal-rich, but one is cooler than the other, they will likely have different δ(U−B) values (see also Blanketing effect).
To help mitigate this degeneracy, a star's B−V color index can be used as an indicator for temperature. Furthermore, the UV excess and B−V index can be corrected to relate the δ(U−B) value to iron abundances.
Other photometric systems that can be used to determine metallicities of certain astrophysical objects include the Strömgren system,
the Geneva system, the Washington system,
and the DDO system.
Metallicities in various astrophysical objects
Stars
At a given mass and age, a metal-poor star will be slightly warmer. Population II stars' metallicities are roughly 1/1000 to 1/10 of the Sun's, but the group appears cooler than population I overall, as heavy population II stars have long since died. Above 40 solar masses, metallicity influences how a star will die: outside the pair-instability window, lower metallicity stars will collapse directly to a black hole, while higher metallicity stars undergo a type Ib/c supernova and may leave a neutron star.
Relationship between stellar metallicity and planets
A star's metallicity measurement is one parameter that helps determine whether a star may have a giant planet, as there is a direct correlation between metallicity and the presence of a giant planet. Measurements have demonstrated the connection between a star's metallicity and gas giant planets, like Jupiter and Saturn. The more metals in a star and thus its planetary system and protoplanetary disk, the more likely the system may have gas giant planets. Current models show that the metallicity along with the correct planetary system temperature and distance from the star are key to planet and planetesimal formation. For two stars that have equal age and mass but different metallicity, the less metallic star is bluer. Among stars of the same color, less metallic stars emit more ultraviolet radiation. The Sun, with eight planets and nine consensus dwarf planets, is used as the reference, with a [Fe/H] of 0.00.
H II regions
Young, massive and hot stars (typically of spectral types O and B) in H II regions emit UV photons that ionize ground-state hydrogen atoms, knocking electrons free; this process is known as photoionization. The free electrons can strike other atoms nearby, exciting bound metallic electrons into a metastable state, which eventually decays back to the ground state, emitting photons with energies that correspond to forbidden lines. Through these transitions, astronomers have developed several observational methods to estimate metal abundances in H II regions, where the stronger the forbidden lines in spectroscopic observations, the higher the metallicity. These methods are dependent on one or more of the following: the variety of asymmetrical densities inside H II regions, the varied temperatures of the embedded stars, and/or the electron density within the ionized region.
Theoretically, to determine the total abundance of a single element in an H II region, all transition lines should be observed and summed. However, this can be observationally difficult due to variation in line strength. Some of the most common forbidden lines used to determine metal abundances in H II regions are from oxygen (e.g. [O II] λ = (3727, 7318, 7324) Å, and [O III] λ = (4363, 4959, 5007) Å), nitrogen (e.g. [N II] λ = (5755, 6548, 6584) Å), and sulfur (e.g. [S II] λ = (6717, 6731) Å and [S III] λ = (6312, 9069, 9531) Å) in the optical spectrum, and the [O III] λ = (52, 88) μm and [N III] λ = 57 μm lines in the infrared spectrum. Oxygen has some of the stronger, more abundant lines in H II regions, making it a main target for metallicity estimates within these objects. To calculate metal abundances in H II regions using oxygen flux measurements, astronomers often use the R23 method, in which
R23 = ([O II] λ3727 + [O III] λ4959 + [O III] λ5007) / Hβ,
where R23 is the sum of the fluxes from oxygen emission lines measured at the rest-frame λ = (3727, 4959 and 5007) Å wavelengths, divided by the flux from the Balmer series Hβ emission line at the rest-frame λ = 4861 Å wavelength.
This ratio is well defined through models and observational studies, but caution should be taken, as the ratio is often degenerate, providing both a low- and a high-metallicity solution; the degeneracy can be broken with additional line measurements.
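A short Python sketch of the R23 bookkeeping is given below, using assumed, arbitrary line fluxes; converting the resulting ratio into an oxygen abundance would require one of the separately calibrated relations mentioned above, which is not reproduced here.

```python
import numpy as np

# Illustrative sketch: forming the R23 ratio from assumed, made-up emission-line
# fluxes (arbitrary units).

flux = {
    "OII_3727":   2.1,   # hypothetical [O II] 3727 A flux
    "OIII_4959":  0.9,   # hypothetical [O III] 4959 A flux
    "OIII_5007":  2.7,   # hypothetical [O III] 5007 A flux
    "Hbeta_4861": 1.0,   # hypothetical H-beta flux used for normalization
}

R23 = (flux["OII_3727"] + flux["OIII_4959"] + flux["OIII_5007"]) / flux["Hbeta_4861"]
print("R23 =", R23, " log10(R23) =", round(np.log10(R23), 3))
```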
Similarly, other strong forbidden-line ratios can be used; for example, an analogous sulfur ratio can be formed from the [S II] and [S III] line fluxes normalized to Hβ.
Metal abundances within H II regions are typically less than 1%, with the percentage decreasing on average with distance from the Galactic Center.
| Physical sciences | Stellar astronomy | null |
1130627 | https://en.wikipedia.org/wiki/Hyaline%20cartilage | Hyaline cartilage | Hyaline cartilage is the glass-like (hyaline) and translucent cartilage found on many joint surfaces. It is also most commonly found in the ribs, nose, larynx, and trachea. Hyaline cartilage is pearl-gray in color, with a firm consistency and has a considerable amount of collagen. It contains no nerves or blood vessels, and its structure is relatively simple.
Structure
Hyaline cartilage is the most common kind of cartilage in the human body. It is primarily composed of type II collagen and proteoglycans. Hyaline cartilage is located in the trachea, nose, epiphyseal plate, sternum, and ribs.
Hyaline cartilage is covered externally by a fibrous membrane known as the perichondrium. The primary cells of cartilage are chondrocytes, which are in a matrix of fibrous tissue, proteoglycans and glycosaminoglycans.
As cartilage does not have lymph glands or blood vessels, the movements of solutes, including nutrients, occur via diffusion within the fluid compartments contiguous with adjacent tissues. Cartilage gives the structures a definite but pliable form, making them strong, but with limited mobility and flexibility. Cartilage has no nerves.
Hyaline cartilage also forms the temporary embryonic skeleton, which is gradually replaced by bone, and the skeleton of elasmobranch fish.
Microanatomy
When a slice of hyaline cartilage is examined under the microscope, it is shown to consist of chondrocytes of a rounded or bluntly angular form, lying in groups of two or more in a granular, or almost homogeneous matrix. When arranged in groups of two or more, the chondrocytes have rounded, but generally straight outlines, where they are in contact with each other, and in the rest of their circumference, they are rounded.
They consist of translucent protoplasm with fine interlacing filaments and minute granules are sometimes present. Embedded in this are one or two round nuclei, having the usual intranuclear network.
The cells are contained in cavities in the matrix, called cartilage lacunae. These cavities are actually artificial gaps formed from the shrinking of the cells during the staining and setting of the tissue for examination. The inter-territorial space between the isogenous cell groups contains relatively more collagen fibers, allowing it to maintain its shape while the actual cells shrink, creating the lacunae. This constitutes the so-called 'capsule' of the space. Each lacuna is usually occupied by a single cell, but during mitosis, it may contain two, four, or even eight cells.
Articular cartilage
Articular cartilage is hyaline cartilage on the articular surfaces of bones, and lies inside the joint cavity of synovial joints, bathed in synovial fluid produced by the synovial membrane, which lines the walls of the cavity.
Though it is often found in close contact with menisci and articular disks, articular cartilage is not considered a part of either of these structures, which are made entirely of fibrocartilage.
The articular cartilage extracellular matrix has a highly specialized architecture that is zonally organized: the superficial zone consists mostly of type II collagen fibers aligned parallel to the articular surface to resist shear forces, whereas the deep zone consists of the same fibers aligned perpendicularly to the bone interface to absorb compressive loads.
The biochemical breakdown of the articular cartilage results in osteoarthritis – the most common type of joint disease. Osteoarthritis affects over 30 million individuals in the United States alone, and is the leading cause of chronic disability amongst the elderly.
Articular cartilage development begins with interzone condensation of a type II collagen positive limb bud at the future joint site. This is followed by definition of specific cellular subtypes (meniscal progenitors, articular progenitors, synovial progenitors, and ligament progenitors) that will eventually form the joint capsule. Finally, the joint capsule matures and forms a cavity, with a central meniscus, and an encasement of synovium. This final structure will form several distinct layers of the articular cartilage found in all synovial joints including the deep zone (closest to the bone), middle zone, and superficial zone (closest to the synovial fluid).
Maintenance of articular cartilage is guided by a balance of anabolic (cartilage-generating) and catabolic (cartilage-degrading) factors, in a manner similar to the maintenance of bone. Over the lifetime of the organism, anabolic and catabolic factors are generally in balance; however, as the organism ages, catabolism predominates and cartilage begins to degrade. Eventually, the loss of hyaline cartilage matrix and the reduction in the chondrocyte content of the matrix result in the development of joint disease such as osteoarthritis. Overexpression of hyaline-cartilage-specific anabolic factors, such as FGF18, appears to restore the balance between cartilage loss and generation.
Additional images
| Biology and health sciences | Tissues | Biology |
1131010 | https://en.wikipedia.org/wiki/Hylidae | Hylidae | Hylidae is a wide-ranging family of frogs commonly referred to as "tree frogs and their allies". However, the hylids include a diversity of frog species, many of which do not live in trees, but are terrestrial or semiaquatic.
Taxonomy and systematics
The earliest known fossils that can be assigned to this family are from the Cretaceous of India and the state of Wyoming in the United States.
The common name of "tree frog" is a popular name for several species of the family Hylidae. However, the name "treefrog" is not unique to this family, also being used for many species in the family Rhacophoridae.
The following genera are recognised in the family Hylidae:
Subfamily Hylinae
Tribe Cophomantini
Aplastodiscus – canebrake treefrogs
Boana – gladiator treefrogs
Bokermannohyla
Hyloscirtus
Myersiohyla
Nesorohyla
"Hyla" nicefori
Tribe Dendropsophini
Dendropsophus
Xenohyla
Tribe Hylini
Acris – cricket frogs
Atlantihyla
Bromeliohyla
Charadrahyla
Dryophytes – Ameroasian treefrogs
Duellmanohyla – brook frogs
Ecnomiohyla
Exerodonta
Hyla – common tree frogs
Isthmohyla
Megastomatohyla
Plectrohyla – spike-thumb frogs
Pseudacris – chorus frogs
Ptychohyla – stream frogs
Quilticohyla
Rheohyla – small-eared treefrog
Sarcohyla
Smilisca – burrowing frogs
Tlalocohyla
Triprion – shovel-headed tree frogs
Tribe Lophiohylini
Aparasphenodon – casque-headed frogs
Argenteohyla – Argentinian frogs
Corythomantis – casque-headed tree frog
Dryaderces
Itapotihyla
Nyctimantis – brown-eyed tree frogs
Osteocephalus – slender-legged tree frogs
Osteopilus
Phyllodytes – heart-tongued frogs
Phytotriades – Trinidad golden treefrogs
Tepuihyla – Amazon tree frogs
Trachycephalus – casque-headed tree frog
Tribe Pseudini
Lysapsus – harlequin frogs
Pseudis – swimming frogs
Scarthyla – Madre de Dios tree frogs
Tribe Scinaxini
Julianus
Ololygon (synonymous with Scinax)
Scinax – snouted tree frogs
Tribe Sphaenorhynchini
Sphaenorhynchus – lime tree frogs
Incertae sedis
"Hyla" imitator – mimic tree frog
Subfamily Pelodryadinae (Australian tree frogs)
Litoria
Nyctimystes
Ranoidea
Incertae sedis
"Litoria" castanea
"Litoria" jeudii
"Litoria" louisiadensis
"Litoria" obtusirostris
"Litoria" vagabunda
Subfamily Phyllomedusinae (leaf frogs)
Agalychnis
Callimedusa
Cruziohyla
Hylomantis – rough leaf frogs
Phasmahyla – shining leaf frogs
Phrynomedusa – colored leaf frogs
Phyllomedusa
Pithecopus
The subfamilies Pelodryadinae and Phyllomedusinae are sometimes classified as distinct families of their own due to their deep divergence and unique evolutionary history (with Pelodryadinae being the sister group to Phyllomedusinae and colonizing Australia during the Eocene via Antarctica, which at the time was not yet frozen over), but are presently retained in the Hylidae.
Description
Most hylids show adaptations suitable for an arboreal lifestyle, including forward-facing eyes providing binocular vision, and adhesive pads on the fingers and toes. In the nonarboreal species, these features may be greatly reduced, or absent.
Distribution and habitat
The European tree frog (Hyla arborea) is common in the middle and south of Europe, and its range extends into Asia and North Africa.
North America has many species of the family Hylidae, including the gray tree frog (Hyla versicolor) and the American green tree frog (H. cinerea). The spring peeper (Pseudacris crucifer) is also widespread in the eastern United States and is commonly heard on spring and summer evenings.
Behaviour and ecology
Species of the genus Cyclorana are burrowing frogs that spend much of their lives underground.
Breeding
Hylids lay their eggs in a range of different locations, depending on species. Many use ponds, or puddles that collect in the holes of their trees, while others use bromeliads or other water-holding plants. Other species lay their eggs on the leaves of vegetation hanging over water, allowing the tadpoles to drop into the pond when they hatch.
A few species use fast-flowing streams, attaching the eggs firmly to the substrate. The tadpoles of these species have suckers enabling them to hold on to rocks after they hatch. Another unusual adaptation is found in some South American hylids, which brood the eggs on the back of the female. The tadpoles of most hylid species have laterally placed eyes and broad tails with narrow, filamentous tips.
Feeding
Hylids mostly feed on insects and other invertebrates, but some larger species can feed on small vertebrates.
Gallery
| Biology and health sciences | Amphibians | null |
1131045 | https://en.wikipedia.org/wiki/Barbary%20lion | Barbary lion | The Barbary lion was a population of the lion subspecies Panthera leo leo. It was also called North African lion, Atlas lion and Egyptian lion. It lived in the mountains and deserts of the Maghreb of North Africa from Morocco to Egypt. It was eradicated following the spread of firearms and bounties for shooting lions. A comprehensive review of hunting and sighting records revealed that small groups of lions may have survived in Algeria until the early 1960s, and in Morocco until the mid-1960s. Today, it is locally extinct in this region. Fossils of the Barbary lion dating to between 100,000 and 110,000 years were found in the cave of Bizmoune near Essaouira.
Until 2017, the Barbary lion was considered a distinct lion subspecies. Results of morphological and genetic analyses of lion samples from North Africa showed that the Barbary lion does not differ significantly from the Asiatic lion and falls into the same subclade. This North African/Asian subclade is closely related to lions from West Africa and northern parts of Central Africa, and therefore grouped into the northern lion subspecies Panthera leo leo.
Characteristics
Barbary lion zoological specimens range in colour from light to dark tawny. Male lion skins had manes of varying colouration and length.
Head-to-tail length of stuffed males in zoological collections varies from , and of females around . Skull size varied from . Some manes extended over the shoulder and under the belly to the elbows. The mane hair was long.
In 19th-century hunter accounts, the Barbary lion was claimed to be the largest lion, with a weight of wild males ranging from . Yet, the accuracy of such data measured in the field is questionable. Captive Barbary lions were much smaller but kept under such poor conditions that they might not have attained their full potential size and weight.
The colour and size of lions' manes was long thought to be a sufficiently distinct morphological characteristic to accord a subspecific status to lion populations. Mane development varies with age and between individuals from different regions, and is therefore not a sufficient characteristic for subspecific identification. The size of manes is not regarded as evidence for Barbary lions' ancestry. Instead, results of mitochondrial DNA research support the genetic distinctness of Barbary lions in a unique haplotype found in museum specimens that is thought to be of Barbary lion descent. The presence of this haplotype is considered a reliable molecular marker to identify captive Barbary lions.
Barbary lions may have developed long-haired manes, because of lower temperatures in the Atlas Mountains than in other African regions, particularly in winter.
Results of a long-term study on lions in Serengeti National Park indicate that ambient temperature, nutrition and the level of testosterone influence the colour and size of lion manes.
Taxonomy
Felis leo was the scientific name proposed by Carl Linnaeus in 1758 for a type specimen from Constantine, Algeria. Following Linnaeus's description, several lion zoological specimens from North Africa were described and proposed as subspecies in the 19th century:
Felis leo barbaricus, described by the Austrian zoologist Johann Nepomuk Meyer in 1826, was a lion skin from the Barbary Coast.
Felis leo nubicus, described by Henri Marie Ducrotay de Blainville in 1843, was a male lion from Nubia that had been sent by Antoine Clot from Cairo to Paris, and died in the Ménagerie du Jardin des Plantes in 1841.
In 1930, Reginald Innes Pocock subordinated the lion to the genus Panthera, when he wrote about the Asiatic lion.
In the 20th and early 21st centuries, there has been much debate and controversy among zoologists on lion classification and validity of proposed subspecies:
In 1939, Glover Morrill Allen considered F. l. barbaricus and nubicus synonymous with F. l. leo.
In 1951, John Ellerman and Terence Morrison-Scott recognized only two lion subspecies in the Palearctic realm, namely the African lion Panthera leo leo and the Asiatic lion P. l. persica.
Some authors considered P. l. nubicus a valid subspecies and synonymous with P. l. massaica.
In 2005, P. l. barbarica, nubica and somaliensis were subsumed under P. l. leo.
In 2016, IUCN Red List assessors used P. l. leo for all lion populations in Africa.
The Barbary lion was considered a distinct lion subspecies.
In 2017, the Cat Classification Task Force of the Cat Specialist Group subsumed the lion populations in North, West and Central Africa and Asia to P. l. leo.
The Barbary lion was also called North African lion, Atlas lion, and Egyptian lion.
Genetic research
Results of a phylogeographic analysis using samples from African and Asiatic lions was published in 2006. One of the African samples was a vertebra from the National Museum of Natural History (France) that originated in the Nubian part of Sudan. In terms of mitochondrial DNA, it grouped with lion skull samples from the Central African Republic, Ethiopia and the northern part of the Democratic Republic of the Congo.
While the historical Barbary lion was morphologically distinct, its genetic uniqueness remained questionable.
In a comprehensive study about the evolution of lions in 2008, 357 samples of wild and captive lions from Africa and India were examined. Results showed that four captive lions from Morocco did not exhibit any unique genetic characteristic, but shared mitochondrial haplotypes with lion samples from West and Central Africa. They were all part of a major mtDNA grouping that also included Asiatic lion samples. Results provided evidence for the hypothesis that this group developed in East Africa, and about 118,000 years ago traveled north and west in the first wave of lion expansion. It broke up within Africa, and later in West Asia. Lions in Africa probably constitute a single population that interbred during several waves of migration since the Late Pleistocene. Genome-wide data of a wild-born historical lion specimen from Sudan clustered with P. l. leo in mtDNA-based phylogenies, but with a high affinity to P. l. melanochaita.
A comprehensive genetic study published in 2016 confirmed the close relationship between the extinct Barbary lions from Northern Africa and lions from Central and West Africa and in addition showed that the former fall into the same subclade as the Asiatic lion.
Former distribution and habitat
Fossils of the Barbary lion dating to between 100,000 and 110,000 years were found in the cave of Bizmoune near Essaouira.
The Barbary lion lived in the mountains and deserts of the Maghreb of North Africa from Morocco to Egypt. It was eradicated following the spread of firearms and bounties for shooting lions.
Today, it is locally extinct in this region.
Historical sighting and hunting records from the 19th and 20th centuries show that the Barbary lion survived in Algeria until the early 1960s, and in Morocco until the mid-1960s. It inhabited Mediterranean forests, woodlands, and scrub. The westernmost sighting of a Barbary lion reportedly occurred in the Anti-Atlas in western Morocco. It ranged from the Atlas Mountains and the Rif in Morocco, the Ksour and Amour Ranges in Algeria to the Aurès Mountains in Tunisia.
In Algeria, the Barbary lion was sighted in the forested hills and mountains between Ouarsenis in the west to the Chelif River plains in the north and the Pic de Taza in the east. It inhabited the forests and wooded hills of the Constantine Province southward into the Aurès Mountains.
In the 1830s, lions may have already been eliminated along the coast of the Mediterranean Sea and near human settlements.
In Libya, the Barbary lion persisted along the coast until the beginning of the 18th century, and was extirpated in Tunisia by 1890. By the mid-19th century, the Barbary lion population had massively declined, since bounties were paid for shooting lions. The cedar forests of Chelia and neighbouring mountains in Algeria harboured lions until about 1884. The Barbary lion disappeared in the Bône region by 1890, in the Khroumire and Souk Ahras regions by 1891, and in Batna Province by 1893.
The last recorded shooting of a wild Barbary lion took place in 1942 near Tizi n'Tichka in the Moroccan part of the Atlas Mountains. A small relict population may have survived in remote montane areas into the early 1960s. The last known sighting of a lion in Algeria occurred in 1956 in Beni Ourtilane District.
Historical accounts indicate that in Egypt, lions occurred in the Sinai Peninsula, along the Nile, in the Eastern and Western Deserts, in the region of Wadi El Natrun and along the maritime coast of the Mediterranean. In the 14th century BC, Thutmose IV hunted lions in the hills near Memphis. The growth of civilizations along the Nile and in the Sinai Peninsula by the beginning of the second millennium BC and desertification contributed to isolating lion populations in North Africa.
Behaviour and ecology
In the early 20th century, when Barbary lions were rare, they were sighted in pairs or in small family groups comprising a male and female lion with one or two cubs. Between 1839 and 1942, sightings of wild lions involved solitary animals, pairs and family units. Analysis of these sightings indicate that lions retained living in prides even when under increasing persecution, particularly in the eastern Maghreb. The size of prides was likely similar to prides living in sub-Saharan habitats, whereas the density of the Barbary lion population is considered to have been lower than in moister habitats.
When Barbary stag (Cervus elaphus barbarus) and gazelles became scarce in the Atlas Mountains, lions preyed on herds of livestock that were carefully tended. They also preyed on wild boar (Sus scrofa).
Sympatric predators in this region included the African leopard (P. pardus pardus) and Atlas bear (Ursus arctos crowtheri).
In captivity
The lions kept in the menagerie at the Tower of London in the Middle Ages were Barbary lions, as shown by DNA testing on two well-preserved skulls excavated at the Tower in 1936 and 1937. The skulls were radiocarbon-dated to around 1280–1385 and 1420−1480.
In the 19th century and the early 20th century, lions were often kept in hotels and circus menageries. In 1835, the lions in the Tower of London were transferred to improved enclosures at the London Zoo on the orders of the Duke of Wellington.
The lions in the Rabat Zoo exhibited characteristics thought typical for the Barbary lion. Nobles and Berber people presented lions as gifts to the royal family of Morocco. When the family was forced into exile in 1953, the lions in Rabat, numbering 21 altogether, were transferred to two zoos in the region. Three of these were shifted to a zoo in Casablanca, with the rest being shifted to Meknès. The lions at Meknès were moved back to the palace in 1955, but those at Casablanca never came back. In the late 1960s, new lion enclosures were built in Temara near Rabat. Results of a mtDNA research revealed in 2006 that a lion kept in the German Zoo Neuwied originated from this collection and is very likely a descendant of a Barbary lion.
Five lion samples from this collection were not Barbary lions maternally. Nonetheless, genes of the Barbary lion are likely to be present in common European zoo lions, since this was one of the most frequently introduced subspecies. Many lions in European and American zoos, which are managed without subspecies classification, are most likely descendants of Barbary lions. Several researchers and zoos supported the development of a studbook of lions directly descended from the King of Morocco's collection.
At the beginning of the 21st century, the Addis Ababa Zoo kept 16 adult lions. With their dark, brown manes extending through the front legs, they looked like Barbary or Cape lions. Their ancestors were caught in southwestern Ethiopia as part of a zoological collection for Emperor Haile Selassie of Ethiopia.
Since 2005, three Barbary lions were kept in Belfast Zoo obtained from Port Lympne Wild Animal Park, and a new Barbary lion enclosure was opened in 2023.
Cultural significance
The lion also appeared frequently in early Egyptian art and literature. Statues and statuettes of lions found at Hierakonpolis and Koptos in Upper Egypt date to the Early Dynastic Period. The early Egyptian deity Mehit was depicted with a lion head. In Ancient Egypt, the lion-headed deity Sekhmet was venerated as protector of the country. She represented destructive power, but was also regarded as protector against famine and disease. Lion-headed figures and amulets were excavated in tombs in the Aegean islands of Crete, Euboea, Rhodes, Paros and Chios. They are associated with Sekhmet and date to the early Iron Age between the 9th and 6th centuries BC. The remains of seven mostly subadult lions were excavated at the necropolis Umm El Qa'ab in a tomb of Hor-Aha, dated to the 31st century BC. In 2001, the skeleton of a mummified lion was found in the tomb of Maïa in a necropolis dedicated to Tutankhamun at Saqqara. It had probably lived and died in the Ptolemaic period, showed signs of malnutrition and had probably lived in captivity for many years.
The Barbary lion is a symbol in Nubian culture and was often depicted in art and architecture. Nubian deities, such as Amun, Amesemi, Apedemak, Arensnuphis, Hathor, Bastet, Dedun, Mehit, Menhit, and Sebiumeker, were depicted as lion protectors in Kushite religion.
In Roman North Africa, lions were regularly captured by experienced hunters for venatio spectacles in amphitheatres.
The Morocco national football team is called the "Atlas Lions", and the supporters are usually seen wearing T-shirts with a lion's face or wearing a lion suit.
| Biology and health sciences | Felines | Animals |
1131151 | https://en.wikipedia.org/wiki/Heliosphere | Heliosphere | The heliosphere is the magnetosphere, astrosphere, and outermost atmospheric layer of the Sun. It takes the shape of a vast, tailed bubble-like region of space. In plasma physics terms, it is the cavity formed by the Sun in the surrounding interstellar medium. The "bubble" of the heliosphere is continuously "inflated" by plasma originating from the Sun, known as the solar wind. Outside the heliosphere, this solar plasma gives way to the interstellar plasma permeating the Milky Way. As part of the interplanetary magnetic field, the heliosphere shields the Solar System from significant amounts of cosmic ionizing radiation; uncharged gamma rays are, however, not affected. Its name was likely coined by Alexander J. Dessler, who is credited with the first use of the word in the scientific literature in 1967. The scientific study of the heliosphere is heliophysics, which includes space weather and space climate.
Flowing unimpeded through the Solar System for billions of kilometers, the solar wind extends far beyond even the region of Pluto until it encounters the "termination shock", where its motion slows abruptly due to the outside pressure of the interstellar medium. The "heliosheath" is a broad transitional region between the termination shock and the heliosphere's outermost edge, the "heliopause". The overall shape of the heliosphere resembles that of a comet: roughly spherical on one side, out to around 100 astronomical units (AU), and drawn out on the other side into a tail, known as the "heliotail", trailing for several thousand AU.
Two Voyager program spacecraft explored the outer reaches of the heliosphere, passing through the termination shock and the heliosheath. Voyager 1 encountered the heliopause on 25 August 2012, when the spacecraft measured a forty-fold sudden increase in plasma density. Voyager 2 traversed the heliopause on 5 November 2018. Because the heliopause marks the boundary between matter originating from the Sun and matter originating from the rest of the galaxy, spacecraft that depart the heliosphere (such as the two Voyagers) are in interstellar space.
History
The heliosphere is thought to change significantly over the course of millions of years due to extrasolar effects such as nearby supernovae or passage through regions of the interstellar medium with different densities. Evidence suggests that up to three million years ago the interstellar medium compressed the heliosphere to within the inner Solar System, exposing Earth to the interstellar medium, which may have affected Earth's past climate and human evolution.
Structure
Despite its name, the heliosphere's shape is not a perfect sphere. Its shape is determined by three factors: the interstellar medium (ISM), the solar wind, and the overall motion of the Sun and heliosphere as it passes through the ISM. Because the solar wind and the ISM are both fluid, the heliosphere's shape and size are also fluid. Changes in the solar wind, however, more strongly alter the fluctuating position of the boundaries on short timescales (hours to a few years). The solar wind's pressure varies far more rapidly than the outside pressure of the ISM at any given location. In particular, the effect of the 11-year solar cycle, which sees a distinct maximum and minimum of solar wind activity, is thought to be significant.
On a broader scale, the motion of the heliosphere through the fluid medium of the ISM results in an overall comet-like shape. The solar wind plasma which is moving roughly "upstream" (in the same direction as the Sun's motion through the galaxy) is compressed into a nearly-spherical form, whereas the plasma moving "downstream" (opposite the Sun's motion) flows out for a much greater distance before giving way to the ISM, defining the long, trailing shape of the heliotail.
The limited data available and the unexplored nature of these structures have resulted in many theories as to their form. In 2020, Merav Opher led a team of researchers who proposed that the shape of the heliosphere is a crescent, which they described as a deflated croissant.
Solar wind
The solar wind consists of particles (ionized atoms from the solar corona) and fields like the magnetic field that are produced from the Sun and stream out into space. Because the Sun rotates once approximately every 25 days, the heliospheric magnetic field transported by the solar wind gets wrapped into a spiral. The solar wind affects many other systems in the Solar System; for example, variations in the Sun's own magnetic field are carried outward by the solar wind, producing geomagnetic storms in the Earth's magnetosphere.
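As a rough illustration of this winding, the following Python sketch evaluates the spiral angle predicted by the simple kinematic relation tan ψ = Ω r / v at a few distances; the rotation period and wind speed used are rounded, assumed values.

```python
import numpy as np

# Illustrative sketch: the angle of the spiral made by the magnetic field
# carried outward by the rotating Sun, under a simple kinematic model
# tan(psi) = Omega_sun * r / v_wind.  Input numbers are assumed, rounded values.

AU        = 1.496e11                  # astronomical unit, m
P_rot     = 25.4 * 86400              # assumed solar rotation period, s
omega_sun = 2 * np.pi / P_rot         # angular speed of the Sun, rad/s
v_wind    = 4.0e5                     # assumed solar wind speed, m/s

for r_au in (1, 5, 30, 90):
    psi = np.degrees(np.arctan(omega_sun * r_au * AU / v_wind))
    print(f"r = {r_au:3d} AU: spiral angle ~ {psi:5.1f} degrees from radial")
# Near 1 AU the field is inclined roughly 45 degrees to the radial direction;
# far out in the heliosphere it becomes nearly azimuthal (tightly wound).
```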
Heliospheric current sheet
The heliospheric current sheet is a ripple in the heliosphere created by the rotating magnetic field of the Sun. It marks the boundary between heliospheric magnetic field regions of opposite polarity. Extending throughout the heliosphere, the heliospheric current sheet could be considered the largest structure in the Solar System and is said to resemble a "ballerina's skirt".
Edge structure
The outer structure of the heliosphere is determined by the interactions between the solar wind and the winds of interstellar space. The solar wind streams away from the Sun in all directions at speeds of several hundred km/s in the Earth's vicinity. At some distance from the Sun, well beyond the orbit of Neptune, this supersonic wind slows down as it encounters the gases in the interstellar medium. This takes place in several stages:
The solar wind is traveling at supersonic speeds within the Solar System. At the termination shock, a standing shock wave, the solar wind falls below the speed of sound and becomes subsonic.
It was previously thought that once subsonic, the solar wind would be shaped by the ambient flow of the interstellar medium, forming a blunt nose on one side and a comet-like heliotail behind, a region called the heliosheath. However, observations in 2009 showed that this model is incorrect. As of 2011, it is thought to be filled with a magnetic bubble "foam".
The outer surface of the heliosheath, where the heliosphere meets the interstellar medium, is called the heliopause. This is the edge of the entire heliosphere. Observations in 2009 led to changes to this model.
In theory, the heliopause causes turbulence in the interstellar medium as the Sun orbits the Galactic Center. Outside the heliopause would be a turbulent region caused by the pressure of the advancing heliopause against the interstellar medium. However, the velocity of the solar wind relative to the interstellar medium is probably too low for a bow shock.
Termination shock
The termination shock is the point in the heliosphere where the solar wind slows down to subsonic speed (relative to the Sun) because of interactions with the local interstellar medium. This causes compression, heating, and a change in the magnetic field. In the Solar System, the termination shock is believed to be 75 to 90 astronomical units from the Sun. In 2004, Voyager 1 crossed the Sun's termination shock, followed by Voyager 2 in 2007.
The shock arises because solar wind particles are emitted from the Sun at about 400 km/s, while the speed of sound (in the interstellar medium) is about 100 km/s. The exact speed depends on the density, which fluctuates considerably. The interstellar medium, although very low in density, nonetheless has a relatively constant pressure associated with it; the pressure from the solar wind decreases with the square of the distance from the Sun. As one moves far enough away from the Sun, the pressure of the solar wind drops to where it can no longer maintain supersonic flow against the pressure of the interstellar medium, at which point the solar wind slows to below its speed of sound, causing a shock wave. Further from the Sun, the termination shock is followed by the heliopause, where the two pressures become equal and solar wind particles are stopped by the interstellar medium.
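A back-of-the-envelope Python sketch of this pressure balance is given below; the solar wind density, speed, and interstellar pressure used are assumed, illustrative values, and the result should be read only as an order-of-magnitude estimate.

```python
import numpy as np

# Rough order-of-magnitude sketch: estimating where the solar wind ram pressure,
# which falls off as 1/r^2, drops to the ambient interstellar pressure.
# All input values are illustrative assumptions.

AU     = 1.496e11     # astronomical unit, m
m_p    = 1.67e-27     # proton mass, kg
n_1AU  = 7.0e6        # assumed solar wind proton density at 1 AU, m^-3
v_sw   = 4.0e5        # assumed solar wind speed, m/s
P_ism  = 2.3e-13      # assumed total interstellar pressure, Pa

P_ram_1AU = n_1AU * m_p * v_sw**2          # ram pressure at 1 AU, Pa
r_balance = np.sqrt(P_ram_1AU / P_ism)     # distance (in AU) where the pressures match

print(f"ram pressure at 1 AU: {P_ram_1AU:.2e} Pa")
print(f"pressure-balance distance: ~{r_balance:.0f} AU")   # of order 100 AU
```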
Other termination shocks can be seen in terrestrial systems; perhaps the easiest may be seen by simply running a water tap into a sink creating a hydraulic jump. Upon hitting the floor of the sink, the flowing water spreads out at a speed that is higher than the local wave speed, forming a disk of shallow, rapidly diverging flow (analogous to the tenuous, supersonic solar wind). Around the periphery of the disk, a shock front or wall of water forms; outside the shock front, the water moves slower than the local wave speed (analogous to the subsonic interstellar medium).
Evidence presented at a meeting of the American Geophysical Union in May 2005 by Ed Stone suggests that the Voyager 1 spacecraft passed the termination shock in December 2004, when it was about 94 AU from the Sun, by virtue of the change in magnetic readings taken from the craft. In contrast, Voyager 2 began detecting returning particles when it was only 76 AU from the Sun, in May 2006. This implies that the heliosphere may be irregularly shaped, bulging outwards in the Sun's northern hemisphere and pushed inward in the south.
Heliosheath
The heliosheath is the region of the heliosphere beyond the termination shock. Here the wind is slowed, compressed, and made turbulent by its interaction with the interstellar medium. At its closest point, the inner edge of the heliosheath lies approximately 80 to 100 AU from the Sun. A proposed model hypothesizes that the heliosheath is shaped like the coma of a comet, and trails several times that distance in the direction opposite to the Sun's path through space. At its windward side, its thickness is estimated to be between 10 and 100 AU. Voyager project scientists have determined that the heliosheath is not "smooth" – it is rather a "foamy zone" filled with magnetic bubbles, each about 1 AU wide. These magnetic bubbles are created by the impact of the solar wind and the interstellar medium. Voyager 1 and Voyager 2 began detecting evidence of the bubbles in 2007 and 2008, respectively. The probably sausage-shaped bubbles are formed by magnetic reconnection between oppositely oriented sectors of the solar magnetic field as the solar wind slows down. They probably represent self-contained structures that have detached from the interplanetary magnetic field.
At a distance of about 113 AU, Voyager 1 detected a 'stagnation region' within the heliosheath. In this region, the solar wind slowed to zero, the magnetic field intensity doubled and high-energy electrons from the galaxy increased 100-fold. At about 122 AU, the spacecraft entered a new region that Voyager project scientists called the "magnetic highway", an area still under the influence of the Sun but with some dramatic differences.
Heliopause
The heliopause is the theoretical boundary where the Sun's solar wind is stopped by the interstellar medium, that is, where the solar wind is no longer strong enough to push back the stellar winds of the surrounding stars. This is the boundary where the interstellar medium and solar wind pressures balance. The crossing of the heliopause should be signaled by a sharp drop in the temperature of charged particles from the solar wind, a change in the direction of the magnetic field, and an increase in the number of galactic cosmic rays.
In May 2012, Voyager 1 detected a rapid increase in such cosmic rays (a 9% increase in a month, following a more gradual increase of 25% from January 2009 to January 2012), suggesting it was approaching the heliopause. Between late August and early September 2012, Voyager 1 witnessed a sharp drop in protons from the Sun, from 25 particles per second in late August, to about 2 particles per second by early October. In September 2013, NASA announced that Voyager 1 had crossed the heliopause as of 25 August 2012. This was at a distance of from the Sun. Contrary to predictions, data from Voyager 1 indicates the magnetic field of the galaxy is aligned with the solar magnetic field.
On November 5, 2018, the Voyager 2 mission detected a sudden decrease in the flux of low-energy ions. At the same time, the level of cosmic rays increased. This demonstrated that the spacecraft crossed the heliopause at a distance of from the Sun. Unlike Voyager 1, the Voyager 2 spacecraft did not detect interstellar flux tubes while crossing the heliosheath.
NASA also collected data from the heliopause remotely during the suborbital SHIELDS mission in 2021.
Heliotail
The heliotail is the several thousand astronomical units long tail of the heliosphere, and thus the Solar System's tail. It can be compared to the tail of a comet (however, a comet's tail does not stretch behind it as it moves; it is always pointing away from the Sun). The tail is a region where the Sun's solar wind slows down and ultimately escapes the heliosphere, slowly evaporating because of charge exchange.
The shape of the heliotail (newly found by NASA's Interstellar Boundary Explorer – IBEX) is that of a four-leaf clover. The particles in the tail do not shine; therefore, it cannot be seen with conventional optical instruments. IBEX made the first observations of the heliotail by measuring the energy of "energetic neutral atoms", neutral particles created by collisions in the Solar System's boundary zone.
The tail has been shown to contain fast and slow particles; the slow particles are on the sides and the fast particles are encompassed in the center. The shape of the tail can be linked to the Sun having, in recent years, sent out fast solar wind near its poles and slow solar wind near its equator. As the clover-shaped tail extends further from the Sun, the charged particles within it begin to shift into a new orientation.
Cassini and IBEX data challenged the "heliotail" theory in 2009. In July 2013, IBEX results revealed a 4-lobed tail on the Solar System's heliosphere.
Outside structures
The heliopause is the final known boundary between the heliosphere and the interstellar space that is filled with material, especially plasma, not from the Earth's own star, the Sun, but from other stars. Even so, just outside the heliosphere (i.e. the "solar bubble") there is a transitional region, as detected by Voyager 1. Just as some interstellar pressure was detected as early as 2004, some of the Sun's material seeps into the interstellar medium. The heliosphere is thought to reside in the Local Interstellar Cloud inside the Local Bubble, which is a region in the Orion Arm of the Milky Way Galaxy.
Outside the heliosphere, there is a forty-fold increase in plasma density. There is also a radical reduction in the detection of certain types of particles from the Sun, and a large increase in galactic cosmic rays.
The flow of the interstellar medium (ISM) into the heliosphere has been measured by at least 11 different spacecraft as of 2013. By 2013, it was suspected that the direction of the flow had changed over time. The flow, coming from Earth's perspective from the constellation Scorpius, has probably changed direction by several degrees since the 1970s.
Hydrogen wall
Predicted to be a region of hot hydrogen, a structure called the "hydrogen wall" may be between the bow shock and the heliopause. The wall is composed of interstellar material interacting with the edge of the heliosphere. One paper released in 2013 studied the concept of a bow wave and hydrogen wall.
Another hypothesis suggests that the heliopause could be smaller on the side of the Solar System facing the Sun's orbital motion through the galaxy. It may also vary depending on the current velocity of the solar wind and the local density of the interstellar medium. It is known to lie far outside the orbit of Neptune. The mission of the Voyager 1 and 2 spacecraft is to find and study the termination shock, heliosheath, and heliopause. Meanwhile, the IBEX mission is attempting to image the heliopause from Earth orbit within two years of its 2008 launch. Initial results (October 2009) from IBEX suggest that previous assumptions are insufficiently cognizant of the true complexities of the heliopause.
In August 2018, long-term studies about the hydrogen wall by the New Horizons spacecraft confirmed results first detected in 1992 by the two Voyager spacecraft. Although the hydrogen is detected by extra ultraviolet light (which may come from another source), the detection by New Horizons corroborates the earlier detections by Voyager at a much higher level of sensitivity.
Bow shock
It was long hypothesized that the Sun produces a "shock wave" in its travels within the interstellar medium. It would occur if the interstellar medium were moving supersonically "toward" the Sun, just as the solar wind moves supersonically "away" from the Sun. When the interstellar wind hits the heliosphere it slows and creates a region of turbulence. A bow shock was thought to possibly occur at about 230 AU, but in 2012 it was determined that it probably does not exist. This conclusion resulted from new measurements: the velocity of the LISM (local interstellar medium) relative to the Sun was previously measured to be 26.3 km/s by Ulysses, whereas IBEX measured it at 23.2 km/s.
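The logic of that conclusion can be made concrete with a toy Mach-number comparison. The sketch below is illustrative only; the fast magnetosonic speed it uses is an assumed value chosen to sit between the two measurements, not a figure from the source.

    # A bow shock requires the interstellar flow to exceed the local fast
    # magnetosonic speed, i.e. a Mach number greater than 1.
    v_ulysses = 26.3   # km/s, earlier Ulysses measurement (from the text)
    v_ibex = 23.2      # km/s, later IBEX measurement (from the text)
    v_fast = 24.0      # km/s, assumed fast magnetosonic speed of the local medium

    for name, v in [("Ulysses", v_ulysses), ("IBEX", v_ibex)]:
        mach = v / v_fast
        verdict = "shock possible" if mach > 1.0 else "no shock (a bow wave at most)"
        print(f"{name}: Mach {mach:.2f} -> {verdict}")

The point is simply that a modest downward revision of the inflow speed can move the flow from supersonic to subsonic, turning an expected bow shock into a gentler bow wave.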
This phenomenon has been observed outside the Solar System, around stars other than the Sun, by NASA's now retired orbital GALEX telescope. The red giant star Mira in the constellation Cetus has been shown to have both a debris tail of ejecta from the star and a distinct shock in the direction of its movement through space (at over 130 kilometers per second).
Observational methods
Detection by spacecraft
The precise distance to and shape of the heliopause are still uncertain. Interplanetary/interstellar spacecraft such as Pioneer 10, Pioneer 11 and New Horizons are traveling outward through the Solar System and will eventually pass through the heliopause. Contact with Pioneer 10 and 11 has been lost.
Cassini results
Rather than a comet-like shape, the heliosphere appears to be bubble-shaped according to data from Cassini's Ion and Neutral Camera (MIMI/INCA). Rather than being dominated by collisions between the solar wind and the interstellar medium, the INCA (ENA) maps suggest that the interaction is controlled more by particle pressure and magnetic field energy density.
IBEX results
Initial data from the Interstellar Boundary Explorer (IBEX), launched in October 2008, revealed a previously unpredicted "very narrow ribbon that is two to three times brighter than anything else in the sky." Initial interpretations suggest that "the interstellar environment has far more influence on structuring the heliosphere than anyone previously believed."
"No one knows what is creating the ENA (energetic neutral atoms) ribbon, ..."
"The IBEX results are truly remarkable! What we are seeing in these maps does not match with any of the previous theoretical models of this region. It will be exciting for scientists to review these (ENA) maps and revise the way we understand our heliosphere and how it interacts with the galaxy." In October 2010, significant changes were detected in the ribbon after 6 months, based on the second set of IBEX observations. IBEX data did not support the existence of a bow shock, but there might be a 'bow wave' according to one study.
Locally
Examples of missions that have or continue to collect data related to the heliosphere include:
Solar Anomalous and Magnetospheric Particle Explorer
Solar and Heliospheric Observatory
Solar Dynamics Observatory
STEREO
Ulysses spacecraft
Parker Solar Probe
During a total eclipse, the high-temperature corona can be more readily observed from solar observatories on Earth. During the Apollo program, the solar wind was measured on the Moon via the Solar Wind Composition Experiment. Examples of ground-based solar observatories include the McMath–Pierce solar telescope, the newer GREGOR Solar Telescope, and the refurbished Big Bear Solar Observatory.
Exploration history
The heliosphere is the area under the influence of the Sun; the two major components to determining its edge are the heliospheric magnetic field and the solar wind from the Sun. Three major sections from the beginning of the heliosphere to its edge are the termination shock, the heliosheath, and the heliopause. Five spacecraft have returned much of the data about its furthest reaches, including Pioneer 10 (1972–1997; data to 67 AU), Pioneer 11 (1973–1995; 44 AU), Voyager 1 and Voyager 2 (launched 1977, ongoing), and New Horizons (launched 2006). A type of particle called an energetic neutral atom (ENA) has also been observed to have been produced from its edges.
Except for regions near obstacles such as planets or comets, the heliosphere is dominated by material emanating from the Sun, although cosmic rays, fast-moving neutral atoms, and cosmic dust can penetrate it from the outside. Originating in the Sun's extremely hot corona, solar wind particles reach escape velocity and stream outwards at 300 to 800 km/s (671 thousand to 1.79 million mph, or 1 to 2.9 million km/h). As the wind begins to interact with the interstellar medium, it slows and eventually comes to a stop. The point where the solar wind becomes slower than the speed of sound is called the termination shock; the solar wind continues to slow as it passes through the heliosheath, leading to a boundary called the heliopause, where the interstellar medium and solar wind pressures balance. The termination shock was traversed by Voyager 1 in 2004, and Voyager 2 in 2007.
It was thought that beyond the heliopause there was a bow shock, but data from Interstellar Boundary Explorer suggested the velocity of the Sun through the interstellar medium is too low for it to form. It may be a more gentle "bow wave".
Voyager data led to a new theory that the heliosheath has "magnetic bubbles" and a stagnation zone. Also, there were reports of a "stagnation region" within the heliosheath, starting around , detected by Voyager 1 in 2010. There, the solar wind velocity drops to zero, the magnetic field intensity doubles, and high-energy electrons from the galaxy increase 100-fold.
Starting in May 2012 at , Voyager 1 detected a sudden increase in cosmic rays, an apparent sign of approach to the heliopause. In the summer of 2013, NASA announced that Voyager 1 had reached interstellar space as of 25 August 2012.
In December 2012, NASA announced that in late August 2012, Voyager 1, at about from the Sun, entered a new region they called the "magnetic highway", an area still under the influence of the Sun but with some dramatic differences.
Pioneer 10 was launched in March 1972, and within 10 hours passed by the Moon; over the next 35 years or so the mission, the first to travel so far outward, achieved many firsts in the study of the heliosphere as well as of Jupiter's impact on it. Pioneer 10 was the first spacecraft to detect sodium and aluminum ions in the solar wind, as well as helium in the inner Solar System. In November 1972, Pioneer 10 encountered Jupiter's enormous (compared to Earth's) magnetosphere and passed in and out of it 17 times, charting its interaction with the solar wind. Pioneer 10 returned scientific data until March 1997, including data on the solar wind out to about 67 AU. It was also contacted in 2003, when it was at a distance of 7.6 billion miles (82 AU) from Earth, but no instrument data about the solar wind were returned then.
Voyager 1 surpassed the radial distance from the Sun of Pioneer 10 at 69.4 AU on 17 February 1998, because it was traveling faster, gaining about 1.02 AU per year. On July 18, 2023, Voyager 2 overtook Pioneer 10 as the second most distant human-made object from the Sun. Pioneer 11, launched a year after Pioneer 10, returned similar data to Pioneer 10's out to 44.7 AU in 1995, when that mission was concluded. Pioneer 11 carried an instrument suite similar to Pioneer 10's but also had a flux-gate magnetometer. The Pioneer and Voyager spacecraft were on different trajectories and thus recorded data on the heliosphere in different overall directions away from the Sun. Data obtained from the Pioneer and Voyager spacecraft helped corroborate the detection of a hydrogen wall.
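The overtaking described above is essentially a linear extrapolation: after the 1998 crossover, Voyager 1's lead over Pioneer 10 grows by roughly the gain rate quoted in the text. The sketch below is only a toy model; it ignores the slow deceleration of both spacecraft.

    # Toy linear model of Voyager 1 pulling ahead of Pioneer 10 after the
    # 1998 crossover, using the ~1.02 AU/year gain quoted above.
    def voyager1_lead_au(years_since_1998, gain_au_per_year=1.02):
        return gain_au_per_year * years_since_1998

    for years in (5, 10, 25):
        print(f"after {years:2d} years: Voyager 1 leads by ~{voyager1_lead_au(years):.1f} AU")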
Voyagers 1 and 2 were launched in 1977 and operated continuously to at least the late 2010s, encountering various aspects of the heliosphere past Pluto. Voyager 1 is thought to have passed through the heliopause in 2012, and Voyager 2 did the same in 2018.
The twin Voyagers are the only human-made objects to have entered interstellar space. However, while they have left the heliosphere, they have not yet left the boundary of the Solar System, which is considered to be the outer edge of the Oort Cloud. Upon passing the heliopause, Voyager 2's Plasma Science Experiment (PLS) observed a sharp decline in the speed of solar wind particles on 5 November, and the solar wind has not been detected at the spacecraft since. The three other instruments on board measuring cosmic rays, low-energy charged particles, and magnetic fields also recorded the transition. The observations complement data from NASA's IBEX mission. NASA is also preparing an additional mission, the Interstellar Mapping and Acceleration Probe (IMAP), due to launch in 2025, to capitalize on the Voyager observations.
Timeline of exploration and detection
1904: Astronomers using the Potsdam Great Refractor with a spectrograph find evidence of the interstellar medium while observing the binary star Mintaka in Orion.
January 1959: Luna 1 becomes the first spacecraft to observe the solar wind.
1962: Mariner 2 detects the solar wind.
1972–1973: Pioneer 10 becomes the first spacecraft to explore the heliosphere past Mars, flying by Jupiter on 4 December 1973 and continuing to return solar wind data out to a distance of 67 AU.
February 1992: After flying by Jupiter, the Ulysses spacecraft becomes the first to explore the mid and high latitudes of the heliosphere.
1992: Pioneer and Voyager probes detected Ly-α radiation resonantly scattered by heliospheric hydrogen.
2004: Voyager 1 becomes the first spacecraft to reach the termination shock.
2005: SOHO observations of the solar wind show that the shape of the heliosphere is not axisymmetrical, but distorted, very likely under the effect of the local galactic magnetic field.
2009: IBEX project scientists discover and map a ribbon-shaped region of intense energetic neutral atom emission. These neutral atoms are thought to be originating from the heliopause.
October 2009: the heliosphere may be bubble-shaped, not comet-shaped.
October 2010: significant changes were detected in the ribbon after six months, based on the second set of IBEX observations.
May 2012: IBEX data implies there is probably not a bow "shock".
June 2012: At 119 AU, Voyager 1 detected an increase in cosmic rays.
25 August 2012: Voyager 1 crosses the heliopause, becoming the first human-made object to depart the heliosphere.
August 2018: long-term studies about the hydrogen wall by the New Horizons spacecraft confirmed results first detected in 1992 by the two Voyager spacecraft.
5 November 2018: Voyager 2 crosses the heliopause, departing the heliosphere.
| Physical sciences | Solar System | Astronomy |
1131927 | https://en.wikipedia.org/wiki/Greywacke | Greywacke | Greywacke or graywacke (German grauwacke, signifying a grey, earthy rock) is a variety of sandstone generally characterized by its hardness (6–7 on Mohs scale), dark color, and poorly sorted angular grains of quartz, feldspar, and small rock fragments or sand-size lithic fragments set in a compact, clay-fine matrix. It is a texturally immature sedimentary rock generally found in Paleozoic strata. The larger grains can be sand- to gravel-sized, and matrix materials generally constitute more than 15% of the rock by volume.
Formation
The origin of greywacke was unknown until turbidity currents and turbidites were understood, since, according to the normal laws of sedimentation, gravel, sand and mud should not be laid down together. Geologists now attribute its formation to submarine avalanches or strong turbidity currents. These actions churn sediment and cause mixed-sediment slurries, in which the resulting deposits may exhibit a variety of sedimentary features. Supporting the turbidity origin theory is the fact that deposits of greywacke are found on the edges of the continental shelves, at the bottoms of oceanic trenches, and at the bases of mountain formational areas. They also occur in association with black shales of deep-sea origin.
As a rule, greywackes do not contain fossils, but organic remains may be common in the finer beds associated with them. Their component particles are usually not very rounded or polished, and the rocks have often been considerably indurated by recrystallization, such as the introduction of interstitial silica. In some districts, the greywackes are cleaved, but they show phenomena of this kind much less perfectly than the slates.
Although the group is so diverse that it is difficult to characterize mineralogically, it has a well-established place in petrographical classifications because these peculiar composite arenaceous deposits are very frequent among Silurian and Cambrian rocks, and are less common in Mesozoic or Cenozoic strata. Their essential features are their gritty character and their complex composition. With increasing metamorphism, greywackes frequently pass into mica-schists, chloritic schists and sedimentary gneisses.
Varieties
The term "greywacke" can be confusing, since it can refer to either the immature (rock fragment) aspect of the rock or its fine-grained (clay) component.
Greywackes are mostly grey, brown, yellow, or black, dull-colored sandy rocks that may occur in thick or thin beds along with shales and limestones. Some varieties include feldspathic greywacke, rich in feldspar, and lithic greywacke, rich in other tiny rock fragments. They can contain a very great variety of minerals, the principal ones being quartz, orthoclase and plagioclase feldspars, calcite, iron oxides and graphitic, carbonaceous matters, together with (in the coarser kinds) fragments of such rocks as felsite, chert, slate, gneiss, various schists, and quartzite. Among other minerals found in them are biotite, chlorite, tourmaline, epidote, apatite, garnet, hornblende, augite, sphene and pyrites. The cementing material may be siliceous or argillaceous and is sometimes calcareous.
In geology and geography
Greywackes are abundant in Wales, the south of Scotland, the Longford-Down Massif in Ireland and the Lake District National Park of England; they compose the majority of the main Southern Alps that make up the backbone of New Zealand. Both feldspathic and lithic greywacke have been recognized in the Ecca Group in South Africa. Greywackes are also found in parts of the Eastern Desert east of the Nile.
They were an early object of geological study in Britain where the Geological Society was founded in 1807, and excited much public interest in geology. Greywacke was interesting because it was found in many places in Britain and its occurrence in particular places was evidence of the pattern of geological strata that had been laid down.
Uses
Greywacke stone has been used as a building material and a sculptural material across many eras and societies. Its oldest known uses date to the early third millennium BCE, in Egypt's early dynastic period. Its wide use in sculpture and vessels is thought to have been due to its fine grain size and resistance to fracturing, making it suitable for fine detail and intricate shapes.
Aside from its structural uses, greywacke stone (or molds taken from it) is valuable to practitioners of traditional motion-picture miniature photography because, owing to its unusually mixed nature, it continues to look natural across a wide range of miniature scale ratios, from 1:1 to as high as 1:600.
Gallery
| Physical sciences | Sedimentary rocks | Earth science |
1132756 | https://en.wikipedia.org/wiki/Animal%20locomotion | Animal locomotion | In ethology, animal locomotion is any of a variety of methods that animals use to move from one place to another. Some modes of locomotion are (initially) self-propelled, e.g., running, swimming, jumping, flying, hopping, soaring and gliding. There are also many animal species that depend on their environment for transportation, a type of mobility called passive locomotion, e.g., sailing (some jellyfish), kiting (spiders), rolling (some beetles and spiders) or riding other animals (phoresis).
Animals move for a variety of reasons, such as to find food, a mate, a suitable microhabitat, or to escape predators. For many animals, the ability to move is essential for survival and, as a result, natural selection has shaped the locomotion methods and mechanisms used by moving organisms. For example, migratory animals that travel vast distances (such as the Arctic tern) typically have a locomotion mechanism that costs very little energy per unit distance, whereas non-migratory animals that must frequently move quickly to escape predators are likely to have energetically costly, but very fast, locomotion.
The anatomical structures that animals use for movement, including cilia, legs, wings, arms, fins, or tails are sometimes referred to as locomotory organs or locomotory structures.
Etymology
The term "locomotion" is formed in English from Latin loco "from a place" (ablative of locus "place") + motio "motion, a moving".
The movement of the whole body from one place to another is called locomotion.
Aquatic
Swimming
In water, staying afloat is possible using buoyancy. If an animal's body is less dense than water, it can stay afloat. This requires little energy to maintain a vertical position, but requires more energy for locomotion in the horizontal plane compared to less buoyant animals. The drag encountered in water is much greater than in air. Morphology is therefore important for efficient locomotion, which is in most cases essential for basic functions such as catching prey. A fusiform, torpedo-like body form is seen in many aquatic animals, though the mechanisms they use for locomotion are diverse.
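The scale of the drag penalty in water can be illustrated with the quadratic drag law, F = ½ρC_dAv², which grows in direct proportion to the fluid's density. The Python sketch below is illustrative only; the drag coefficient, frontal area, and speed are assumed values, not figures from the source.

    def drag_force(rho, c_d, area_m2, v_m_s):
        # Quadratic drag law: 0.5 * rho * Cd * A * v^2, in newtons
        return 0.5 * rho * c_d * area_m2 * v_m_s ** 2

    c_d, area, speed = 0.3, 0.05, 1.0                # assumed streamlined body, 0.05 m^2, 1 m/s
    f_water = drag_force(1000.0, c_d, area, speed)   # density of water ~1000 kg/m^3
    f_air = drag_force(1.2, c_d, area, speed)        # density of air ~1.2 kg/m^3
    print(f"drag in water: {f_water:.2f} N, in air: {f_air:.4f} N, ratio ~{f_water / f_air:.0f}x")

Because water is roughly 800 times denser than air, the same body moving at the same speed meets roughly 800 times the drag, which is why streamlined, fusiform shapes are so strongly favoured in swimmers.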
The primary means by which fish generate thrust is by oscillating the body from side-to-side, the resulting wave motion ending at a large tail fin. Finer control, such as for slow movements, is often achieved with thrust from pectoral fins (or front limbs in marine mammals). Some fish, e.g. the spotted ratfish (Hydrolagus colliei) and batiform fish (electric rays, sawfishes, guitarfishes, skates and stingrays) use their pectoral fins as the primary means of locomotion, sometimes termed labriform swimming. Marine mammals oscillate their body in an up-and-down (dorso-ventral) direction.
Other animals, e.g. penguins, diving ducks, move underwater in a manner which has been termed "aquatic flying". Some fish propel themselves without a wave motion of the body, as in the slow-moving seahorses and Gymnotus.
Other animals, such as cephalopods, use jet propulsion to travel fast, taking in water then squirting it back out in an explosive burst. Other swimming animals may rely predominantly on their limbs, much as humans do when swimming. Though life on land originated from the seas, terrestrial animals have returned to an aquatic lifestyle on several occasions, such as the fully aquatic cetaceans, now very distinct from their terrestrial ancestors.
Dolphins sometimes ride on the bow waves created by boats or surf on naturally breaking waves.
Benthic
Benthic locomotion is movement by animals that live on, in, or near the bottom of aquatic environments. In the sea, many animals walk over the seabed. Echinoderms primarily use their tube feet to move about. The tube feet typically have a tip shaped like a suction pad that can create a vacuum through contraction of muscles. This, along with some stickiness from the secretion of mucus, provides adhesion. Waves of tube feet contractions and relaxations move along the adherent surface and the animal moves slowly along. Some sea urchins also use their spines for benthic locomotion.
Crabs typically walk sideways (a behaviour that gives us the word crabwise). This is because of the articulation of the legs, which makes a sidelong gait more efficient. However, some crabs walk forwards or backwards, including raninids, Libinia emarginata and Mictyris platycheles. Some crabs, notably the Portunidae and Matutidae, are also capable of swimming, the Portunidae especially so as their last pair of walking legs are flattened into swimming paddles.
A stomatopod, Nannosquilla decemspinosa, can escape by rolling itself into a self-propelled wheel and somersault backwards at a speed of 72 rpm. They can travel more than 2 m using this unusual method of locomotion.
Aquatic surface
Velella, the by-the-wind sailor, is a cnidarian with no means of propulsion other than sailing. A small rigid sail projects into the air and catches the wind. Velella sails always align along the direction of the wind where the sail may act as an aerofoil, so that the animals tend to sail downwind at a small angle to the wind.
While larger animals such as ducks can move on water by floating, some small animals move across it without breaking through the surface. This surface locomotion takes advantage of the surface tension of water. Animals that move in such a way include the water strider. Water striders have legs that are hydrophobic, preventing them from interfering with the structure of water. Another form of locomotion (in which the surface layer is broken) is used by the basilisk lizard.
Aerial
Active flight
Gravity is the primary obstacle to flight. Because it is impossible for any organism to have a density as low as that of air, flying animals must generate enough lift to ascend and remain airborne. One way to achieve this is with wings, which when moved through the air generate an upward lift force on the animal's body. Flying animals must be very light to achieve flight, the largest living flying animals being birds of around 20 kilograms. Other structural adaptations of flying animals include reduced and redistributed body weight, fusiform shape and powerful flight muscles; there may also be physiological adaptations. Active flight has independently evolved at least four times, in the insects, pterosaurs, birds, and bats. Insects were the first taxon to evolve flight, approximately 400 million years ago (mya), followed by pterosaurs approximately 220 mya, birds approximately 160 mya, then bats about 60 mya.
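The weight constraint on flyers can be seen from the standard lift relation, L = ½ρv²SC_L, which must at least equal the animal's weight mg. The sketch below is not from the source; the wing area and lift coefficient are assumed round numbers for a bird near the roughly 20 kg upper limit mentioned above.

    import math

    RHO_AIR, G = 1.2, 9.81                      # sea-level air density (kg/m^3), gravity (m/s^2)
    mass, wing_area, c_lift = 20.0, 1.0, 1.0    # assumed: ~20 kg bird, 1 m^2 of wing, C_L ~ 1

    weight = mass * G
    # Minimum airspeed at which 0.5 * rho * v^2 * S * C_L equals the weight
    v_min = math.sqrt(2.0 * weight / (RHO_AIR * wing_area * c_lift))
    print(f"weight to support: {weight:.0f} N, minimum level-flight speed: ~{v_min:.0f} m/s")

Under these assumptions the bird must fly at roughly 18 m/s just to support its own weight, which illustrates why mass reduction is such a pervasive adaptation in flying animals.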
Gliding
Rather than active flight, some (semi-) arboreal animals reduce their rate of falling by gliding. Gliding is heavier-than-air flight without the use of thrust; the term "volplaning" also refers to this mode of flight in animals. This mode of flight involves flying a greater distance horizontally than vertically and therefore can be distinguished from a simple descent like a parachute. Gliding has evolved on more occasions than active flight. There are examples of gliding animals in several major taxonomic classes such as the invertebrates (e.g., gliding ants), reptiles (e.g., banded flying snake), amphibians (e.g., flying frog), mammals (e.g., sugar glider, squirrel glider).
Some aquatic animals also regularly use gliding, for example, flying fish, octopus and squid. The flights of flying fish are typically around 50 meters (160 ft), though they can use updrafts at the leading edge of waves to cover distances of up to . To glide upward out of the water, a flying fish moves its tail up to 70 times per second.
Several oceanic squid, such as the Pacific flying squid, leap out of the water to escape predators, an adaptation similar to that of flying fish. Smaller squids fly in shoals, and have been observed to cover distances as long as 50 m. Small fins towards the back of the mantle help stabilize the motion of flight. They exit the water by expelling water out of their funnel; indeed, some squid have been observed to continue jetting water while airborne, providing thrust even after leaving the water. This may make flying squid the only animals with jet-propelled aerial locomotion. The neon flying squid has been observed to glide for distances over , at speeds of up to .
Soaring
Soaring birds can maintain flight without wing flapping, using rising air currents. Many gliding birds are able to "lock" their extended wings by means of a specialized tendon. Soaring birds may alternate glides with periods of soaring in rising air. Five principal types of lift are used: thermals, ridge lift, lee waves, convergences and dynamic soaring.
Examples of soaring flight by birds are the use of:
Thermals and convergences by raptors such as vultures
Ridge lift by gulls near cliffs
Wave lift by migrating birds
Dynamic effects near the surface of the sea by albatrosses
Ballooning
Ballooning is a method of locomotion used by spiders. Certain silk-producing arthropods, mostly small or young spiders, secrete a special light-weight gossamer silk for ballooning, sometimes traveling great distances at high altitude.
Terrestrial
Forms of locomotion on land include walking, running, hopping or jumping, dragging and crawling or slithering. Here friction and buoyancy are no longer an issue, but a strong skeletal and muscular framework is required in most terrestrial animals for structural support. Each step also requires much energy to overcome inertia, and animals can store elastic potential energy in their tendons to help overcome this. Balance is also required for movement on land. Human infants learn to crawl first before they are able to stand on two feet, which requires good coordination as well as physical development. Humans are bipedal animals, standing on two feet and keeping one on the ground at all times while walking. When running, at most one foot is on the ground at any one time, and both briefly leave the ground. At higher speeds momentum helps keep the body upright, so more energy can be used in movement.
Jumping
Jumping (saltation) can be distinguished from running, galloping, and other gaits where the entire body is temporarily airborne by the relatively long duration of the aerial phase and high angle of initial launch. Many terrestrial animals use jumping (including hopping or leaping) to escape predators or catch prey—however, relatively few animals use this as a primary mode of locomotion. Those that do include the kangaroo and other macropods, rabbit, hare, jerboa, hopping mouse, and kangaroo rat. Kangaroo rats often leap 2 m and reportedly up to 2.75 m at speeds up to almost . They can quickly change their direction between jumps. The rapid locomotion of the banner-tailed kangaroo rat may minimize energy cost and predation risk. Its use of a "move-freeze" mode may also make it less conspicuous to nocturnal predators. Frogs are, relative to their size, the best jumpers of all vertebrates. The Australian rocket frog, Litoria nasuta, can leap over , more than fifty times its body length.
Peristalsis and looping
Other animals move in terrestrial habitats without the aid of legs. Earthworms crawl by peristalsis, the same rhythmic contractions that propel food through the digestive tract.
Leeches and geometer moth caterpillars move by looping or inching (measuring off a length with each movement), using their paired circular and longitudinal muscles (as for peristalsis) along with the ability to attach to a surface at both anterior and posterior ends. One end is attached, often the thicker end, and the other end, often thinner, is projected forward peristaltically until it touches down, as far as it can reach; then the first end is released, pulled forward, and reattached; and the cycle repeats. In the case of leeches, attachment is by a sucker at each end of the body.
Sliding
Due to its low coefficient of friction, ice provides the opportunity for other modes of locomotion. Penguins either waddle on their feet or slide on their bellies across the snow, a movement called tobogganing, which conserves energy while moving quickly. Some pinnipeds perform a similar behaviour called sledding.
Climbing
Some animals are specialized for moving on non-horizontal surfaces. One common habitat for such climbing animals is in trees; for example, the gibbon is specialized for arboreal movement, travelling rapidly by brachiation (see below).
Others living on rock faces such as in mountains move on steep or even near-vertical surfaces by careful balancing and leaping. Perhaps the most exceptional are the various types of mountain-dwelling caprids (e.g., Barbary sheep, yak, ibex, rocky mountain goat, etc.), whose adaptations can include a soft rubbery pad between their hooves for grip, hooves with sharp keratin rims for lodging in small footholds, and prominent dew claws. Another case is the snow leopard, which, being a predator of such caprids, also has spectacular balance and leaping abilities, including the ability to leap up to 17 m (50 ft).
Some light animals are able to climb up smooth sheer surfaces or hang upside down by adhesion using suckers. Many insects can do this, though much larger animals such as geckos can also perform similar feats.
Walking and running
Species have different numbers of legs resulting in large differences in locomotion.
Modern birds, though classified as tetrapods, usually have only two functional legs, which some (e.g., ostrich, emu, kiwi) use as their primary, bipedal, mode of locomotion. A few modern mammalian species are habitual bipeds, i.e., species whose normal method of locomotion is two-legged. These include the macropods, kangaroo rats and mice, springhare, hopping mice, pangolins and hominin apes. Bipedalism is rarely found outside terrestrial animals, though at least two types of octopus walk bipedally on the sea floor using two of their arms, so they can use the remaining arms to camouflage themselves as a mat of algae or floating coconut.
There are no three-legged animals, though some macropods, such as kangaroos, which alternate between resting their weight on their muscular tails and on their two hind legs, could be regarded as an example of tripedal locomotion in animals.
Many familiar animals are quadrupedal, walking or running on four legs. A few birds use quadrupedal movement in some circumstances. For example, the shoebill sometimes uses its wings to right itself after lunging at prey. The newly hatched hoatzin bird has claws on its thumb and first finger enabling it to dexterously climb tree branches until its wings are strong enough for sustained flight. These claws are gone by the time the bird reaches adulthood.
Relatively few animals use five limbs for locomotion. Prehensile quadrupeds may use their tail to assist in locomotion, and when grazing, kangaroos and other macropods use their tail to propel themselves forward, with the four legs used to maintain balance.
Insects generally walk with six legs—though some insects such as nymphalid butterflies do not use the front legs for walking.
Arachnids have eight legs. Most arachnids lack extensor muscles in the distal joints of their appendages. Spiders and whipscorpions extend their limbs hydraulically using the pressure of their hemolymph. Solifuges and some harvestmen extend their knees by the use of highly elastic thickenings in the joint cuticle. Scorpions, pseudoscorpions and some harvestmen have evolved muscles that extend two leg joints (the femur-patella and patella-tibia joints) at once.
The scorpion Hadrurus arizonensis walks by using two groups of legs (left 1, right 2, left 3, right 4 and right 1, left 2, right 3, left 4) in a reciprocating fashion. This alternating tetrapod coordination is used over all walking speeds.
Centipedes and millipedes have many sets of legs that move in metachronal rhythm. Some echinoderms locomote using the many tube feet on the underside of their arms. Although the tube feet resemble suction cups in appearance, the gripping action is a function of adhesive chemicals rather than suction. Other chemicals and relaxation of the ampullae allow for release from the substrate. The tube feet latch on to surfaces and move in a wave, with one arm section attaching to the surface as another releases. Some multi-armed, fast-moving starfish such as the sunflower seastar (Pycnopodia helianthoides) pull themselves along with some of their arms while letting others trail behind. Other starfish turn up the tips of their arms while moving, which exposes the sensory tube feet and eyespot to external stimuli. Most starfish cannot move quickly, a typical speed being that of the leather star (Dermasterias imbricata), which can manage just in a minute. Some burrowing species from the genera Astropecten and Luidia have points rather than suckers on their long tube feet and are capable of much more rapid motion, "gliding" across the ocean floor. The sand star (Luidia foliolata) can travel at a speed of per minute. Sunflower starfish are quick, efficient hunters, moving at a speed of using 15,000 tube feet.
Many animals temporarily change the number of legs they use for locomotion in different circumstances. For example, many quadrupedal animals switch to bipedalism to reach low-level browse on trees. Lizards of the genus Basiliscus are arboreal and usually use quadrupedalism in the trees. When frightened, they can drop to the water below and run across the surface on their hind limbs at about 1.5 m/s for a distance of approximately before they sink to all fours and swim. They can also sustain themselves on all fours while "water-walking" to increase the distance travelled above the surface by about 1.3 m. When cockroaches run rapidly, they rear up on their two hind legs like bipedal humans; this allows them to run at speeds up to 50 body lengths per second, equivalent to a "couple hundred miles per hour, if you scale up to the size of humans." When grazing, kangaroos use a form of pentapedalism (four legs plus the tail) but switch to hopping (bipedalism) when they wish to move at a greater speed.
Powered cartwheeling
The Moroccan flic-flac spider (Cebrennus rechenbergi) uses a series of rapid, acrobatic flic-flac movements of its legs similar to those used by gymnasts, to actively propel itself off the ground, allowing it to move both down and uphill, even at a 40 percent incline. This behaviour differs from that of other huntsman spiders, such as Carparachne aureoflava from the Namib Desert, which uses passive cartwheeling as a form of locomotion. The flic-flac spider can reach speeds of up to 2 m/s using forward or back flips to evade threats.
Subterranean
Some animals move through solids such as soil by burrowing using peristalsis, as in earthworms, or other methods. In loose solids such as sand some animals, such as the golden mole, marsupial mole, and the pink fairy armadillo, are able to move more rapidly, "swimming" through the loose substrate. Burrowing animals include moles, ground squirrels, naked mole-rats, tilefish, and mole crickets.
Arboreal locomotion
Arboreal locomotion is the locomotion of animals in trees. Some animals may only scale trees occasionally, while others are exclusively arboreal. These habitats pose numerous mechanical challenges to animals moving through them, leading to a variety of anatomical, behavioural and ecological consequences as well as variations throughout different species. Furthermore, many of these same principles may be applied to climbing without trees, such as on rock piles or mountains. The earliest known tetrapod with specializations that adapted it for climbing trees was Suminia, a synapsid of the late Permian, about 260 million years ago. Some invertebrate animals are exclusively arboreal in habitat, for example, the tree snail.
Brachiation (from brachium, Latin for "arm") is a form of arboreal locomotion in which primates swing from tree limb to tree limb using only their arms. During brachiation, the body is alternately supported under each forelimb. This is the primary means of locomotion for the small gibbons and siamangs of southeast Asia. Some New World monkeys such as spider monkeys and muriquis are "semibrachiators" and move through the trees with a combination of leaping and brachiation. Some New World species also practice suspensory behaviors by using their prehensile tail, which acts as a fifth grasping hand.
Pandas are known to swing their heads laterally as they ascend vertical surfaces, using the head as a propulsive limb in a manner previously thought to be practiced only by certain species of birds.
Energetics
Animal locomotion requires energy to overcome various forces including friction, drag, inertia and gravity, although the influence of these depends on the circumstances. In terrestrial environments, gravity must be overcome whereas the drag of air has little influence. In aqueous environments, friction (or drag) becomes the major energetic challenge with gravity being less of an influence. Remaining in the aqueous environment, animals with natural buoyancy expend little energy to maintain a vertical position in a water column. Others naturally sink, and must spend energy to remain afloat. Drag is also an energetic influence in flight, and the aerodynamically efficient body shapes of flying birds indicate how they have evolved to cope with this. Limbless organisms moving on land must energetically overcome surface friction; however, they do not usually need to expend significant energy to counteract gravity.
Newton's third law of motion is widely used in the study of animal locomotion: to move forwards from rest, an animal must push something backwards. Terrestrial animals must push the solid ground, swimming and flying animals must push against a fluid (either water or air). The effect of forces during locomotion on the design of the skeletal system is also important, as is the interaction between locomotion and muscle physiology, in determining how the structures and effectors of locomotion enable or limit animal movement. The energetics of locomotion involves the energy expenditure by animals in moving. Energy consumed in locomotion is not available for other efforts, so animals typically have evolved to use the minimum energy possible during movement. However, in the case of certain behaviors, such as locomotion to escape a predator, performance (such as speed or maneuverability) is more crucial, and such movements may be energetically expensive. Furthermore, animals may use energetically expensive methods of locomotion when environmental conditions (such as being within a burrow) preclude other modes.
The most common metric of energy use during locomotion is the net (also termed "incremental") cost of transport, defined as the amount of energy (e.g., Joules) needed above baseline metabolic rate to move a given distance. For aerobic locomotion, most animals have a nearly constant cost of transport—moving a given distance requires the same caloric expenditure, regardless of speed. This constancy is usually accomplished by changes in gait. The net cost of transport of swimming is lowest, followed by flight, with terrestrial limbed locomotion being the most expensive per unit distance. However, because of the speeds involved, flight requires the most energy per unit time. This does not mean that an animal that normally moves by running would be a more efficient swimmer; however, these comparisons assume an animal is specialized for that form of motion. Another consideration here is body mass—heavier animals, though using more total energy, require less energy per unit mass to move. Physiologists generally measure energy use by the amount of oxygen consumed, or the amount of carbon dioxide produced, in an animal's respiration. In terrestrial animals, the cost of transport is typically measured while they walk or run on a motorized treadmill, either wearing a mask to capture gas exchange or with the entire treadmill enclosed in a metabolic chamber. For small rodents, such as deer mice, the cost of transport has also been measured during voluntary wheel running.
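As a worked illustration of the definition above, the net cost of transport can be computed from a metabolic rate during locomotion, a resting baseline, and the travel speed. The values in the sketch below are hypothetical, chosen only to show the arithmetic.

    def net_cost_of_transport(locomoting_rate_w, baseline_rate_w, speed_m_s):
        # Energy spent above baseline per metre travelled (J/m)
        return (locomoting_rate_w - baseline_rate_w) / speed_m_s

    # Hypothetical runner: 300 W while running, 80 W baseline, moving at 3 m/s
    cot = net_cost_of_transport(300.0, 80.0, 3.0)
    print(f"net cost of transport: {cot:.1f} J/m")

In this hypothetical case the animal spends about 73 J of extra energy for every metre covered; a nearly constant cost of transport means this figure changes little as speed (and gait) change.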
Energetics is important for explaining the evolution of foraging economic decisions in organisms; for example, a study of the African honey bee, A. m. scutellata, has shown that honey bees may trade the high sucrose content of viscous nectar off for the energetic benefits of warmer, less concentrated nectar, which also reduces their consumption and flight time.
Passive locomotion
Passive locomotion in animals is a type of mobility in which the animal depends on their environment for transportation; such animals are vagile but not motile.
Hydrozoans
The Portuguese man o' war (Physalia physalis) lives at the surface of the ocean. The gas-filled bladder, or pneumatophore (sometimes called a "sail"), remains at the surface, while the remainder is submerged. Because the Portuguese man o' war has no means of propulsion, it is moved by a combination of winds, currents, and tides. The sail is equipped with a siphon. In the event of a surface attack, the sail can be deflated, allowing the organism to briefly submerge.
Mollusca
The violet sea-snail (Janthina janthina) uses a buoyant foam raft stabilized by amphiphilic mucins to float at the sea surface.
Arachnids
The wheel spider (Carparachne aureoflava) is a huntsman spider approximately 20 mm in size and native to the Namib Desert of Southern Africa. The spider escapes parasitic pompilid wasps by flipping onto its side and cartwheeling down sand dunes at speeds of up to 44 turns per second. If the spider is on a sloped dune, its rolling speed may be 1 metre per second.
A spider (usually limited to individuals of a small species), or spiderling after hatching, climbs as high as it can, stands on raised legs with its abdomen pointed upwards ("tiptoeing"), and then releases several silk threads from its spinnerets into the air. These form a triangle-shaped parachute that carries the spider on updrafts of winds, where even the slightest breeze transports it. The Earth's static electric field may also provide lift in windless conditions.
Insects
The larva of Cicindela dorsalis, the eastern beach tiger beetle, is notable for its ability to leap into the air, loop its body into a rotating wheel and roll along the sand at a high speed using wind to propel itself. If the wind is strong enough, the larva can cover up to in this manner. This remarkable ability may have evolved to help the larva escape predators such as the thynnid wasp Methocha.
Members of the largest subfamily of cuckoo wasps, Chrysidinae, are generally kleptoparasites, laying their eggs in host nests, where their larvae consume the host egg or larva while it is still young. Chrysidines are distinguished from the members of other subfamilies in that most have flattened or concave lower abdomens and can curl into a defensive ball when attacked by a potential host, a process known as conglobation. Protected by hard chitin in this position, they are expelled from the nest without injury and can search for a less hostile host.
Fleas can jump vertically up to 18 cm and horizontally up to 33 cm; however, although this form of locomotion is initiated by the flea, it has little control of the jump—they always jump in the same direction, with very little variation in the trajectory between individual jumps.
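Ignoring air resistance, the 18 cm vertical jump quoted above implies a take-off speed of v = sqrt(2gh), roughly 1.9 m/s; a minimal check of that arithmetic is sketched below.

    import math

    g = 9.81          # gravitational acceleration, m/s^2
    height = 0.18     # m, vertical jump height from the text

    takeoff_speed = math.sqrt(2.0 * g * height)
    print(f"take-off speed: ~{takeoff_speed:.1f} m/s")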
Crustaceans
Although stomatopods typically display the standard locomotion types as seen in true shrimp and lobsters, one species, Nannosquilla decemspinosa, has been observed flipping itself into a crude wheel. The species lives in shallow, sandy areas. At low tides, N. decemspinosa is often stranded by its short rear legs, which are sufficient for locomotion when the body is supported by water, but not on dry land. The mantis shrimp then performs a forward flip in an attempt to roll towards the next tide pool. N. decemspinosa has been observed to roll repeatedly for , but they typically travel less than . Again, the animal initiates the movement but has little control during its locomotion.
Animal transport
Some animals change location because they are attached to, or reside on, another animal or moving structure. This is arguably more accurately termed "animal transport".
Remoras
Remoras are a family (Echeneidae) of ray-finned fish. They grow to long, and their distinctive first dorsal fins take the form of a modified oval, sucker-like organ with slat-like structures that open and close to create suction and take a firm hold against the skin of larger marine animals. By sliding backward, the remora can increase the suction, or it can release itself by swimming forward. Remoras sometimes attach to small boats. They swim well on their own, with a sinuous, or curved, motion. When the remora reaches about , the disc is fully formed and the remora can then attach to other animals. The remora's lower jaw projects beyond the upper, and the animal lacks a swim bladder. Some remoras associate primarily with specific host species. They are commonly found attached to sharks, manta rays, whales, turtles, and dugongs. Smaller remoras also fasten onto fish such as tuna and swordfish, and some small remoras travel in the mouths or gills of large manta rays, ocean sunfish, swordfish, and sailfish. The remora benefits by using the host as transport and protection, and also feeds on materials dropped by the host.
Angler fish
In some species of anglerfish, when a male finds a female, he bites into her skin, and releases an enzyme that digests the skin of his mouth and her body, fusing the pair down to the blood-vessel level. The male becomes dependent on the female host for survival by receiving nutrients via their shared circulatory system, and provides sperm to the female in return. After fusing, males increase in volume and become much larger relative to free-living males of the species. They live and remain reproductively functional as long as the female lives, and can take part in multiple spawnings. This extreme sexual dimorphism ensures, when the female is ready to spawn, she has a mate immediately available. Multiple males can be incorporated into a single individual female with up to eight males in some species, though some taxa appear to have a one male per female rule.
Parasites
Many parasites are transported by their hosts. For example, endoparasites such as tapeworms live in the alimentary tracts of other animals, and depend on the host's ability to move to distribute their eggs. Ectoparasites such as fleas can move around on the body of their host, but are transported much longer distances by the host's locomotion. Some ectoparasites such as lice can opportunistically hitch a ride on a fly (phoresis) and attempt to find a new host.
Changes between media
Some animals locomote between different media, e.g., from aquatic to aerial. This often requires different modes of locomotion in the different media and may require a distinct transitional locomotor behaviour.
There are a large number of semi-aquatic animals (animals that spend part of their life cycle in water, or generally have part of their anatomy underwater). These represent the major taxa of mammals (e.g., beaver, otter, polar bear), birds (e.g., penguins, ducks), reptiles (e.g., anaconda, bog turtle, marine iguana) and amphibians (e.g., salamanders, frogs, newts).
Fish
Some fish use multiple modes of locomotion. Walking fish may swim freely or at other times "walk" along the ocean or river floor, but not on land (e.g., the flying gurnard—which does not actually fly—and batfishes of the family Ogcocephalidae). Amphibious fish are fish that are able to leave water for extended periods of time. These fish use a range of terrestrial locomotory modes, such as lateral undulation, tripod-like walking (using paired fins and tail), and jumping. Many of these locomotory modes incorporate multiple combinations of pectoral, pelvic and tail fin movement. Examples include eels, mudskippers and the walking catfish. Flying fish can make powerful, self-propelled leaps out of water into air, where their long, wing-like fins enable gliding flight for considerable distances above the water's surface. This uncommon ability is a natural defence mechanism to evade predators. The flights of flying fish are typically around 50 m, though they can use updrafts at the leading edge of waves to cover distances of up to . They can travel at speeds of more than . Maximum altitude is above the surface of the sea. Some accounts have them landing on ships' decks.
Marine mammals
When swimming, several marine mammals such as dolphins, porpoises and pinnipeds frequently leap above the water surface whilst maintaining horizontal locomotion. This is done for various reasons. When travelling, jumping can save dolphins and porpoises energy as there is less friction while in the air. This type of travel is known as "porpoising". Other reasons for dolphins and porpoises performing porpoising include orientation, social displays, fighting, non-verbal communication, entertainment and attempting to dislodge parasites. In pinnipeds, two types of porpoising have been identified. "High porpoising" is most often near (within 100 m) the shore and is often followed by minor course changes; this may help seals get their bearings on beaching or rafting sites. "Low porpoising" is typically observed relatively far (more than 100 m) from shore and often aborted in favour of anti-predator movements; this may be a way for seals to maximize sub-surface vigilance and thereby reduce their vulnerability to sharks.
Some whales raise their (entire) body vertically out of the water in a behaviour known as "breaching".
Birds
Some semi-aquatic birds use terrestrial locomotion, surface swimming, underwater swimming and flying (e.g., ducks, swans). Diving birds also use diving locomotion (e.g., dippers, auks). Some birds (e.g., ratites) have lost the primary locomotion of flight. The largest of these, ostriches, when being pursued by a predator, have been known to reach speeds over , and can maintain a steady speed of , which makes the ostrich the world's fastest two-legged animal. Ostriches can also locomote by swimming. Penguins either waddle on their feet or slide on their bellies across the snow, a movement called tobogganing, which conserves energy while moving quickly. They also jump with both feet together if they want to move more quickly or cross steep or rocky terrain. To get onto land, penguins sometimes propel themselves upwards at great speed to leap out of the water.
Changes during the life-cycle
An animal's mode of locomotion may change considerably during its life-cycle. Barnacles are exclusively marine and tend to live in shallow and tidal waters. They have two nektonic (active swimming) larval stages, but as adults, they are sessile (non-motile) suspension feeders. Frequently, adults are found attached to moving objects such as whales and ships, and are thereby transported (passive locomotion) around the oceans.
Function
Animals locomote for a variety of reasons, such as to find food, a mate, a suitable microhabitat, or to escape predators.
Food procurement
Animals use locomotion in a wide variety of ways to procure food. Terrestrial methods include ambush predation, social predation and grazing. Aquatic methods include filter feeding, grazing, ram feeding, suction feeding, protrusion and pivot feeding. Other methods include parasitism and parasitoidism.
Quantifying body and limb movement
The study of animal locomotion is a branch of biology that investigates and quantifies how animals move. It is an application of kinematics, used to understand how the movements of animal limbs relate to the motion of the whole animal, for instance when walking or flying.
| Biology and health sciences | Ethology | null |
2403422 | https://en.wikipedia.org/wiki/Lightheadedness | Lightheadedness | Lightheadedness is a common and typically unpleasant sensation of dizziness or a feeling that one may faint. The sensation of lightheadedness can be short-lived, prolonged, or, rarely, recurring. In addition to dizziness, the individual may feel as though their head is weightless. The individual may also feel as though the room is "spinning" or moving (vertigo). Most causes of lightheadedness are not serious and either cure themselves quickly or are easily treated.
Keeping a sense of balance requires the brain to process a variety of information received from the eyes, the nervous system, and the inner ears. If the brain is unable to process these signals, such as when the messages are contradictory, or if the sensory systems are improperly functioning, an individual may experience lightheadedness or dizziness.
Lightheadedness is very similar to pre-syncope. Pre-syncope is the immediate stage before syncope (fainting), particularly in cases of temporary visual field loss (i.e. vision getting "dark" or "closing in").
Causes
Lightheadedness can be simply (and most commonly) an indication of a temporary shortage of blood or oxygen to the brain due to a drop in blood pressure, rapid dehydration from vomiting, diarrhea, or fever. Other causes are: altitude sickness, low blood sugar, hyperventilation, postural orthostatic tachycardia syndrome (increase in heart rate upon sitting up or standing), panic attacks, and anemia. It can also be a symptom of many other conditions, some of them serious, such as heart problems (including abnormal heart rhythm or heart attack), respiratory problems such as pulmonary hypertension or pulmonary embolism, and also stroke, bleeding, and shock. If any of these serious disorders are present, the individual will usually have additional symptoms such as chest pain, a feeling of a racing heart, loss of speech or a change in vision.
Many people, especially as they age, experience lightheadedness if they arise too quickly from a lying or seated position. Lightheadedness often accompanies the flu, hypoglycaemia, common cold, or allergies.
Dizziness could be provoked by the use of antihistamine drugs, like levocetirizine, or by some antibiotics or SSRIs. Nicotine or tobacco products can cause lightheadedness for inexperienced users. Narcotic drugs, such as codeine, can also cause lightheadedness.
Treatment
Treatment for lightheadedness depends on the cause or underlying problem. Treatment may include drinking plenty of water or other fluids (unless the lightheadedness is the result of water intoxication in which case drinking water is quite dangerous). If a patient is unable to keep fluids down from nausea or vomiting, they may need intravenous fluids such as Ringer's lactate solution. They should try eating something sugary and lying down or sitting and reducing the elevation of the head relative to the body (for example, by positioning the head between the knees).
Other simple remedies include avoiding sudden changes in posture when sitting or lying and avoiding bright lights.
Several essential electrolytes are excreted when the body perspires. When people experience unusual or extreme heat for a long time, sweating excessively can cause a lack of some electrolytes, which in turn can cause lightheadedness.
| Biology and health sciences | Symptoms and signs | Health |
2403771 | https://en.wikipedia.org/wiki/Oakmoss | Oakmoss | Oakmoss (scientific name Evernia prunastri) is a species of lichen. It can be found in many mountainous temperate forests throughout the Northern Hemisphere. Oakmoss grows primarily on the trunk and branches of oak trees, but is also commonly found on the bark of other deciduous trees and conifers such as fir and pine. The thalli of oakmoss are short (3–4 cm in length) and bushy, and grow together on bark to form large clumps. The thalli are flat and strap-like, and are highly branched, resembling the form of antlers. The colour of oakmoss ranges from green to a greenish-white when dry, and dark olive-green to yellow-green when wet. The texture of the thalli is rough when dry and rubbery when wet. It is used extensively in modern perfumery.
Commercial uses
Oakmoss is commercially harvested in countries of South-Central Europe and usually exported to the Grasse region of France where its fragrant compounds are extracted as oakmoss absolutes and extracts. These raw materials are often used as perfume fixatives and form the base notes of many fragrances. They are also key components of Fougère and Chypre class perfumes. The lichen has a distinct and complex odor that can be described as woody, sharp and slightly sweet. Oakmoss growing on pines has a pronounced turpentine odor that is valued in certain perfume compositions.
In parts of Central Italy, oakmoss has been used for biomonitoring the deposition of heavy metals at urban, rural, and industrial sites. Studies of bioaccumulation for zinc, lead, chromium, cadmium, and copper in lichen samples were performed five times at regular intervals between November 2000 and December 2001. As expected, the rural areas showed a smaller impact of those five heavy metals when compared to urban and industrial areas.
Health and safety information
Oakmoss should be avoided by people with known skin sensitization issues. Its use in perfumes is now highly restricted by International Fragrance Association regulations, and many scents have been reformulated in recent years with other chemicals substituted for oakmoss.
Conservation status
Evernia prunastri is listed as critically endangered (CR) in Iceland, where it is found in only one location. As of April 2021, it has not been evaluated by the IUCN.
| Biology and health sciences | Lichens | Plants |
4465266 | https://en.wikipedia.org/wiki/Hesperocyparis%20lusitanica | Hesperocyparis lusitanica | Hesperocyparis lusitanica, the Mexican cypress, cedar-of-Goa or Goa cedar, is a species of cypress native to Mexico and Central America (Guatemala, El Salvador and Honduras). It has also been introduced to Belize, Costa Rica and Nicaragua, growing at altitude.
The scientific name lusitanica (of Portugal) refers to its very early cultivation there, with plants imported from Mexico to the monastery at Buçaco, near Coimbra in Portugal in about 1634; these trees were already over 130 years old when the species was botanically described by Miller in 1768.
In Mexico, the tree is also known as cedro blanco (white cedar) or teotlate.
Description
Hesperocyparis lusitanica is an evergreen conifer tree with a conic to ovoid-conic crown, growing to 40 m tall. The foliage grows in dense sprays, dark green to somewhat yellow-green in colour. The leaves are scale-like, 2–5 mm long, and produced on rounded (not flattened) shoots. The seed cones are globose to oblong, 10–20 mm long, with four to 10 scales, green at first, maturing brown or grey-brown about 25 months after pollination.
The cones may either open at maturity to release the seeds, or remain closed for several years, only opening after the parent tree is killed in a wildfire, allowing the seeds to colonise the bare ground exposed by the fire. The male cones are 3–4 mm long, and release pollen in late winter to early spring (February–March in the northern hemisphere). Over most of its natural range, rainfall is heaviest in summer.
Taxonomy
Hesperocyparis lusitanica was given its first scientific name by the botanist Philip Miller, who named it Cupressus lusitanica in 1768, because he described it from collections made in Portugal. The species has a large number of synonyms, and the species Hesperocyparis benthamii has been treated as a variety or subspecies of H. lusitanica. In 2009 a paper was published moving this species and most of the New World Cupressus to the new genus Hesperocyparis. This is listed as the accepted species name, with no subspecies or varieties, by Plants of the World Online, World Flora Online, and the Gymnosperm Database.
Cultivation and uses
Fast-growing and drought tolerant, but only slightly frost tolerant, Hesperocyparis lusitanica has been introduced from Mexico to different parts of the world, such as New Zealand. It is widely cultivated, both as an ornamental tree and for timber production, in warm, temperate and subtropical regions around the world. Trees have not been selected for cultivation from northern Mexico populations, which are highly drought tolerant.
Locations
Its cultivation and subsequent naturalisation in parts of southern Asia has caused a degree of confusion with native Cupressus species in that region; plants sold by nurseries under the names of Asian species such as Cupressus torulosa often prove to be this species.
It has been planted widely for commercial production: at high altitudes in Colombia (), Bolivia, Ethiopia and South Africa, and near sea level throughout New Zealand. In Colombia trees are planted to form windbreak hedges and for preventing soil erosion on slopes. It has been planted by Tanzanian mountain farmers for soil preservation and commercial use since the 1990s.
It has been planted as an ornamental tree near sea level in temperate climates and has done very well in Portugal, Buenos Aires Province in Argentina; Austin, Texas and the British Isles where it can reach a height of 30 m (90 feet).
It is being planted in the Argentine province of San Luis at above sea level to create artificial forests in a land originally lacking them, in a climate very similar to that of its native habitat.
| Biology and health sciences | Cupressaceae | Plants |
4467052 | https://en.wikipedia.org/wiki/Bothriolepis | Bothriolepis | Bothriolepis (from , 'trench' and 'scale') was a widespread, abundant and diverse genus of antiarch placoderms that lived during the Middle to Late Devonian period of the Paleozoic Era.
Historically, Bothriolepis resided in an array of paleo-environments spread across every paleocontinent, including near-shore marine and freshwater settings. Most species of Bothriolepis were characterized as relatively small, benthic, freshwater detritivores (organisms that obtain nutrients by consuming decomposing plant/animal material), averaging around in length. However, the largest species, B. rex, had an estimated body length of . Although expansive, with over 60 species found worldwide, Bothriolepis is not unusually diverse compared with most modern bottom-dwelling species.
Classification
Bothriolepis is a genus placed within the placoderm order Antiarchi. The earliest antiarch placoderms first appeared in the Silurian period of the Paleozoic Era and could be found distributed on every paleocontinent by the Devonian period. The earliest members of Bothriolepis appear by the Middle Devonian.
Antiarchs, as well as other placoderms, are morphologically diverse and are characterized by bony plates that cover their head and the anterior part of the trunk. Early ontogenetic stages of placoderms had thinner bony plates in both the head and trunk shield, which allows early ontogenetic stages to be distinguished in the fossil record from taxa that possessed fully developed bony plates but were simply small-bodied. Placoderm bony plates were generally made up of three layers, including a compact basal lamellar bony layer, a middle spongy bony layer and a superficial layer; Bothriolepis can be classified as a placoderm since it possesses these layers.
Placoderms were extinct by the end of the Devonian. Placodermi is a paraphyletic group of the clade Gnathostomata, which includes all jawed vertebrates. It is unclear exactly when gnathostomes emerged, but the scant early fossil record indicates that it was sometime in the Early Palaeozoic era. The last species of Bothriolepis died out, together with the rest of Placodermi, at the end of the Devonian period.
General anatomy
Head
There are two openings through the head of Bothriolepis: a keyhole opening along the midline on the upper side for the eyes and nostrils and an opening for the mouth on the lower side near the anterior end of the head. A discovery regarding preserved structures that appear to be nasal capsules confirms the belief that the external nasal openings lay on the dorsal side of the head near the eyes. Additionally, the position of the mouth on the ventral side of the skull is consistent with the typical horizontal resting orientation of Bothriolepis. It had a special feature on its skull, a separate partition of bone below the opening for the eyes and nostrils enclosing the nasal capsules called a preorbital recess.
Jaw
A new sample from the Gogo Formation in the Canning Basin of Western Australia has provided evidence regarding the morphological features of the visceral jaw elements of Bothriolepis. Using the sample, it is evident that the mental plate (a dermal bone that forms the upper part of the jaw) of antiarchs is homologous with the suborbital plate found in other placoderms. The lower jawbone consists of a differentiated blade and biting portions. Next to the mandibular joint are the prelateral and infraprelateral plates, both of which are canal-bearing bones. The palatoquadrate lacks a high orbital process and was attached only to the ventral part of the mental plate, proving that the ethmoidal region of the braincase (the region of the skull that separates the brain and nasal cavity) was in fact deeper than originally believed. In addition to the above-listed sample from the Gogo Formation, several other specimens have been found with mouthparts held in the natural position by a membrane that covers the oral region and attaches to the lateral and anterior margins of the head. Bothriolepis has a jaw in which the two halves are separate and in the adult are functionally independent.
Trunk
Bothriolepis had a slender trunk that was likely covered in soft skin with no scales or markings. The orientation that appears to have been mostly stable for resting was the dorsal surface up, evidenced by the flat surface on the ventral side. The trunk's outline suggests that there may have been a notochord present surrounded by a membranous sheath, however, there is no direct evidence of this since the notochord is made up of soft tissue, which is not typically preserved in the fossil record. Similar to other antiarchs, the thoracic shield of Bothriolepis was attached to its heavily armored head. Its box-like body was enclosed in armor plates, providing protection from predators. Attached to the ventral surface of the trunk is a large, thin, circular plate marked by deep-lying lines and superficial ridges. This plate lies just below the opening to the cloaca.
Dermal skeleton
The dermal skeleton is organized in three layers: a superficial lamellar layer, a cancellous spongiosa, and a compact basal lamellar layer. Even in early ontogeny, these layers are apparent in specimen of Bothriolepis canadensis. The compact layers develop first. The superficial layer is speculated to have denticles that may have been made of cellular bone.
Fins and tail
Bothriolepis had a long pair of spine-like pectoral fins, jointed at the base, and again a little more than halfway along. These spike-like fins were probably used to lift the body clear off the bottom; its heavy armor would have made it sink quickly as soon as it lost forward momentum. It may also have used its pectoral fins to throw sediment (mud, sand or otherwise) over itself. In addition to the pectoral fins, it was originally considered to have two dorsal fins, but the existence of a low, elongated anterior dorsal fin was rejected in 1996, and it is now considered to have only a high, rounded posterior dorsal fin. The caudal tail was elongated, ending in a narrow band, but is unfortunately rarely preserved in fossils. Bothriolepis lacked pelvic fins. Early antiarchs like Parayunnanolepis had pelvic fins, which implies a secondary loss of pelvic fins in Bothriolepis.
Soft anatomy
Structures composed of soft tissue are typically not preserved in fossils because they break down easily and decompose much faster than hard tissues, meaning that the fossil record often lacks information regarding the internal anatomy of fossil species. Preservation of soft tissue structures can sometimes occur, however, if sediments fill the internal structures of an organism upon or after its death. Robert Denison's paper titled "The Soft Anatomy of Bothriolepis" explores the forms and organs of Bothriolepis. These internal structures were preserved when different types of sediment surrounding the exterior of the animal filled the internal carapace (only organs that communicate with the exterior could be preserved in this manner). Three different sediment types were identified within the different sections of Bothriolepis: the first, a pale greenish-gray medium-textured sandstone largely consisting of calcite; the second, a similar but finer sediment which preserves many of the organ forms; and the third, a distinct, fine-grained siltstone consisting of quartz, mica and other minerals but no calcite. These sediments helped preserve the following internal elements:
Alimentary system
In general, the alimentary system of Bothriolepis –which includes the organs involved in ingestion, digestion, and removal of waste– can be described as simple and straight, unlike that of humans. It begins at the anterior end of the organism with a small mouth cavity located over the posterior area of the upper jaw plates. Posteriorly from the mouth, the alimentary system extends into a wider and dorso-ventrally flattened region called the pharynx, from which both the gills and lungs arise. The esophagus, which is also characterized as a dorso-ventrally flattened tube, extends from the mouth into the stomach and leads to a flattened ellipsoidal structure. This structure may be homologous to the anterior end of the intestine found in other fish. The flatness of these structures may have been exaggerated when the fossil specimens experienced tectonic deformation through geologic time. The intestine begins narrowly on the anterior end, expands transversely, and then again narrows posteriorly towards the cylindrical rectum, which terminates just within the posterior end of the trunk carapace. While the alimentary system is primitive in nature and lacks an expanded stomach region, it is specialized by an independently acquired complex spiral valve, comparable to that in elasmobranchs and many bony fish and similar to that found in some sharks. A single fold of tissue rolled upon its own axis forms this specialized spiral valve.
Gills
It is inferred that the gills of Bothriolepis are of the primitive type, though their structure is still not well understood. Laterally, they are enclosed by an opercular fold and are found in the space beneath the lateral part of the head shield, extending medially underneath the neurocranium. Compared to the gills of normally-shaped fish, the gill region of Bothriolepis is considered to be placed more dorsally, is anteriorly more crowded, and in general is relatively short and broad.
Paired ventral sacs
Extending posteriorly from the trunk carapace are paired ventral sacs that extend to the anterior end of the spiral intestine. The sacs seem to originate at the pharynx as a single median tube, which then broadens posteriorly and eventually splits into two sacs that may be homologous to the lungs of certain dipnoans and tetrapods. It has been hypothesized that these lungs, coupled with the jointed arms and rigid, supportive skeleton, would have allowed Bothriolepis to travel on land. Additionally, as Robert Denison states because there is no evidence of a connection between the external naris and mouth, Bothriolepis likely breathed similarly to present-day lungfish, i.e., by placing the mouth above the water's surface and swallowing air.
Despite the original interpretation presented by Denison in 1941, not all paleontologists agree that placoderms like Bothriolepis actually possessed lungs. For example, in his paper "Lungs" in Placoderms, a Persistent Palaeobiological Myth Related to Environmental Preconceived Interpretations, D. Goujet suggests that although traces of some digestive organs may be apparent from the sedimentary structures, there is no evidence supporting the presence of lungs in the samples from the Escuminac formation of Canada upon which the original assertion was based. He notes that the worldwide distribution of Bothriolepis is restricted to strictly marine environments, and thus believes that the presence of lungs in Bothriolepis is uncertain. Further investigation of the fossils is likely necessary to reach a conclusion about the presence of lungs in Bothriolepis.
Feeding
Bothriolepis, as with all other antiarchs, are thought to have fed by directly swallowing mouthfuls of mud and other soft sediments in order to digest detritus, small or microorganisms, algae, and other forms of organic matter in the swallowed sediments. Additionally, the positioning of the mouth on the ventral side of its head further suggests that Bothriolepis was likely a bottom-feeder. The regular presence of "carbonaceous material in the alimentary tract" is believed to indicate that most of its diet consisted of plant material.
Distribution
Bothriolepis fossils are found in Middle and Late Devonian strata (from 387 to 360 million years ago). Because the fossils are found in freshwater sediments, Bothriolepis is presumed to have spent most of its life in freshwater rivers and lakes, but was probably able to enter salt water as well, because its range appeared to have corresponded with the Devonian continental coastlines. Large groupings of Bothriolepis specimens have been found in Asia, Europe, Australia (Gogo Formation and Mandagery Sandstone), Africa (Waterloo Farm lagerstätte), Pennsylvania (Catskill Formation), Quebec (Escuminac Formation), Virginia (Chemung), Colorado, the Cuche Formation (Boyacá, Colombia), and all around the world.
Catskill Formation site
The Catskill Formation (Upper Devonian, Famennian Stage), located in Tioga County, Pennsylvania, is the site of a large sample of small individuals of Bothriolepis. The sample was collected from a series of rock slabs that consisted of partial or complete, articulated, external skeletons. More than two hundred individuals were found packed closely together with little to no overlap. From this sample, much information regarding characteristics of juvenile Bothriolepis can be determined. A morphometric study performed by Jason Downs and co-authors highlights certain characteristics that indicate juvenility in Bothriolepis, including a moderately large head and moderately large orbital fenestra—both of which are characteristics also recognized by Erik Stensio in 1948 in the smallest B. canadensis individuals. Several other features that Stensio marked indicative of young individuals can also be seen exhibited in the Catskill sample. These features include "delicate dermal bones with ornament consisting of continuous anastomosing ridges rather than tubercles, a dorsal trunk shield narrower than long and with a continuous and pronounced dorsal median ridge, and a pre-median plate that is wider than it is long". B. nitida and B. minor are also described from this site.
Species
Vertebrate paleontology is heavily dependent on the ability to differentiate between different species in a way that is consistent both within a particular genus and across all organisms. The genus Bothriolepis is no exception to this principle. Listed below are a few of the notable species within Bothriolepis; more than sixty species have been named in total, and it is likely that a sizeable proportion of them are valid due to the cosmopolitan nature of Bothriolepis.
Bothriolepis canadensis
Bothriolepis canadensis is a taxon that often serves as a model organism for the order Antiarchi because of its enormous sample of complete, intact specimens found at the Escuminac Formation in Quebec, Canada. Because of the vast sample size, this species is often used to compare growth data of newly acquired specimens of Bothriolepis, including those found in the Catskill Formation mentioned above. This comparison allows researchers to determine if newly found samples represent juvenile individuals or new "Bothriolepis" species.
B. canadensis was first described in 1880 by J.F. Whiteaves, using a limited number of disfigured samples. The next to propose a reconstruction of the species was W. Patten, who published his findings in 1904 after a discovery of several specimens that were well preserved in 3-D. In 1948, E. Stensio released a detailed depiction of B. canadensis anatomy using an abundance of material, which eventually became the most widely accepted description of this species. Since Stensio's publication, many others have provided reconstructed models of B. canadensis with modified aspects of the anatomy, including Vezina's modified single dorsal fin and more recently, reconstructions by Arsenault et al from specimens with little taphonomic distortion. Presently, the model of Arsenault et al. is regarded to be the most accurate, while there is still much debate about various aspects of this species' external anatomy. Despite the uncertainty, B. canadensis is still classically considered one of the most well-known species.
The external skeleton of Bothriolepis canadensis is made of cellular dermal bone tissue and is characterized by distinct horizontal zonation or stratification. The model fish has an average total length of and an average dermal armor length of , which accounts for 35.6% of the estimated total length. Like many antiarchs, B. canadensis also had narrow pectoral fins, a heterocercal caudal fin (meaning the notochord extends into the upper lobe of the caudal tail) and a large dorsal fin which likely didn't play an important role in propulsion but instead acted more as a stabilizer.
Bothriolepis africana
Bothriolepis africana is the Bothriolepis species known from the highest paleolatitude, being described from deposits originally laid down within the Late Devonian Antarctic circle. Remains have exclusively been recovered from a single carbonaceous shale near the top of the Late Devonian, Famennian, Witpoort Formation (Witteberg Group) exposed in a road cutting south of Makhanda/Grahamstown in South Africa. This site, the Waterloo Farm lagerstätte is interpreted as representing a back barrier coastal lagoonal setting with both marine and fluvial influences. Gess observed that Bothriolepis was less abundant at the Waterloo Farm site than at most Bothriolepis-bearing localities, though a full ontogenetic series is represented. The head and trunk armour lengths ranged between which translates, based on the proportions of two of the smallest individuals (in which tail impressions are preserved) into full body lengths varying between . According to original description, Bothriolepis africana was considered to be most closely similar to Bothriolepis barretti from the late Givetian of Antarctica. The similarities between the two have been used to suggest derivation of Bothriolepis africana from an East Gondwanan environment.
Bothriolepis coloradensis
First described by Eastman in 1904, this species was found localized in present-day Colorado. There is a possibility that this species is similar, if not identical, to B. nitida, however because the material available regarding B. coloradensis is fragmented, it is impossible to compare the two species with any degree of certainty.
Bothriolepis nitida
This species, found in present-day Pennsylvania, was originally described by J. Leidy in 1856. As mentioned above, there is much debate regarding the distinguishability between B. nitida and B. virginiensis; however, based on evidence presented by Weems (2004), there are several distinguishable traits specific to each species. B. nitida has a maximum headshield length of , a narrow and shallow trifid preorbital recess, an anterior-median-dorsal (AMD) plate that is wider than it is long, and a ventral thoracic shield with convex lateral borders.
Bothriolepis rex
Originally described by Downs et al. (2016), Bothriolepis rex is from the Nordstrand Point Formation of Ellesmere Island, Canada. B. rex's body length is estimated at and is, therefore, the largest known species of Bothriolepis. Its armor is especially thick and dense even when taking its size into account. Downs et al. (2016) suggest that this may have both protected the animal from large predators and served as ballast to prevent this large bottom-dweller from floating to the surface.
Bothriolepis virginiensis
Originally described by Weems et al. in 1981, this species, Bothriolepis virginiensis, is from the "Chemung", near Winchester, Virginia. Several traits found in B. virginiensis can also be found in other species of Bothriolepis, (especially B. nitida), including posterior oblique cephalic sensory line grooves that meet relatively far anteriorly on the nuchal plate, relatively elongated orbital fenestra and a low anterior-median-dorsal crest. Characteristics that distinguish B. virginiensis from other species include but are not limited to fused head sutures, fused elements in adult distal pectoral fin segments and long premedian plate relative to headshield length.
Currently, there is much debate regarding whether the species B. virginiensis and B. nitida can actually be distinguished from one another. Thomson and Thomas state that five species of Bothriolepis from the United States (B. nitida, B. minor, B. virginiensis, B. darbiensis and B. coloradensis) are unable to be consistently distinguished from one another. Conversely, Weems asserts that there are several traits that distinguish the species from one another, including several listed above.
Bothriolepis yeungae
This species was described in 1998 from the Mandagery Sandstone in Canowindra, a site known for the high numbers of placoderm specimens gathered in one place. Bothriolepis is one of the most common fish at the Canowindra site alongside Remigolepis; over 1,300 individuals had been discovered by 1998. This species differs from all other species in having a reduced anterior process of the submarginal, separated from the posterior process of the submarginal by a wide, open notch. The head and trunk armour lengths ranged between .
| Biology and health sciences | Prehistoric fish | Animals |
10005756 | https://en.wikipedia.org/wiki/Sample%20mean%20and%20covariance | Sample mean and covariance | The sample mean (sample average) or empirical mean (empirical average), and the sample covariance or empirical covariance are statistics computed from a sample of data on one or more random variables.
The sample mean is the average value (or mean value) of a sample of numbers taken from a larger population of numbers, where "population" indicates not number of people but the entirety of relevant data, whether collected or not. A sample of 40 companies' sales from the Fortune 500 might be used for convenience instead of looking at the population, all 500 companies' sales. The sample mean is used as an estimator for the population mean, the average value in the entire population, where the estimate is more likely to be close to the population mean if the sample is large and representative. The reliability of the sample mean is estimated using the standard error, which in turn is calculated using the variance of the sample. If the sample is random, the standard error falls with the size of the sample and the sample mean's distribution approaches the normal distribution as the sample size increases.
The term "sample mean" can also be used to refer to a vector of average values when the statistician is looking at the values of several variables in the sample, e.g. the sales, profits, and employees of a sample of Fortune 500 companies. In this case, there is not just a sample variance for each variable but a sample variance-covariance matrix (or simply covariance matrix) showing also the relationship between each pair of variables. This would be a 3×3 matrix when 3 variables are being considered. The sample covariance is useful in judging the reliability of the sample means as estimators and is also useful as an estimate of the population covariance matrix.
Due to their ease of calculation and other desirable characteristics, the sample mean and sample covariance are widely used in statistics to represent the location and dispersion of the distribution of values in the sample, and to estimate the values for the population.
Definition of the sample mean
The sample mean is the average of the values of a variable in a sample, which is the sum of those values divided by the number of values. Using mathematical notation, if a sample of N observations on variable X is taken from the population, the sample mean is:
$$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i.$$
Under this definition, if the sample (1, 4, 1) is taken from the population (1,1,3,4,0,2,1,0), then the sample mean is $(1+4+1)/3 = 2$, as compared to the population mean of $(1+1+3+4+0+2+1+0)/8 = 1.5$. Even if a sample is random, it is rarely perfectly representative, and other samples would have other sample means even if the samples were all from the same population. The sample (2, 1, 0), for example, would have a sample mean of 1.
If the statistician is interested in K variables rather than one, each observation having a value for each of those K variables, the overall sample mean consists of K sample means for individual variables. Let $x_{ij}$ be the ith independently drawn observation (i=1,...,N) on the jth random variable (j=1,...,K). These observations can be arranged into N
column vectors, each with K entries, with the K×1 column vector giving the i-th observations of all variables being denoted $\mathbf{x}_i$ (i=1,...,N).
The sample mean vector $\bar{\mathbf{x}}$ is a column vector whose j-th element $\bar{x}_j$ is the average value of the N observations of the jth variable:
$$\bar{x}_j = \frac{1}{N}\sum_{i=1}^{N} x_{ij}, \quad j = 1, \ldots, K.$$
Thus, the sample mean vector contains the average of the observations for each variable, and is written
$$\bar{\mathbf{x}} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{x}_i = \begin{bmatrix}\bar{x}_1 \\ \vdots \\ \bar{x}_K\end{bmatrix}.$$
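The following is a minimal NumPy sketch of the sample mean vector just defined; the data values and array shapes are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Hypothetical data: N = 4 observations of K = 3 variables, one observation per row.
X = np.array([[2.0, 10.0, 0.5],
              [1.0, 12.0, 0.7],
              [4.0,  9.0, 0.4],
              [1.0, 11.0, 0.6]])

# Sample mean vector: average each variable (column) over the N observations.
x_bar = X.mean(axis=0)
print(x_bar)  # one average per variable, here [2.0, 10.5, 0.55]
```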
Definition of sample covariance
The sample covariance matrix is a K-by-K matrix $\mathbf{Q} = [q_{jk}]$ with entries
$$q_{jk} = \frac{1}{N-1}\sum_{i=1}^{N}\left(x_{ij} - \bar{x}_j\right)\left(x_{ik} - \bar{x}_k\right),$$
where $q_{jk}$ is an estimate of the covariance between the jth
variable and the kth variable of the population underlying the data.
In terms of the observation vectors, the sample covariance is
$$\mathbf{Q} = \frac{1}{N-1}\sum_{i=1}^{N}\left(\mathbf{x}_i - \bar{\mathbf{x}}\right)\left(\mathbf{x}_i - \bar{\mathbf{x}}\right)^{\mathrm{T}}.$$
Alternatively, arranging the observation vectors as the columns of a matrix, so that
$$\mathbf{F} = \begin{bmatrix}\mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_N\end{bmatrix},$$
which is a matrix of K rows and N columns.
Here, the sample covariance matrix can be computed as
$$\mathbf{Q} = \frac{1}{N-1}\left(\mathbf{F} - \bar{\mathbf{x}}\,\mathbf{1}_N^{\mathrm{T}}\right)\left(\mathbf{F} - \bar{\mathbf{x}}\,\mathbf{1}_N^{\mathrm{T}}\right)^{\mathrm{T}},$$
where $\mathbf{1}_N$ is an N by 1 vector of ones.
If the observations are arranged as rows instead of columns, so that $\bar{\mathbf{x}}$ is now a 1×K row vector and $\mathbf{M} = \mathbf{F}^{\mathrm{T}}$ is an N×K matrix whose column j is the vector of N observations on variable j, then applying transposes
in the appropriate places yields
$$\mathbf{Q} = \frac{1}{N-1}\left(\mathbf{M} - \mathbf{1}_N\bar{\mathbf{x}}\right)^{\mathrm{T}}\left(\mathbf{M} - \mathbf{1}_N\bar{\mathbf{x}}\right).$$
Like covariance matrices for random vectors, sample covariance matrices are positive semi-definite. To prove it, note that for any matrix $\mathbf{A}$ the matrix $\mathbf{A}^{\mathrm{T}}\mathbf{A}$ is positive semi-definite. Furthermore, a covariance matrix is positive definite if and only if the rank of the vectors $\mathbf{x}_i - \bar{\mathbf{x}}$ is K.
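A short NumPy sketch of the covariance computation described above; the data are again invented for illustration, and np.cov is used only as a cross-check of the hand-rolled matrix form.

```python
import numpy as np

# Invented data: N = 4 observations (rows) of K = 3 variables (columns).
X = np.array([[2.0, 10.0, 0.5],
              [1.0, 12.0, 0.7],
              [4.0,  9.0, 0.4],
              [1.0, 11.0, 0.6]])
N, K = X.shape

x_bar = X.mean(axis=0)
D = X - x_bar                 # deviations of each observation from the sample mean

# Matrix form of the sample covariance with the N - 1 denominator (unbiased estimate).
Q = D.T @ D / (N - 1)

# Cross-check against NumPy: rowvar=False treats columns as variables,
# and ddof=1 selects the same N - 1 denominator.
assert np.allclose(Q, np.cov(X, rowvar=False, ddof=1))
print(Q)
```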
Unbiasedness
The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector $\mathbf{X}$, a row vector whose jth element (j = 1, ..., K) is one of the random variables. The sample covariance matrix has $N-1$ in the denominator rather than $N$ due to a variant of Bessel's correction: In short, the sample covariance relies on the difference between each observation and the sample mean, but the sample mean is slightly correlated with each observation since it is defined in terms of all observations. If the population mean $\operatorname{E}(\mathbf{X})$ is known, the analogous unbiased estimate
$$\mathbf{Q} = \frac{1}{N}\sum_{i=1}^{N}\left(\mathbf{x}_i - \operatorname{E}(\mathbf{X})\right)\left(\mathbf{x}_i - \operatorname{E}(\mathbf{X})\right)^{\mathrm{T}},$$
using the population mean, has $N$ in the denominator. This is an example of why in probability and statistics it is essential to distinguish between random variables (upper case letters) and realizations of the random variables (lower case letters).
The maximum likelihood estimate of the covariance
$$\mathbf{Q} = \frac{1}{N}\sum_{i=1}^{N}\left(\mathbf{x}_i - \bar{\mathbf{x}}\right)\left(\mathbf{x}_i - \bar{\mathbf{x}}\right)^{\mathrm{T}}$$
for the Gaussian distribution case has N in the denominator as well. The ratio of 1/N to 1/(N − 1) approaches 1 for large N, so the maximum likelihood estimate approximately equals the unbiased estimate when the sample is large.
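A brief NumPy sketch of the two denominators discussed above, on synthetic data; the sample size and random seed are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))          # synthetic sample: N = 10 observations, K = 3 variables
N = X.shape[0]

Q_unbiased = np.cov(X, rowvar=False, ddof=1)   # N - 1 denominator (unbiased estimate)
Q_ml       = np.cov(X, rowvar=False, ddof=0)   # N denominator (maximum likelihood estimate)

# The two estimates differ only by the factor N / (N - 1), which tends to 1 as N grows.
assert np.allclose(Q_ml * N / (N - 1), Q_unbiased)
```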
Distribution of the sample mean
For each random variable, the sample mean is a good estimator of the population mean, where a "good" estimator is defined as being efficient and unbiased. Of course the estimator will likely not be the true value of the population mean since different samples drawn from the same distribution will give different sample means and hence different estimates of the true mean. Thus the sample mean is a random variable, not a constant, and consequently has its own distribution. For a random sample of N observations on the jth random variable, the sample mean's distribution itself has mean equal to the population mean $\mu_j$ and variance equal to $\sigma_j^2/N$, where $\sigma_j^2$ is the population variance.
The arithmetic mean of a population, or population mean, is often denoted μ. The sample mean $\bar{x}$ (the arithmetic mean of a sample of values drawn from the population) makes a good estimator of the population mean, as its expected value is equal to the population mean (that is, it is an unbiased estimator). The sample mean is a random variable, not a constant, since its calculated value will randomly differ depending on which members of the population are sampled, and consequently it will have its own distribution. For a random sample of n independent observations, the expected value of the sample mean is
$$\operatorname{E}(\bar{x}) = \mu$$
and the variance of the sample mean is
$$\operatorname{var}(\bar{x}) = \frac{\sigma^2}{n}.$$
If the samples are not independent, but correlated, then special care has to be taken in order to avoid the problem of pseudoreplication.
If the population is normally distributed, then the sample mean is normally distributed as follows:
$$\bar{x} \sim N\!\left(\mu, \frac{\sigma^2}{n}\right).$$
If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed if n is large and $\sigma^2/n < +\infty$. This is a consequence of the central limit theorem.
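The sampling behaviour described above can be checked with a quick simulation; this is a rough NumPy sketch, with the population parameters, sample size and number of repetitions chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 5.0, 2.0, 100          # assumed population mean, standard deviation, sample size

# Draw many independent samples of size n and record each sample mean.
means = rng.normal(mu, sigma, size=(20_000, n)).mean(axis=1)

print(means.mean())                   # close to mu: the sample mean is unbiased
print(means.std(ddof=1))              # close to sigma / sqrt(n), the standard error
print(sigma / np.sqrt(n))             # theoretical value, 0.2 here
```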
Weighted samples
In a weighted sample, each vector $\mathbf{x}_i$ (each set of single observations on each of the K random variables) is assigned a weight $w_i \geq 0$. Without loss of generality, assume that the weights are normalized:
$$\sum_{i=1}^{N} w_i = 1.$$
(If they are not, divide the weights by their sum.)
Then the weighted mean vector $\bar{\mathbf{x}}$ is given by
$$\bar{\mathbf{x}} = \sum_{i=1}^{N} w_i \mathbf{x}_i,$$
and the elements $q_{jk}$ of the weighted covariance matrix are
$$q_{jk} = \sum_{i=1}^{N} w_i \left(x_{ij} - \bar{x}_j\right)\left(x_{ik} - \bar{x}_k\right).$$
If all weights are the same, $w_i = 1/N$, the weighted mean and covariance reduce to the (biased) sample mean and covariance mentioned above.
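A minimal NumPy sketch of the weighted mean and weighted covariance as written above (with no bias correction); the data and weights are invented, and the final check confirms the stated reduction to the biased (1/N) estimate when all weights are equal.

```python
import numpy as np

# Invented data: N = 4 observations of K = 2 variables, plus arbitrary positive weights.
X = np.array([[2.0, 10.0],
              [1.0, 12.0],
              [4.0,  9.0],
              [1.0, 11.0]])
w = np.array([4.0, 1.0, 2.0, 1.0])
w = w / w.sum()                        # normalize the weights so they sum to 1

x_bar_w = w @ X                        # weighted mean vector
D = X - x_bar_w
Q_w = D.T @ (w[:, None] * D)           # weighted covariance matrix, no bias correction

# With equal weights w_i = 1/N this reduces to the biased (1/N) sample covariance.
w_eq = np.full(len(X), 1.0 / len(X))
D0 = X - X.mean(axis=0)
assert np.allclose(D0.T @ (w_eq[:, None] * D0), np.cov(X, rowvar=False, ddof=0))
```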
Criticism
The sample mean and sample covariance are not robust statistics, meaning that they are sensitive to outliers. As robustness is often a desired trait, particularly in real-world applications, robust alternatives may prove desirable, notably quantile-based statistics such as the sample median for location, and interquartile range (IQR) for dispersion. Other alternatives include trimming and Winsorising, as in the trimmed mean and the Winsorized mean.
| Mathematics | Statistics | null |
10006190 | https://en.wikipedia.org/wiki/Bus%20garage | Bus garage | A bus garage, also known as a bus depot, bus base or bus barn, is a facility where buses are stored and maintained. In many conurbations, bus garages are on the site of former car barns or tram sheds, where trams (streetcars) were stored, and the operation transferred to buses. In other areas, garages were built to replace horsebus yards or on virgin sites when populations were not as high as now.
Description
Most bus garages will contain the following elements:
Internal parking
External parking
Fueling point
Fuel storage tanks
Engineering section
Inspection pits
Bus wash
Brake test lane
Staff canteen/break room
Administration office
Smaller garages may contain the minimum engineering facilities, restricted to light servicing capabilities only. Garages may also contain recovery vehicles, often converted buses, although their incidence has declined with the use of contractors to recover break-downs, and the increase in reliability.
Overnight, the more valuable or regularly in-service buses will usually be stored in the interior of the garage, with less used or older service vehicles, and vehicles withdrawn for storage or awaiting disposal, stored externally. During the day, internal and external areas will see a variety of movements. Heritage vehicles are almost exclusively stored inside the garage.
Often garages will feature rest rooms for drivers assigned to 'as required' duties, whereby they may be required to drive relief or replacement buses in the event of breakdown. The garage may also have 'light duties' drivers, who merely move the buses internally around the garage, often called shunting. Shunter or light duty drivers are often employed in larger depot facilities and work night shifts in order to position buses in the correct order for morning departures, with the first buses due to leave the depot parked in logical order nearest the exit. Because they are driving on privately owned land, in many jurisdictions a full bus licence may not be required to perform such tasks. In addition, they may also perform other tasks such as cleaning buses, refuelling and light maintenance.
United Kingdom
Several bus companies such as London Buses and Lothian Buses used to operate multiple storage garages around their operating area, supplemented by a central works facility. Central works have declined with increase in sub-contract engineering, and improvements in mechanical reliability of bus designs. Also, the practice of routine mid-life refurbishment of bus fleets has declined, which has resulted in generally shorter service lives.
Bus garages will generally have large areas unobstructed by supporting columns as well as high roofs, especially for storage of double-decker buses. Recently in London, the transfer of routes from double-decker operation to articulated buses has caused problems at some garages that were found to be too small to accommodate all the replacement buses, requiring splitting of allocations, or the building of new garages.
Some bus companies in the UK make use of outstations (or out-stations) as an additional bus storage facility. These are generally outdoor parking locations, where buses are stored overnight or between peaks, which are more conveniently located for operations, reducing dead mileage. There does not appear to be a universal definition of an outstation, but it seems agreed that there are no maintenance facilities at a bus outstation.
Largest
The largest bus depot in the world is the Millennium Park Bus Depot in Delhi, India, built for the Commonwealth Games in 2010.
| Technology | Concepts of ground transport | null |
1722009 | https://en.wikipedia.org/wiki/Paranthropus%20aethiopicus | Paranthropus aethiopicus | Paranthropus aethiopicus is an extinct species of robust australopithecine from the Late Pliocene to Early Pleistocene of East Africa about 2.7–2.3 million years ago. However, it is much debated whether or not Paranthropus is an invalid grouping and is synonymous with Australopithecus, so the species is also often classified as Australopithecus aethiopicus. Whatever the case, it is considered to have been the ancestor of the much more robust P. boisei. It is debated if P. aethiopicus should be subsumed under P. boisei, and the terms P. boisei sensu lato ("in the broad sense") and P. boisei sensu stricto ("in the strict sense") can be used to respectively include and exclude P. aethiopicus from P. boisei.
Like other Paranthropus, P. aethiopicus had a tall face, thick palate, and especially enlarged cheek teeth. However, likely due to its archaicness, it also diverges from other Paranthropus, with some aspects resembling the much earlier A. afarensis. P. aethiopicus is known primarily by the skull KNM WT 17000 from West Lake Turkana, Kenya, as well as some jawbones from Koobi Fora; the Shungura Formation, Ethiopia; and Laetoli, Tanzania. These locations featured bushland to open woodland landscapes with edaphic (water-logged) grasslands.
Taxonomy
Research history
In 1968, French palaeontologist Camille Arambourg and Breton anthropologist Yves Coppens described "Paraustralopithecus aethiopicus" based on a toothless mandible (Omo 18) from the Shungura Formation, Ethiopia. The name aethiopicus refers to Ethiopia. In 1976, American anthropologist Francis Clark Howell and Coppens reclassified it as A. africanus.
In 1985, the skull KNM WT 17000 dating to 2.5 million years ago was reported from Koobi Fora, Lake Turkana, Kenya, by anthropologists Alan Walker and Richard Leakey. A partial jawbone from a different individual, KNM-WT 16005, was also discovered. They clearly belonged to a robust australopithecine. By this point in time, much younger robust australopithecines had been reported from South Africa (robustus) and East Africa (boisei), and been variously assigned to either Australopithecus or a unique genus Paranthropus. Walker and Leakey assigned KNM WT 17000 to the boisei clade. They noted several anatomical differences, but were unsure if this stemmed from the specimens' archaicness or represented the normal range of variation for the species. If the former, they recommended classifying them and similar specimens into a different species, aethiopicus (and recommended that Paraustralopithecus be invalid). The discovery of these archaic specimens overturned previous postulations that P. robustus was the ancestor of the much more robust P. boisei (a hypothesis notably argued by palaeoanthropologist in 1985) by establishing the boisei lineage as beginning long before robustus had existed.
In 1989, palaeoartist Walter Ferguson recommended KNM WT 17000 be classified into a different species, walkeri, because the holotype of aethiopicus comprised only the jawbone and KNM WT 17000 preserves no jaw elements. Ferguson's classification is almost universally ignored, and walkeri is considered to be synonymous with P. aethiopicus.
Several more lower and upper jaw specimens have been unearthed in the Shungura Formation, including a juvenile specimen, L338y-6. In 2002, a 2.7–2.5 Ma maxilla, EP 1500, from Laetoli, Tanzania, was assigned to P. aethiopicus. Also found was the upper portion of a tibia, but it cannot definitively be associated with EP 1500 and thus with P. aethiopicus.
Classification
The genus Paranthropus (from Ancient Greek παρα para beside or alongside, and άνθρωπος ánthropos man, otherwise known as "robust australopithecines") typically includes P. aethiopicus, P. boisei, and P. robustus. P. aethiopicus is the earliest member of the genus, with the oldest remains, from the Ethiopian Omo Kibish Formation, dated to 2.6 million years ago (mya) at the end of the Pliocene. It is possible that P. aethiopicus evolved even earlier, up to 3.3 mya, on the expansive Kenyan floodplains of the time. P. aethiopicus is only confidently identified from the skull KNM WT 17000 and a few jaws and isolated teeth, and is generally considered to have been ancestral to P. boisei which also inhabited East Africa, making it a chronospecies. Because of this relationship, it is debatable if P. aethiopicus should be subsumed under P. boisei or if the differences stemming from archaicness should justify species distinction. The terms P. boisei sensu lato ("in the broad sense") and P. boisei sensu stricto ("in the strict sense") can be used to respectively include and exclude P. aethiopicus from P. boisei when discussing the lineage as a whole.
It is also debated if Paranthropus is a valid natural grouping (monophyletic) or an invalid grouping of similar-looking hominins (paraphyletic). Because skeletal elements are so limited in these species, their affinities with each other and to other australopithecines are difficult to gauge with accuracy. The jaws are the main argument for monophyly, but such anatomy is strongly influenced by diet and environment, and could in all likelihood have evolved independently in P. boisei and P. robustus. Proponents of monophyly consider P. aethiopicus to be ancestral to the other two species, or closely related to the ancestor. Proponents of paraphyly allocate these three species to the genus Australopithecus as A. boisei, A. aethiopicus, and A. robustus. British geologist Bernard Wood and American palaeoanthropologist William Kimbel are major proponents of monophyly, while proponents of the opposing view include Walker.
This species, originally named Paraustralopithecus aethiopicus, cannot retain the species epithet aethiopicus if moved to genus Australopithecus because Australopithecus aethiopicus is already a junior synonym of Australopithecus afarensis. Such a classification would have to use the name Australopithecus walkeri for this species. The change of species epithet would also happen in a taxonomy that classifies all hominins as Homo.
Description
Typical of Paranthropus, KNM WT 17000 is heavily built, and the palate and base of the skull are about the same size as the P. boisei holotype OH 5. The brain volume of KNM WT 17000 was estimated to have been , which is smaller than that of other Paranthropus. The combination of a tall face, thick palate, and small braincase caused a highly defined sagittal crest on the midline of the skull. The only complete tooth crown of the specimen is the right third premolar, whose dimensions are well above the range of variation for P. robustus and on the upper end for P. boisei. Unlike other Paranthropus, KNM WT 17000 did not have a flat face, and the jaw jutted out (prognathism). In regard to the temporal bone, KNM WT 17000 differs from other Paranthropus in that the squamous part of temporal bone is extensively pneumaticised, the tympanic part of the temporal bone is not as vertically orientated, the base of the skull is weakly flexed, the postglenoid process is completely anterior to (in front of) the tympanic, the tympanic is somewhat tubular, and the articular tubercle is weak. Like P. boisei, the foramen magnum where the skull connects to the spine is heart-shaped. The temporalis muscle was probably not directed as forward as it was in P. boisei, meaning the P. aethiopicus jaw likely processed food with the incisors before using the cheek teeth. The incisors of P. boisei are thought to have not been involved in processing food. The long distance between the first molar and the jaw hinge would suggest KNM WT 17000 had an exceptionally long ramus of the mandible (connecting the lower jaw to the skull), though the hinge's location indicates the ramus would not have been particularly deep (it would have been weaker). This may have produced a less effective bite compared to P. boisei.
KNM-WT 16005 is quite similar to the Peninj Mandible assigned to P. boisei, exhibiting postcanine megadontia with relatively small incisors and canines (based on the tooth roots) and large cheek teeth. Nonetheless, the incisors were likely much broader in KNM-WT 16005. KNM-WT 16005 preserved four cheek teeth on the left side: the third premolar measuring , the fourth premolar measuring , the first molar measuring , and the second molar measuring . The fourth premolar and first molar are a little smaller than those of the Peninj mandible, and the second molar a bit bigger. The KNM-WT 16005 jawbone is smaller than what KNM WT 17000 would have had.
Many of these P. aethiopicus features are shared with the early A. afarensis, further reiterating the species' archaicness.
Palaeoecology
In general, Paranthropus are thought to have been generalist feeders, with the heavily built skull becoming important when chewing less desirable, lower quality foods in times of famine. Unlike P. boisei which generally is found in the context of closed, wet environments, P. aethiopicus seems to have inhabited bushland to open woodland habitats around edaphic (water-logged) grasslands. Around 2.5 million years ago, at the Pliocene/Pleistocene border, the Omo–Turkana Basin featured a mix of forests, woodlands, grasslands, and bushlands, though grasslands appear to have been expanding through the Early Pleistocene. Homo seems to have entered the region 2.5–2.4 million years ago.
| Biology and health sciences | Australopithecines | Biology |
1722616 | https://en.wikipedia.org/wiki/Physical%20object | Physical object | In natural language and physical science, a physical object or material object (or simply an object or body) is a contiguous collection of matter, within a defined boundary (or surface), that exists in space and time. It is usually contrasted with abstract objects and mental objects.
Also in common usage, an object is not constrained to consist of the same collection of matter. Atoms or parts of an object may change over time. An object is usually meant to be defined by the simplest representation of the boundary consistent with the observations. However the laws of physics only apply directly to objects that consist of the same collection of matter.
In physics, an object is an identifiable collection of matter, which may be constrained by an identifiable boundary, and may move as a unit by translation or rotation, in 3-dimensional space.
Each object has a unique identity, independent of any other properties. Two objects may be identical, in all properties except position, but still remain distinguishable. In most cases the boundaries of two objects may not overlap at any point in time. The property of identity allows objects to be counted.
Examples of models of physical bodies include, but are not limited to, a particle and several interacting smaller bodies (particulate or otherwise). Discrete objects stand in contrast to continuous media.
The common conception of physical objects includes that they have extension in the physical world, although there do exist theories of quantum physics and cosmology which arguably challenge this. In modern physics, "extension" is understood in terms of the spacetime: roughly speaking, it means that for a given moment of time the body has some location in the space (although not necessarily amounting to the abstraction of a point in space and time). A physical body as a whole is assumed to have such quantitative properties as mass, momentum, electric charge, other conserved quantities, and possibly other quantities.
An object with known composition and described in an adequate physical theory is an example of physical system.
In common usage
An object is known by the application of senses. The properties of an object are inferred by learning and reasoning based on the information perceived. Abstractly, an object is a construction of our mind consistent with the information provided by our senses, using Occam's razor.
In common usage an object is the material inside the boundary of an object, in three-dimensional space. The boundary of an object is a contiguous surface which may be used to determine what is inside, and what is outside an object. An object is a single piece of material, whose extent is determined by a description based on the properties of the material. An imaginary sphere of granite within a larger block of granite would not be considered an identifiable object, in common usage. A fossilized skull encased in a rock may be considered an object because it is possible to determine the extent of the skull based on the properties of the material.
For a rigid body, the boundary of an object may change over time by continuous translation and rotation. For a deformable body the boundary may also be continuously deformed over time in other ways.
An object has an identity. In general two objects with identical properties, other than position at an instance in time, may be distinguished as two objects and may not occupy the same space at the same time (excluding component objects). An object's identity may be tracked using the continuity of the change in its boundary over time. The identity of objects allows objects to be arranged in sets and counted.
The material in an object may change over time. For example, a rock may wear away or have pieces broken off it. The object will be regarded as the same object after the addition or removal of material, if the system may be more simply described with the continued existence of the object, than in any other way. The addition or removal of material may discontinuously change the boundary of the object. The continuation of the object's identity is then based on the description of the system by continued identity being simpler than without continued identity.
For example, a particular car might have all its wheels changed, and still be regarded as the same car.
The identity of an object may not split. If an object is broken into two pieces at most one of the pieces has the same identity. An object's identity may also be destroyed if the simplest description of the system at a point in time changes from identifying the object to not identifying it. Also an object's identity is created at the first point in time that the simplest model of the system consistent with perception identifies it.
An object may be composed of components. A component is an object completely within the boundary of a containing object.
A living thing may be an object, and is distinguished from non-living things by the designation of the latter as inanimate objects. Inanimate objects generally lack the capacity or desire to undertake actions, although humans in some cultures may tend to attribute such characteristics to non-living things.
In physics
Classical mechanics
In classical mechanics a physical body is a collection of matter having properties including mass, velocity, momentum and energy. The matter exists in a volume of three-dimensional space. This space is its extension.
Interactions between objects are partly described by orientation and external shape.
In continuum mechanics an object may be described as a collection of sub objects, down to an infinitesimal division, which interact with each other by forces that may be described internally by pressure and mechanical stress.
Quantum mechanics
In quantum mechanics an object is a particle or collection of particles. Until measured, a particle does not have a physical position. A particle is defined by a probability distribution of finding the particle at a particular position. There is a limit to the accuracy with which the position and velocity may be measured. A particle or collection of particles is described by a quantum state.
These ideas vary from the common usage understanding of what an object is.
String theory
In particle physics, there is a debate as to whether some elementary particles are not bodies, but are points without extension in physical space within spacetime, or are always extended in at least one dimension of space as in string theory or M theory.
In psychology
In some branches of psychology, depending on the school of thought, a physical object has physical properties, as compared to mental objects. In (reductionistic) behaviorism, objects and their properties are the (only) meaningful objects of study. While in modern behavioral psychotherapy the physical body is still only a means for goal-oriented behavior modification, in body psychotherapy its felt sense is a goal in its own right. In cognitive psychology, physical bodies as they occur in biology are studied in order to understand the mind, which may not itself be a physical body, as in functionalist schools of thought.
In philosophy
A physical body is an enduring object that exists throughout a particular trajectory of space and orientation over a particular duration of time, and which is located in the world of physical space (i.e., as studied by physics). This contrasts with abstract objects such as mathematical objects which do not exist at any particular time or place.
Examples are a cloud, a human body, a banana, a billiard ball, a table, or a proton. This is contrasted with abstract objects such as mental objects, which exist in the mental world, and mathematical objects. Other examples that are not physical bodies are emotions, the concept of "justice", a feeling of hatred, or the number "3". In some philosophies, like the idealism of George Berkeley, a physical body is a mental object, but still has extension in the space of a visual field.
| Physical sciences | Physics basics: General | Physics |
1722958 | https://en.wikipedia.org/wiki/Diammonium%20phosphate | Diammonium phosphate | Diammonium phosphate (DAP; IUPAC name diammonium hydrogen phosphate; chemical formula (NH4)2(HPO4)) is one of a series of water-soluble ammonium phosphate salts that can be produced when ammonia reacts with phosphoric acid.
Solid diammonium phosphate exhibits a dissociation pressure of ammonia, arising from the equilibrium (NH4)2(HPO4)(s) ⇌ NH3(g) + (NH4)(H2PO4)(s).
At 100 °C, the dissociation pressure of diammonium phosphate is approximately 5 mmHg.
According to the diammonium phosphate MSDS from CF Industries, Inc., decomposition starts as low as 70 °C: "Hazardous Decomposition Products: Gradually loses ammonia when exposed to air at room temperature. Decomposes to ammonia and monoammonium phosphate at around 70 °C (158 °F). At 155 °C (311 °F), DAP emits phosphorus oxides, nitrogen oxides and ammonia."
Uses
DAP is used as a fertilizer. When applied as plant fertilizer, it temporarily increases the soil pH, but over the long term the treated ground becomes more acidic than before upon nitrification of the ammonium. It is incompatible with alkaline chemicals because its ammonium ion is more likely to convert to ammonia in a high-pH environment. The average pH in solution is 7.5–8. The typical formulation is 18-46-0 (18% N, 46% P2O5, 0% K2O).
DAP can be used as a fire retardant. It lowers the combustion temperature of the material, decreases maximum weight-loss rates, and increases the production of residue or char. These are important effects in fighting wildfires, as lowering the pyrolysis temperature and increasing the amount of char formed reduce the amount of available fuel and can lead to the formation of a firebreak.
DAP is also used as a yeast nutrient in winemaking and mead-making; as an additive in some brands of cigarettes, purportedly as a nicotine enhancer; to prevent afterglow in matches; in purifying sugar; as a flux for soldering tin, copper, zinc and brass; and to control the precipitation of alkali-soluble and acid-insoluble colloidal dyes on wool.
Natural occurrence
The compound occurs in nature as the exceedingly rare mineral phosphammite. The related dihydrogen compound occurs as the mineral biphosphammite. Both are related to guano deposits.
| Physical sciences | Phosphoric oxyanions | Chemistry |
406624 | https://en.wikipedia.org/wiki/Time%20series | Time series | In mathematics, a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average.
A time series is very frequently plotted via a run chart (which is a temporal line chart). Time series are used in statistics, signal processing, pattern recognition, econometrics, mathematical finance, weather forecasting, earthquake prediction, electroencephalography, control engineering, astronomy, communications engineering, and largely in any domain of applied science and engineering which involves temporal measurements.
Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. Generally, time series data is modelled as a stochastic process. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called "time series analysis", which refers in particular to relationships between different points in time within a single series.
Time series data have a natural temporal ordering. This makes time series analysis distinct from cross-sectional studies, in which there is no natural ordering of the observations (e.g. explaining people's wages by reference to their respective education levels, where the individuals' data could be entered in any order). Time series analysis is also distinct from spatial data analysis where the observations typically relate to geographical locations (e.g. accounting for house prices by the location as well as the intrinsic characteristics of the houses). A stochastic model for a time series will generally reflect the fact that observations close together in time will be more closely related than observations further apart. In addition, time series models will often make use of the natural one-way ordering of time so that values for a given period will be expressed as deriving in some way from past values, rather than from future values (see time reversibility).
Time series analysis can be applied to real-valued, continuous data, discrete numeric data, or discrete symbolic data (i.e. sequences of characters, such as letters and words in the English language).
Methods for analysis
Methods for time series analysis may be divided into two classes: frequency-domain methods and time-domain methods. The former include spectral analysis and wavelet analysis; the latter include auto-correlation and cross-correlation analysis. In the time domain, correlation analyses can be carried out in a filter-like manner using scaled correlation, thereby mitigating the need to operate in the frequency domain.
Additionally, time series analysis techniques may be divided into parametric and non-parametric methods. The parametric approaches assume that the underlying stationary stochastic process has a certain structure which can be described using a small number of parameters (for example, using an autoregressive or moving-average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. By contrast, non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure.
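As an illustration of the parametric approach, the following minimal sketch (synthetic data and an invented coefficient, not taken from any particular study) estimates the single coefficient of an AR(1) model by least squares:

```python
import numpy as np

# Generate a synthetic AR(1) series x_t = phi * x_{t-1} + e_t with a known coefficient.
rng = np.random.default_rng(0)
n, phi_true = 500, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal()

# Least-squares estimate of phi from the lagged pairs (x_{t-1}, x_t).
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
print(f"estimated AR(1) coefficient: {phi_hat:.3f}")  # should come out close to 0.7
```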
Methods of time series analysis may also be divided into linear and non-linear, and univariate and multivariate.
Panel data
A time series is one type of panel data. Panel data is the general class, a multidimensional data set, whereas a time series data set is a one-dimensional panel (as is a cross-sectional dataset). A data set may exhibit characteristics of both panel data and time series data. One way to tell is to ask what makes one data record unique from the other records. If the answer is the time data field, then this is a time series data set candidate. If determining a unique record requires a time data field and an additional identifier which is unrelated to time (e.g. student ID, stock symbol, country code), then it is a panel data candidate. If the differentiation lies on the non-time identifier, then the data set is a cross-sectional data set candidate.
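A minimal sketch of the uniqueness test described above, with invented column names and values, could look like the following:

```python
import pandas as pd

# Toy records: two dates, two tickers (a non-time identifier), one measurement.
df = pd.DataFrame({
    "date":   ["2024-01-01", "2024-01-01", "2024-01-02", "2024-01-02"],
    "ticker": ["AAA", "BBB", "AAA", "BBB"],
    "price":  [10.0, 20.0, 10.5, 19.5],
})

if df["date"].is_unique:
    kind = "time series data set candidate"       # time alone identifies each record
elif not df.duplicated(subset=["date", "ticker"]).any():
    kind = "panel data candidate"                 # time plus a non-time identifier is needed
else:
    kind = "not uniquely keyed by time or by (time, identifier)"
print(kind)  # "panel data candidate" for this toy example
```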
Analysis
There are several types of motivation and data analysis available for time series which are appropriate for different purposes.
Motivation
In the context of statistics, econometrics, quantitative finance, seismology, meteorology, and geophysics the primary goal of time series analysis is forecasting. In the context of signal processing, control engineering and communication engineering it is used for signal detection. Other applications are in data mining, pattern recognition and machine learning, where time series analysis can be used for clustering, classification, query by content, anomaly detection as well as forecasting.
Exploratory analysis
A simple way to examine a regular time series is manually with a line chart. The datagraphic shows tuberculosis deaths in the United States, along with the yearly change and the percentage change from year to year. The total number of deaths declined in every year until the mid-1980s, after which there were occasional increases, often proportionately - but not absolutely - quite large.
A study of corporate data analysts found two challenges to exploratory time series analysis: discovering the shape of interesting patterns, and finding an explanation for these patterns. Visual tools that represent time series data as heat map matrices can help overcome these challenges.
Estimation, filtering, and smoothing
This approach may be based on harmonic analysis and filtering of signals in the frequency domain using the Fourier transform, and spectral density estimation. Its development was significantly accelerated during World War II by mathematician Norbert Wiener, electrical engineers Rudolf E. Kálmán, Dennis Gabor and others for filtering signals from noise and predicting signal values at a certain point in time.
An equivalent effect may be achieved in the time domain, as in a Kalman filter; see filtering and smoothing for more techniques.
Other related techniques include:
Autocorrelation analysis to examine serial dependence
Spectral analysis to examine cyclic behavior which need not be related to seasonality. For example, sunspot activity varies over 11-year cycles (see the sketch after this list). Other common examples include celestial phenomena, weather patterns, neural activity, commodity prices, and economic activity.
Separation into components representing trend, seasonality, slow and fast variation, and cyclical irregularity: see trend estimation and decomposition of time series
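The following sketch illustrates the spectral-analysis idea on a synthetic series containing an 11-sample cycle, loosely mimicking the sunspot example above; the data and parameters are invented for illustration:

```python
import numpy as np

# Synthetic series: an 11-sample cycle plus noise.
rng = np.random.default_rng(5)
n = 550
t = np.arange(n)
series = np.sin(2 * np.pi * t / 11) + rng.normal(scale=0.5, size=n)

# Raw periodogram from the discrete Fourier transform of the mean-removed series.
freqs = np.fft.rfftfreq(n, d=1.0)
power = np.abs(np.fft.rfft(series - series.mean())) ** 2

peak = freqs[1:][np.argmax(power[1:])]             # skip the zero-frequency (mean) bin
print(f"dominant period: {1 / peak:.1f} samples")  # close to 11
```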
Curve fitting
Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints. Curve fitting can involve either interpolation, where an exact fit to the data is required, or smoothing, in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis, which focuses more on questions of statistical inference such as how much uncertainty is present in a curve that is fit to data observed with random errors. Fitted curves can be used as an aid for data visualization, to infer values of a function where no data are available, and to summarize the relationships among two or more variables. Extrapolation refers to the use of a fitted curve beyond the range of the observed data, and is subject to a degree of uncertainty since it may reflect the method used to construct the curve as much as it reflects the observed data.
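As a minimal illustration of curve fitting in the smoothing sense (synthetic data and an arbitrary quadratic model, not from the article), a low-degree polynomial can be fitted by least squares:

```python
import numpy as np

# Noisy observations of an underlying quadratic trend.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x - 0.1 * x**2 + rng.normal(scale=0.2, size=x.size)

coeffs = np.polyfit(x, y, deg=2)      # least-squares fit of a degree-2 polynomial
y_hat = np.polyval(coeffs, x)         # fitted (smoothed) values for visualization or summary
print("fitted coefficients (highest degree first):", coeffs)
```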
For processes that are expected to generally grow in magnitude one of the curves in the graphic (and many others) can be fitted by estimating their parameters.
The construction of economic time series involves the estimation of some components for some dates by interpolation between values ("benchmarks") for earlier and later dates. Interpolation is estimation of an unknown quantity between two known quantities (historical data), or drawing conclusions about missing information from the available information ("reading between the lines"). Interpolation is useful where the data surrounding the missing data are available and their trend, seasonality, and longer-term cycles are known. This is often done by using a related series known for all relevant dates. Alternatively, polynomial interpolation or spline interpolation is used, where piecewise polynomial functions are fitted in time intervals such that they fit smoothly together. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function (also called regression). The main difference between regression and interpolation is that polynomial regression gives a single polynomial that models the entire data set. Spline interpolation, however, yields a piecewise continuous function composed of many polynomials to model the data set.
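The sketch below contrasts the two approaches just described, a single regression polynomial versus a piecewise spline, on a few invented benchmark values (SciPy assumed available):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Benchmark values known only at a few dates; intermediate dates must be estimated.
years = np.array([2000, 2005, 2010, 2015, 2020])
values = np.array([100.0, 130.0, 125.0, 160.0, 180.0])

poly = np.poly1d(np.polyfit(years, values, deg=2))  # one polynomial for the whole data set
spline = CubicSpline(years, values)                 # piecewise polynomials joined smoothly

query = 2012
print(f"polynomial regression at {query}: {poly(query):.1f}")
print(f"spline interpolation at {query}:  {spline(query):.1f}")
```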
Extrapolation is the process of estimating, beyond the original observation range, the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results.
Function approximation
In general, a function approximation problem asks us to select a function among a well-defined class that closely matches ("approximates") a target function in a task-specific way.
One can distinguish two major classes of function approximation problems: First, for known target functions, approximation theory is the branch of numerical analysis that investigates how certain known functions (for example, special functions) can be approximated by a specific class of functions (for example, polynomials or rational functions) that often have desirable properties (inexpensive computation, continuity, integral and limit values, etc.).
Second, the target function, call it g, may be unknown; instead of an explicit formula, only a set of points (a time series) of the form (x, g(x)) is provided. Depending on the structure of the domain and codomain of g, several techniques for approximating g may be applicable. For example, if g is an operation on the real numbers, techniques of interpolation, extrapolation, regression analysis, and curve fitting can be used. If the codomain (range or target set) of g is a finite set, one is dealing with a classification problem instead. A related problem of online time series approximation is to summarize the data in one-pass and construct an approximate representation that can support a variety of time series queries with bounds on worst-case error.
To some extent, the different problems (regression, classification, fitness approximation) have received a unified treatment in statistical learning theory, where they are viewed as supervised learning problems.
Prediction and forecasting
In statistics, prediction is a part of statistical inference. One particular approach to such inference is known as predictive inference, but the prediction can be undertaken within any of the several approaches to statistical inference. Indeed, one description of statistics is that it provides a means of transferring knowledge about a sample of a population to the whole population, and to other related populations, which is not necessarily the same as prediction over time. When information is transferred across time, often to specific points in time, the process is known as forecasting.
Fully formed statistical models for stochastic simulation purposes, so as to generate alternative versions of the time series, representing what might happen over non-specific time-periods in the future
Simple or fully formed statistical models to describe the likely outcome of the time series in the immediate future, given knowledge of the most recent outcomes (forecasting).
Forecasting on time series is usually done using automated statistical software packages and programming languages, such as Julia, Python, R, SAS, SPSS and many others.
Forecasting on large scale data can be done with Apache Spark using the Spark-TS library, a third-party package.
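As a minimal forecasting sketch in Python, assuming the statsmodels package is available, an ARIMA model can be fitted to a synthetic series and asked for a short forecast:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic data: a random walk with drift stands in for an observed series.
rng = np.random.default_rng(2)
series = np.cumsum(rng.normal(loc=0.1, scale=1.0, size=200))

model = ARIMA(series, order=(1, 1, 1))   # ARIMA with p=1, d=1, q=1
fitted = model.fit()
forecast = fitted.forecast(steps=10)     # point forecasts for the next 10 periods
print(forecast)
```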
Classification
Assigning a time series pattern to a specific category, for example identifying a word based on a series of hand movements in sign language.
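A deliberately simplistic sketch of this idea, using a one-nearest-neighbour rule with Euclidean distance on invented template series (real sign-language classifiers are far more elaborate):

```python
import numpy as np

# Labelled template series and an unlabelled query series.
templates = {
    "rising":  np.linspace(0, 1, 20),
    "falling": np.linspace(1, 0, 20),
}
query = np.linspace(0, 1, 20) + np.random.default_rng(6).normal(scale=0.05, size=20)

# Assign the label of the closest template (1-nearest neighbour).
label = min(templates, key=lambda name: np.linalg.norm(templates[name] - query))
print(label)  # "rising" for this query
```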
Segmentation
Splitting a time-series into a sequence of segments. It is often the case that a time-series can be represented as a sequence of individual segments, each with its own characteristic properties. For example, the audio signal from a conference call can be partitioned into pieces corresponding to the times during which each person was speaking. In time-series segmentation, the goal is to identify the segment boundary points in the time-series, and to characterize the dynamical properties associated with each segment. One can approach this problem using change-point detection, or by modeling the time-series as a more sophisticated system, such as a Markov jump linear system.
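As a minimal change-point sketch on synthetic data (real segmentation methods, such as multiple change-point detection or Markov jump models, are considerably more sophisticated), a single boundary can be located by minimising the within-segment sum of squares:

```python
import numpy as np

# Synthetic series whose mean shifts at index 300.
rng = np.random.default_rng(4)
series = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 1.0, 200)])

def cost(segment):
    # Sum of squared deviations from the segment mean.
    return np.sum((segment - segment.mean()) ** 2)

split = min(range(10, len(series) - 10),
            key=lambda k: cost(series[:k]) + cost(series[k:]))
print("estimated change point:", split)  # expected near 300
```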
Clustering
Time series data may be clustered; however, special care has to be taken when considering subsequence clustering.
Time series clustering may be split into
whole time series clustering (multiple time series for which to find a cluster)
subsequence time series clustering (a single time series, split into chunks using sliding windows)
time point clustering
Subsequence time series clustering
Subsequence time series clustering resulted in unstable (random) clusters induced by the feature extraction using chunking with sliding windows. It was found that the cluster centers (the average of the time series in a cluster - also a time series) follow an arbitrarily shifted sine pattern (regardless of the dataset, even on realizations of a random walk). This means that the found cluster centers are non-descriptive for the dataset because the cluster centers are always nonrepresentative sine waves.
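The setup that produces this effect is easy to reproduce in outline; the sketch below (window length and cluster count chosen arbitrarily, scikit-learn assumed available) extracts sliding-window subsequences and clusters them with k-means:

```python
import numpy as np
from sklearn.cluster import KMeans

# Even a random walk exhibits the effect described above.
rng = np.random.default_rng(3)
series = np.cumsum(rng.normal(size=1000))

# Sliding-window feature extraction: every length-50 subsequence becomes one sample.
window = 50
subsequences = np.array([series[i:i + window] for i in range(len(series) - window + 1)])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(subsequences)
centers = kmeans.cluster_centers_   # plotting these typically shows shifted sine-like shapes
print(centers.shape)                # (4, 50)
```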
Models
Models for time series data can have many forms and represent different stochastic processes. When modeling variations in the level of a process, three broad classes of practical importance are the autoregressive (AR) models, the integrated (I) models, and the moving-average (MA) models. These three classes depend linearly on previous data points. Combinations of these ideas produce autoregressive moving-average (ARMA) and autoregressive integrated moving-average (ARIMA) models. The autoregressive fractionally integrated moving-average (ARFIMA) model generalizes the former three. Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models and sometimes the preceding acronyms are extended by including an initial "V" for "vector", as in VAR for vector autoregression. An additional set of extensions of these models is available for use where the observed time-series is driven by some "forcing" time-series (which may not have a causal effect on the observed series): the distinction from the multivariate case is that the forcing series may be deterministic or under the experimenter's control. For these models, the acronyms are extended with a final "X" for "exogenous".
Non-linear dependence of the level of a series on previous data points is of interest, partly because of the possibility of producing a chaotic time series. However, more importantly, empirical investigations can indicate the advantage of using predictions derived from non-linear models, over those from linear models, as for example in nonlinear autoregressive exogenous models. Further references on nonlinear time series analysis: (Kantz and Schreiber), and (Abarbanel)
Among other types of non-linear time series models, there are models to represent the changes of variance over time (heteroskedasticity). These models represent autoregressive conditional heteroskedasticity (ARCH) and the collection comprises a wide variety of representations (GARCH, TARCH, EGARCH, FIGARCH, CGARCH, etc.). Here changes in variability are related to, or predicted by, recent past values of the observed series. This is in contrast to other possible representations of locally varying variability, where the variability might be modelled as being driven by a separate time-varying process, as in a doubly stochastic model.
In recent work on model-free analyses, wavelet transform based methods (for example locally stationary wavelets and wavelet decomposed neural networks) have gained favor. Multiscale (often referred to as multiresolution) techniques decompose a given time series, attempting to illustrate time dependence at multiple scales. | Mathematics | Statistics | null |
406902 | https://en.wikipedia.org/wiki/Geometrized%20unit%20system | Geometrized unit system | A geometrized unit system or geometrodynamic unit system is a system of natural units in which the base physical units are chosen so that the speed of light in vacuum, c, and the gravitational constant, G, are set equal to unity.
The geometrized unit system is not a completely defined system. Some systems are geometrized unit systems in the sense that they set these, in addition to other constants, to unity, for example Stoney units and Planck units.
This system is useful in physics, especially in the special and general theories of relativity. All physical quantities are identified with geometric quantities such as areas, lengths, dimensionless numbers, path curvatures, or sectional curvatures.
Many equations in relativistic physics appear simpler when expressed in geometric units, because all occurrences of G and of c drop out. For example, the Schwarzschild radius of a nonrotating uncharged black hole with mass m becomes simply 2m. For this reason, many books and papers on relativistic physics use geometric units. An alternative system of geometrized units is often used in particle physics and cosmology, in which 8πG = 1 instead. This introduces an additional factor of 8π into Newton's law of universal gravitation but simplifies the Einstein field equations, the Einstein–Hilbert action, the Friedmann equations and the Newtonian Poisson equation by removing the corresponding factor.
Definition
Geometrized units were defined in the book Gravitation by Charles W. Misner, Kip S. Thorne, and John Archibald Wheeler with the speed of light, c, the gravitational constant, G, and the Boltzmann constant all set to 1. Some authors refer to these units as geometrodynamic units.
In geometric units, every time interval is interpreted as the distance travelled by light during that given time interval. That is, one second is interpreted as one light-second, so time has the geometric units of length. This is dimensionally consistent with the notion that, according to the kinematical laws of special relativity, time and distance are on an equal footing.
Energy and momentum are interpreted as components of the four-momentum vector, and mass is the magnitude of this vector, so in geometric units these must all have the dimension of length. We can convert a mass expressed in kilograms to the equivalent mass expressed in metres by multiplying by the conversion factor G/c2. For example, the Sun's mass of about 2.0×10^30 kg in SI units is equivalent to about 1.48 km. This is half the Schwarzschild radius of a one solar mass black hole. All other conversion factors can be worked out by combining these two.
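As a rough numerical check of this conversion (the constant values below are approximate and not taken from the article), multiplying the solar mass by G/c2 reproduces the figure quoted above:

```python
# Convert a mass in kilograms to its geometrized length in metres via G / c**2.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # approximate solar mass, kg

mass_in_metres = M_sun * G / c**2
print(f"Sun's mass in geometric units: {mass_in_metres:.0f} m")      # roughly 1480 m (1.48 km)
print(f"Schwarzschild radius 2m:       {2 * mass_in_metres:.0f} m")  # roughly 2950 m
```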
The small numerical size of the few conversion factors reflects the fact that relativistic effects are only noticeable when large masses or high speeds are considered.
Conversions
Conversion factors can be constructed to convert between all combinations of the SI base units or, where that is not possible, between them and their constituent elements. The ampere reduces to a dimensionless ratio of two lengths (such as [C/s]), and the candela (1/683 [W/sr]) to a dimensionless ratio of two dimensionless ratios (a ratio of two volumes, [kg⋅m2/s3] = [W], and a ratio of two areas, [m2/m2] = [sr]), while the mole is only a dimensionless Avogadro number of entities such as atoms or particles.
| Physical sciences | Measurement systems | Basics and measurement |
407233 | https://en.wikipedia.org/wiki/Global%20dimming | Global dimming | Global dimming is a decline in the amount of sunlight reaching the Earth's surface. It is caused by atmospheric particulate matter, predominantly sulfate aerosols, which are components of air pollution. Global dimming was observed soon after the first systematic measurements of solar irradiance began in the 1950s. This weakening of visible sunlight proceeded at the rate of 4–5% per decade until the 1980s. During these years, air pollution increased due to post-war industrialization. Solar activity did not vary more than the usual during this period.
Aerosols have a cooling effect on the earth's atmosphere, and global dimming has masked the extent of global warming experienced to date, with the most polluted regions even experiencing cooling in the 1970s. Global dimming has interfered with the water cycle by lowering evaporation, and thus has probably reduced rainfall in certain areas. It may have weakened the Monsoon of South Asia and caused the entire tropical rain belt to shift southwards between 1950 and 1985, with a limited recovery afterwards. Record levels of particulate pollution in the Northern Hemisphere caused or at least exacerbated the monsoon failure behind the 1984 Ethiopian famine.
Since the 1980s, a decrease in air pollution has led to a partial reversal of the dimming trend, sometimes referred to as global brightening. This global brightening contributed to the acceleration of global warming, which began in the 1990s. According to climate models, the dimming effect of aerosols most likely offsets around 0.5 °C of warming as of 2021. As nations act to reduce the toll of air pollution on the health of their citizens, the masking effect on global warming is expected to decline further. The scenarios for climate action required to meet the 1.5 °C and 2 °C targets incorporate the predicted decrease in aerosol levels. However, model simulations of the effects of aerosols on weather systems remain uncertain.
The processes behind global dimming are similar to stratospheric aerosol injection. This is a proposed solar geoengineering intervention which aims to counteract global warming through intentional releases of reflective aerosols. Stratospheric aerosol injection could be very effective at stopping or reversing warming but it would also have substantial effects on the global water cycle, regional weather, and ecosystems. Furthermore, it would have to be carried out over centuries to prevent a rapid and violent return of the warming.
History
In the 1970s, numerous studies showed that atmospheric aerosols could affect the propagation of sunlight through the atmosphere, a measure also known as direct solar irradiance. One study showed that less sunlight was filtering through at the height of above Los Angeles, even on those days when there was no visible smog. Another suggested that sulfate pollution or a volcano eruption could provoke the onset of an ice age. In the 1980s, Atsumu Ohmura, a geography researcher at the Swiss Federal Institute of Technology, found that solar radiation striking the Earth's surface had declined by more than 10% over the three previous decades, even as the global temperature had been generally rising since the 1970s. In the 1990s, this was followed by the papers describing multi-decade declines in Estonia, Germany, Israel and across the former Soviet Union.
Subsequent research estimated an average reduction in sunlight striking the terrestrial surface of around 4–5% per decade over the late 1950s–1980s, and 2–3% per decade when 1990s were included. Notably, solar radiation at the top of the atmosphere did not vary by more than 0.1-0.3% in all that time, strongly suggesting that the reasons for the dimming were on Earth. Additionally, only visible light and infrared radiation were dimmed, rather than the ultraviolet part of the spectrum. Further, the dimming had occurred even when the skies were clear, and it was in fact stronger than during the cloudy days, proving that it was not caused by changes in cloud cover alone.
Causes
Anthropogenic sulfates
Global dimming is primarily caused by the presence of sulfate particles which hang in the Earth's atmosphere as aerosols. These aerosols have both a direct contribution to dimming, as they reflect sunlight like tiny mirrors. They also have an indirect effect as nuclei, meaning that water droplets in clouds coalesce around the particles. Increased pollution causes more particulates and thereby creates clouds consisting of a greater number of smaller droplets (that is, the same amount of water is spread over more droplets). The smaller droplets make clouds more reflective, so that more incoming sunlight is reflected back into space and less reaches the Earth's surface. In models, these smaller droplets also decrease rainfall.
Before the Industrial Revolution, the main source of sulfate aerosols was dimethyl sulfide produced by some types of oceanic plankton. Emissions from volcano activity were the second largest source, although large volcanic eruptions, such as the 1991 eruption of Mount Pinatubo, dominate in the years when they occur. In 1990, the IPCC First Assessment Report estimated dimethyl sulfide emissions at 40 million tons per year, while volcano emissions were estimated at 10 million tons. These annual levels have been largely stable for a long time. On the other hand, global human-caused emissions of sulfur into the atmosphere increased from less than 3 million tons per year in 1860 to 15 million tonnes in 1900, 40 million tonnes in 1940 and about 80 million tonnes in 1980. This meant that by 1980, the human-caused emissions from the burning of sulfur-containing fuels (mostly coal and bunker fuel) became at least as large as all natural emissions of sulfur-containing compounds. The report also concluded that "in the industrialized regions of Europe and North America, anthropogenic emissions dominate over natural emissions by about a factor of ten or even more".
Black carbon
Another important type of aerosol is black carbon, colloquially known as soot. It is formed due to incomplete combustion of fossil fuels, as well as of wood and other plant matter. Globally, the single largest source of black carbon is from grassland and forest fires, including both wildfires and intentional burning. However, coal use is responsible for the majority (60 to 80%) of black carbon emissions in Asia and Africa, while diesel combustion produces 70% of black carbon in Europe and The Americas.
Black carbon in the lower atmosphere is a major contributor to 7 million premature deaths caused by air pollution every year. Its presence is particularly visible, as the so-called "brown clouds" appear in heavily polluted areas. In fact, it was 1970s research into the Denver brown cloud which had first found that black carbon particles absorb solar energy and so can affect the amount of visible sunlight. Later research found that black carbon is 190 times more effective at absorbing sunlight within clouds than the regular dust from soil particles. At worst, all clouds within an atmospheric layer thick are visibly darkened, and the plume can reach transcontinental scale (i.e. the Asian brown cloud.) Even so, the overall dimming from black carbon is much lower than that from the sulfate particles.
Reversal
After 1990, the global dimming trend had clearly switched to global brightening. This followed measures taken to combat air pollution by the developed nations, typically through flue-gas desulfurization installations at thermal power plants, such as wet scrubbers or fluidized bed combustion. In the United States, sulfate aerosols have declined significantly since 1970 with the passage of the Clean Air Act, which was strengthened in 1977 and 1990. According to the EPA, from 1970 to 2005, total emissions of the six principal air pollutants, including sulfates, dropped by 53% in the US. By 2010, this reduction in sulfate pollution led to estimated healthcare cost savings valued at $50 billion annually. Similar measures were taken in Europe, such as the 1985 Helsinki Protocol on the Reduction of Sulfur Emissions under the Convention on Long-Range Transboundary Air Pollution, and with similar improvements.
On the other hand, a 2009 review found that dimming continued to increase in China after stabilizing in the 1990s and intensified in India, consistent with their continued industrialization, while the US, Europe, and South Korea continued to brighten. Evidence from Zimbabwe, Chile and Venezuela also pointed to increased dimming during that period, albeit at a lower confidence level due to the lower number of observations. Later research found that over China, the dimming trend continued at a slower rate after 1990, and did not begin to reverse until around 2005. Due to these contrasting trends, no statistically significant change had occurred on a global scale from 2001 to 2012. Post-2010 observations indicate that the global decline in aerosol concentrations and global dimming continued, with pollution controls on the global shipping industry playing a substantial role in the recent years. Since nearly 90% of the human population lives in the Northern Hemisphere, clouds there are far more affected by aerosols than in the Southern Hemisphere, but these differences have halved in the two decades since 2000, providing further evidence for the ongoing global brightening.
Relationship to climate change
Cooling from sulfate aerosols
Aerosols have a cooling effect, which has masked the total extent of global warming experienced to date.
It has been understood for a long time that any effect on solar irradiance from aerosols would necessarily impact Earth's radiation balance. Reductions in atmospheric temperatures have already been observed after large volcanic eruptions such as the 1963 eruption of Mount Agung in Bali, 1982 El Chichón eruption in Mexico, 1985 Nevado del Ruiz eruption in Colombia and 1991 eruption of Mount Pinatubo in the Philippines. However, even the major eruptions only result in temporary jumps of sulfur particles, unlike the more sustained increases caused by anthropogenic pollution.
In 1990, the IPCC First Assessment Report acknowledged that "Human-made aerosols, from sulphur emitted largely in fossil fuel combustion can modify clouds and this may act to lower temperatures", while "a decrease in emissions of sulphur might be expected to increase global temperatures". However, lack of observational data and difficulties in calculating indirect effects on clouds left the report unable to estimate whether the total impact of all anthropogenic aerosols on the global temperature amounted to cooling or warming. By 1995, the IPCC Second Assessment Report had confidently assessed the overall impact of aerosols as negative (cooling); however, aerosols were recognized as the largest source of uncertainty in future projections in that report and the subsequent ones.
Warming from black carbon
Unlike sulfate pollution, black carbon contributes to both global dimming and global warming, since its particles absorb sunlight and heat up instead of reflecting it away. These particles also develop thick coatings over time, which can increase the initial absorption by up to 40%. Because the rate at which these coatings are formed varies depending on the season, the warming from black carbon varies seasonally as well.
Though this warming is weaker than the CO2-induced warming or the cooling from sulfates, it can be regionally significant when black carbon is deposited over ice masses like mountain glaciers and the Greenland ice sheet. There, it reduces their albedo and increases their absorption of solar radiation, which accelerates their melting. Black carbon also has an outsized contribution to local warming inside polluted cities. Even the indirect effect of soot particles acting as cloud nuclei is not strong enough to provide cooling: the "brown clouds" formed around soot particles have been known to have a net warming effect since the 2000s. Black carbon pollution is particularly strong over India: thus, it is considered to be one of the few regions where cleaning up air pollution would reduce, rather than increase, warming.
Minor role of aircraft contrails
Aircraft leave behind visible contrails (also known as vapor trails) as they travel. These contrails both reflect incoming solar radiation and trap outgoing longwave radiation that is emitted by the Earth. Because contrails reflect sunlight only during the day, but trap heat day and night, they are normally considered to cause net warming, albeit very small. A 1992 estimate was between 3.5 mW/m2 and 17 mW/m2 – hundreds of times smaller than the radiative forcing from major greenhouse gases.
However, some scientists argued that the daytime cooling effect from contrails was much stronger than usually estimated, and this argument attracted attention following the September 11 attacks. Because no commercial aircraft flew across the US in the immediate aftermath of the attacks, this period was considered a real-world demonstration of contrail-free weather. Across 4,000 weather stations in the continental United States, the diurnal temperature variation (the difference in the day's highs and lows at a fixed station) was widened by – the largest recorded increase in 30 years. In the southern US, the difference was diminished by about , and by in the US midwest. This was interpreted by some scientists as a proof of a strong cooling influence of aircraft contrails.
Ultimately, follow-up studies found that a natural change in cloud cover which occurred at the time was sufficient to explain these findings. When the global response to the 2020 coronavirus pandemic led to a reduction in global air traffic of nearly 70% relative to 2019, multiple studies found "no significant response of diurnal surface air temperature range" as the result of contrail changes, and either "no net significant global ERF" (effective radiative forcing) or a very small warming effect.
Historical cooling
At the peak of global dimming, it was able to counteract the warming trend completely. By 1975, the continually increasing concentrations of greenhouse gases had overcome the masking effect, and they have dominated ever since. Even then, regions with high concentrations of sulfate aerosols due to air pollution had initially experienced cooling, in contradiction to the overall warming trend. The eastern United States was a prominent example: temperatures there declined between 1970 and 1980, most strongly in Arkansas and Missouri.
Brightening and accelerated warming
Starting in the 1980s, the reduction in global dimming has contributed to higher global temperatures. Hot extremes accelerated as global dimming abated. It has been estimated that since the mid-1990s, peak daily temperatures in northeast Asia and hottest days of the year in Western Europe would have been substantially less hot if aerosol concentrations had stayed the same as before. Some of the acceleration of sea level rise, as well as Arctic amplification and the associated Arctic sea ice decline, was also attributed to the reduction in aerosol masking.
In Europe, the declines in aerosol concentrations since the 1980s had also reduced the associated fog, mist and haze: altogether, it was responsible for about 10–20% of daytime warming across Europe, and about 50% of the warming over the more polluted Eastern Europe. Because aerosol cooling depends on reflecting sunlight, air quality improvements had a negligible impact on wintertime temperatures, but had increased temperatures from April to September by around in Central and Eastern Europe. The central and eastern United States experienced warming of between 1980 and 2010 as sulfate pollution was reduced, even as sulfate particles still accounted for around 25% of all particulates. By 2021, the northeastern coast of the United States was one of the fastest-warming regions of North America, as the slowdown of the Atlantic Meridional Overturning Circulation increased temperatures in that part of the North Atlantic Ocean.
In 2020, COVID-19 lockdowns provided a notable "natural experiment", as there had been a marked decline in sulfate and black carbon emissions caused by the curtailed road traffic and industrial output. That decline did have a detectable warming impact: it was estimated to have increased global temperatures by initially and up to by 2023, before disappearing. Regionally, the lockdowns were estimated to increase temperatures by in eastern China over January–March, and then by over Europe, eastern United States, and South Asia in March–May, with the peak impact of in some regions of the United States and Russia. In the city of Wuhan, the urban heat island effect was found to have decreased by at night and by overall during the strictest lockdowns.
Future
Since changes in aerosol concentrations already have an impact on the global climate, they would necessarily influence future projections as well. In fact, it is impossible to fully estimate the warming impact of all greenhouse gases without accounting for the counteracting cooling from aerosols.
Climate models started to account for the effects of sulfate aerosols around the IPCC Second Assessment Report; when the IPCC Fourth Assessment Report was published in 2007, every climate model had integrated sulfates, but only 5 were able to account for less impactful particulates like black carbon. By 2021, CMIP6 models estimated total aerosol cooling in the range from to ; The IPCC Sixth Assessment Report selected the best estimate of a cooling provided by sulfate aerosols, while black carbon amounts to about of warming. While these values are based on combining model estimates with observational constraints, including those on ocean heat content, the matter is not yet fully settled. The difference between model estimates mainly stems from disagreements over the indirect effects of aerosols on clouds.
Regardless of the current strength of aerosol cooling, all future climate change scenarios project decreases in particulates, and this includes the scenarios where the 1.5 °C and 2 °C targets are met: their specific emission reduction targets assume the need to make up for lower dimming. Since models estimate that the cooling caused by sulfates is largely equivalent to the warming caused by atmospheric methane (and since methane is a relatively short-lived greenhouse gas), it is believed that simultaneous reductions in both would effectively cancel each other out.
Yet in recent years, methane concentrations have been increasing at rates exceeding their previous period of peak growth in the 1980s, with wetland methane emissions driving much of the recent growth, while air pollution is being cleaned up aggressively. These trends are some of the main reasons why 1.5 °C of warming is now expected around 2030, as opposed to the mid-2010s estimates in which it would not occur until 2040.
It has also been suggested that aerosols are not given sufficient attention in regional risk assessments, in spite of being more influential on a regional scale than globally. For instance, a climate change scenario with high greenhouse gas emissions but strong reductions in air pollution would see more global warming by 2050 than the same scenario with little improvement in air quality, but regionally, the difference would add 5 more tropical nights per year in northern China and substantially increase precipitation in northern China and northern India. Likewise, a paper comparing current level of clean air policies with a hypothetical maximum technically feasible action under otherwise the same climate change scenario found that the latter would increase the risk of temperature extremes by 30–50% in China and in Europe.
Unfortunately, because historical records of aerosols are sparser in some regions than in others, accurate regional projections of aerosol impacts are difficult. Even the latest CMIP6 climate models can only accurately represent aerosol trends over Europe, but struggle with representing North America and Asia. This means that their near-future projections of regional impacts are likely to contain errors as well.
Relationship with water cycle
On regional and global scale, air pollution can affect the water cycle, in a manner similar to some natural processes. One example is the impact of Sahara dust on hurricane formation: air laden with sand and mineral particles moves over the Atlantic Ocean, where they block some of the sunlight from reaching the water surface, slightly cooling it and dampening the development of hurricanes. Likewise, it has been suggested since the early 2000s that since aerosols decrease solar radiation over the ocean and hence reduce evaporation from it, they would be "spinning down the hydrological cycle of the planet."
In 2011, it was found that anthropogenic aerosols had been the predominant factor behind 20th century changes in rainfall over the Atlantic Ocean sector, when the entire tropical rain belt shifted southwards between 1950 and 1985, with a limited northwards shift afterwards. Future reductions in aerosol emissions are expected to result in a more rapid northwards shift, with limited impact in the Atlantic but a substantially greater impact in the Pacific. Some research also suggests that these reductions would affect the AMOC (already expected to weaken due to climate change). Reductions from the stronger air quality policies could exacerbate this expected decline by around 10%, unless methane emissions are reduced by an equivalent amount.
Most notably, multiple studies connect aerosols from the Northern Hemisphere to the failed monsoon in sub-Saharan Africa during the 1970s and 1980s, which then led to the Sahel drought and the associated famine. However, model simulations of Sahel climate are very inconsistent, so it's difficult to prove that the drought would not have occurred without aerosol pollution, although it would have clearly been less severe. Some research indicates that those models which demonstrate warming alone driving strong precipitation increases in the Sahel are the most accurate, making it more likely that sulfate pollution was to blame for overpowering this response and sending the region into drought.
Another dramatic finding had connected the impact of aerosols with the weakening of the Monsoon of South Asia. It was first advanced in 2006, yet it also remained difficult to prove. In particular, some research suggested that warming itself increases the risk of monsoon failure, potentially pushing it past a tipping point. By 2021, however, it was concluded that global warming consistently strengthened the monsoon, and some strengthening was already observed in the aftermath of lockdown-caused aerosol reductions.
In 2009, an analysis of 50 years of data found that light rains had decreased over eastern China, even though there was no significant change in the amount of water held by the atmosphere. This was attributed to aerosols reducing droplet size within clouds, which led to those clouds retaining water for a longer time without raining. The phenomenon of aerosols suppressing rainfall through reducing cloud droplet size has been confirmed by subsequent studies. Later research found that aerosol pollution over South and East Asia didn't just suppress rainfall there, but also resulted in more moisture transferred to Central Asia, where summer rainfall had increased as the result. In the United States, effects of climate change on the water cycle would typically increase both mean and extreme precipitation across the country, but these effects have so far been "masked" by the drying due to historically strong aerosol concentrations. The IPCC Sixth Assessment Report had also linked changes in aerosol concentrations to altered precipitation in the Mediterranean region.
Relevance for solar geoengineering
Global dimming is also a relevant phenomenon for certain proposals about slowing, halting or reversing global warming. An increase in planetary albedo of 1% would eliminate most of the radiative forcing from anthropogenic greenhouse gas emissions and thereby global warming, while a 2% albedo increase would negate the warming effect of doubling the atmospheric carbon dioxide concentration. This is the theory behind solar geoengineering, and the high reflective potential of sulfate aerosols means that they have been considered in this capacity since the 1970s.
Because the historical levels of global dimming were associated with high mortality from air pollution and issues such as acid rain, the concept of relying on cooling directly from pollution has been described as a "Faustian bargain" and is not seriously considered by modern research. Instead, the seminal 2006 paper by Paul Crutzen suggested that the way to avoid increased warming as the sulfate pollution decreased was to revisit the 1974 proposal by the Soviet researcher Mikhail Budyko. The proposal involved releasing sulfates from the airplanes flying in the upper layers of the atmosphere, in what is now described as stratospheric aerosol injection, or SAI. In comparison, most air pollution is in the lower atmospheric layer (the troposphere), and only resides there for weeks. Because aerosols deposited in the stratosphere would last for years, far less sulfur would have to be emitted to result in the same amount of cooling.
While Crutzen's initial proposal was focused on avoiding the warming caused by the reductions in air pollution, it was immediately understood that scaling up this proposal could slow, stop, or outright reverse warming. It has been estimated that the amount of sulfur needed to offset a warming of around relative to now (and relative to the preindustrial), under the highest-emission scenario RCP 8.5 would be less than what is already emitted through air pollution today, and that reductions in sulfur pollution from future air quality improvements already expected under that scenario would offset the sulfur used for geoengineering. The trade-off is increased cost. Although there's a popular narrative that stratospheric aerosol injection can be carried out by individuals, small states, or other non-state rogue actors, scientific estimates suggest that cooling the atmosphere by through stratospheric aerosol injection would cost at least $18 billion annually (at 2020 USD value), meaning that only the largest economies or economic blocs could afford this intervention. Even so, these approaches would still be "orders of magnitude" cheaper than greenhouse gas mitigation, let alone the costs of unmitigated effects of climate change.
Even if SAI were to stop or outright reverse global warming, weather patterns in many areas would still change substantially. The habitat of mosquitoes and other disease vectors would shift, though it's unclear how it would compare to the shifts that are otherwise likely to occur from climate change. Lower sunlight would affect crop yields and carbon sinks due to reduced photosynthesis, but this would likely be offset by lack of thermal stress from warming and the greater CO2 fertilization effect relative to now. Most importantly, the warming from emissions lasts for hundreds to thousands of years, while the cooling from SAI stops 1–3 years after the last aerosol injection. This means that neither stratospheric aerosol injection nor other forms of solar geoengineering can be used as a substitute for reducing greenhouse gas emissions, because if solar geoengineering were to cease while greenhouse gas levels remained high, it would lead to "large and extremely rapid" warming and similarly abrupt changes to the water cycle. Many thousands of species would likely go extinct as the result. Instead, any solar geoengineering would act as a temporary measure to limit warming while emissions of greenhouse gases are reduced and carbon dioxide is removed, which may well take hundreds of years.
| Physical sciences | Climate change | Earth science |
407615 | https://en.wikipedia.org/wiki/Sodium%20nitrite | Sodium nitrite | Sodium nitrite is an inorganic compound with the chemical formula . It is a white to slightly yellowish crystalline powder that is very soluble in water and is hygroscopic. From an industrial perspective, it is the most important nitrite salt. It is a precursor to a variety of organic compounds, such as pharmaceuticals, dyes, and pesticides, but it is probably best known as a food additive used in processed meats and (in some countries) in fish products.
Uses
Industrial chemistry
The main use of sodium nitrite is for the industrial production of organonitrogen compounds. It is a reagent for conversion of amines into diazo compounds, which are key precursors to many dyes, such as diazo dyes. Nitroso compounds are produced from nitrites. These are used in the rubber industry.
It is used in a variety of metallurgical applications, for phosphatizing and detinning.
Sodium nitrite is an effective corrosion inhibitor and is used as an additive in industrial greases, as an aqueous solution in closed loop cooling systems, and in a molten state as a heat transfer medium.
Food additive and preservative
Sodium nitrite is used to speed up the curing of meat, inhibit the germination of Clostridium botulinum spores, and also impart an attractive pink color. Nitrite reacts with the meat myoglobin to cause color changes, first converting to nitrosomyoglobin (bright red), then, on heating, to nitrosohemochrome (a pink pigment).
Historically, salt has been used for the preservation of meat. The salt-preserved meat product was usually brownish-gray in color. When sodium nitrite is added with the salt, the meat develops a red, then pink color, which is associated with cured meats such as ham, bacon, hot dogs, and bologna.
In the early 1900s, irregular curing was commonplace. This led to further research surrounding the use of sodium nitrite as an additive in food, standardizing the amount present in foods to minimize the amount needed while maximizing its food additive role. Through this research, sodium nitrite has been found to give taste and color to the meat and inhibit lipid oxidation that leads to rancidity, with varying degrees of effectiveness for controlling growth of disease-causing microorganisms. The ability of sodium nitrite to address the above-mentioned issues has led to production of meat with extended storage life and has improved desirable color and taste. According to scientists working for the meat industry, nitrite has improved food safety. This view is disputed in the light of the possible carcinogenic effects caused by adding nitrites to meat.
Nitrite has the E number E250. Potassium nitrite (E249) is used in the same way. It is approved for usage in the European Union, USA, and Australia and New Zealand.
In meat processing, sodium nitrite is never used in a pure state but always mixed with common salt. This mixture is known as nitrited salt, curing salt or nitrited curing salt. In Europe, nitrited curing salt contains between 99.1% and 99.5% common salt and between 0.5% and 0.9% nitrite. In the US, nitrited curing salt is dosed at 6% and must be remixed with salt before use.
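As a rough back-of-the-envelope illustration of these proportions, the sketch below computes the nitrite concentration in a cured mixture; the 2% salting rate and 0.6% nitrite content are hypothetical example values chosen from the European range above, not regulatory figures.

```python
# Nitrite dosage from nitrited curing salt -- illustrative example only.

def nitrite_ppm(meat_kg: float, curing_salt_kg: float, nitrite_fraction: float) -> float:
    """Nitrite concentration (mg per kg, i.e. ppm) in the finished mixture."""
    nitrite_mg = curing_salt_kg * nitrite_fraction * 1e6  # kg -> mg
    return nitrite_mg / (meat_kg + curing_salt_kg)

# 10 kg of meat cured with 0.2 kg (2%) of European-style nitrited salt
# containing 0.6% sodium nitrite (within the 0.5-0.9% range above):
print(f"{nitrite_ppm(10.0, 0.2, 0.006):.0f} ppm nitrite")  # ~118 ppm
```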
Color and taste
The appearance and taste of meat is an important component of consumer acceptance. Sodium nitrite is responsible for the desirable red color (or shaded pink) of meat. Very little nitrite is needed to induce this change. It has been reported that as little as 2 to 14 parts per million (ppm) is needed to induce this desirable color change. However, to extend the lifespan of this color change, significantly higher levels are needed. The mechanism responsible for this color change is the formation of nitrosylating agents by nitrite, which has the ability to transfer nitric oxide that subsequently reacts with myoglobin to produce the cured meat color. The unique taste associated with cured meat is also affected by the addition of sodium nitrite. However, the mechanism underlying this change in taste is still not fully understood.
Inhibition of microbial pathogens
In conjunction with salt and pH levels, sodium nitrite reduces the ability of Clostridium botulinum spores to grow to the point of producing toxin. Some dry-cured meat products are manufactured without nitrites. For example, Parma ham, which has been produced without nitrite since 1993, was reported in 2018 to have caused no cases of botulism. This is because the interior of the muscle is sterile and the surface is exposed to oxygen.
Sodium nitrite has shown varying degrees of effectiveness for controlling growth of other spoilage or disease causing microorganisms. Although the inhibitory mechanisms are not well known, its effectiveness depends on several factors including residual nitrite level, pH, salt concentration, reductants present and iron content. The type of bacteria also affects sodium nitrite's effectiveness. It is generally agreed that sodium nitrite is not effective for controlling Gram-negative enteric pathogens such as Salmonella and Escherichia coli.
Other food additives (such as lactate and sorbate) provide similar protection against bacteria, but do not provide the desired pink color.
Inhibition of lipid peroxidation
Sodium nitrite is also able to effectively delay the development of oxidative rancidity. Lipid peroxidation is considered to be a major reason for the deterioration of quality of meat products (rancidity and unappetizing flavors). Sodium nitrite acts as an antioxidant in a mechanism similar to the one responsible for the coloring effect. Nitrite reacts with heme proteins and metal ions, neutralizing free radicals by nitric oxide (one of its byproducts). Neutralization of these free radicals terminates the cycle of lipid oxidation that leads to rancidity.
Medication
Sodium nitrite is used as a medication together with sodium thiosulfate to treat cyanide poisoning. It is recommended only in severe cases of cyanide poisoning, and has largely been replaced by hydroxocobalamin, a form of vitamin B12 given in much higher doses than needed nutritionally.
In those who have both cyanide poisoning and carbon monoxide poisoning, sodium thiosulfate by itself is usually recommended if the facility does not have sufficient hydroxocobalamin. It is given by slow injection into a vein.
Side effects are chiefly related to the creation of methemoglobinemia and to vasodilation. Side effects can include low blood pressure, headache, shortness of breath, loss of consciousness, and vomiting. Greater care should be taken in people with underlying heart disease. The patient's levels of methemoglobin should be regularly checked during treatment. While not well studied during pregnancy, there is some evidence of potential harm to the baby. Sodium nitrite works by creating methemoglobin, in which the iron atom at the center of the heme group is in the oxidized ferric (Fe3+) state; this binds cyanide with greater affinity than cyanide's binding to cytochrome c oxidase, and thus removes it from blocking the metabolic function of mitochondria.
Sodium nitrite came into medical use in the 1920s and 1930s. It is on the World Health Organization's List of Essential Medicines.
Suicide
Several academic publications in 2020 and 2021 have discussed the toxicity of sodium nitrite, and an apparent recent increase in suicides using sodium nitrite which had been ordered online. The usage of sodium nitrite as a suicide method has been heavily discussed on suicide forums, primarily Sanctioned Suicide. Sodium nitrite was also the focal point of the McCarthy et al. v Amazon lawsuit alleging that Amazon knowingly assisted in the deaths of healthy children by selling them "suicide kits", as Amazon's "frequently bought together" feature recommended buying sodium nitrite, an antiemetic, and a suicide instruction book together. This lawsuit was dismissed in June 2023. The online marketplace eBay has globally prohibited the sale of sodium nitrite since 2019. Some online vendors of sodium nitrite have been prosecuted for assisting suicide. Furthermore, legislation has been introduced in the United States with the aim of deeming sodium nitrite products with a sodium nitrite concentration of greater than 10% by volume to be banned consumer products under the Consumer Product Safety Act. In cases of suspected suicide involving sodium nitrite, it is critical that responders administer methylene blue immediately; methylene blue is the antidote to the methemoglobinemia caused by intentional ingestion of sodium nitrite.
Toxicity
Sodium nitrite is toxic. The LD50 in rats is 180 mg/kg, and the lowest published lethal dose (LDLo) in humans is 71 mg/kg. The mechanism by which sodium nitrite causes death is methemoglobinemia; the often severe methemoglobinemia found in sodium nitrite poisoning cases results in systemic hypoxia, metabolic acidosis, and cyanosis, which are the principal reported signs of poisoning.
With prompt action, sodium nitrite poisoning is reversible using an antidote, methylene blue. It has been reported that sodium nitrite poisoning can also be detected post-mortem.
Death by sodium nitrite ingestion can happen at lower doses than the LDLo. Sodium nitrite has been used for homicide and suicide. To prevent accidental intoxication, sodium nitrite (blended with salt) sold as a food additive in the US is dyed bright pink to avoid mistaking it for plain salt or sugar. In other countries, nitrited curing salt is not dyed but is strictly regulated.
Occurrence in vegetables
Nitrites do not occur naturally in vegetables in significant quantities. However, deliberate fermentation of celery juice, for instance, which is naturally high in nitrates, can produce nitrite levels sufficient for commercial meat curing. Boiling vegetables does not affect nitrite levels.
The presence of nitrite in animal tissue is a consequence of the metabolism of nitric oxide, an important neurotransmitter. Nitric oxide can be created de novo by nitric oxide synthase utilizing arginine, or from ingested nitrite.
Pigs
Due to sodium nitrite's high level of toxicity to swine (Sus scrofa), it is now being developed in Australia to control feral pigs and wild boar. Sodium nitrite induces methemoglobinemia in swine, i.e. it reduces the amount of oxygen that is released from hemoglobin, so the animal becomes faint, loses consciousness, and then dies humanely. The Texas Parks and Wildlife Department operates a research facility at Kerr Wildlife Management Area, where it examines feral pig feeding preferences and bait tactics for administering sodium nitrite.
Cancer
Carcinogenicity is the ability or tendency of a chemical to induce tumors, increase their incidence or malignancy, or shorten the time of tumor occurrence.
Adding nitrites to meat has been shown to generate known carcinogens such as nitrosamines; the World Health Organization (WHO) advises that eating 50 g of "processed meats" a day would raise the risk of getting bowel cancer by 18% over a lifetime, and eating larger amounts raises the risk more. The World Health Organization's review of more than 400 studies concluded, in 2015, that there was sufficient evidence that "processed meats" caused cancer, particularly colon cancer; the WHO's International Agency for Research on Cancer (IARC) classified "processed meats" as carcinogenic to humans (Group 1), "processed meat" meaning meat that has been transformed through salting, curing, fermentation, smoking, or other processes to enhance flavour or improve preservation.
Nitrosamines can be formed during the curing process used to preserve meats, when sodium nitrite-treated meat is cooked, and also from the reaction of nitrite with secondary amines under acidic conditions (such as occurs in the human stomach). Dietary sources of nitrosamines include US cured meats preserved with sodium nitrite as well as the dried salted fish eaten in Japan. In the 1920s, a significant change in US meat curing practices resulted in a 69% decrease in average nitrite content. This event preceded the beginning of a dramatic decline in gastric cancer mortality. Around 1970, it was found that ascorbic acid (vitamin C), an antioxidant, inhibits nitrosamine formation. Consequently, the addition of at least 550 ppm of ascorbic acid is required in meats manufactured in the United States. Manufacturers sometimes instead use erythorbic acid, a cheaper but equally effective isomer of ascorbic acid. Additionally, manufacturers may include α-tocopherol (vitamin E) to further inhibit nitrosamine production. α-Tocopherol, ascorbic acid, and erythorbic acid all inhibit nitrosamine production by their oxidation-reduction properties. Ascorbic acid, for example, forms dehydroascorbic acid when oxidized, which when in the presence of nitrosonium, a potent nitrosating agent formed from sodium nitrite, reduces the nitrosonium into nitric oxide. The nitrosonium ion formed in acidic nitrite solutions is commonly mislabeled nitrous anhydride, an unstable nitrogen oxide that cannot exist in vitro.
Ingesting nitrite under conditions that result in endogenous nitrosation has been classified as "probably carcinogenic to humans" by International Agency for Research on Cancer (IARC).
Sodium nitrite consumption has also been linked to the triggering of migraines in individuals who already experience them.
One study has found a correlation between highly frequent ingestion of meats cured with pink salt and the COPD form of lung disease. The study's researchers suggest that the high amount of nitrites in the meats was responsible; however, the team did not prove the nitrite theory. Additionally, the study does not prove that nitrites or cured meat caused higher rates of COPD, merely a link. The researchers did adjust for many of COPD's risk factors, but they commented they cannot rule out all possible unmeasurable causes or risks for COPD.
Production
Industrial production of sodium nitrite follows one of two processes, the reduction of nitrate salts, or the oxidation of lower nitrogen oxides.
One method uses molten sodium nitrate as the salt and lead, which is oxidized, while a more modern method uses scrap iron filings to reduce the nitrate.
A more commonly used method involves the general reaction of nitrogen oxides in alkaline aqueous solution, with the addition of a catalyst. The exact conditions depend on which nitrogen oxides are used, and what the oxidant is, as the conditions need to be carefully controlled to avoid over oxidation of the nitrogen atom.
Sodium nitrite has also been produced by reduction of nitrate salts by exposure to heat, light, ionizing radiation, metals, hydrogen, and electrolytic reduction.
Chemical reactions
In the laboratory, sodium nitrite can be used to destroy excess sodium azide.
Above 330 °C sodium nitrite decomposes (in air) to sodium oxide, nitric oxide and nitrogen dioxide: 2 NaNO2 → Na2O + NO + NO2
Sodium nitrite can also be used in the production of nitrous acid, by treatment with a mineral acid such as sulfuric acid: 2 NaNO2 + H2SO4 → 2 HNO2 + Na2SO4
The nitrous acid then, under normal conditions, decomposes: 2 HNO2 → NO2 + NO + H2O
The resulting nitrogen dioxide hydrolyzes to a mixture of nitric and nitrous acids: 2 NO2 + H2O → HNO3 + HNO2
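The stoichiometry of the reactions above can be checked mechanically. The following sketch is hypothetical helper code, not from any cited source; it simply tallies atoms on each side of each reaction, including the molten-nitrate/lead route mentioned under Production.

```python
# Verify element balance for the reactions given above.
from collections import Counter

FORMULAS = {
    "NaNO3":  {"Na": 1, "N": 1, "O": 3},
    "NaNO2":  {"Na": 1, "N": 1, "O": 2},
    "Na2O":   {"Na": 2, "O": 1},
    "Pb":     {"Pb": 1},
    "PbO":    {"Pb": 1, "O": 1},
    "NO":     {"N": 1, "O": 1},
    "NO2":    {"N": 1, "O": 2},
    "HNO2":   {"H": 1, "N": 1, "O": 2},
    "HNO3":   {"H": 1, "N": 1, "O": 3},
    "H2O":    {"H": 2, "O": 1},
    "H2SO4":  {"H": 2, "S": 1, "O": 4},
    "Na2SO4": {"Na": 2, "S": 1, "O": 4},
}

def tally(side):
    """Total atom counts for a list of (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in side:
        for element, n in FORMULAS[formula].items():
            total[element] += coeff * n
    return total

reactions = {
    "nitrate reduction by lead":  ([(1, "NaNO3"), (1, "Pb")], [(1, "NaNO2"), (1, "PbO")]),
    "thermal decomposition":      ([(2, "NaNO2")], [(1, "Na2O"), (1, "NO"), (1, "NO2")]),
    "nitrous acid formation":     ([(2, "NaNO2"), (1, "H2SO4")], [(2, "HNO2"), (1, "Na2SO4")]),
    "nitrous acid decomposition": ([(2, "HNO2")], [(1, "NO"), (1, "NO2"), (1, "H2O")]),
    "NO2 hydrolysis":             ([(2, "NO2"), (1, "H2O")], [(1, "HNO3"), (1, "HNO2")]),
}

for name, (lhs, rhs) in reactions.items():
    assert tally(lhs) == tally(rhs), name
print("all reactions balanced")
```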
Isotope labelling 15N
In organic synthesis, isotope-enriched sodium nitrite-15N can be used in place of normal sodium nitrite, as their reactivity is nearly identical in most reactions.
The products obtained carry the 15N isotope, and hence nitrogen NMR can be carried out efficiently.
| Physical sciences | Nitric oxyanions | Chemistry |
407822 | https://en.wikipedia.org/wiki/Grouper | Grouper | Groupers are fish of any of a number of genera in the subfamily Epinephelinae of the family Serranidae, in the order Perciformes.
Not all serranids are called "groupers"; the family also includes the sea basses. The common name "grouper" is usually given to fish in one of two large genera: Epinephelus and Mycteroperca. In addition, the species classified in the small genera Anyperodon, Cromileptes, Dermatolepis, Gracila, Saloptia, and Triso are also called "groupers". Fish in the genus Plectropomus are referred to as "coral groupers". These genera are all classified in the subfamily Epinephelinae. However, some of the hamlets (genus Alphestes), the hinds (genus Cephalopholis), the lyretails (genus Variola), and some other small genera (Gonioplectrus, Niphon, Paranthias) are also in this subfamily, and occasional species in other serranid genera have common names involving the word "grouper". Nonetheless, the word "grouper" on its own is usually taken as meaning the subfamily Epinephelinae.
Description
Groupers are teleosts, typically having a stout body and a large mouth. They are not built for long-distance, fast swimming. They can be quite large, often exceeding a meter in length; the largest is the Atlantic goliath grouper (Epinephelus itajara), though in such a large group, species vary considerably. They swallow prey rather than biting pieces off of it. They do not have many teeth on the edges of their jaws, but they have heavy crushing tooth plates inside the pharynx. They habitually eat fish, octopuses, and crustaceans. Some species prefer to ambush their prey, while others are active predators. Reports of fatal attacks on humans by the largest species, such as the giant grouper (Epinephelus lanceolatus), are unconfirmed.
Their mouths and gills form a powerful vacuum that pulls their prey in from a distance. They also use their mouths to dig into sand to form their shelters under big rocks, jetting it out through their gills.
Research indicates that roving coral groupers (Plectropomus pessuliferus) sometimes cooperate with giant morays in hunting. Groupers are also among the few animals that eat the invasive red lionfish.
Systematics
Etymology
The word "grouper" is from the Portuguese name, garoupa, which has been speculated to come from an indigenous South American language.
In Australia, "groper" is used instead of "grouper" for several species, such as the Queensland grouper (Epinephelus lanceolatus). In New Zealand, "groper" refers to a type of wreckfish, Polyprion oxygeneios, which goes by the name hapuka (from the Māori language ). In the Philippines, groupers are generally known as lapu-lapu in Luzon, while in the Visayas and Mindanao they are known as pugapo. It is known as kerapu in both Indonesian and Malay. In the Middle East, the fish is known as 'hammour', and is widely eaten, especially in the Persian Gulf region. In Latin America, the fish is known as 'mero'.
The species in the tribes Grammistini and Diploprionini secrete a mucus-like toxin in their skin called grammistin, and when they are confined in a restricted space and subjected to stress, the mucus produces a foam that is toxic to nearby fish. These fishes are often called soapfishes. They have been classified either as their own families or within subfamilies, although the 5th edition of Fishes of the World classifies these two groups as tribes within the subfamily Epinephelinae.
Classification
According to the 5th edition of Fishes of the World, the subfamily is divided up into 5 tribes containing a total of 32 genera and 234 species.
Subfamily Epinephelinae Bleeker, 1874 (groupers)
Tribe Niphonini D.S. Jordan, 1923
Niphon Cuvier, 1828
Tribe Epinephelini Bleeker, 1874
Aethaloperca Fowler, 1904
Alphestes Bloch & Schneider, 1801
Anyperodon Günther, 1859
Cephalopholis Bloch & Schneider, 1801
Chromileptes Swainson, 1839
Dermatolepis Gill, 1861
Epinephelus Bloch, 1793
Gonioplectrus Gill, 1862
Gracila Randall, 1964
Hyporthodus Gill, 1861
Mycteroperca Gill, 1862
Paranthias Guichenot, 1868
Plectropomus Oken, 1817
Saloptia J.L.B. Smith, 1964
Triso Randall, Johnson & Lowe, 1989
Variola Swainson, 1839
Tribe Diploprionini Bleeker, 1874
Aulacocephalus Temminck & Schlegel, 1843
Belonoperca Fowler & B.A. Bean, 1930
Diploprion Cuvier, 1828
Tribe Liopropomini Poey, 1867
Bathyanthias Günther, 1880
Liopropoma Gill, 1861
Rainfordia McCulloch, 1923
Tribe Grammistini Bleeker, 1857
Aporops Schultz, 1943
Grammistes Bloch & Schneider, 1801
Grammistops Schultz, 1953
Jeboehlkia Robins, 1967
Pogonoperca Günther, 1859
Pseudogramma Bleeker, 1875
Rypticus Cuvier, 1829
Suttonia J.L.B. Smith, 1953
Reproduction
Groupers are mostly monandric protogynous hermaphrodites, i.e., they mature only as females and can change sex after sexual maturity. Some species of groupers grow about a kilogram per year and are generally adolescents until they reach three kilograms when they become female. The largest males often control harems containing three to 15 females. Groupers often pair spawn, which enables large males to competitively exclude smaller males from reproducing. As such, if a small female grouper were to change sex before it could control a harem as a male, its fitness would decrease. If no male is available, the largest female that can increase fitness by changing sex will do so.
However, some groupers are gonochoristic. Gonochorism, or a reproductive strategy with two distinct sexes, has evolved independently in groupers at least five times. The evolution of gonochorism is linked to group spawning and high amounts of habitat cover. Both group spawning and habitat cover increase the likelihood of a smaller male reproducing in the presence of large males. The fitness of male groupers in environments where competitive exclusion of smaller males is impossible is correlated with sperm production and thus testicle size. Gonochoristic groupers have larger testes than protogynous groupers (10% of body mass compared to 1% of body mass), indicating the evolution of gonochorism increased male grouper fitness in environments where large males were unable to competitively exclude small males from reproducing.
Parasites
Like other fish, groupers harbor parasites, including digeneans, nematodes, cestodes, monogeneans, isopods, and copepods. A study conducted in New Caledonia has shown that coral reef-associated groupers have about ten species of parasites per fish species. Species of Pseudorhabdosynochus, monogeneans of the family Diplectanidae are typical of and especially numerous on groupers.
Modern use
Many groupers are important food fish; some are now farmed. Unlike most other fish species, which are chilled or frozen, groupers are usually sold alive in markets. Many species are popular game fish for sea-angling. Some species are small enough to be kept in aquaria, though even the small species are inclined to grow rapidly.
Groupers are commonly reported as a source of ciguatera fish poisoning. DNA barcoding of grouper species might help to control ciguatera fish poisoning, since fish are easily identified, even from meal remnants, with molecular tools.
Size
The Malaysian newspaper The Star reported a grouper caught in the waters near Pulau Sembilan, in the Strait of Malacca, in January 2008. Shenzhen News in China reported that a grouper swallowed a whitetip reef shark at the Fuzhou Sea World aquarium.
In September 2010, a Costa Rican newspaper reported the capture of a large grouper in Cieneguita, Limón; it was lured using one kilogram of bait. In November 2013, a grouper was caught and sold to a hotel in Dongyuan, China.
In August 2014, off Bonita Springs in Florida (USA), a large grouper swallowed in a single gulp a 4-foot shark that an angler had caught.
| Biology and health sciences | Acanthomorpha | null |
407852 | https://en.wikipedia.org/wiki/Hyacinth | Hyacinth | Hyacinthus is a small genus of bulbous herbs, spring-blooming perennials. They are fragrant flowering plants in the family Asparagaceae, subfamily Scilloideae and are commonly called hyacinths (). The genus is native predominantly to the Eastern Mediterranean region from the south of Turkey to Northern Israel, although naturalized more widely.
The name comes from Greek mythology: the youth Hyacinth was killed by Zephyrus, the god of the west wind, who was jealous of Hyacinth's love for Apollo; Apollo then transformed the drops of his blood into flowers.
Several species of Brodiaea, Scilla, and other plants that were formerly classified in the Liliaceae family and have flower clusters borne along the stalk also have common names with the word "hyacinth" in them. Hyacinths should also not be confused with the genus Muscari, which are commonly known as grape hyacinths.
Description
Hyacinthus grows from bulbs, each producing around four to six narrow untoothed leaves and one to three spikes or racemes of flowers. In the wild species, the flowers are widely spaced, with as few as two per raceme in H. litwinovii and typically six to eight in H. orientalis. Cultivars of H. orientalis have much denser flower spikes and are generally more robust.
Taxonomy
The genus name Hyacinthus was attributed to Joseph Pitton de Tournefort when used by Carl Linnaeus in 1753. It is derived from a Greek name, hyakinthos, used for a plant by Homer, the flowers supposedly having grown up from the blood of a youth of this name killed by the god Zephyr out of jealousy. The original wild plant known as hyakinthos to Homer has been identified with Scilla bifolia, among other possibilities. Linnaeus defined the genus Hyacinthus widely to include species now placed in other genera of the subfamily Scilloideae, such as Muscari (e.g. his Hyacinthus botryoides) and Hyacinthoides (e.g. his Hyacinthus non-scriptus).
Hyacinthus was formerly the type genus of the separate family Hyacinthaceae; prior to that, the genus was placed in the lily family Liliaceae.
Species
Three species are placed within the genus Hyacinthus:
Hyacinthus litwinovii – north-east Iran to southern Turkmenistan
Hyacinthus orientalis – Iraq, Lebanon-Syria, Israel, Turkey; common, Dutch or garden hyacinth
Hyacinthus transcaspicus – north-east Iran to southern Turkmenistan
Some authorities place H. litwinovii and H. transcaspicus in the related genus Hyacinthella, which would make Hyacinthus a monotypic genus.
Distribution
The genus Hyacinthus is considered native to the eastern Mediterranean from southern Turkey to the region of Palestine, including Lebanon and Syria, and on through Iraq and Iran to Turkmenistan. It is widely naturalized elsewhere, including Europe (Bulgaria, France, Greece, Italy, the Netherlands, Sardinia, Sicily and former Yugoslavia), Cyprus, North America (California, Pennsylvania, Texas), central Mexico, the Caribbean (Cuba, Haiti) and Korea.
Cultivation
The Dutch, or common hyacinth, of house and garden culture (H. orientalis, native to Southwest Asia) was so popular in the 18th century that over 2,000 cultivars were grown in the Netherlands, its chief commercial producer. This hyacinth has a single dense spike of fragrant flowers in shades of red, blue, white, orange, pink, violet or yellow. A form of the common hyacinth is the less hardy and smaller blue- or white-petalled Roman hyacinth. These flowers need full sunlight and should be watered moderately.
Toxicity
The inedible bulbs contain oxalic acid and may cause mild skin irritation. Protective gloves are recommended.
Some members of the plant subfamily Scilloideae are commonly called hyacinths but are not members of the genus Hyacinthus and are edible; one example is the tassel hyacinth, which forms part of the cuisine of some Mediterranean countries.
Culture
Hyacinths are often associated with spring and rebirth. The hyacinth flower is used in the Haft-Seen table setting for the Persian New Year celebration, Nowruz, held at the spring equinox. The Persian word for hyacinth means 'cluster'.
The name hyakinthos was used in Ancient Greece for at least two distinct plants, which have variously been identified as Scilla bifolia or Orchis quadripunctata and Consolida ajacis (larkspur). Plants known by this name were sacred to Aphrodite.
The hyacinth appears in the first section of T. S. Eliot's The Waste Land during a conversation between the narrator and the "hyacinth girl" that takes place in the spring.
In Roman Catholic tradition, H. orientalis represents prudence, constancy, desire of heaven and peace of mind.
American rock band The Doors released a song entitled "Hyacinth House" which appeared on their 1971 album L.A. Woman, the last to feature lead singer Jim Morrison.
Colour
The colour of the blue flower hyacinth plant varies between 'mid-blue', violet blue and bluish purple. Within this range can be found Persenche, which is an American color name (probably from French), for a hyacinth hue.
The colour analysis of Persenche is 73% ultramarine, 9% red and 18% white.
| Biology and health sciences | Asparagales | Plants |
408026 | https://en.wikipedia.org/wiki/Relativistic%20Doppler%20effect | Relativistic Doppler effect | The relativistic Doppler effect is the change in frequency, wavelength and amplitude of light, caused by the relative motion of the source and the observer (as in the classical Doppler effect, first proposed by Christian Doppler in 1842), when taking into account effects described by the special theory of relativity.
The relativistic Doppler effect is different from the non-relativistic Doppler effect as the equations include the time dilation effect of special relativity and do not involve the medium of propagation as a reference point. They describe the total difference in observed frequencies and possess the required Lorentz symmetry.
Astronomers know of three sources of redshift/blueshift: Doppler shifts; gravitational redshifts (due to light exiting a gravitational field); and cosmological expansion (where space itself stretches). This article concerns itself only with Doppler shifts.
Summary of major results
In the following table, it is assumed that for β = v/c > 0 the receiver and the source are moving away from each other, v being their relative velocity, c the speed of light, and γ = 1/√(1 − β²) the Lorentz factor.
Derivation
Relativistic longitudinal Doppler effect
Relativistic Doppler shift for the longitudinal case, with source and receiver moving directly towards or away from each other, is often derived as if it were the classical phenomenon, but modified by the addition of a time dilation term. This is the approach employed in first-year physics or mechanics textbooks such as those by Feynman or Morin.
Following this approach towards deriving the relativistic longitudinal Doppler effect, assume the receiver and the source are moving away from each other with a relative speed v as measured by an observer on the receiver or the source (the sign convention adopted here is that v is negative if the receiver and the source are moving towards each other).
Consider the problem in the reference frame of the source.
Suppose one wavefront arrives at the receiver. The next wavefront is then at a distance λ_s = c/f_s away from the receiver (where λ_s is the wavelength, f_s is the frequency of the waves that the source emits, and c is the speed of light).
The wavefront moves with speed c, but at the same time the receiver moves away with speed v during a time t_r, which is the period of light waves impinging on the receiver, as observed in the frame of the source. So t_r = λ_s/(c − v) = 1/((1 − β) f_s), where β = v/c is the speed of the receiver in terms of the speed of light. The corresponding frequency at which wavefronts impinge on the receiver in the source's frame is 1/t_r = (1 − β) f_s.
Thus far, the equations have been identical to those of the classical Doppler effect with a stationary source and a moving receiver.
However, due to relativistic effects, clocks on the receiver are time dilated relative to clocks at the source: Δt_receiver = Δt/γ, where γ = 1/√(1 − β²) is the Lorentz factor. In order to know which time is dilated, we recall that Δt here is the time in the frame in which the source is at rest. The receiver will measure the received frequency to be f_r = γ(1 − β) f_s = f_s √((1 − β)/(1 + β)).
The ratio f_s/f_r = √((1 + β)/(1 − β))
is called the Doppler factor of the source relative to the receiver. (This terminology is particularly prevalent in the subject of astrophysics: see relativistic beaming.)
The corresponding wavelengths are related by λ_r/λ_s = f_s/f_r = √((1 + β)/(1 − β)).
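As a quick numerical check of these relations, here is a minimal sketch; the function names and sample values are my own, not from the article.

```python
import math

def doppler_factor(beta: float) -> float:
    """Longitudinal Doppler factor sqrt((1+beta)/(1-beta)); beta = v/c > 0 means receding."""
    return math.sqrt((1 + beta) / (1 - beta))

def received_frequency(f_source: float, beta: float) -> float:
    """f_r = f_s * sqrt((1-beta)/(1+beta)): redshifted for a receding source."""
    return f_source / doppler_factor(beta)

f_s = 5.0e14    # Hz, roughly green light
beta = 0.5      # receding at half the speed of light
f_r = received_frequency(f_s, beta)
print(f"f_r = {f_r:.3e} Hz")                 # 2.887e+14 Hz, redshifted
print(f"wavelength ratio = {f_s/f_r:.4f}")   # sqrt(3) = 1.7321
```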
Identical expressions for relativistic Doppler shift are obtained when performing the analysis in the reference frame of the receiver with a moving source. This matches up with the expectations of the principle of relativity, which dictates that the result can not depend on which object is considered to be the one at rest. In contrast, the classic nonrelativistic Doppler effect is dependent on whether it is the source or the receiver that is stationary with respect to the medium.
Transverse Doppler effect
Suppose that a source and a receiver are both approaching each other in uniform inertial motion along paths that do not collide. The transverse Doppler effect (TDE) may refer to (a) the nominal blueshift predicted by special relativity that occurs when the emitter and receiver are at their points of closest approach; or (b) the nominal redshift predicted by special relativity when the receiver sees the emitter as being at its closest approach. The transverse Doppler effect is one of the main novel predictions of the special theory of relativity.
Whether a scientific report describes TDE as being a redshift or blueshift depends on the particulars of the experimental arrangement being related. For example, Einstein's original description of the TDE in 1907 described an experimenter looking at the center (nearest point) of a beam of "canal rays" (a beam of positive ions that is created by certain types of gas-discharge tubes). According to special relativity, the moving ions' emitted frequency would be reduced by the Lorentz factor, so that the received frequency would be reduced (redshifted) by the same factor.
On the other hand, Kündig (1963) described an experiment where a Mössbauer absorber was spun in a rapid circular path around a central Mössbauer emitter. As explained below, this experimental arrangement resulted in Kündig's measurement of a blueshift.
Source and receiver are at their points of closest approach
In this scenario, the point of closest approach is frame-independent and represents the moment where there is no change in distance versus time. Figure 2 demonstrates that the ease of analyzing this scenario depends on the frame in which it is analyzed.
Fig. 2a. If we analyze the scenario in the frame of the receiver, we find that the analysis is more complicated than it should be. The apparent position of a celestial object is displaced from its true position (or geometric position) because of the object's motion during the time it takes its light to reach an observer. The source would be time-dilated relative to the receiver, but the redshift implied by this time dilation would be offset by a blueshift due to the longitudinal component of the relative motion between the receiver and the apparent position of the source.
Fig. 2b. It is much easier if, instead, we analyze the scenario from the frame of the source. An observer situated at the source knows, from the problem statement, that the receiver is at its closest point to him. That means that the receiver has no longitudinal component of motion to complicate the analysis. (i.e. dr/dt = 0 where r is the distance between receiver and source) Since the receiver's clocks are time-dilated relative to the source, the light that the receiver receives is blue-shifted by a factor of gamma. In other words, f_r = γ f_s.
Receiver sees the source as being at its closest point
This scenario is equivalent to the receiver looking at a direct right angle to the path of the source. The analysis of this scenario is best conducted from the frame of the receiver. Figure 3 shows the receiver being illuminated by light from when the source was closest to the receiver, even though the source has moved on. Because the source's clock is time dilated as measured in the frame of the receiver, and because there is no longitudinal component of its motion, the light from the source, emitted from this closest point, is redshifted with frequency f_r = f_s/γ.
In the literature, most reports of transverse Doppler shift analyze the effect in terms of the receiver pointed at direct right angles to the path of the source, thus seeing the source as being at its closest point and observing a redshift.
Point of null frequency shift
Given that, in the case where the inertially moving source and receiver are geometrically at their nearest approach to each other, the receiver observes a blueshift, whereas in the case where the receiver sees the source as being at its closest point, the receiver observes a redshift, there obviously must exist a point where blueshift changes to a redshift. In Fig. 2, the signal travels perpendicularly to the receiver path and is blueshifted. In Fig. 3, the signal travels perpendicularly to the source path and is redshifted.
As seen in Fig. 4, null frequency shift occurs for a pulse that travels the shortest distance from source to receiver. When viewed in the frame where source and receiver have the same speed, this pulse is emitted perpendicularly to the source's path and is received perpendicularly to the receiver's path. The pulse is emitted slightly before the point of closest approach, and it is received slightly after.
One object in circular motion around the other
Fig. 5 illustrates two variants of this scenario. Both variants can be analyzed using simple time dilation arguments. Figure 5a is essentially equivalent to the scenario described in Figure 2b, and the receiver observes light from the source as being blueshifted by a factor of γ. Figure 5b is essentially equivalent to the scenario described in Figure 3, and the light is redshifted.
The only seeming complication is that the orbiting objects are in accelerated motion. An accelerated particle does not have an inertial frame in which it is always at rest. However, an inertial frame can always be found which is momentarily comoving with the particle. This frame, the momentarily comoving reference frame (MCRF), enables application of special relativity to the analysis of accelerated particles. If an inertial observer looks at an accelerating clock, only the clock's instantaneous speed is important when computing time dilation.
The converse, however, is not true. The analysis of scenarios where both objects are in accelerated motion requires a somewhat more sophisticated analysis. Not understanding this point has led to confusion and misunderstanding.
Source and receiver both in circular motion around a common center
Suppose source and receiver are located on opposite ends of a spinning rotor, as illustrated in Fig. 6. Kinematic arguments (special relativity) and arguments based on noting that there is no difference in potential between source and receiver in the pseudogravitational field of the rotor (general relativity) both lead to the conclusion that there should be no Doppler shift between source and receiver.
In 1961, Champeney and Moon conducted a Mössbauer rotor experiment testing exactly this scenario, and found that the Mössbauer absorption process was unaffected by rotation. They concluded that their findings supported special relativity.
This conclusion generated some controversy. A certain persistent critic of relativity maintained that, although the experiment was consistent with general relativity, it refuted special relativity, his point being that since the emitter and absorber were in uniform relative motion, special relativity demanded that a Doppler shift be observed. The fallacy with this critic's argument was, as demonstrated in section Point of null frequency shift, that it is simply not true that a Doppler shift must always be observed between two frames in uniform relative motion. Furthermore, as demonstrated in section Source and receiver are at their points of closest approach, the difficulty of analyzing a relativistic scenario often depends on the choice of reference frame. Attempting to analyze the scenario in the frame of the receiver involves much tedious algebra. It is much easier, almost trivial, to establish the lack of Doppler shift between emitter and absorber in the laboratory frame.
As a matter of fact, however, Champeney and Moon's experiment said nothing either pro or con about special relativity. Because of the symmetry of the setup, it turns out that virtually any conceivable theory of the Doppler shift between frames in uniform inertial motion must yield a null result in this experiment.
Rather than being equidistant from the center, suppose the emitter and absorber were at differing distances from the rotor's center. For an emitter at radius R′ and an absorber at radius R anywhere on the rotor, the relation between the emitter frequency f′ and the absorber frequency f is given by
f/f′ = √((1 − R′²ω²/c²)/(1 − R²ω²/c²)),
where ω is the angular velocity of the rotor. The emitter and absorber do not have to be 180° apart, but can be at any angle with respect to the center.
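At laboratory rotor speeds the predicted shift is minute, which is why Mössbauer-quality linewidths were needed to test it; a quick sketch with arbitrary illustrative values:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def rotor_frequency_ratio(r_emitter: float, r_absorber: float, omega: float) -> float:
    """f/f' = sqrt((1 - R'^2 w^2/c^2) / (1 - R^2 w^2/c^2)) from the formula above."""
    return math.sqrt((1 - (r_emitter * omega / C) ** 2) /
                     (1 - (r_absorber * omega / C) ** 2))

omega = 2 * math.pi * 500                        # 500 revolutions per second
print(rotor_frequency_ratio(0.10, 0.10, omega))  # 1.0: equal radii, null shift
print(rotor_frequency_ratio(0.10, 0.05, omega))  # ~1 - 4e-13: the outer emitter is
                                                 # more time-dilated, a tiny redshift
```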
Motion in an arbitrary direction
The analysis used in section Relativistic longitudinal Doppler effect can be extended in a straightforward fashion to calculate the Doppler shift for the case where the inertial motions of the source and receiver are at any specified angle.
Fig. 7 presents the scenario from the frame of the receiver, with the source moving at speed v at an angle θ_r measured in the frame of the receiver. The radial component of the source's motion along the line of sight is equal to v cos θ_r.
The equation below can be interpreted as the classical Doppler shift for a stationary receiver and a moving source, modified by the Lorentz factor γ: f_r = f_s / (γ (1 + β cos θ_r))
In the case when θ_r = 90° (cos θ_r = 0), one obtains the transverse Doppler effect: f_r = f_s/γ
In his 1905 paper on special relativity, Einstein obtained a somewhat different looking equation for the Doppler shift. After changing the variable names in Einstein's equation to be consistent with those used here, his equation reads f_r = γ (1 − β cos θ_s) f_s.
The differences stem from the fact that Einstein evaluated the angle with respect to the source rest frame rather than the receiver rest frame. θ_s is not equal to θ_r because of the effect of relativistic aberration. The relativistic aberration equation is cos θ_r = (cos θ_s − β)/(1 − β cos θ_s).
Substituting the relativistic aberration equation into the receiver-frame formula yields Einstein's source-frame form, demonstrating the consistency of these alternate equations for the Doppler shift.
Setting θ_r = 0° in the receiver-frame formula, or θ_s = 0° in Einstein's form, yields f_r = f_s √((1 − β)/(1 + β)), the expression for relativistic longitudinal Doppler shift.
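The consistency of the two forms can be replayed numerically; the helper functions below are my own, assuming the angle conventions stated above.

```python
import math

def gamma(beta: float) -> float:
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def f_receiver_frame(f_s: float, beta: float, theta_r: float) -> float:
    """f_r = f_s / (gamma (1 + beta cos theta_r)), angle in the receiver frame."""
    return f_s / (gamma(beta) * (1 + beta * math.cos(theta_r)))

def f_source_frame(f_s: float, beta: float, theta_s: float) -> float:
    """Einstein's form: f_r = gamma (1 - beta cos theta_s) f_s, angle in the source frame."""
    return gamma(beta) * (1 - beta * math.cos(theta_s)) * f_s

def aberrate(theta_s: float, beta: float) -> float:
    """cos theta_r = (cos theta_s - beta) / (1 - beta cos theta_s)."""
    return math.acos((math.cos(theta_s) - beta) / (1 - beta * math.cos(theta_s)))

f_s, beta = 1.0, 0.6
for theta_s in (0.0, math.pi / 4, math.pi / 2, math.pi):
    theta_r = aberrate(theta_s, beta)
    assert math.isclose(f_source_frame(f_s, beta, theta_s),
                        f_receiver_frame(f_s, beta, theta_r))
print("receiver-frame and source-frame forms agree")
```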
A four-vector approach to deriving these results may be found in Landau and Lifshitz (2005).
In electromagnetic waves, both the electric and the magnetic field amplitudes E and B transform in the same manner as the frequency: E_r/E_s = B_r/B_s = f_r/f_s.
Visualization
Fig. 8 helps us understand, in a rough qualitative sense, how the relativistic Doppler effect and relativistic aberration differ from the non-relativistic Doppler effect and non-relativistic aberration of light. Assume that the observer is uniformly surrounded in all directions by yellow stars emitting monochromatic light of 570 nm. The arrows in each diagram represent the observer's velocity vector relative to its surroundings, with a magnitude of 0.89 c.
In the relativistic case, the light ahead of the observer is blueshifted to a wavelength of 137 nm in the far ultraviolet, while light behind the observer is redshifted to 2400 nm in the short wavelength infrared. Because of the relativistic aberration of light, objects formerly at right angles to the observer appear shifted forwards by 63°.
In the non-relativistic case, the light ahead of the observer is blueshifted to a wavelength of 300 nm in the medium ultraviolet, while light behind the observer is redshifted to 5200 nm in the intermediate infrared. Because of the aberration of light, objects formerly at right angles to the observer appear shifted forwards by 42°.
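The wavelength and aberration figures in the two preceding paragraphs can be reproduced from the formulas given earlier; a sketch (the printed values round to the article's figures):

```python
import math

beta = 0.89
lam = 570.0  # nm, the monochromatic starlight of Fig. 8

# Relativistic case: longitudinal Doppler factor, plus aberration of a star
# that is truly at 90 degrees (apparent direction satisfies cos(theta) = beta).
D = math.sqrt((1 + beta) / (1 - beta))
print(f"ahead:  {lam / D:.0f} nm")    # 137 nm, far ultraviolet
print(f"behind: {lam * D:.0f} nm")    # ~2360 nm, short-wavelength infrared
print(f"shift:  {90 - math.degrees(math.acos(beta)):.0f} deg")   # ~63 deg

# Non-relativistic case: observer moving through a static field of sources.
print(f"ahead:  {lam / (1 + beta):.0f} nm")   # ~300 nm, medium ultraviolet
print(f"behind: {lam / (1 - beta):.0f} nm")   # ~5180 nm, intermediate infrared
print(f"shift:  {90 - math.degrees(math.atan2(1.0, beta)):.0f} deg")  # ~42 deg
```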
In both cases, the monochromatic stars ahead of and behind the observer are Doppler-shifted towards invisible wavelengths. If, however, the observer had eyes that could see into the ultraviolet and infrared, he would see the stars ahead of him as brighter and more closely clustered together than the stars behind, but the stars would be far brighter and far more concentrated in the relativistic case.
Real stars are not monochromatic, but emit a range of wavelengths approximating a black body distribution. It is not necessarily true that stars ahead of the observer would show a bluer color. This is because the whole spectral energy distribution is shifted. At the same time that visible light is blueshifted into invisible ultraviolet wavelengths, infrared light is blueshifted into the visible range. Precisely what changes in the colors one sees depends on the physiology of the human eye and on the spectral characteristics of the light sources being observed.
Doppler effect on intensity
The Doppler effect (with arbitrary direction) also modifies the perceived source intensity: this can be expressed concisely by the fact that the source strength divided by the cube of the frequency is a Lorentz invariant. This implies that the total radiant intensity (summing over all frequencies) is multiplied by the fourth power of the Doppler factor for frequency.
As a consequence, since Planck's law describes black-body radiation as having a spectral intensity in frequency proportional to f³/(e^(hf/kT) − 1) (where T is the source temperature and f the frequency), we can draw the conclusion that a black-body spectrum seen through a Doppler shift (with arbitrary direction) is still a black-body spectrum, with a temperature multiplied by the same Doppler factor as the frequency.
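A brief worked version of this argument, writing f′ = f/D for the observed frequency (so D > 1 corresponds to redshift):

```latex
% Invariance of I(f)/f^3 applied to a Planck spectrum:
\[
  \frac{I'(f')}{f'^3} = \frac{I(f)}{f^3}
  \quad\Longrightarrow\quad
  I'(f') = f'^3\,\frac{I(f)}{f^3}
  \propto \frac{f'^3}{e^{\,h f / (k T)} - 1}
  = \frac{f'^3}{e^{\,h f' / (k\,(T/D))} - 1},
\]
% since f = D f'. The result is again a Planck spectrum, with the
% temperature rescaled by the same Doppler factor as the frequency: T' = T/D.
```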
This result provides one of the pieces of evidence that serves to distinguish the Big Bang theory from alternative theories proposed to explain the cosmological redshift.
Experimental verification
Since the transverse Doppler effect is one of the main novel predictions of the special theory of relativity, the detection and precise quantification of this effect has been an important goal of experiments attempting to validate special relativity.
Ives and Stilwell-type measurements
Einstein (1907) had initially suggested that the TDE might be measured by observing a beam of "canal rays" at right angles to the beam. Attempts to measure TDE following this scheme proved to be impractical, since the maximum speed of a particle beam available at the time was only a few thousandths of the speed of light.
Fig. 9 shows the results of attempting to measure the 4861 Angstrom line emitted by a beam of canal rays (a mixture of H1+, H2+, and H3+ ions) as they recombine with electrons stripped from the dilute hydrogen gas used to fill the Canal ray tube. Here, the predicted result of the TDE is a 4861.06 Angstrom line. On the left, longitudinal Doppler shift results in broadening the emission line to such an extent that the TDE cannot be observed. The middle figures illustrate that even if one narrows one's view to the exact center of the beam, very small deviations of the beam from an exact right angle introduce shifts comparable to the predicted effect.
Rather than attempt direct measurement of the TDE, Ives and Stilwell (1938) used a concave mirror that allowed them to simultaneously observe a nearly longitudinal direct beam (blue) and its reflected image (red). Spectroscopically, three lines would be observed: An undisplaced emission line, and blueshifted and redshifted lines. The average of the redshifted and blueshifted lines would be compared with the wavelength of the undisplaced emission line. The difference that Ives and Stilwell measured corresponded, within experimental limits, to the effect predicted by special relativity.
Various of the subsequent repetitions of the Ives and Stilwell experiment have adopted other strategies for measuring the mean of blueshifted and redshifted particle beam emissions. In some recent repetitions of the experiment, modern accelerator technology has been used to arrange for the observation of two counter-rotating particle beams. In other repetitions, the energies of gamma rays emitted by a rapidly moving particle beam have been measured at opposite angles relative to the direction of the particle beam. Since these experiments do not actually measure the wavelength of the particle beam at right angles to the beam, some authors have preferred to refer to the effect they are measuring as the "quadratic Doppler shift" rather than TDE.
Direct measurement of transverse Doppler effect
The advent of particle accelerator technology has made possible the production of particle beams of considerably higher energy than was available to Ives and Stilwell. This has enabled the design of tests of the transverse Doppler effect directly along the lines of how Einstein originally envisioned them, i.e. by directly viewing a particle beam at a 90° angle. For example, Hasselkamp et al. (1979) observed the Hα line emitted by hydrogen atoms moving at speeds ranging from 2.53×108 cm/s to 9.28×108 cm/s, finding the coefficient of the second order term in the relativistic approximation to be 0.52±0.03, in excellent agreement with the theoretical value of 1/2.
Other direct tests of the TDE on rotating platforms were made possible by the discovery of the Mössbauer effect, which enables the production of exceedingly narrow resonance lines for nuclear gamma ray emission and absorption. Mössbauer effect experiments have proven themselves easily capable of detecting TDE using emitter-absorber relative velocities on the order of 2×104 cm/s. These experiments include ones performed by Hay et al. (1960), Champeney et al. (1965), and Kündig (1963).
Time dilation measurements
The transverse Doppler effect and the kinematic time dilation of special relativity are closely related. All validations of TDE represent validations of kinematic time dilation, and most validations of kinematic time dilation have also represented validations of TDE. An online resource, "What is the experimental basis of Special Relativity?" has documented, with brief commentary, many of the tests that, over the years, have been used to validate various aspects of special relativity. Kaivola et al. (1985) and McGowan et al. (1993) are examples of experiments classified in this resource as time dilation experiments. These two also represent tests of TDE. These experiments compared the frequency of two lasers, one locked to the frequency of a neon atom transition in a fast beam, the other locked to the same transition in thermal neon. The 1993 version of the experiment verified time dilation, and hence TDE, to an accuracy of 2.3×10−6.
Relativistic Doppler effect for sound and light
First-year physics textbooks almost invariably analyze Doppler shift for sound in terms of Newtonian kinematics, while analyzing Doppler shift for light and electromagnetic phenomena in terms of relativistic kinematics. This gives the false impression that acoustic phenomena require a different analysis than light and radio waves.
The traditional analysis of the Doppler effect for sound represents a low speed approximation to the exact, relativistic analysis. The fully relativistic analysis for sound is, in fact, equally applicable to both sound and electromagnetic phenomena.
Consider the spacetime diagram in Fig. 10. Worldlines for a tuning fork (the source) and a receiver are both illustrated on this diagram. The tuning fork and receiver start at O, at which point the tuning fork starts to vibrate, emitting waves and moving along the negative x-axis while the receiver starts to move along the positive x-axis. The tuning fork continues until it reaches A, at which point it stops emitting waves: a wavepacket has therefore been generated, and all the waves in the wavepacket are received by the receiver with the last wave reaching it at B. The proper time for the duration of the packet in the tuning fork's frame of reference is the length of OA, while the proper time for the duration of the wavepacket in the receiver's frame of reference is the length of OB. If n waves were emitted, then f_s = n/OA, while f_r = n/OB; the inverse slope of AB represents the speed of signal propagation (i.e. the speed of sound) to event B. We can therefore write:
c_s = (x_B − x_A)/(t_B − t_A) (speed of sound)
v_s = x_A/t_A and v_r = x_B/t_B (speeds of source and receiver)
v_s and v_r are assumed to be less than c_s, since otherwise their passage through the medium will set up shock waves, invalidating the calculation. Some routine algebra gives the ratio of frequencies: f_r/f_s = ((c_s − v_r)/(c_s − v_s)) √((1 − v_s²/c²)/(1 − v_r²/c²))
If v_s and v_r are small compared with c_s, the above equation reduces to the classical Doppler formula for sound.
If the speed of signal propagation c_s approaches c, it can be shown that the absolute speeds v_s and v_r of the source and receiver merge into a single relative speed independent of any reference to a fixed medium. Indeed, we obtain f_r/f_s = √((1 − β)/(1 + β)), the formula for relativistic longitudinal Doppler shift.
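A numerical sketch of this unified formula (my own helper; velocities are signed values along the positive x-axis, as in Fig. 10):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def doppler_ratio(v_source: float, v_receiver: float, c_signal: float) -> float:
    """f_r/f_s for collinear motion: the classical ratio times a time-dilation correction."""
    classical = (c_signal - v_receiver) / (c_signal - v_source)
    correction = math.sqrt((1 - (v_source / C) ** 2) / (1 - (v_receiver / C) ** 2))
    return classical * correction

# Sound (c_signal = 343 m/s): source receding at 10 m/s along -x, receiver
# receding at 15 m/s along +x; the relativistic correction is ~1 here.
print(doppler_ratio(-10.0, 15.0, 343.0))      # ~0.929, the classical result

# Light (c_signal = c): the formula collapses to the longitudinal shift.
beta = 0.5
print(doppler_ratio(0.0, beta * C, C))        # 0.5774...
print(math.sqrt((1 - beta) / (1 + beta)))     # the same: sqrt((1-b)/(1+b))
```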
Analysis of the spacetime diagram in Fig. 10 gave a general formula for source and receiver moving directly along their line of sight, i.e. in collinear motion.
Fig. 11 illustrates a scenario in two dimensions. The source moves with velocity v_s (at the time of emission). It emits a signal which travels at velocity c_s towards the receiver, which is traveling at velocity v_r at the time of reception. The analysis is performed in a coordinate system in which the signal's speed c_s is independent of direction.
The ratio between the proper frequencies for the source and receiver is f_r/f_s = ((c_s − v_r cos θ_r)/(c_s − v_s cos θ_s)) √((1 − v_s²/c²)/(1 − v_r²/c²))
The leading ratio has the form of the classical Doppler effect, while the square root term represents the relativistic correction. If we consider the angles relative to the frame of the source, then v_s = 0 and the equation reduces to Einstein's 1905 formula for the Doppler effect. If we consider the angles relative to the frame of the receiver, then v_r = 0 and the equation reduces to the alternative form of the Doppler shift equation discussed previously.
| Physical sciences | Waves | Physics |
408169 | https://en.wikipedia.org/wiki/Eastern%20gray%20squirrel | Eastern gray squirrel | The eastern gray squirrel (Sciurus carolinensis), also known, particularly outside of the United States, as simply the grey squirrel, is a tree squirrel in the genus Sciurus. It is native to eastern North America, where it is the most prodigious and ecologically essential natural forest regenerator. Widely introduced to certain places around the world, the eastern gray squirrel in Europe, in particular, is regarded as an invasive species.
In Europe, Sciurus carolinensis has been included since 2016 on the list of Invasive Alien Species of Union concern (the Union list). This implies that this species cannot be imported, bred, transported, commercialized, or intentionally released into the environment anywhere in the European Union.
Distribution
Sciurus carolinensis is native to the eastern and midwestern United States, and to the southerly portions of the central provinces of Canada. In the mid-1800s the population in the midwestern United States was described as being "truly astonishing," but human predation and habitat destruction through deforestation resulted in drastic population reductions, to the point that the animal was almost absent from Illinois by 1900.
The native range of the eastern gray squirrel overlaps with that of the fox squirrel (Sciurus niger), with which it is sometimes confused, although the core of the fox squirrel's range is slightly more to the west. The eastern gray squirrel is found from New Brunswick, through southwestern Quebec and throughout southern Ontario plus in southern Manitoba, south to East Texas and Florida. Breeding eastern gray squirrels are found in Nova Scotia, but whether this population was introduced or came from natural range expansion is not known.
A prolific and adaptable species, the eastern gray squirrel has been introduced to, and thrives in, several regions of the western United States. In 1966, it was introduced onto Vancouver Island in Western Canada, in the area of Metchosin, and has spread widely from there. It is considered highly invasive and a threat to both the local ecosystem and the native squirrel, the American red squirrel.
Overseas, eastern gray squirrels in Europe are a concern because they have displaced some of the native squirrels there. They have been introduced into Ireland, Britain, Italy, South Africa, and Australia (where they were extirpated by 1973).
In Ireland, the native Eurasian red squirrel (S. vulgaris) has been displaced in several eastern counties, though it still remains common in the south and west of the country. The gray squirrel is also an invasive species in Britain; it has spread across the country and has largely displaced the red squirrel. That such a displacement might happen in Italy is of concern, as gray squirrels might spread from there to other parts of mainland Europe.
Fossil record
S. carolinensis is known to have occurred at least 18 times throughout the North American Pleistocene fossil record, across eight different US states, with some fossils allegedly dating back as early as the late Irvingtonian period. Body size seems to have increased during the early to middle Holocene and then decreased to the present size seen today.
A single fossil specimen (held and cataloged by South Carolina State Museum's natural history collection as SC93.105.172–.174) is known from the Ardis local fauna site in Harleyville, South Carolina. This specimen consists of a partial humerus (.172) and two partial tibiae (.173, .174), which are anatomically indistinguishable from that of contemporary S. carolinensis. Surrounding fossil material from the site has been dated back to 18,940–18,530 years ago, during the Late Pleistocene (late Rancholabrean) epoch, and indicates that the site was likely a hardwood-conifer swamp during this time.
Etymology
The generic name, Sciurus, is derived from two Greek words, skia 'shadow' and oura 'tail'. This name alludes to the squirrel sitting in the shadow of its tail. The specific epithet, carolinensis, refers to the Carolinas, where the species was first recorded and where the animal is still extremely common. In the United Kingdom and Canada, it is simply referred to as the "grey squirrel". In the US, "eastern" is used to differentiate the species from the western gray squirrel (Sciurus griseus).
Characteristics
The eastern gray squirrel has predominantly gray fur, but it can have a brownish color. It usually has a white underside, as compared to the typical brownish-orange underside of the fox squirrel. It has a large bushy tail. Particularly in urban situations where the risk of predation is reduced, both white- and black-colored individuals are quite often found. The melanistic form, which is almost entirely black, is predominant in certain populations and in certain geographic areas, such as in large parts of southeastern Canada. Melanistic squirrels appear to exhibit a higher cold tolerance than the common gray morph; when exposed to −10 °C, black squirrels showed an 18% reduction in heat loss, a 20% reduction in basal metabolic rate, and an 11% increase in non-shivering thermogenesis capacity when compared to the common gray morph. The black coloration is caused by an incompletely dominant mutation of MC1R, where E+/E+ is a wild type (gray) squirrel, E+/EB is brown-black, and EB/EB is black.
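The incomplete-dominance pattern just described can be spelled out as a toy Mendelian cross; the sketch below is illustrative only, with genotype labels taken from the text.

```python
from collections import Counter
from itertools import product

# MC1R genotype -> coat phenotype, per the incomplete dominance described above.
PHENOTYPE = {
    ("E+", "E+"): "gray (wild type)",
    ("E+", "EB"): "brown-black",
    ("EB", "EB"): "black",
}

def offspring_ratios(parent1, parent2):
    """Expected phenotype ratios when each parent contributes one allele."""
    counts = Counter(PHENOTYPE[tuple(sorted(pair))]
                     for pair in product(parent1, parent2))
    total = sum(counts.values())
    return {pheno: n / total for pheno, n in counts.items()}

# Crossing two brown-black (E+/EB) squirrels:
print(offspring_ratios(("E+", "EB"), ("E+", "EB")))
# -> {'gray (wild type)': 0.25, 'brown-black': 0.5, 'black': 0.25}
```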
The head-and-body length is from 23 to 30 cm, the tail from 19 to 25 cm, and the adult weight varies between 400 and 600 g. They do not display sexual dimorphism, meaning there is no difference in size or coloration between the sexes.
The tracks of an eastern gray squirrel are difficult to distinguish from the related fox squirrel and Abert's squirrel, though the latter's range is almost entirely different from the gray's. Like all squirrels, the eastern gray shows four toes on the front feet and five on the hind feet. The hind foot-pad is often not visible in the track. When bounding or moving at speed, the front foot tracks will be behind the hind foot tracks. The bounding stride can be two to three feet long.
The dental formula of the eastern gray squirrel is 1.0.2.3 / 1.0.1.3 (incisors, canines, premolars, and molars on each side of the upper and lower jaws): (1 + 0 + 2 + 3 + 1 + 0 + 1 + 3) × 2 = 22 teeth in total.
The incisors exhibit indeterminate growth, meaning they grow continuously throughout life, while the cheek teeth are brachydont (low-crowned) and bunodont (bearing rounded tubercles on their crowns).
Growth and ontogeny
Newborn squirrels, called kits, kittens, or pups, weigh 13–18 grams at birth and are born blind, entirely hairless, and pink, with vibrissae already present. The skin begins to darken 7–10 days postpartum, just before the juvenile pelage grows in. Lower incisors erupt 19–21 days postpartum, upper incisors after 4 weeks, and cheek teeth during week 6. Eyes open after 21–42 days, and ears open 3–4 weeks postpartum. Weaning begins around 7 weeks postpartum and is usually finished by week 10, followed by the loss of the juvenile pelage. Full adult body mass is reached 8–9 months after birth.
Diseases
Eastern gray squirrels can carry diseases such as typhus, plague, and tularemia, which can kill the squirrels if not properly treated and which humans can contract through bites or exposure to bodily fluids. The squirrels also carry parasites such as ringworm, fleas, lice, mites, and ticks, which can kill their squirrel host; mite infestation during the cold winter months can leave the skin rough, blotchy, and prone to hair loss. These parasites are not transferred to people when the squirrels reside in attics or homes. Ticks carried by squirrels can spread Lyme disease and Rocky Mountain spotted fever, which, if not properly treated, can damage internal organs including the heart and kidneys. The eastern gray squirrel is also susceptible to diseases such as fibromatosis and squirrelpox. A squirrel with fibromatosis, a virus-induced illness, may grow massive skin tumors all over the body, and a tumor located close to the eye can cause blindness.
Behavior and ecology
Like many members of the family Sciuridae, the eastern gray squirrel is a scatter-hoarder; it hoards food in numerous small caches for later recovery. Some caches are quite temporary, especially those made near the site of a sudden abundance of food which can be retrieved within hours or days for reburial in a more secure site. Others are more permanent and are not retrieved until months later. Each squirrel is estimated to make several thousand caches each season. The squirrels have very accurate spatial memory for the locations of these caches, using distant and nearby landmarks to retrieve them. Smell is used partly to uncover food caches, and also to find food in other squirrels' caches. Scent can be unreliable when the ground is too dry or covered in snow.
Squirrels sometimes use deceptive behavior to prevent other animals from retrieving cached food. For example, they will pretend to bury the object if they feel that they are being watched. They do this by preparing the spot as usual, for instance, digging a hole or widening a crack, miming the placement of the food while actually concealing it in their mouths, and then covering up the "cache" as if they had deposited the object. They also hide behind vegetation while burying food or hide it high up in trees (if their rival is not arboreal). Such a complex repertoire suggests that the behaviours are not innate and implies theory-of-mind reasoning.
The eastern gray squirrel is one of very few mammalian species that can descend a tree head-first. It does this by turning its feet so the claws of its hind paws are backward-pointing and can grip the tree bark.
Eastern gray squirrels build a type of nest, known as a drey, in the forks of trees, consisting mainly of dry leaves and twigs. The dreys are roughly spherical, about 30 to 60 cm in diameter and are usually insulated with moss, thistledown, dried grass, and feathers to reduce heat loss. Males and females may share the same nest for short times during the breeding season, and during cold winter spells. Squirrels may share a drey to stay warm. They may also nest in the attic or exterior walls of a house, where they may be regarded as pests, as well as fire hazards due to their habit of gnawing on electrical cables. In addition, squirrels may inhabit a permanent tree den hollowed out in the trunk or a large branch of a tree.
Eastern gray squirrels are crepuscular, or more active during the early and late hours of the day, and tend to avoid the heat in the middle of a summer day. They do not hibernate.
Eastern gray squirrels can breed twice a year, but younger and less experienced mothers normally have a single litter per year in the spring. Depending on forage availability, older and more experienced females may breed again in summer. In a year of abundant food, 36% of females bear two litters, but none will do so in a year of poor food. Their breeding seasons are December to February and May to June, though this is slightly delayed in more northern latitudes. The first litter is born in February or March, the second in June or July, though, again, bearing may be advanced or delayed by a few weeks depending on climate, temperature, and forage availability. In any given breeding season, an average of 61 – 66% of females bear young. If a female fails to conceive or loses her young to unusually cold weather or predation, she re-enters estrus and has a later litter. Five days before a female enters estrus, she may attract up to 34 males from up to 500 meters away. Eastern gray squirrels exhibit a form of polyandry, in which the competing males will form a hierarchy of dominance, and the female will mate with multiple males depending on the hierarchy established.
Normally, one to four young are born in each litter, but the largest possible litter size is eight. The gestation period is about 44 days. The young are weaned around 10 weeks, though some may wean up to six weeks later in the wild. They begin to leave the nest after 12 weeks, with autumn-born young often wintering with their mother. Only one in four squirrel kits survives to one year of age, and mortality is around 55% for the following year. Mortality rates then decrease to around 30% for the following years until they increase sharply at eight years of age.
Rarely, eastern gray females can enter estrus as early as five and a half months old, but females are not normally fertile until at least one year of age. Their mean age of first estrus is 1.25 years. The presence of a fertile male will induce ovulation in a female going through estrus. Male eastern grays are sexually mature between one and two years of age. Reproductive longevity for females appears to be over 8 years, with 12.5 years documented in North Carolina. These squirrels can live to be 20 years old in captivity, but in the wild they live much shorter lives due to predation and the challenges of their habitat. Life expectancy at birth is 1–2 years; an adult typically lives to about six, with exceptional individuals reaching 12 years.
Communication
As in most other mammals, communication among eastern gray squirrel individuals involves both vocalizations and posturing. The species has a quite varied repertoire of vocalizations, including a squeak similar to that of a mouse, a low-pitched noise, a chatter, and a raspy "mehr mehr mehr". Other methods of communication include tail-flicking and other gestures, including facial expressions. Tail flicking and the "kuk" or "quaa" call are used to ward off and warn other squirrels about predators, as well as to announce when a predator is leaving the area. Squirrels also make an affectionate coo-purring sound that biologists call the "muk-muk" sound. This is used as a contact sound between a mother and her kits and, in adulthood, by the male when he courts the female during mating season. The use of vocal and visual communication has been shown to vary by location, based on elements such as noise pollution and the amount of open space. For instance, populations living in large cities generally rely more on visual signals because of the louder environment and the greater amount of open space with little visual restriction. In heavily wooded areas, by contrast, vocal signals are used more often because noise levels are lower and the dense canopy restricts visual range.
Habitat
In the wild, eastern gray squirrels can be found inhabiting large areas of mature, dense woodland ecosystems, generally covering 100 acres (40 hectares) of land. These forests usually contain large mast-producing trees such as oaks and hickories, providing ample food sources. Oak-hickory hardwood forests are generally preferred over coniferous forests due to the greater abundance of mast forage. This is why they are found only in parts of eastern Canada which do not contain boreal forest (i.e. they are found in some parts of New Brunswick, in southwestern Quebec, throughout southern Ontario and in southern Manitoba).
Eastern gray squirrels generally prefer to construct their dens upon large tree branches and within the hollow trunks of trees. They have also been known to take shelter within abandoned bird nests. The dens are usually lined with moss, thistledown, dried grass, and feathers, which likely provide insulation and reduce heat loss; a cover to the den is usually built afterwards. Dens also provide protection from predators and a secure place to raise young: young raised in a leaf nest have roughly 40 percent lower survival than young raised in a den. Squirrels tend to claim two or three dens at the same time. Canopy and midstory trees are used by squirrels to hide from predators such as hawks and owls. The typical squirrel ranges over , and home ranges tend to be smaller where squirrel densities are higher.
Close to human settlements, eastern gray squirrels are found in parks and back yards of houses within urban environments and in the farmlands of rural environments.
Ecosystem
Eastern gray squirrels forage tree seeds and disperse them through seed-caching. They may also contribute to the distribution of truffle fungal spores when they eat truffles. They are an important prey source and parasitic host for other animals.
Predation
Predators of the eastern gray squirrel include hawks, weasels, raccoons, bobcats, foxes, domestic and feral cats, snakes, owls, and dogs; its primary predators are hawks, owls, and snakes. Raccoons and weasels may take squirrels in some parts of the United States. In California, rattlesnakes prey on squirrels searching for food in heavily forested areas, while in the eastern United States foxes are a notable predator.
In its introduced range in South Africa, it has been preyed on by African harrier-hawks. When a predator approaches, other squirrels warn the threatened individual with acoustic alarm signals, and the squirrel's speed makes it hard for predators to capture it.
Diet
Eastern gray squirrels eat a range of foods, such as tree bark, tree buds, flowers, berries, many types of seeds and acorns, walnuts, and other nuts, like hazelnuts, and some types of fungi found in the forests, including fly agaric mushrooms and truffles. They can cause damage to trees by tearing the bark and eating the soft cambial tissue underneath. In Europe, sycamore and beech suffer the greatest damage. Mast-bearing gymnosperms such as cedar, hemlock, pine, and spruce are another food source, as well as angiosperms such as hickory, oak, and walnut. These trees produce important foods for them during the spring and fall months. The squirrels will vary the species they forage from depending on the season. The squirrels also raid gardens for wheat, tomatoes, corn, strawberries, and other garden crops. Sometimes they eat the tomato seeds and discard the rest. On occasion, eastern gray squirrels also prey upon insects, frogs, small rodents including other squirrels, and small birds, their eggs, and young. They also gnaw on bones, antlers, and turtle shells – likely as a source of minerals scarce in their normal diet. In urban and suburban areas, these squirrels scavenge for food in trash bins. However, such scavenged food often contains sugar, fat, and additives that can make them sick. Eastern gray squirrels are sometimes mistakenly thought to be herbivores, but they are omnivores.
Eastern gray squirrels have a high enough tolerance for humans to inhabit residential neighborhoods and raid bird feeders for millet, corn, and sunflower seeds. Some people who feed and watch birds for entertainment also intentionally feed seeds and nuts to the squirrels for the same reason. However, in the UK eastern gray squirrels can take a significant proportion of supplementary food from feeders, preventing access and reducing use by wild birds. Attraction to supplementary feeders can increase local bird nest predation, as eastern gray squirrels are more likely to forage near feeders, resulting in increased likelihood of finding nests, eggs and nestlings of small passerines.
Introductions and impact
The eastern gray squirrel is an introduced species in a variety of locations in western North America: in western Canada, to the southwest corner of British Columbia and to the city of Calgary, Alberta; in the United States, to the states of Washington and Oregon and, in California, to the city of San Francisco and the San Francisco Peninsula area in San Mateo and Santa Clara Counties, south of the city. It has become the most common squirrel in many urban and suburban habitats in western North America, from north of central California to southwest British Columbia.
By the turn of the 20th century, breeding populations of the eastern gray squirrel had been introduced into South Africa, Ireland, Italy, Australia (extirpated by 1973), and the United Kingdom.
In South Africa, though exotic, it is not usually considered an invasive species owing to its small range (only found in the extreme southwestern part of the Western Cape, going north as far as the small farming town of Franschhoek), as well because it inhabits urban areas and places greatly affected by humans, such as agricultural areas and exotic pine plantations. Here, it mostly eats acorns and pine seeds, although it will take indigenous and commercial fruit, as well. Even so, it is unable to use the natural vegetation (fynbos) found in the area, a factor which has helped to limit its spread. It does not come into contact with native squirrels due to geographic isolation (a native tree squirrel, Paraxerus cepapi, is found only in the savanna regions in the northeast of the country) and different habitats.
Gray squirrels were first introduced to Britain in the 1870s, as fashionable additions to estates. In 1921, The Times reported that the Zoological Society of London had released eastern greys to breed at liberty in Regent's Park.
They spread rapidly across England, and then became established in both Wales and parts of southern Scotland. On mainland Britain, they have almost entirely displaced native red squirrels. Larger than red squirrels and capable of storing up to four times more fat, gray squirrels are better able to survive winter conditions. They produce more young and can live at higher densities. Gray squirrels also carry the squirrelpox virus, to which red squirrels have no immunity. When an infected gray squirrel introduces squirrelpox to a red squirrel population, its decline is 17–25 times greater than through competition alone.
In Ireland, the displacement of red squirrels has not been as rapid because only a single introduction occurred, in County Longford. Schemes have been introduced to control the population of gray squirrels in Ireland to encourage the native red squirrels. Eastern gray squirrels have also been introduced to Italy, and the European Union has expressed concern that they will similarly displace the red squirrel from parts of the European continent.
As food
Gray squirrels were eaten in earlier times by Native Americans and their meat is still popular with hunters across most of their range in North America. Today, it is still available for human consumption and is occasionally sold in the United Kingdom. However, physicians in the United States have warned that squirrel brains should not be eaten, because of the risk that they may carry Creutzfeldt–Jakob disease.
Displacement of red squirrels
In Britain and Ireland, the eastern gray squirrel is not regulated by natural predators, other than the European pine marten, which is generally absent from England and Wales. This has aided its rapid population growth and led to the species being classed as a pest; it is now illegal to release captured eastern grey squirrels back into the wild in the UK. Measures are being devised to reduce its numbers, including a campaign started in 2006 named "Save Our Squirrels", which used the slogan "Save a red, eat a grey!" and attempted to reintroduce squirrel meat into the local market, with celebrity chefs promoting the idea, cookbooks introducing recipes containing squirrel, and the Forestry Commission providing a regular supply of squirrel meat to British restaurants, factories and butchers. In areas where relict populations of red squirrels survive, such as the islands of Anglesey, Brownsea and the Isle of Wight, programs exist to eradicate gray squirrels and prevent them from reaching these areas in order to allow red squirrel populations to recover and grow.
Although complex and controversial, the main factor in the eastern gray squirrel's displacement of the red squirrel is thought to be its greater fitness, and hence a competitive advantage over the red squirrel on all measures. Red squirrel populations have typically become extinct within 15 years of the grey squirrel's introduction to their habitat. The eastern gray squirrel tends to be larger and stronger than the red squirrel and has been shown to have a greater ability to store fat for winter. Because of the dearth of trees in Ireland for squirrels to reside in, the red squirrel is the species most directly harmed there by the grey squirrel's invasion. The gray squirrel can, therefore, compete more effectively for a larger share of the available food, resulting in relatively lower survival and breeding rates among the red squirrel. Parapoxvirus may also be a strongly contributing factor; red squirrels have long been fatally affected by the disease, while eastern gray squirrels are unaffected but thought to be carriers – although how the virus is transmitted has yet to be determined. Red squirrel extinction rates can be 20–25 times greater in areas with confirmed cases of squirrel pox than in areas without the disease. This competition has led some to argue that the eastern gray squirrel acts as a keystone species: by wiping out the red squirrel it reduces competition, making it easier for more gray squirrels to establish themselves in Ireland. However, several cases of red squirrels surviving have been reported, as they have developed an immunity – although their populations are still being severely affected. The red squirrel is also less tolerant of habitat destruction and fragmentation, which has contributed to its population decline, while the more adaptable eastern gray squirrel has taken advantage and expanded. Proposed control measures focus on keeping red squirrels in their remaining strongholds, such as parts of Ireland, while excluding grey squirrels from those places entirely.
Similar factors appear to have been at play in the Pacific region of North America, where the native American red squirrel has been largely displaced by the eastern gray squirrel in parks and forests throughout much of the region.
Ironically, "fears" for the future of the eastern gray squirrel arose in 2008, as the melanistic form (black) began to spread through the southern British population. In the UK, if a "grey squirrel" (eastern gray squirrel) is trapped, under the Wildlife and Countryside Act 1981, it is illegal to release it or to allow it to escape into the wild; instead, it is legally required be "humanely dispatched."
In the late 1990s, Italy's National Wildlife Institute and University of Turin launched an eradication attempt to halt the spread of gray squirrels in northwest Italy, but court action by animal rights groups blocked this. Hence gray squirrels are expected to cross the Alps into France and Switzerland in the next few decades.
| Biology and health sciences | Rodents | Animals |
408195 | https://en.wikipedia.org/wiki/Cauchy%20principal%20value | Cauchy principal value | In mathematics, the Cauchy principal value, named after Augustin-Louis Cauchy, is a method for assigning values to certain improper integrals which would otherwise be undefined. In this method, a singularity on the interval of integration is avoided by restricting the integral to the non-singular part of the domain.
Formulation
Depending on the type of singularity in the integrand, the Cauchy principal value is defined according to the following rules:
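The displayed rules did not survive in this copy of the text; as an illustration only, the standard formulation (writing f for the integrand, b for a singular point interior to the interval [a, c], and treating the singularity at infinity symmetrically) can be sketched as
\[
\operatorname{PV}\!\int_a^c f(x)\,dx = \lim_{\varepsilon\to 0^+}\left[\int_a^{b-\varepsilon} f(x)\,dx + \int_{b+\varepsilon}^{c} f(x)\,dx\right],
\qquad
\operatorname{PV}\!\int_{-\infty}^{\infty} f(x)\,dx = \lim_{a\to\infty}\int_{-a}^{a} f(x)\,dx .
\]
In both rules the singular point (or infinity) is approached symmetrically, which is what distinguishes the principal value from an ordinary improper integral.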
In some cases it is necessary to deal simultaneously with singularities both at a finite number and at infinity. This is usually done by a limit of the form
In those cases where the integral may be split into two independent, finite limits,
and
then the function is integrable in the ordinary sense. The result of the procedure for principal value is the same as the ordinary integral; since it no longer matches the definition, it is technically not a "principal value".
The Cauchy principal value can also be defined in terms of contour integrals of a complex-valued function with a pole on a contour . Define to be that same contour, where the portion inside the disk of radius around the pole has been removed. Provided the function is integrable over no matter how small becomes, then the Cauchy principal value is the limit:
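Written out explicitly, and assuming the notation C(ε) for the contour with the ε-disk around the pole removed (a notation chosen here for illustration), the limit reads
\[
\operatorname{PV}\!\int_{C} f(z)\,dz = \lim_{\varepsilon\to 0^+}\int_{C(\varepsilon)} f(z)\,dz .
\]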
In the case of Lebesgue-integrable functions, that is, functions which are integrable in absolute value, these definitions coincide with the standard definition of the integral.
If the function is meromorphic, the Sokhotski–Plemelj theorem relates the principal value of the integral over with the mean-value of the integrals with the contour displaced slightly above and below, so that the residue theorem can be applied to those integrals.
Principal value integrals play a central role in the discussion of Hilbert transforms.
Distribution theory
Let be the set of bump functions, i.e., the space of smooth functions with compact support on the real line . Then the map
defined via the Cauchy principal value as
is a distribution. The map itself may sometimes be called the principal value (hence the notation p.v.). This distribution appears, for example, in the Fourier transform of the sign function and the Heaviside step function.
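The defining formula is missing above; in its standard form (sketched here for illustration, with u a test function in C_c^∞(ℝ)), the map is
\[
u \;\longmapsto\; \operatorname{p.v.}\!\left(\frac{1}{x}\right)[u] = \lim_{\varepsilon\to 0^+}\int_{|x|>\varepsilon}\frac{u(x)}{x}\,dx ,
\]
and the limit exists because the contributions from x and −x nearly cancel for smooth, compactly supported u.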
Well-definedness as a distribution
To prove the existence of the limit
for a Schwartz function , first observe that is continuous on as
and hence
since is continuous and L'Hopital's rule applies.
Therefore, exists and by applying the mean value theorem to we get:
And furthermore:
we note that the map
is bounded by the usual seminorms for Schwartz functions . Therefore, this map defines, as it is obviously linear, a continuous functional on the Schwartz space and therefore a tempered distribution.
Note that the proof needs merely to be continuously differentiable in a neighbourhood of 0 and to be bounded towards infinity. The principal value therefore is defined on even weaker assumptions such as integrable with compact support and differentiable at 0.
More general definitions
The principal value is the inverse distribution of the function and is almost the only distribution with this property:
where is a constant and the Dirac distribution.
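In the usual notation (reconstructed here for illustration, with K an arbitrary constant and δ the Dirac distribution), the statement is that every distributional solution of x u = 1 has the form
\[
u = \operatorname{p.v.}\!\left(\frac{1}{x}\right) + K\,\delta .
\]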
In a broader sense, the principal value can be defined for a wide class of singular integral kernels on the Euclidean space . If has an isolated singularity at the origin, but is an otherwise "nice" function, then the principal-value distribution is defined on compactly supported smooth functions by
Such a limit may not be well defined, or, being well-defined, it may not necessarily define a distribution. It is, however, well-defined if is a continuous homogeneous function of degree whose integral over any sphere centered at the origin vanishes. This is the case, for instance, with the Riesz transforms.
Examples
Consider the values of two limits:
This is the Cauchy principal value of the otherwise ill-defined expression
Also:
Similarly, we have
This is the principal value of the otherwise ill-defined expression
but
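The limits themselves are missing above; the examples usually given at this point (reconstructed in a standard form rather than quoted from the original) are
\[
\lim_{a\to 0^+}\left(\int_{-1}^{-a}\frac{dx}{x}+\int_{a}^{1}\frac{dx}{x}\right)=0,
\qquad
\lim_{a\to 0^+}\left(\int_{-1}^{-2a}\frac{dx}{x}+\int_{a}^{1}\frac{dx}{x}\right)=\ln 2,
\]
where the first limit is the Cauchy principal value of the otherwise ill-defined expression \(\int_{-1}^{1}\frac{dx}{x}\); and similarly
\[
\lim_{a\to\infty}\int_{-a}^{a}\frac{2x\,dx}{x^{2}+1}=0,
\qquad\text{but}\qquad
\lim_{a\to\infty}\int_{-a}^{2a}\frac{2x\,dx}{x^{2}+1}=\ln 4 ,
\]
where the first limit is the principal value of \(\int_{-\infty}^{\infty}\frac{2x\,dx}{x^{2}+1}\). The contrasting limits show that the value depends on how symmetrically the singular point or infinity is approached.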
Notation
Different authors use different notations for the Cauchy principal value of a function , among others:
as well as P.V. and V.P.
| Mathematics | Complex analysis | null |
408201 | https://en.wikipedia.org/wiki/Sieve | Sieve | A sieve, fine mesh strainer, or sift, is a tool used for separating wanted elements from unwanted material or for controlling the particle size distribution of a sample, using a screen such as a woven mesh or net or perforated sheet material. The word sift derives from sieve.
In cooking, a sifter is used to separate and break up clumps in dry ingredients such as flour, as well as to aerate and combine them. A strainer (see Colander), meanwhile, is a form of sieve used to separate suspended solids from a liquid by filtration.
Industrial strainer
Some industrial strainers available are simplex basket strainers, duplex basket strainers, T-strainers and Y-strainers. Simplex basket strainers are used to protect valuable or sensitive equipment in systems that are meant to be shut down temporarily. Some commonly used strainers are bell mouth strainers, foot valve strainers, and basket strainers. Most processing industries (mainly pharmaceutical, coatings and liquid food industries) will opt for a self-cleaning strainer instead of a basket strainer or a simplex strainer due to limitations of simple filtration systems. Self-cleaning strainers or filters are more efficient and provide an automatic filtration solution.
Sieving
Sieving is a simple technique for separating particles of different sizes. A sieve such as used for sifting flour has very small holes. Coarse particles are separated or broken up by grinding against one another and the screen openings. Depending upon the types of particles to be separated, sieves with different types of holes are used. Sieves are also used to separate stones from sand. Sieving plays an important role in food industries where sieves (often vibrating) are used to prevent the contamination of the product by foreign bodies. The design of the industrial sieve is of primary importance here.
Triage sieving refers to grouping people according to their severity of injury.
Wooden sieves
The mesh in a wooden sieve might be made from wood or wicker. Use of wood to avoid contamination is important when the sieve is used for sampling. Henry Stephens, in his Book of the Farm, advised that the withes of a wooden riddle or sieve be made from fir or willow with American elm being best. The rims would be made of fir, oak or, especially, beech.
US standard test sieve series
A sieve analysis (or gradation test) is a practice or procedure, commonly used in civil engineering and sedimentology, to assess the particle size distribution (also called gradation) of a granular material. Sieves are typically used in combinations of four to eight sieves of different sizes.
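As a rough illustration of how such a test is reduced to a gradation curve, the short Python sketch below computes the percent retained on, and the cumulative percent passing, each sieve in a stack; the sieve openings and retained masses used here are invented for the example and are not taken from any standard.

# Minimal sketch of reducing sieve-analysis data (hypothetical values).
# Each entry pairs a sieve opening in millimetres with the mass retained on it in grams.
sieves = [(4.75, 50.0), (2.00, 120.0), (0.850, 200.0), (0.425, 150.0), (0.250, 80.0), (0.075, 60.0)]
pan_mass = 40.0  # material that passes the finest sieve and collects in the pan

total_mass = sum(mass for _, mass in sieves) + pan_mass
cumulative_retained = 0.0
for opening, mass in sieves:
    cumulative_retained += mass
    percent_retained = 100.0 * mass / total_mass
    percent_passing = 100.0 * (total_mass - cumulative_retained) / total_mass
    print(f"{opening:6.3f} mm sieve: {percent_retained:5.1f}% retained, {percent_passing:5.1f}% passing")

Plotting percent passing against sieve opening (usually on a logarithmic axis) gives the gradation curve used in practice.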
Other types
Chinois, or conical sieve used as a strainer, also sometimes used like a food mill
Cocktail strainer, a bar accessory
Colander, a (typically) bowl-shaped sieve used as a strainer in cooking
Flour sifter or bolter, used in flour production and baking
Graduated sieves, used to separate varying small sizes of material, often soil, rock or minerals
Mesh strainer, or just "strainer", usually consisting of a fine metal mesh screen on a metal frame
Laundry strainer, to drain boiling water from laundry removed from a wash copper, usually with a wooden frame to facilitate manual handling with hot contents
Riddle, used for soil
Spider, used in Chinese cooking
Tamis, also known as a drum sieve
Tea strainer, specifically intended for use when making tea
Zaru, or bamboo sieve, used in Japanese cooking
Other uses
"Sieve" is a common term used in trash-talk referring to a goaltender in ice hockey who lets in too many goals
"Leaks like a sieve" is an English language idiom to describe a container that has multiple leaks, or, by allegory, an organization whose confidential information is routinely disclosed to the public.
| Technology | Flexible components | null |
408239 | https://en.wikipedia.org/wiki/Tracer%20ammunition | Tracer ammunition | Tracer ammunition, or tracers, are bullets or cannon-caliber projectiles that are built with a small pyrotechnic charge in their base. When fired, the pyrotechnic composition is ignited by the burning powder and burns very brightly, making the projectile trajectory visible to the naked eye during daylight, and very bright during nighttime firing. This allows the shooter to visually trace the trajectory of the projectile and thus make necessary ballistic corrections, without having to confirm projectile impacts and without even using the sights of the weapon. Tracer fire can also be used as a marking tool to signal other shooters to concentrate their fire on a particular target during battle.
When used, tracers are usually loaded as every fifth round in machine gun belts, referred to as four-to-one tracer. Platoon and squad leaders will load some tracer rounds in their magazine or even use solely tracers to mark targets for their soldiers to fire on. Tracers are also sometimes placed two or three rounds from the bottom of magazines to alert shooters that their weapons are almost empty. During World War II, aircraft with fixed machine guns or cannons mounted would sometimes have a series of tracer rounds added near the end of the ammunition belts, to alert the pilot that he was almost out of ammunition. However, this practice similarly alerted astute enemies that their foes were nearly out of ammunition. More often, however, the entire magazine was loaded four-to-one, on both fixed offensive and flexible defensive guns, to help mitigate the difficulties of aerial gunnery. Tracers were very common on most WWII aircraft, except for night fighters, which needed to be able to attack and shoot down the enemy before they realized they were under attack, and without betraying their own location to the enemy defensive gunners. The United States relied heavily on tracer ammunition for the defensive Browning M2 .50 caliber machine guns on its heavy bombers such as the B-24 Liberator.
History
There are fireworks manuals from the 14th century specifying a way to make "flaming cannonballs," with pitch, gunpowder, a cloth/cordage cover, and finally smearing them with lard and tallow. These were however not available for small arms, and before the development of tracers, gunners still relied on seeing their bullets' impact to adjust their aim. However, these were not always visible, especially as the effective range of ammunition increased dramatically during the later half of the 19th century, meaning the bullets could impact a mile or more away in long-range area fire. In the early 20th century, ammunition designers developed "spotlight" bullets, which would create a flash or smoke puff on impact to increase their visibility. However, these projectiles were deemed in violation of the Hague Conventions prohibition of "exploding bullets." This strategy was also useless when firing at aircraft, as there was nothing for the projectiles to hit if they missed the target. Designers also developed bullets that would trail white smoke. However, these designs required an excessive amount of mass loss to generate a satisfactory trail. The loss of mass en route to the target severely affected the bullet's ballistics.
The United Kingdom was the first to develop and introduce a tracer round, a version of the .303 cartridge in 1915. The United States introduced a .30-06 tracer in 1917. Prior to adopting red (among a variety of other colors) bullet tips for tracers, American tracers were identified by blackened cartridge cases.
Tracers proved useful as a countermeasure against Zeppelins used by Germany during World War I. The airships were used for reconnaissance, surveillance, and bombing operations. Normal bullets merely had the effect of causing a slow leak, but tracers could ignite the hydrogen gasbags, and bring down the airship quickly.
In World War II US Naval and marine aircrew were issued tracer rounds with their side arms for emergency signaling use as well as defense.
Construction
A tracer projectile is constructed with a hollow base filled with a pyrotechnic flare material, made of a mixture of a very finely ground metallic fuel, oxidizer, and a small amount of organic fuel. Metallic fuels include magnesium, aluminum, and occasionally zirconium. The oxidizer is a salt molecule that contains oxygen combined with a specific atom responsible for the desired color output. Upon ignition, the heated salt releases its oxygen to sustain the combustion of the fuel in the mixture. The color-emitting atom in the salt is also released and reacts chemically with excess oxygen providing the source of the colored flame. In NATO standard ammunition (including US), the oxidizer salt is usually a mixture of strontium compounds (nitrate, peroxide, etc.) and the metallic fuel is magnesium. Burning strontium yields a bright red light. Russian and Chinese tracer ammunition generates green light using barium salts. An oxidizer and metallic fuel alone, however, do not make a practical pyrotechnic for the purpose of producing colored light. The reaction is too energetic, consuming all materials in one big flash of white light, white light being the characteristic output of magnesium oxide (MgO), for example. Therefore, in the case of using strontium nitrate and magnesium, to produce a red-colored flame that is not over-powered by the white light from the burning fuel, a chlorine donor is provided in the pyrotechnic mixture, so that strontium chloride can also form in the flame, cooling it so that the white light of MgO is greatly reduced. Cooling the flame in this manner also slows the reaction so that the mixture has an appreciable burn time. Polyvinyl chloride (PVC) is a typical organic fuel and chlorine donor used for this purpose. Some modern designs use compositions that produce little to no visible light and radiate mainly in infrared, being visible only on night vision equipment.
Types
There are three types of tracers: bright tracer, subdued tracer, and dim tracer. Bright tracers are the standard type, which starts burning very shortly after exiting the muzzle. A disadvantage of bright tracers is that they give away the shooter's location to the enemy; as a military adage puts it, "tracers work both ways". Bright tracers can also overwhelm night-vision devices, rendering them useless. Subdued tracers burn at full brightness after a hundred or more yards to avoid giving away the gunner's position. Dim tracers burn very dimly but are clearly visible through night-vision equipment.
The M196 tracer cartridge (54-grain bullet) in 5.56×45mm NATO was developed for the original M16 rifle and is compatible with the M16A1 barrel, which also uses a 1:12 rifling twist. It has a red tip and is designed to trace out to 500 yards, and its trajectory matches the M193 (56-grain) ball cartridge, which has no tip color. Trajectory match, or ballistic match, is achieved between two bullets of slightly different weight and aerodynamic characteristics by adjusting the cartridge propellant weight, propellant type, and muzzle velocity, to remain within safe pressure limits, yet provide each bullet with a trajectory to the target that is nearly identical over all atmospheric conditions and target engagement ranges, while using the same gunsight aimpoint. Trajectory match is not intended to be perfect; a perfect match is an engineering impossibility even between the most similar of bullets, and it is further complicated by the tracer losing mass and changing its drag properties as it flies. The intent is that the tracer matches the ball round well enough for the purposes of machinegun fire.
The M856 tracer cartridge (63.7-grain bullet) is used in the M16A2/3/4, M4-series, and M249 weapons (among other 5.56mm NATO weapons). This round is designed to trace out to 875 yards, has an orange tip color, and is trajectory matched to the M855 (62-grain, green tip) ball cartridge. The M856 tracer should not be used in the M16A1 except under emergency conditions and only in relatively warm weather, because the M16A1's slower 1:12 rifling twist is not sufficient to properly stabilize this projectile at colder combat service temperatures (freezing down to –40 degrees), when the air density is much greater, disrupting the gyroscopic stability of the bullet. The M16A2 and newer models have a rifling twist of 1 in 7" necessary to stabilize the M856 tracer round under all temperature conditions. (The M196, however, does function safely in all 1:7 twist barrels, as well as those with 1:12 twist.)
The M25 is an orange-tipped .30-06 Springfield tracer cartridge consisting of a 145 gr bullet with 50 grains of IMR 4895 powder. The tracer compound contains composition R 321, which is 16% polyvinyl chloride, 26% magnesium powder, and 52% strontium nitrate.
The M62 is an orange-tipped 7.62×51mm NATO tracer consisting of a 142 gr bullet with 46 grains of WC 846 powder. The tracer compound contains composition R 284 which is 17% polyvinyl chloride, 28% magnesium powder, and 55% strontium nitrate. (This is the same composition used on the M196.)
The M276 is a violet-tipped 7.62×51mm NATO dim tracer that uses composition R 440, which consists of barium peroxide, strontium peroxide, calcium resinate (for example, calcium abietate), and magnesium carbonate.
Tracer compositions can also emit primarily in infrared, for use with night-vision devices. An example composition is boron, potassium perchlorate, sodium salicylate, iron carbonate or magnesium carbonate (as combustion retardant), and binder. Many variants exist.
Other applications
Tracers can also serve to direct fire at a given target because they are visible to other combatants. The disadvantage is that they betray the gunner's position; the tracer path leads back to its source. To make it more difficult for an enemy to do this, most modern tracers have a delay element, which results in the trace becoming visible some distance from the muzzle.
Depending on the target, tracer bullet lethality may be similar to standard ball ammunition. The forward portion of a tracer bullet contains a substantial slug of lead filler, nearly as much as the non-tracer ball round that it trajectory matches. In the case of the M196/M193 bullet set, the lethality differences are probably negligible for this reason. However, with the M856/M855 bullet set, the M855 ball round contains a steel penetrator tip that is not present in the M856 tracer bullet. As a result, different lethality effects can be expected against various targets. Nevertheless, under some circumstances, a slight degradation in lethality can often be made up for by the psychological and suppressive-fire effects tracer bullets can have on an enemy who is receiving them.
Besides guiding the shooter's direction of fire, tracer rounds can also be loaded at the end of a magazine to alert the shooter that the magazine is almost empty. This is particularly useful in weapons that do not lock the bolt back when empty (such as the AK-47). During World War II, the Soviet Air Force also used this practice for aircraft machine guns. One disadvantage in this practice is that the enemy is alerted that the pilot or shooter is low on ammunition and possibly vulnerable. For ground forces, this generally offers no tactical advantage to the enemy, since a soldier with a crew-served weapon such as a machine gun who is out of ammunition is supposed to alert his team that he is "dry" and rely on their cover fire while he reloads the machine gun. Thus, an enemy must risk exposing himself in order to attack the reloading soldier.
Modern jet fighters primarily rely on radar and infrared seeker missiles to track and destroy enemy planes and laser-guided missiles to attack surface targets, rather than the plane's cannon, which may be just an ancillary weapon for air-to-air combat; although in the ground attack role, cannon fire may be emphasized. However, modern fighter aircraft use gyroscopes and inertial sensors coupled with radar and optical computing gunsights that make the use of tracers in cannon ammunition unnecessary. As long as the pilot can put the "pipper" (aiming point) in the head-up display (HUD) onto the target, he can be assured that the burst will be on target since the computers automatically compute range, closing rate, deflection, lateral accelerations, and even weather conditions to calculate target lead and aimpoint. Thus one of the primary reasons for using tracers on aircraft in the first place, uncertainty over where the bullets will end up in relation to the target, is removed.
Another use for tracers is in the hull machine guns of mostly outdated tanks, where the machine gun operator cannot sight directly along the barrel and therefore has to rely on tracer bullets to guide his aim. Modern main battle tanks and armored fighting vehicles, however, employ advanced fire control systems that can accurately aim secondary weapons along with the main armament, although the continued use of tracers provides reassurance to gunners on the direction of machine gun fire.
In anti-aircraft autocannon tracer ammunition the tracer material can be part of a shell self-destruct mechanism to prevent missed shots from falling back down on friendly targets. As the tracer material burns to the end it triggers the self-destruct.
Safety restrictions
Tracers are associated with grass fires if used in summer over dry vegetation.
In the UK, the use of tracers is prohibited on National Rifle Association-operated ranges, due to the increased risk of fire. Use of tracers is usually only authorized during military training. During spells of hot weather, the Ministry of Defence will suspend the use of tracers for non-essential training to reduce the risk of wildfires on sites such as Salisbury Plain.
In July 2009, a large fire was started by tracer ammunition near Marseille, France, an area where shrub vegetation is very dry and flammable in the summer, and where normally this kind of ammunition should not be used.
On February 24, 2013, a fire was started at DFW Gun Club in Dallas, Texas, by the use of a tracer round inside the facility.
On July 3, 2018, the Lake Christine Fire near Basalt, Colorado, was started by tracer rounds fired at a gun range. The two individuals who were deemed responsible for the fire were shooting the tracer round ammunition outside of the designated target area at the gun range.
| Technology | Ammunition | null |
408703 | https://en.wikipedia.org/wiki/Intercropping | Intercropping | Intercropping is a multiple cropping practice that involves the cultivation of two or more crops simultaneously on the same field, a form of polyculture. The most common goal of intercropping is to produce a greater yield on a given piece of land by making use of resources or ecological processes that would otherwise not be utilized by a single crop.
Methods
The degree of spatial and temporal overlap in the two crops can vary somewhat, but both requirements must be met for a cropping system to be an intercrop. Numerous types of intercropping, all of which vary the temporal and spatial mixture to some degree, have been identified.
Mixed intercropping
Mixed intercropping consists of multiple crops freely mixed in the available space. In the 21st century, it remains a common practice in Ethiopia, Eritrea, Georgia, and a few other places. Freely mixed intercropping has been practiced for thousands of years. In medieval England, farmers mixed oat and barley, which they called dredge, or dredge corn, to make livestock feed. French peasants similarly ground wheat and rye together to make pain de méteil, or bread of mixed grains. Ease of harvesting and buyer preferences led later farmers to plant single-species fields instead.
Row crops
A row crop is a crop that can be planted in rows wide enough to allow it to be tilled or otherwise cultivated by agricultural machinery, machinery tailored for the seasonal activities of row crops. Such crops are sown by drilling or transplanting rather than broadcasting. They are often grown in market gardening (truck farming) contexts or in kitchen gardens. Growing row crops first started in Ancient China in the 6th century BC.
Temporal
Temporal intercropping uses the practice of sowing a fast-growing crop with a slow-growing crop, so that the fast-growing crop is harvested before the slow-growing crop starts to mature.
Relay
Further temporal separation is found in relay cropping, where the second crop is sown during the growth, often near the onset of reproductive development or fruiting, of the first crop, so that the first crop is harvested to make room for the full development of the second.
Crop rotation is a related practice but is not a form of intercropping, as the different crops are grown in separate growing seasons rather than in a single season.
Potential benefits
Resource partitioning
Careful planning is required, taking into account the soil, climate, crops, and varieties. It is particularly important not to have crops competing with each other for physical space, nutrients, water, or sunlight. Examples of intercropping strategies are planting a deep-rooted crop with a shallow-rooted crop, or planting a tall crop with a shorter crop that requires partial shade. Inga alley cropping has been proposed as an alternative to the ecological destruction of slash-and-burn farming.
When crops are carefully selected, other agronomic benefits are also achieved.
Mutualism
Planting two crops in close proximity can especially be beneficial when the two plants interact in a way that increases one or both of the plant's fitness (and therefore yield). For example, plants that are prone to tip over in wind or heavy rain (lodging-prone plants), may be given structural support by their companion crop. Climbing plants such as black pepper can also benefit from structural support. Some plants are used to suppress weeds or provide nutrients. Delicate or light-sensitive plants may be given shade or protection, or otherwise wasted space can be utilized. An example is the tropical multi-tier system where coconut occupies the upper tier, banana the middle tier, and pineapple, ginger, or leguminous fodder, medicinal or aromatic plants occupy the lowest tier.
Intercropping of compatible plants can also encourage biodiversity: McDaniel et al. (2014) and Lori et al. (2017) found that a legume intercrop increases soil diversity, and mixed plantings can provide a habitat for a variety of insects and soil organisms that would not be present in a single-crop environment. These organisms may provide crops valuable nutrients, such as through nitrogen fixation.
Pest management
There are several ways in which increasing crop diversity may help improve pest management. For example, such practices may limit outbreaks of crop pests by increasing predator biodiversity. Additionally, reducing the homogeneity of the crop can potentially increase the barriers against biological dispersal of pest organisms through the crop.
There are several ways pests, typically herbivorous insects, can be controlled through intercropping:
Trap cropping: a crop that is more attractive to pests than the production crop is planted nearby, so that the pests target it instead of the production crop.
Repellent intercropping: an intercrop with a repellent effect on certain pests is planted so that it masks the smell of the production crop and keeps pests away from it.
Push-pull cropping: a combination of trap cropping and repellent intercropping, in which an attractant crop draws the pest while a repellent crop pushes it away.
Limitations
Intercropping to reduce pest damage in agriculture has been deployed with varying success. For example, while many trap crops have successfully diverted pests off of focal crops in small-scale greenhouse, garden, and field experiments, only a small portion of these plants have been shown to reduce pest damage at larger commercial scales. Furthermore, increasing crop diversity through intercropping does not necessarily increase the presence of the predators of crop pests. In a 2008 systematic review of the literature, predators of pests increased under crop diversification strategies in only 53% of the studies examined, and crop diversification led to increased yield in only 32% of the studies. A common explanation for reported trap cropping failures is that attractive trap plants only protect nearby plants if the insects do not move back into the main crop. In a 2006 review of 100 trap cropping examples, only 10 trap crops were classified as successful at a commercial scale, and in all successful cases, trap cropping was supplemented with management practices that specifically limited insect dispersal from the trap crop back into the main crop.
| Technology | Agriculture_2 | null |
409255 | https://en.wikipedia.org/wiki/Green%20manure | Green manure | In agriculture, a green manure is a crop specifically cultivated to be incorporated into the soil while still green. Typically, the green manure's biomass is incorporated with a plow or disk, as is often done with (brown) manure. The primary goal is to add organic matter to the soil for its benefits. Green manuring is often used with legume crops to add nitrogen to the soil for following crops, especially in organic farming, but is also used in conventional farming.
Method of application
Farmers apply green manure by blending available plant discards into the soil. Farmers begin the process of green manuring by growing legumes or collecting tree/shrub clippings. Harvesters gather the green manure crops and mix the plant material into the soil. The un-decomposed plants prepare the ground for cash crops by slowly releasing nutrients like nitrogen into the soil.
Farmers may decide to add the green manure into the soil before or after cash crop planting. This variety in planting schedules can be seen in rice farming.
Functions
Green manures usually perform multiple functions that include soil improvement and soil protection:
Leguminous green manures such as clover and vetch contain nitrogen-fixing symbiotic bacteria in root nodules that fix atmospheric nitrogen in a form that plants can use. This performs the vital function of fertilization.
Depending on the species of cover crop grown, the amount of nitrogen released into the soil lies between 40 and 200 pounds per acre. With green manure use, the amount of nitrogen that is available to the succeeding crop is usually in the range of 40-60% of the total amount of nitrogen that is contained within the green manure crop.
Green manure acts mainly as soil-acidifying matter to decrease the alkalinity/pH of alkali soils by generating humic acid and acetic acid.
Incorporation of cover crops into the soil allows the nutrients held within the green manure to be released and made available to the succeeding crops. This results immediately from an increase in abundance of soil microorganisms from the degradation of plant material that aid in the decomposition of this fresh material. This additional decomposition also allows for the re-incorporation of nutrients that are found in the soil in a particular form such as nitrogen (N), potassium (K), phosphorus (P), calcium (Ca), magnesium (Mg), and sulfur (S).
Microbial activity from incorporation of cover crops into the soil leads to the formation of mycelium and viscous materials which benefit the health of the soil by increasing its soil structure (i.e. by aggregation). The increased percentage of organic matter (biomass) improves water infiltration and retention, aeration, and other soil characteristics. The soil is more easily turned or tilled than non-aggregated soil. Further aeration of the soil results from the ability of the root systems of many green manure crops to efficiently penetrate compact soils. The amount of humus found in the soil also increases with higher rates of decomposition, which is beneficial for the growth of the crop succeeding the green manure crop. Non-leguminous crops are primarily used to increase biomass.
The root systems of some varieties of green manure grow deep in the soil and bring up nutrient resources unavailable to shallower-rooted crops.
Weed suppression is another common cover crop function; non-leguminous crops (e.g. buckwheat) are primarily used for this purpose. The deep rooting properties of many green manure crops also make them efficient at suppressing weeds.
Some green manure crops, when allowed to flower, provide forage for pollinating insects. Green manure crops also often provide habitat for predatory beneficial insects, which allow for a reduction in the application of insecticides where cover crops are planted.
Some green manure crops (e.g. winter wheat and winter rye) can also be used for grazing.
Erosion control is often also taken into account when selecting which green manure cover crop to plant.
Some green manure crops reduce insect pests and plant diseases; Verticillium wilt, in particular, is reduced in potato plants.
Incorporation of green manures into a farming system can drastically reduce the need for additional products such as supplemental fertilizers and pesticides.
Limitations to consider in the use of green manure are time, energy, and resources (monetary and natural) required to successfully grow and utilize these cover crops. Consequently, it is important to choose green manure crops based on the growing region and annual precipitation amounts to ensure efficient growth and use of the cover crop(s).
Nutrient release
Green manure is broken down into plant nutrient components by heterotrophic bacteria that consume organic matter. Warmth and moisture contribute to this process, similar to creating compost fertilizer. The plant matter releases large amounts of carbon dioxide and weak acids that react with insoluble soil minerals to release beneficial nutrients. Soils that are high in calcium minerals, for example, can be given green manure to generate a higher phosphate content in the soil, which in turn acts as a fertilizer.
The ratio of carbon to nitrogen in a plant is a crucial factor to consider, since it will impact the nutrient content of the soil and may starve a crop of nitrogen if the wrong plants are used to make green manure. The carbon-to-nitrogen ratio, written C:N, differs from species to species and with the age of the plant. The value of N is always one, whereas the carbon value typically ranges from about 10 to 90; the ratio must be less than roughly 30:1 to prevent the bacteria decomposing the manure from depleting existing nitrogen in the soil. Rhizobium bacteria are soil organisms that interact with green manure legumes to retain atmospheric nitrogen in the soil. Legumes, such as beans, alfalfa, clover and lupines, have root systems rich in rhizobia, often making them the preferred source of green manure material.
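As a simple illustration of the 30:1 rule of thumb described above, the following Python sketch estimates the combined C:N ratio of a residue mix and flags a likely nitrogen immobilization risk; the component masses, their C:N values, and the assumed 40% carbon content of dry plant matter are all hypothetical example figures rather than measured data.

# Sketch: combined C:N ratio of a green manure mix (hypothetical inputs).
# Each component is (dry mass in kg, C:N ratio), with nitrogen taken as 1 part per ratio.
components = [
    (100.0, 15.0),  # e.g. a young legume, nitrogen-rich
    (60.0, 80.0),   # e.g. mature cereal straw, carbon-rich
]

CARBON_FRACTION = 0.40           # assume plant dry matter is roughly 40% carbon
IMMOBILIZATION_THRESHOLD = 30.0  # above ~30:1, soil microbes may tie up nitrogen

total_carbon = sum(mass * CARBON_FRACTION for mass, _ in components)
total_nitrogen = sum(mass * CARBON_FRACTION / ratio for mass, ratio in components)
mix_ratio = total_carbon / total_nitrogen

print(f"Combined C:N ratio of the mix: {mix_ratio:.1f}:1")
if mix_ratio > IMMOBILIZATION_THRESHOLD:
    print("Risk of nitrogen immobilization: allow extra decomposition time before planting.")
else:
    print("The mix is unlikely to immobilize soil nitrogen.")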
Crops
Many green manures are planted in autumn or winter to cover the ground before spring or summer sowing.
Alfalfa, which sends roots deep to bring nutrients to the surface.
Buckwheat in temperate regions
Cowpea
Clover (e.g. annual sweet clover)
Fava beans
Fenugreek
Ferns of the genus Azolla have been used as a green manure in southeast Asia.
Lupin
Millet
Mustard
Peanut
Phacelia tanacetifolia
Radish such as tillage radish or daikon radish.
Sesbania
Sorghum
Soybean
Sudangrass
Sunn hemp, a legume widely grown throughout the tropics and subtropics
Velvet bean (Mucuna pruriens), common in the southern US during the early part of the 20th century, before being replaced by soybeans, popular today in most tropical countries, especially in Central America, where it is the main green manure used in slash/mulch farming practices
Vetch (Vicia sativa, Vicia villosa)
Winter field bean
History
Green manures have been used since ancient times; before the invention of chemical nitrogen fertilizer, farmers could only use organic fertilizers. There is evidence of the Greeks plowing broad (faba) beans into the soil around 300 B.C. The Romans also used green manures such as faba beans and lupines to make their soil more fertile. Chinese agricultural texts dating back hundreds of years refer to the importance of grasses and weeds in providing nutrients for farm soil. The practice was also known to early North American colonists arriving from Europe; common colonial green manure crops were rye, buckwheat and oats.
Traditionally, the incorporation of green manure into the soil is known as the fallow cycle of crop rotation, which was used to allow the soil to regain its fertility after the harvest.
Limitations of green manure
Managing green manure improperly, or without additional chemical inputs, may limit crop production. Mixing green manures into the soil without enough time before crop planting can cause nitrogen immobilization, leaving too little available nitrogen for the next crop. Farming systems with short growth spans for green manure are not usually efficient. Farmers must weigh the cost of green manures against their productivity to determine suitability.
Gull
Gulls, or colloquially seagulls, are seabirds of the subfamily Larinae. They are most closely related to terns and skimmers, distantly related to auks, and even more distantly related to waders. Until the 21st century, most gulls were placed in the genus Larus, but that arrangement is now considered polyphyletic, leading to the resurrection of several genera. An older name for gulls is mews; this survives in certain regional English dialects and is cognate with related words in German, Danish, Swedish, Dutch, Norwegian, and French.
Typically medium to large in size, gulls are usually grey or white, often with black markings on the head or wings. They normally have harsh wailing or squawking calls; stout, longish bills; and webbed feet. Most gulls are ground-nesting piscivores or carnivores which take live food or scavenge opportunistically, particularly the Larus species. Live food often includes crustaceans, molluscs, fish and small birds. Gulls have unhinging jaws that provide the flexibility to consume large prey. Gulls are typically coastal or inland species, rarely venturing far out to sea, except for the kittiwakes and Sabine's gull. The large species take up to four years to attain full adult plumage, but two years is typical for small gulls. Large white-headed gulls are usually long-lived birds, with a maximum age of 49 years recorded for the European herring gull.
Gulls nest in large, densely packed, noisy colonies. They lay two or three speckled eggs in nests composed of vegetation. The young are precocial, born with dark mottled down and mobile upon hatching. Gulls are resourceful, inquisitive, and intelligent, the larger species in particular, demonstrating complex methods of communication and a highly developed social structure. For example, many gull colonies display mobbing behaviour, attacking and harassing predators and other intruders. Certain species, such as the herring gull, have exhibited tool-use behaviour, for example using pieces of bread as bait with which to catch goldfish. Many species of gulls have learned to coexist successfully with humans and thrive in human habitats. Others rely on kleptoparasitism to get their food. Gulls have been observed preying on live whales, landing on the whale as it surfaces and pecking out pieces of flesh.
Description and morphology
Gulls range in size from the little gull, the smallest species, to the great black-backed gull, the largest. They are generally uniform in shape, with heavy bodies, long wings, and moderately long necks. The tails of all but three species are rounded; the exceptions are Sabine's gull and the swallow-tailed gull, which have forked tails, and Ross's gull, which has a wedge-shaped tail. Gulls have moderately long legs, especially when compared to the similar terns, with fully webbed feet. The bill is generally heavy and slightly hooked, with the larger species having stouter bills than the smaller species. The bill colour is often yellow with a red spot in the larger white-headed species and red, dark red or black in the smaller species.
Gulls are generalists that can thrive in various environments and survive on a widely varied diet. They are the least specialised of all the seabirds, and their morphology allows for equal adeptness in swimming, flying, and walking. They are more adept at walking on land than most other seabirds, and the smaller gulls tend to be more manoeuvrable while walking. The walking gait of gulls includes a slight side-to-side motion, something that can be exaggerated in breeding displays. In the air, they are able to hover, and they can take off quickly in little space.
The general pattern of plumage in adult gulls is a white body with a darker mantle; the extent to which the mantle is darker varies from pale grey to black. A few species vary from this pattern: the ivory gull is entirely white, and some, such as the lava gull and Heermann's gull, have partly or entirely grey bodies. The wingtips of most species are black, which improves their resistance to wear and tear, usually with a diagnostic pattern of white markings. The head of a gull may be covered by a dark hood or be entirely white. The plumage of the head varies by breeding season; in nonbreeding dark-hooded gulls, the hood is lost, sometimes leaving a single spot behind the eye, and in white-headed gulls, nonbreeding heads may have streaking.
Distribution and habitat
Gulls have a worldwide cosmopolitan distribution. They breed on every continent, including the margins of Antarctica, and are even found in the high Arctic. They are less common in the tropics, although a few species do live on tropical islands such as the Galapagos and New Caledonia. Many species breed in coastal colonies, with a preference for islands; one particular species, the grey gull, breeds in the interior of dry deserts far from water. Considerable variety exists in the Laridae family, and species may breed and feed in marine, freshwater, or terrestrial habitats.
Most gull species are migratory, with birds moving to warmer habitats during the winter, but the extent to which they migrate varies by species. Some migrate long distances, notably Sabine's gull, which migrates from the Arctic coasts to wintering grounds off the west coasts of South America and southern Africa, and Franklin's gull, which migrates from Canada to winter off the west coast of South America. Other species move much shorter distances and may simply disperse along the coasts near their breeding sites.
A big influence on non-breeding gull distribution is the availability of food patches. Human fisheries especially have an impact, since they often provide an abundant and predictable food resource. Two species of gulls dependent on human fisheries are Audouin's gull (Ichthyaetus audouinii) and lesser black-backed gulls (Larus fuscus); their breeding distributions (especially the black-backed gull) are heavily impacted by human fishing discards and fishing ports.
Other environmental drivers that structure bird habitat and distribution are human activity and climate impacts. For example, waterbird distribution in Mediterranean wetlands is influenced by changes in salinity, water depth, water body isolation and hydroperiod, all of which have been observed to affect the bird community structure in both a species- and guild-specific way. Gulls in particular have high associations with salinity levels, which were found to be the main environmental predictor for waterbird assemblage.
Behaviour
Diet and feeding
Charadriiform birds drink salt water, as well as fresh water, as they possess exocrine glands located in supraorbital grooves of the skull by which salt can be excreted through the nostrils to assist the kidneys in maintaining electrolyte balance.
Gulls are highly adaptable feeders that take a wide range of prey opportunistically. The food taken by gulls includes fish, and marine and freshwater invertebrates, both alive and already dead; terrestrial arthropods and invertebrates such as insects and earthworms; rodents, eggs, carrion, offal, reptiles, amphibians, seeds, fruit, human refuse, and even other birds. No gull species is a single-prey specialist, and no gull species forages using only a single method. The type of food depends on circumstances; terrestrial prey, e.g. seeds, fruit and earthworms, is more common during the breeding season, while marine prey is more common in the nonbreeding season when birds spend more time on large bodies of water.
Gulls not only take a wide range of prey, they also display great versatility in how they obtain it; prey can be caught in the air, on water, or on land. A number of hooded species are able to hawk insects on the wing, although the larger species perform this feat more rarely. Gulls on the wing snatch items both off the water and off the ground, and they are able to plunge-dive into water to catch prey. Smaller species are more manoeuvrable and better able to hover-dip fish from the air. Dipping is common when birds are sitting on the water, and gulls may swim in tight circles or foot paddle to bring marine invertebrates up to the surface. Food is also obtained by searching the ground, often on the shore among sand, mud or rocks. Larger gulls tend to do more feeding in this way. In shallow water, gulls may also engage in foot paddling. One method of obtaining prey involves dropping heavy shells of clams and mussels onto hard surfaces. Gulls may fly some distance to find a suitable surface on which to drop shells, and there is evidently a learned component to the task because older birds are more successful than younger birds. While overall feeding success is a function of age, the diversity in both prey and feeding methods is not. The time taken to learn foraging skills may explain the delayed maturation in gulls.
Gulls have only a limited ability to dive below the water surface to feed on deeper prey. To obtain prey from a greater depth, many species of gulls feed in association with other animals, where marine hunters drive prey to the surface when hunting. Examples of such associations include four species of gulls that feed around plumes of mud brought to the surface by feeding grey whales, and also between orcas (the largest dolphin species) and kelp gulls (among other seabirds).
Looking at the effect of humans on gull diet, overfishing of target prey such as sardines has caused a shift in diet and behaviour. Analysis of pellets of the yellow-legged gull (Larus michahellis) off the northwest coast of Spain has revealed a shift from a sardine-based to a crustacean-based diet. This shift was linked to higher fishing efficiency and overall fish stock depletion. The closure of nearby open-air landfills further limited food availability for the gulls, adding to the pressure behind the dietary shift. From 1974 to 1994, yellow-legged gull populations on Berlenga Island, Portugal, increased from 2,600 to 44,698 individuals. Analyzing both adult and chick remains, researchers found a mixture of natural prey and human refuse. The gulls relied substantially on Henslow's swimming crab (Polybius henslowii), yet in times when local prey availability is low, they shift to human-related food. These temporal shifts from marine to terrestrial prey highlight the resilience of adult gulls and their ability to keep chick condition consistent. Human disturbance has also been shown to affect gull breeding: hatching failure is directly proportional to the amount of disturbance in a given plot. Certain gull species have been known to feed on the eyeballs of seal pups and to pilfer milk directly from the teats of elephant seals.
Breeding
Gulls are monogamous and colonial breeders that display mate fidelity which normally lasts for the life of the pair. Divorce of mated pairs does occur, but it apparently carries a social cost that persists for a number of years after the break-up. Gulls also display high levels of site fidelity, returning to the same colony after breeding there once and usually even breeding at the same location within that colony. Gull colonies can vary from just a few pairs to over a hundred thousand pairs, and may be exclusive to that gull species or shared with other seabird species. A few species nest singly, and single pairs of band-tailed gulls may breed in colonies of other bird species. Within colonies, gull pairs are territorial, defending an area of varying size around the nesting site from others of their species. This area can range from a large radius around the nest in the European herring gull to just a tiny area of cliff ledge in the kittiwakes.
Most gulls breed once a year and have predictable breeding seasons lasting for three to five months. Gulls begin to assemble around the colony for a few weeks prior to occupying it. Existing pairs re-establish their pair-bonds, and unpaired birds begin courting. Pairs then move back into their territories, and new males establish new territories and attempt to court females. Gulls defend their territories from rivals of both sexes using calls and aerial attacks.
Nest building is an important part of the pair-bonding process. Most gull nests are mats of herbaceous matter with a central nest cup. Nests are usually built on the ground, but a few species establish their nests on cliffs (the usual preference for kittiwakes), and some choose to nest in trees and high places (e.g. Bonaparte's gulls). Species that nest in marshes need to construct a nesting platform to keep the nest dry, particularly species that nest in tidal marshes. Both sexes gather nesting material and build the nest, but the division of labour is not always exactly equal. In coastal towns, many gulls nest on rooftops and can be observed by nearby human residents.
Clutch size is typically three eggs, although some of the smaller gulls only lay two, and the swallow-tailed gull produces a single egg. Birds synchronise their laying within colonies, with a higher level of synchronisation in larger colonies. The eggs of gulls are usually dark tan to brown or dark olive with dark splotches and scrawl markings, and they are well camouflaged. Both sexes incubate the eggs; incubation bouts last between one and four hours during the day, and one parent incubates through the night. Research on various bird species, including gulls, suggests that females form pair bonds with other females to obtain alloparental care for their dependent offspring, a behaviour seen in other animal species, such as elephants, wolves, and the fathead minnow.
Lasting between 22 and 26 days, incubation begins after the first egg is laid but is not continuous until after the second egg is laid, meaning that the first two chicks hatch at about the same time, and the third some time later. Young chicks are brooded by their parents for about one or two weeks, and often at least one parent stays behind to guard the chicks until they fledge. Although the chicks are fed by both parents, early on in the rearing period the male does most of the feeding and the female most of the brooding and guarding.
Taxonomy
The family Laridae was introduced (as Laridia) by the French polymath Constantine Samuel Rafinesque in 1815. The taxonomy of gulls is confused by their widespread distribution and by zones of hybridisation leading to gene flow. Some have traditionally been considered ring species, but research has suggested that this assumption is questionable. Before the 21st century, most gulls were placed in the genus Larus, but this arrangement is now known to be polyphyletic, leading to the resurrection of the genera Ichthyaetus, Chroicocephalus, Leucophaeus, Saundersilarus, and Hydrocoloeus. Some English names refer to species complexes within the group:
Large white-headed gull is used to describe the 18 or so herring gull-like species, from California gull to lesser black-backed gull in the taxonomic list below.
White-winged gull is used to describe the four pale-winged, high Arctic-breeding taxa within the former group; these are Iceland gull, glaucous gull, Thayer's gull, and Kumlien's gull.
In common usage, members of various gull species are often referred to as 'sea gulls' or 'seagulls'; however, this is a layperson's term and is not used by most ornithologists and biologists. The name is used informally to refer to a common local species (or all gulls in general) and has no fixed taxonomic meaning. In common usage, gull-like seabirds that are not technically gulls (e.g. albatrosses, fulmars, terns, and skuas) may also be referred to as 'seagulls' by the layperson.
The American Ornithologists' Union combines the Sternidae, Stercorariidae, and Rhynchopidae as subfamilies in the family Laridae, but early 21st-century research shows this to be incorrect.
A molecular phylogenetic study published in 2022 found the following relationships between the genera, including the most recent generic change: the placement of Saunders's gull in its own genus Saundersilarus.
List of species
This is a list of the 54 gull species, presented in taxonomic sequence.
Evolutionary history
The Laridae are known from not-yet-published fossil evidence since the Early Oligocene, some 30–33 million years ago. Three gull-like species were described by Alphonse Milne-Edwards from the early Miocene of Saint-Gérand-le-Puy, France. A fossil gull from the Middle to Late Miocene of Cherry County, Nebraska, US, is placed in the prehistoric genus Gaviota; apart from this and the undescribed Early Oligocene fossil, all prehistoric species were tentatively assigned to the modern genus Larus. Among those that have been confirmed as gulls, Milne-Edwards' "Larus" elegans and "L." totanoides from the Late Oligocene/Early Miocene of southeast France have since been separated in Laricola.
Nymph (biology)
In biology, a nymph (from Ancient Greek νύμφα nūmphē meaning "bride") is the juvenile form of some invertebrates, particularly insects, which undergoes gradual metamorphosis (hemimetabolism) before reaching its adult stage. Unlike a typical larva, a nymph's overall form already resembles that of the adult, except for a lack of wings (in winged species) and the emergence of genitalia. In addition, while a nymph moults, it never enters a pupal stage. Instead, the final moult results in an adult insect. Nymphs undergo multiple stages of development called instars.
Species with nymph stages
Many species of arthropods have nymph stages. This includes the orders Orthoptera (crickets, grasshoppers and locusts), Hemiptera (cicadas, shield bugs, whiteflies, aphids, leafhoppers, froghoppers, treehoppers), mayflies, termites, cockroaches, mantises, stoneflies and Odonata (dragonflies and damselflies).
Nymphs of aquatic insects, as in the Odonata, Ephemeroptera, and Plecoptera orders, are also called naiads, an Ancient Greek name for mythological water nymphs. Some entomologists have said that the terms larva, nymph and naiad should be used according to the developmental mode classification (hemimetabolous, paurometabolous or holometabolous) but others have pointed out that there is no real confusion. In older literature, these were sometimes referred to as the heterometabolous insects, as their adult and immature stages live in different environments (terrestrial vs. aquatic).
Second Egg Hypothesis
In 1628, English physician William Harvey published An Anatomical Disquisition on the Motion of the Heart and Blood in Animals. In his writing, Harvey hypothesized that the pupal stage in insects was the result of imperfect eggs. While some eggs produced smaller versions of fully-matured insects known as nymphs, others created intermediate forms. Thus, these intermediate forms must go through a second egg stage to reach their adult form. This hypothesis attempts to explain the developmental differences between hemimetabolous and holometabolous metamorphosis. Though there is little evidence supporting Harvey's hypothesis, it is still significant to modern research in nymphs.
Relationship with humans
In fly fishing with artificial flies, this stage of aquatic insects is the basis for an entire series of representative patterns for trout. They account for over half of the fishing fly patterns regularly used in the United States.
Waste management
Waste management or waste disposal includes the processes and actions required to manage waste from its inception to its final disposal.
This includes the collection, transport, treatment, and disposal of waste, together with monitoring and regulation of the waste management process and waste-related laws, technologies, and economic mechanisms.
Waste can be solid, liquid, or gaseous, and each type has different methods of disposal and management. Waste management deals with all types of waste, including industrial, biological, household, municipal, organic, biomedical, and radioactive wastes. In some cases, waste can pose a threat to human health: health issues are associated with the entire waste management process and can arise directly, through the handling of solid waste, or indirectly, through the consumption of contaminated water, soil, and food. Waste is produced by human activity, for example, the extraction and processing of raw materials. Waste management is intended to reduce the adverse effects of waste on human health, the environment, planetary resources, and aesthetics.
The aim of waste management is to reduce the dangerous effects of such waste on the environment and human health. A big part of waste management deals with municipal solid waste, which is created by industrial, commercial, and household activity.
Waste management practices are not uniform: developed and developing nations, urban and rural areas, and residential and industrial sectors can all take different approaches.
Proper management of waste is important for building sustainable and liveable cities, but it remains a challenge for many developing countries and cities. A report found that effective waste management is relatively expensive, usually comprising 20%–50% of municipal budgets. Operating this essential municipal service requires integrated systems that are efficient, sustainable, and socially supported. A large portion of waste management practices deal with municipal solid waste (MSW), which is the bulk of the waste created by household, industrial, and commercial activity. According to the Intergovernmental Panel on Climate Change (IPCC), municipal solid waste is expected to reach approximately 3.4 Gt by 2050; however, policies and lawmaking can reduce the amount of waste produced in different areas and cities of the world. Measures of waste management include integrated techno-economic mechanisms of a circular economy, effective disposal facilities, export and import controls, and optimal sustainable design of the products that are produced.
In the first systematic review of the scientific evidence around global waste, its management, and its impact on human health and life, authors concluded that about a fourth of all the municipal solid terrestrial waste is not collected and an additional fourth is mismanaged after collection, often being burned in open and uncontrolled fires – or close to one billion tons per year when combined. They also found that broad priority areas each lack a "high-quality research base", partly due to the absence of "substantial research funding", which motivated scientists often require. Electronic waste (ewaste) includes discarded computer monitors, motherboards, mobile phones and chargers, compact discs (CDs), headphones, television sets, air conditioners and refrigerators. According to the Global E-waste Monitor 2017, India generates ~ 2 million tonnes (Mte) of e-waste annually and ranks fifth among the e-waste producing countries, after the United States, the People's Republic of China, Japan and Germany.
Effective waste management involves the practice of the "7 Rs": Refuse, Reduce, Reuse, Repair, Repurpose, Recycle and Recover. Among these, the first two (Refuse and Reduce) relate to the non-creation of waste, by refusing to buy non-essential products and by reducing consumption. The next two (Reuse and Repair) refer to increasing the usage of an existing product, with or without the substitution of certain parts. Repurpose and Recycle involve the maximum usage of the materials used in the product, while Recover is the least preferred and least efficient practice, involving the recovery of embedded energy in the waste material, for example by burning the waste to produce heat (and electricity from heat). Certain non-biodegradable products end up being simply dumped as disposal, which is not a waste management practice at all.
Principles of waste management
Waste hierarchy
The waste hierarchy refers to the "3 Rs" Reduce, Reuse and Recycle, which classifies waste management strategies according to their desirability in terms of waste minimisation. The waste hierarchy is the bedrock of most waste minimization strategies. The aim of the waste hierarchy is to extract the maximum practical benefits from products and to generate the minimum amount of end waste; see: resource recovery. The waste hierarchy is represented as a pyramid because the basic premise is that policies should promote measures to prevent the generation of waste. The next step or preferred action is to seek alternative uses for the waste that has been generated, i.e., by re-use. The next is recycling which includes composting. Following this step is material recovery and waste-to-energy. The final action is disposal, in landfills or through incineration without energy recovery. This last step is the final resort for waste that has not been prevented, diverted, or recovered. The waste hierarchy represents the progression of a product or material through the sequential stages of the pyramid of waste management. The hierarchy represents the latter parts of the life-cycle for each product.
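To make the ordering concrete, the short Python sketch below encodes the hierarchy described above as an ordered list and selects the most preferred handling option that is feasible for a given waste stream. The option names follow the text; the feasibility flags and the example stream are assumptions made purely for illustration.

```python
# Minimal sketch: encode the waste hierarchy described above as an ordered
# list (most preferred first) and pick the highest-ranked option that is
# feasible for a given waste stream. Feasibility flags are hypothetical.

WASTE_HIERARCHY = [
    "prevention",          # avoid generating the waste at all
    "reuse",               # find an alternative use for the item
    "recycling",           # includes composting
    "material_recovery",   # material recovery and waste-to-energy
    "disposal",            # landfill or incineration without energy recovery
]

def best_option(feasible_options):
    """Return the most preferred feasible option according to the hierarchy."""
    for option in WASTE_HIERARCHY:
        if option in feasible_options:
            return option
    return "disposal"  # last resort when nothing better is feasible

if __name__ == "__main__":
    # Example: a mixed plastic stream for which prevention and reuse are not possible.
    print(best_option({"recycling", "disposal"}))  # -> "recycling"
```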
Life-cycle of a product
The life-cycle of a product, often referred to as the product lifecycle, encompasses several key stages that begin with the design phase and proceed through manufacture, distribution, and primary use. After these initial stages, the product moves through the waste hierarchy's stages of reduce, reuse, and recycle. Each phase in this lifecycle presents unique opportunities for policy intervention, allowing stakeholders to rethink the necessity of the product, redesign it to minimize its waste potential, and extend its useful life.
During the design phase, considerations can be made to ensure that products are created with fewer resources, are more durable, and are easier to repair or recycle. This stage is critical for embedding sustainability into the product from the outset. Designers can select materials that have lower environmental impacts and create products that require less energy and resources to produce.
Manufacturing offers another crucial point for reducing waste and conserving resources. Innovations in production processes can lead to more efficient use of materials and energy, while also minimizing the generation of by-products and emissions. Adopting cleaner production techniques and improving manufacturing efficiency can significantly reduce the environmental footprint of a product.
Distribution involves the logistics of getting the product from the manufacturer to the consumer. Optimizing this stage can involve reducing packaging, choosing more sustainable transportation methods, and improving supply chain efficiencies to lower the overall environmental impact. Efficient logistics planning can also help in reducing fuel consumption and greenhouse gas emissions associated with the transport of goods.
The primary use phase of a product's lifecycle is where consumers interact with the product. Policies and practices that encourage responsible use, regular maintenance, and the proper functioning of products can extend their lifespan, thus reducing the need for frequent replacements and decreasing overall waste.
Once the product reaches the end of its primary use, it enters the waste hierarchy's stages. The first stage, reduction, involves efforts to decrease the volume and toxicity of waste generated. This can be achieved by encouraging consumers to buy less, use products more efficiently, and choose items with minimal packaging.
The reuse stage encourages finding alternative uses for products, whether through donation, resale, or repurposing. Reuse extends the life of products and delays their entry into the waste stream.
Recycling, the final preferred stage, involves processing materials to create new products, thus closing the loop in the material lifecycle. Effective recycling programs can significantly reduce the need for virgin materials and the environmental impacts associated with extracting and processing those materials.
Product life-cycle analysis (LCA) is a comprehensive method for evaluating the environmental impacts associated with all stages of a product's life. By systematically assessing these impacts, LCA helps identify opportunities to improve environmental performance and resource efficiency. Through optimizing product designs, manufacturing processes, and end-of-life management, LCA aims to maximize the use of the world's limited resources and minimize the unnecessary generation of waste.
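As a toy illustration of the aggregation idea behind LCA, the sketch below sums hypothetical per-stage impact figures across the lifecycle stages named above and compares two designs. The stage names follow the text, but the numbers and the single-indicator simplification are assumptions; real LCA follows standardised methods with many impact categories.

```python
# Toy illustration of the aggregation step in a life-cycle assessment (LCA):
# sum a single impact indicator (e.g. kg CO2-equivalent) across lifecycle
# stages. All figures are hypothetical; real LCA uses standardised methods
# and multiple impact categories.

LIFECYCLE_STAGES = ["design", "manufacture", "distribution", "use", "end_of_life"]

def total_impact(per_stage_impact):
    """Sum impacts over the stages present, treating missing stages as zero."""
    return sum(per_stage_impact.get(stage, 0.0) for stage in LIFECYCLE_STAGES)

if __name__ == "__main__":
    # Hypothetical kg CO2-eq per unit for two alternative designs.
    design_a = {"manufacture": 12.0, "distribution": 2.5, "use": 30.0, "end_of_life": 4.0}
    design_b = {"manufacture": 15.0, "distribution": 2.0, "use": 18.0, "end_of_life": 1.5}
    print("Design A:", total_impact(design_a))  # 48.5
    print("Design B:", total_impact(design_b))  # 36.5 -> lower lifecycle impact
```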
In summary, the product lifecycle framework underscores the importance of a holistic approach to product design, use, and disposal. By considering each stage of the lifecycle and implementing policies and practices that promote sustainability, it is possible to significantly reduce the environmental impact of products and contribute to a more sustainable future.
Resource efficiency
Resource efficiency reflects the understanding that global economic growth and development can not be sustained at current production and consumption patterns. Globally, humanity extracts more resources to produce goods than the planet can replenish. Resource efficiency is the reduction of the environmental impact from the production and consumption of these goods, from final raw material extraction to the last use and disposal.
Polluter-pays principle
The polluter-pays principle mandates that the polluting parties pay for the impact on the environment. With respect to waste management, this generally refers to the requirement for a waste generator to pay for appropriate disposal of the unrecoverable materials.
History
Throughout most of history, the amount of waste generated by humans was insignificant due to low levels of population density and exploitation of natural resources. Common waste produced during pre-modern times was mainly ashes and human biodegradable waste, and these were released back into the ground locally, with minimum environmental impact. Tools made out of wood or metal were generally reused or passed down through the generations.
However, some civilizations have been more profligate in their waste output than others. In particular, the Maya of Central America had a fixed monthly ritual, in which the people of the village would gather together and burn their rubbish in large dumps.
Modern era
Following the onset of the Industrial Revolution, industrialisation, and the sustained urban growth of large population centres in England, the buildup of waste in the cities caused a rapid deterioration in levels of sanitation and the general quality of urban life. The streets became choked with filth due to the lack of waste clearance regulations. Calls for the establishment of municipal authority with waste removal powers occurred as early as 1751, when Corbyn Morris in London proposed that "... as the preservation of the health of the people is of great importance, it is proposed that the cleaning of this city, should be put under one uniform public management, and all the filth be...conveyed by the Thames to proper distance in the country".
However, it was not until the mid-19th century, spurred by increasingly devastating cholera outbreaks and the emergence of a public health debate that the first legislation on the issue emerged. Highly influential in this new focus was the report The Sanitary Condition of the Labouring Population in 1842 of the social reformer, Edwin Chadwick, in which he argued for the importance of adequate waste removal and management facilities to improve the health and wellbeing of the city's population.
In the UK, the Nuisance Removal and Disease Prevention Act of 1846 began what was to be a steadily evolving process of the provision of regulated waste management in London. The Metropolitan Board of Works was the first citywide authority that centralized sanitation regulation for the rapidly expanding city, and the Public Health Act 1875 made it compulsory for every household to deposit their weekly waste in "moveable receptacles" for disposal—the first concept for a dustbin. In the Ashanti Empire by the 19th century, there existed a Public Works Department that was responsible for sanitation in Kumasi and its suburbs. They kept the streets clean daily and commanded civilians to keep their compounds clean and weeded.
The dramatic increase in waste for disposal led to the creation of the first incineration plants, or, as they were then called, "destructors". In 1874, the first incinerator was built in Nottingham by Manlove, Alliott & Co. Ltd. to the design of Alfred Fryer. However, these plants were met with opposition on account of the large amounts of ash they produced, which wafted over the neighbouring areas.
Similar municipal systems of waste disposal sprung up at the turn of the 20th century in other large cities of Europe and North America. In 1895, New York City became the first U.S. city with public-sector garbage management.
Early garbage removal trucks were simply open-bodied dump trucks pulled by a team of horses. They became motorized in the early part of the 20th century, and the first closed-body trucks to eliminate odours, fitted with a dumping lever mechanism, were introduced in the 1920s in Britain. These were soon equipped with 'hopper mechanisms', in which the scooper was loaded at floor level and then hoisted mechanically to deposit the waste in the truck. In 1938, the Garwood Load Packer became the first truck to incorporate a hydraulic compactor.
Waste handling and transport
Waste collection methods vary widely among different countries and regions. Domestic waste collection services are often provided by local government authorities, or by private companies for industrial and commercial waste. Some areas, especially those in less developed countries, do not have formal waste-collection systems.
Curbside collection is the most common method of disposal in most European countries, Canada, New Zealand, the United States, and many other parts of the developed world in which waste is collected at regular intervals by specialised trucks. This is often associated with curb-side waste segregation. In rural areas, waste may need to be taken to a transfer station. Waste collected is then transported to an appropriate disposal facility.
In some areas, vacuum collection is used in which waste is transported from the home or commercial premises by vacuum along small bore tubes. Systems are in use in Europe and North America.
In some jurisdictions, unsegregated waste is collected at the curb-side or from waste transfer stations and then sorted into recyclables and unusable waste. Such systems are capable of sorting large volumes of solid waste, salvaging recyclables, and turning the rest into bio-gas and soil conditioners.
In San Francisco, the local government established its Mandatory Recycling and Composting Ordinance in support of its goal of "Zero waste by 2020", requiring everyone in the city to keep recyclables and compostables out of the landfill. The three streams are collected with the curbside "Fantastic 3" bin system – blue for recyclables, green for compostables, and black for landfill-bound materials – provided to residents and businesses and serviced by San Francisco's sole refuse hauler, Recology. The city's "Pay-As-You-Throw" system charges customers by the volume of landfill-bound materials, which provides a financial incentive to separate recyclables and compostables from other discards. The city's Department of the Environment's Zero Waste Program has led the city to achieve 80% diversion, the highest diversion rate in North America. Other businesses, such as Waste Industries, use a variety of colors to distinguish between trash and recycling cans. In addition, in some areas of the world the disposal of municipal solid waste can cause environmental strain because officials lack benchmarks that help measure the environmental sustainability of certain practices.
Waste segregation
This is the separation of wet waste and dry waste. The purpose is to recycle dry waste easily and to use wet waste as compost. When waste is segregated, the amount that gets landfilled is reduced considerably, resulting in lower levels of air and water pollution. Importantly, waste segregation should be based on the type of waste and the most appropriate treatment and disposal. Segregation also makes it easier to apply different processes to the waste, such as composting, recycling, and incineration. It is important to practice waste segregation as a community, which requires awareness: the process of waste segregation should be explained to residents.
Segregated waste is also often cheaper to dispose of because it does not require as much manual sorting as mixed waste. Waste segregation matters for several reasons, including legal obligations, cost savings, and the protection of human health and the environment. Institutions should make it as easy as possible for their staff to segregate their waste correctly. This can include labelling, making sure there are enough accessible bins, and clearly explaining why segregation is so important. Labeling is especially important when dealing with nuclear waste because of the harm the by-products of the nuclear cycle can cause to human health.
Hazards of waste management
There are multiple facets of waste management that all come with hazards, both for those living near a disposal site and for those who work within waste management. Exposure to waste of any kind can be detrimental to health; the primary conditions that worsen with exposure are asthma and tuberculosis. An individual's exposure to waste depends strongly on their surroundings: people in less developed or lower-income areas are more susceptible to the effects of waste, especially chemical waste. The range of hazards due to waste is extremely large and covers every type of waste, not only chemical, and there are many different guidelines to follow for disposing of different types of waste.
The hazards of incineration pose a large risk to many vulnerable communities, including underdeveloped countries and countries or cities with little space for landfills or alternatives. Burning waste is an easily accessible option for many people around the globe, and it has even been encouraged by the World Health Organization when there is no other option. Because the practice receives little attention, its effects often go unnoticed. The release of hazardous materials and CO2 when waste is burned is the largest hazard associated with incineration.
Financial models
In most developed countries, domestic waste disposal is funded from a national or local tax which may be related to income, or property values. Commercial and industrial waste disposal is typically charged for as a commercial service, often as an integrated charge which includes disposal costs. This practice may encourage disposal contractors to opt for the cheapest disposal option such as landfill rather than the environmentally best solution such as re-use and recycling.
Financing solid waste management projects can be overwhelming for a city government, especially if the government sees it as an important service it should render to citizens. Donors and grants are a funding mechanism that depends on the interest of the donor organization. While grants can be a good way to develop a city's waste management infrastructure, attracting and utilizing them relies on what the donor considers important, so it may be a challenge for a city government to dictate how the funds should be distributed among the various aspects of waste management.
An example of a country that enforces a waste tax is Italy. The tax is based on two rates: fixed and variable. The fixed rate is based on the size of the house while the variable is determined by the number of people living in the house.
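To make the two-part structure concrete, the sketch below computes a household's annual charge from a fixed component scaled by dwelling size plus a variable component scaled by occupancy, mirroring the fixed and variable rates described above. The rates per square metre and per resident are invented for the example and do not reflect actual Italian tariffs.

```python
# Hedged sketch of a two-part waste tax of the kind described above:
# a fixed rate tied to dwelling size plus a variable rate tied to the
# number of residents. All rate values are hypothetical placeholders.

FIXED_RATE_PER_M2 = 1.10         # assumed currency units per square metre per year
VARIABLE_RATE_PER_PERSON = 45.0  # assumed currency units per resident per year

def annual_waste_tax(dwelling_m2: float, residents: int) -> float:
    """Fixed part (house size) plus variable part (household size)."""
    fixed_part = FIXED_RATE_PER_M2 * dwelling_m2
    variable_part = VARIABLE_RATE_PER_PERSON * residents
    return fixed_part + variable_part

if __name__ == "__main__":
    # An assumed 80 square-metre apartment housing 3 people.
    print(f"Annual charge: {annual_waste_tax(80, 3):.2f}")  # 88.00 + 135.00 = 223.00
```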
The World Bank finances and advises on solid waste management projects using a diverse suite of products and services, including traditional loans, results-based financing, development policy financing, and technical advisory. World Bank-financed waste management projects usually address the entire lifecycle of waste right from the point of generation to collection and transportation, and finally treatment and disposal.
Disposal methods
Landfill
Incineration
Incineration is a disposal method in which solid organic wastes are subjected to combustion so as to convert them into residue and gaseous products. This method is useful for the disposal of both municipal solid waste and solid residue from wastewater treatment. This process reduces the volume of solid waste by 80 to 95 percent. Incineration and other high-temperature waste treatment systems are sometimes described as "thermal treatment". Incinerators convert waste materials into heat, gas, steam, and ash.
Incineration is carried out both on a small scale by individuals and on a large scale by industry. It is used to dispose of solid, liquid, and gaseous waste. It is recognized as a practical method of disposing of certain hazardous waste materials (such as biological medical waste). Incineration is a controversial method of waste disposal, due to issues such as the emission of gaseous pollutants including substantial quantities of carbon dioxide.
Incineration is common in countries such as Japan where land is more scarce, as the facilities generally do not require as much area as landfills. Waste-to-energy (WtE) or energy-from-waste (EfW) are broad terms for facilities that burn waste in a furnace or boiler to generate heat, steam, or electricity. Combustion in an incinerator is not always perfect and there have been concerns about pollutants in gaseous emissions from incinerator stacks. Particular concern has focused on some very persistent organic compounds such as dioxins, furans, and PAHs, which may be created and which may have serious environmental consequences, and on some heavy metals such as mercury and lead which can be volatilised in the combustion process.
Recycling
Recycling is a resource recovery practice that refers to the collection and reuse of waste materials such as empty beverage containers. This process involves breaking down and reusing materials that would otherwise be discarded as trash. Recycling has numerous benefits, and with new technologies making ever more materials recyclable, it benefits both the environment and the economy. The materials from which the items are made can be made into new products. Materials for recycling may be collected separately from general waste using dedicated bins and collection vehicles, a procedure called kerbside collection. In some communities, the owner of the waste is required to separate the materials into different bins (e.g. for paper, plastics, metals) prior to its collection. In other communities, all recyclable materials are placed in a single bin for collection, and the sorting is handled later at a central facility. The latter method is known as "single-stream recycling".
The most common consumer products recycled include aluminium such as beverage cans, copper such as wire, steel from food and aerosol cans, old steel furnishings or equipment, rubber tyres, polyethylene and PET bottles, glass bottles and jars, paperboard cartons, newspapers, magazines and light paper, and corrugated fiberboard boxes.
PVC, LDPE, PP, and PS (see resin identification code) are also recyclable. These items are usually composed of a single type of material, making them relatively easy to recycle into new products. The recycling of complex products (such as computers and electronic equipment) is more difficult, due to the additional dismantling and separation required.
The type of material accepted for recycling varies by city and country. Each city and country has different recycling programs that can handle the various types of recyclable materials; this variation in acceptance is reflected in the resale value of the material once it is reprocessed. Types of recycling include waste paper and cardboard, plastics, metals, electronic devices, wood, glass, and cloth and textiles, among others. In July 2017, the Chinese government announced an import ban on 24 categories of recyclables and solid waste, including plastic, textiles and mixed paper, with a tremendous impact on developed countries globally, which had exported directly or indirectly to China.
Re-use
Biological reprocessing
Recoverable materials that are organic in nature, such as plant material, food scraps, and paper products, can be recovered through composting and digestion processes to decompose the organic matter. The resulting organic material is then recycled as mulch or compost for agricultural or landscaping purposes. In addition, waste gas from the process (such as methane) can be captured and used for generating electricity and heat (CHP/cogeneration) maximising efficiencies. There are different types of composting and digestion methods and technologies. They vary in complexity from simple home compost heaps to large-scale industrial digestion of mixed domestic waste. The different methods of biological decomposition are classified as aerobic or anaerobic methods. Some methods use the hybrids of these two methods. The anaerobic digestion of the organic fraction of solid waste is more environmentally effective than landfill, or incineration. The intention of biological processing in waste management is to control and accelerate the natural process of decomposition of organic matter. (See resource recovery).
Energy recovery
Energy recovery from waste is the conversion of non-recyclable waste materials into usable heat, electricity, or fuel through a variety of processes, including combustion, gasification, pyrolyzation, anaerobic digestion, and landfill gas recovery. This process is often called waste-to-energy. Energy recovery from waste is part of the non-hazardous waste management hierarchy. Using energy recovery to convert non-recyclable waste materials into electricity and heat, generates a renewable energy source and can reduce carbon emissions by offsetting the need for energy from fossil sources as well as reduce methane generation from landfills. Globally, waste-to-energy accounts for 16% of waste management.
The energy content of waste products can be harnessed directly by using them as a direct combustion fuel, or indirectly by processing them into another type of fuel. Thermal treatment ranges from using waste as a fuel source for cooking or heating and the use of the gas fuel (see above), to fuel for boilers to generate steam and electricity in a turbine. Pyrolysis and gasification are two related forms of thermal treatment where waste materials are heated to high temperatures with limited oxygen availability. The process usually occurs in a sealed vessel under high pressure. Pyrolysis of solid waste converts the material into solid, liquid, and gas products. The liquid and gas can be burnt to produce energy or refined into other chemical products (chemical refinery). The solid residue (char) can be further refined into products such as activated carbon. Gasification and advanced Plasma arc gasification are used to convert organic materials directly into a synthetic gas (syngas) composed of carbon monoxide and hydrogen. The gas is then burnt to produce electricity and steam.
An alternative to pyrolysis is high-temperature and pressure supercritical water decomposition (hydrothermal monophasic oxidation).
Pyrolysis
Pyrolysis is often used to convert many types of domestic and industrial residues into a recovered fuel. Different types of waste input (such as plant waste, food waste, tyres) placed in the pyrolysis process potentially yield an alternative to fossil fuels. Pyrolysis is a process of thermo-chemical decomposition of organic materials by heat in the absence of stoichiometric quantities of oxygen; the decomposition produces various hydrocarbon gases. During pyrolysis, the molecules of an object vibrate at high frequencies to the extent that molecules start breaking down. The rate of pyrolysis increases with temperature. In industrial applications, temperatures are above 430 °C (800 °F).
Slow pyrolysis produces gases and solid charcoal. Pyrolysis holds promise for conversion of waste biomass into useful liquid fuel. Pyrolysis of waste wood and plastics can potentially produce fuel. The solids left from pyrolysis contain metals, glass, sand, and pyrolysis coke which does not convert to gas. Compared to the process of incineration, certain types of pyrolysis processes release less harmful by-products that contain alkali metals, sulphur, and chlorine. However, pyrolysis of some waste yields gases which impact the environment such as HCl and SO2.
Resource recovery
Resource recovery is the systematic diversion of waste, which was intended for disposal, for a specific next use. It is the processing of recyclables to extract or recover materials and resources, or convert to energy. These activities are performed at a resource recovery facility. Resource recovery is not only environmentally important, but it is also cost-effective. It decreases the amount of waste for disposal, saves space in landfills, and conserves natural resources.
Resource recovery, an alternative approach to traditional waste management, utilizes life cycle analysis (LCA) to evaluate and optimize waste handling strategies. Comprehensive studies focusing on mixed municipal solid waste (MSW) have identified a preferred pathway for maximizing resource efficiency and minimizing environmental impact, including effective waste administration and management, source separation of waste materials, efficient collection systems, reuse and recycling of non-organic fractions, and processing of organic material through anaerobic digestion.
As an example of how resource recycling can be beneficial, many items thrown away contain metals that can be recycled to create a profit, such as the components in circuit boards. Wood chippings in pallets and other packaging materials can be recycled into useful products for horticulture. The recycled chips can cover paths, walkways, or arena surfaces.
Application of rational and consistent waste management practices can yield a range of benefits including:
Economic – Improving economic efficiency through the means of resource use, treatment, and disposal and creating markets for recycles can lead to efficient practices in the production and consumption of products and materials resulting in valuable materials being recovered for reuse and the potential for new jobs and new business opportunities.
Social – By reducing adverse impacts on health through proper waste management practices, the resulting consequences are more appealing to civic communities. Better social advantages can lead to new sources of employment and potentially lift communities out of poverty, especially in some of the developing poorer countries and cities.
Environmental – Reducing or eliminating adverse impacts on the environment through reducing, reusing, recycling, and minimizing resource extraction can result in improved air and water quality and help in the reduction of greenhouse gas emissions.
Inter-generational Equity – Following effective waste management practices can provide subsequent generations a more robust economy, a fairer and more inclusive society and a cleaner environment.
Waste valorization
Liquid waste-management
Liquid waste is an important category of waste management because it is so difficult to deal with. Unlike solid wastes, liquid wastes cannot easily be picked up and removed from an environment. Liquid wastes spread out and readily pollute other sources of liquid if brought into contact with them. This type of waste also soaks into soil and groundwater, and the pollution in turn carries over to plants, to animals in the ecosystem, and to humans within the polluted area.
Industrial wastewater
Sewage sludge treatment
Avoidance and reduction methods
An important method of waste management is the prevention of waste material being created, also known as waste reduction. Waste Minimization is reducing the quantity of hazardous wastes achieved through a thorough application of innovative or alternative procedures. Methods of avoidance include reuse of second-hand products, repairing broken items instead of buying new ones, designing products to be refillable or reusable (such as cotton instead of plastic shopping bags), encouraging consumers to avoid using disposable products (such as disposable cutlery), removing any food/liquid remains from cans and packaging, and designing products that use less material to achieve the same purpose (for example, lightweighting of beverage cans).
International waste trade
Challenges in developing countries
Areas with developing economies often experience exhausted waste collection services and inadequately managed and uncontrolled dumpsites. The problems are worsening. Problems with governance complicate the situation. Waste management in these countries and cities is an ongoing challenge due to weak institutions, chronic under-resourcing, and rapid urbanization. All of these challenges, along with the lack of understanding of different factors that contribute to the hierarchy of waste management, affect the treatment of waste.
In developing countries, waste management activities are usually carried out by the poor, for their survival. It has been estimated that 2% of the population in Asia, Latin America, and Africa is dependent on waste for their livelihood. Family-organized or individual manual scavengers are often involved in waste management practices with very little supporting infrastructure or facilities, and with an increased risk of health effects. Additionally, this practice often prevents their children from pursuing further education. Citizen participation in waste management is generally very low, and residents in urban areas are not actively involved in the process.
Technologies
Traditionally, the waste management industry has been a late adopter of new technologies such as RFID (Radio Frequency Identification) tags, GPS and integrated software packages which enable better quality data to be collected without the use of estimation or manual data entry. This technology has been used widely by many organizations in some industrialized countries. Radiofrequency identification is a tagging system for automatic identification of recyclable components of municipal solid waste streams.
Smart waste management has been implemented in several cities, including San Francisco, Varde, and Madrid. Waste containers are equipped with level sensors; when a container is almost full, the sensor alerts the collection service, and the pickup truck can then plan its route to service the fullest containers and skip the emptiest ones.
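A minimal sketch of the fill-level logic described above: containers report a fill percentage, and a collection run visits only those above a threshold, fullest first. The threshold, field names, and sample data are assumptions made for illustration; real deployments would combine this selection step with proper vehicle routing.

```python
# Illustrative sketch only: select which sensor-equipped containers a pickup
# truck should service, based on reported fill levels. The threshold and the
# sample data are hypothetical; real systems would also optimise the route.

from dataclasses import dataclass

@dataclass
class Container:
    container_id: str
    fill_percent: float  # 0-100, as reported by the level sensor

def containers_to_service(containers, threshold=75.0):
    """Return containers at or above the fill threshold, fullest first."""
    due = [c for c in containers if c.fill_percent >= threshold]
    return sorted(due, key=lambda c: c.fill_percent, reverse=True)

if __name__ == "__main__":
    fleet = [
        Container("A1", 92.0),
        Container("B4", 40.0),
        Container("C7", 78.5),
    ]
    for c in containers_to_service(fleet):
        print(c.container_id, c.fill_percent)  # A1 then C7; B4 is skipped
```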
Statistics and trends
The "Global Waste Management Outlook 2024," supported by the Environment Fund - UNEP’s core financial fund, and jointly published with the International Solid Waste Association (ISWA), provides a comprehensive update on the trajectory of global waste generation and the escalating costs of waste management since 2018. The report predicts municipal solid waste to rise from 2.3 billion tonnes in 2023 to 3.8 billion tonnes by 2050. The direct global cost of waste management was around USD 252 billion in 2020, which could soar to USD 640.3 billion annually by 2050 if current practices continue without reform. Incorporating life cycle assessments, the report contrasts scenarios from maintaining the status quo to fully adopting zero waste and circular economy principles. It indicates that effective waste prevention and management could cap annual costs at USD 270.2 billion by 2050, while a circular economy approach could transform the sector into a net positive, offering a potential annual gain of USD 108.5 billion. To prevent the direst outcomes, the report calls for immediate action across multiple sectors, including development banks, governments, municipalities, producers, retailers, and citizens, providing targeted strategies for waste reduction and improved management practices.
Waste management by region
China
Municipal solid waste (MSW) generation shows spatiotemporal variation. In spatial distribution, generation in the eastern coastal regions differs considerably by point source: Guangdong, Shanghai and Tianjin produced 30.35, 7.85 and 2.95 Mt of MSW, respectively. In temporal distribution, during 2009–2018 Fujian province showed a 123% increase in MSW generation while Liaoning province showed only a 7% increase, and Shanghai showed a decline of 11% after 2013. MSW composition characteristics are complicated: the major components, such as kitchen waste, paper, and rubber and plastics, fluctuate in different eastern coastal cities within the ranges of 52.8–65.3%, 3.5–11.9%, and 9.9–19.1%, respectively. The treatment rate of this waste reaches up to 99%, comprising 52% landfill, 45% incineration, and 3% composting, indicating that landfill still dominates MSW treatment.
Morocco
Morocco has seen benefits from implementing a $300 million sanitary landfill system. While it might appear to be a costly investment, the country's government estimates that it has saved the country another $440 million in damages, that is, the consequences of failing to dispose of waste properly.
San Francisco
San Francisco started to make changes to its waste management policies in 2009 with the expectation of being zero waste by 2030. The city council made changes such as making recycling and composting mandatory for businesses and individuals, banning Styrofoam and plastic bags, putting charges on paper bags, and increasing garbage collection rates. Businesses are financially rewarded for correct disposal of recycling and compost and taxed for incorrect disposal. Besides these policies, the waste bins were manufactured in various sizes: the compost bin is the largest, the recycling bin is second, and the garbage bin is the smallest, which encourages individuals to sort their waste thoughtfully. These systems appear to be working, as the city has diverted 80% of waste from the landfill, the highest rate of any major U.S. city. Despite all these changes, Debbie Raphael, director of the San Francisco Department of the Environment, states that zero waste is still not achievable until all products are designed differently so that they can be recycled or composted.
Turkey
United Kingdom
Waste management policy in England is the responsibility of the Department for Environment, Food and Rural Affairs (DEFRA). In England, the "Waste Management Plan for England" presents a compilation of waste management policies. In the devolved nations, such as Scotland, waste management policy is the responsibility of their own respective departments.
Zambia
In Zambia, ASAZA is a community-based organization whose principal purpose is to complement the efforts of the Government and cooperating partners to uplift the standard of living for disadvantaged communities. The project's main objective is to minimize the problem of indiscriminate littering which leads to land degradation and pollution of the environment. ASAZA is also at the same time helping alleviate the problems of unemployment and poverty through income generation and payment of participants, women, and unskilled youths.
E-waste
A record 53.6 million metric tonnes (Mt) of electronic waste was generated worldwide in 2019, up 21 percent in just five years, according to the UN's Global E-waste Monitor 2020. The report also predicts global e-waste – discarded products with a battery or plug – will reach 74 Mt by 2030, almost a doubling of e-waste in just 16 years. This makes e-waste the world's fastest-growing domestic waste stream, fueled mainly by higher consumption rates of electric and electronic equipment, short life cycles, and few options for repair. Only 17.4 percent of 2019's e-waste was collected and recycled. This means that gold, silver, copper, platinum, and other high-value, recoverable materials conservatively valued at US$57 billion – a sum greater than the Gross Domestic Product of most countries – were mostly dumped or burned rather than being collected for treatment and reuse. E-waste is predicted to double by 2050.
Transboundary movement of e-waste
The Transboundary E-waste Flows Monitor quantified that 5.1 Mt (just below 10 percent of the total amount of global e-waste, 53.6 Mt) crossed country borders in 2019. To better understand the implication of transboundary movement, this study categorizes the transboundary movement of e-waste into controlled and uncontrolled movements and also considers both the receiving and sending regions.
Scientific journals
Related scientific journals in this area include:
Environmental and Resource Economics
Environmental Monitoring and Assessment
Journal of Environmental Assessment Policy and Management
Journal of Environmental Economics and Management
| Technology | Basics_6 | null |
210596 | https://en.wikipedia.org/wiki/Irritable%20bowel%20syndrome | Irritable bowel syndrome | Irritable bowel syndrome (IBS) is a functional gastrointestinal disorder characterized by a group of symptoms that commonly include abdominal pain, abdominal bloating and changes in the consistency of bowel movements. These symptoms may occur over a long time, sometimes for years. IBS can negatively affect quality of life and may result in missed school or work or reduced productivity at work. Disorders such as anxiety, major depression, and myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) are common among people with IBS.
The cause of IBS is not known but multiple factors have been proposed to lead to the condition. Theories include combinations of "gut–brain axis" problems, alterations in gut motility, visceral hypersensitivity, infections including small intestinal bacterial overgrowth, neurotransmitters, genetic factors, and food sensitivity. Onset may be triggered by a stressful life event, or an intestinal infection. In the latter case, it is called post-infectious irritable bowel syndrome.
Diagnosis is based on symptoms in the absence of worrisome features and once other potential conditions have been ruled out. Worrisome or "alarm" features include onset at greater than 50 years of age, weight loss, blood in the stool, or a family history of inflammatory bowel disease. Other conditions that may present similarly include celiac disease, microscopic colitis, inflammatory bowel disease, bile acid malabsorption, and colon cancer.
Treatment of IBS is carried out to improve symptoms. This may include dietary changes, medication, probiotics, and counseling. Dietary measures include increasing soluble fiber intake, or a diet low in fermentable oligosaccharides, disaccharides, monosaccharides, and polyols (FODMAPs). The "low FODMAP" diet is meant for short to medium term use and is not intended as a life-long therapy. The medication loperamide may be used to help with diarrhea while laxatives may be used to help with constipation. There is strong clinical-trial evidence for the use of antidepressants, often in lower doses than that used for depression or anxiety, even in patients without comorbid mood disorder. Tricyclic antidepressants such as amitriptyline or nortriptyline and medications from the selective serotonin reuptake inhibitor (SSRI) group may improve overall symptoms and reduce pain. Patient education and a good doctor–patient relationship are an important part of care.
About 10–15% of people in the developed world are believed to be affected by IBS. The prevalence varies according to country (from 1.1% to 45.0%) and criteria used to define IBS; however the average global prevalence is 11.2%. It is more common in South America and less common in Southeast Asia. In the Western world, it is twice as common in women as men and typically occurs before age 45. However, women in East Asia are not more likely than their male counterparts to have IBS, indicating much lower rates among East Asian women. Similarly, men from South America, South Asia and Africa are just as likely to have IBS as women in those regions, if not more so. The condition appears to become less common with age. IBS does not affect life expectancy or lead to other serious diseases. The first description of the condition was in 1820, while the current term irritable bowel syndrome came into use in 1944.
Signs and symptoms
The primary symptoms of IBS are abdominal pain or discomfort in association with frequent diarrhea or constipation and a change in bowel habits. Symptoms usually are experienced as acute attacks that subside within one day, but recurrent attacks are likely. There may also be urgency for bowel movements, a feeling of incomplete evacuation (tenesmus) or bloating. In some cases, the symptoms are relieved by bowel movements.
People with IBS, more commonly than others, have gastroesophageal reflux, symptoms relating to the genitourinary system, fibromyalgia, headache, backache, and psychiatric symptoms such as depression, sleep disorders, and anxiety. About a third of adults who have IBS also report sexual dysfunction, typically in the form of a reduction in libido.
Cause
While the causes of IBS are still unknown, it is believed that the entire gut–brain axis is affected. Recent findings suggest that an allergy triggered peripheral immune mechanism may underlie the symptoms associated with abdominal pain in patients with irritable bowel syndrome. IBS is more prevalent in obese patients.
Risk factors
People who are younger than 50, women, and those with a family history of the condition are more likely to develop IBS. Further risk factors are anxiety, depression, and stress. The risk of developing IBS increases six-fold after having a gastrointestinal infection (gastroenteritis). This is also called post-infectious IBS. The risk of developing IBS following an infection is further increased in those who also had a prolonged fever during the illness. Antibiotic use also appears to increase the risk of developing IBS. Genetic defects in innate immunity and epithelial homeostasis increase the risk of developing both post-infectious as well as other forms of IBS.
Stress
The role of the brain–gut axis in IBS has been suggested since the 1990s and childhood physical and psychological abuse is often associated with the development of IBS. It is believed that psychological stress may trigger IBS in predisposed individuals.
Given the high levels of anxiety experienced by people with IBS and the overlap with conditions such as fibromyalgia and chronic fatigue syndrome, a potential explanation for IBS involves a disruption of the stress system. The stress response in the body involves the hypothalamic–pituitary–adrenal axis (HPA) and the sympathetic nervous system, both of which have been shown to operate abnormally in people with IBS. Psychiatric illness or anxiety precedes IBS symptoms in two-thirds of people with IBS, and psychological traits predispose previously healthy people to developing IBS after gastroenteritis. Individuals with IBS also report high rates of sleep disturbances such as trouble falling asleep and frequent arousal throughout the night.
Gastroenteritis
Approximately 10 percent of IBS cases are triggered by an acute gastroenteritis infection. The CdtB toxin is produced by bacteria causing gastroenteritis and the host may develop an autoimmunity when host antibodies to CdtB cross-react with vinculin. Genetic defects relating to the innate immune system and epithelial barrier as well as high stress and anxiety levels appear to increase the risk of developing post-infectious IBS. Post-infectious IBS usually manifests itself as the diarrhea-predominant subtype. Evidence has demonstrated that the release of high levels of proinflammatory cytokines during acute enteric infection causes increased gut permeability leading to translocation of the commensal bacteria across the epithelial barrier; this in turn can result in significant damage to local tissues, which can develop into chronic gut abnormalities in sensitive individuals. However, increased gut permeability is strongly associated with IBS regardless of whether IBS was initiated by an infection or not. A link between small intestinal bacterial overgrowth and tropical sprue has been proposed to be involved as a cause of post-infectious IBS.
Bacteria
Small intestinal bacterial overgrowth (SIBO) occurs with greater frequency in people who have been diagnosed with IBS compared to healthy controls. SIBO is most common in diarrhea-predominant IBS but also occurs in constipation-predominant IBS more frequently than healthy controls. Symptoms of SIBO include bloating, abdominal pain, diarrhea or constipation among others. IBS may be the result of the immune system interacting abnormally with gut microbiota resulting in an abnormal cytokine signalling profile.
Certain bacteria are found in lower or higher abundance when compared with healthy individuals. Generally Bacteroidota, Bacillota, and Pseudomonadota are increased and Actinomycetota, Bifidobacteria, and Lactobacillus are decreased. Within the human gut, there are common phyla found. The most common is Bacillota. This includes Lactobacillus, which is found to have a decrease in people with IBS, and Streptococcus, which is shown to have an increase in abundance. Within this phylum, species in the class Clostridia are shown to have an increase, specifically Ruminococcus and Dorea. The family Lachnospiraceae presents an increase in IBS-D patients. The second most common phylum is Bacteroidota. In people with IBS, the Bacteroidota phylum has been shown to have an overall decrease, but an increase in the genus Bacteroides. IBS-D shows a decrease for the phylum Actinomycetota and an increase in Pseudomonadota, specifically in the family Enterobacteriaceae.
Gut microbiota
Alterations of gut microbiota (dysbiosis) are associated with the intestinal manifestations of IBS, but also with the psychiatric morbidity that coexists in up to 80% of people with IBS.
Protozoa
Protozoal infections can cause symptoms that mirror specific IBS subtypes, e.g., infection by certain subtypes of Blastocystis hominis (blastocystosis). Many people regard these organisms as incidental findings, unrelated to symptoms of IBS.
Blastocystis and Dientamoeba fragilis colonisation occurs more commonly in IBS affected individuals but their role in the condition is unclear.
Vitamin D
Vitamin D deficiency is more common in individuals affected by IBS. Vitamin D is involved in regulating triggers for IBS including the gut microbiome, inflammatory processes and immune responses, as well as psychosocial factors.
Genetics
SCN5A mutations are found in a small number of people who have IBS, particularly the constipation-predominant variant (IBS-C). The resulting defect leads to disruption in bowel function, by affecting the Nav1.5 channel, in smooth muscle of the colon and pacemaker cells.
Mechanism
Genetic, environmental, and psychological factors seem to be important in the development of IBS. The condition also has a genetic component even though there is a predominant influence of environmental factors.
A dysregulated brain-gut axis, abnormal serotonin/5-hydroxytryptamine (5-HT) metabolism, and a high density of mucosal nerve fibers in the intestines have been implicated in the mechanisms of IBS. A number of 5-HT receptor subtypes are involved in IBS symptoms, including the 5-HT3, 5-HT4, and 5-HT7 receptors. High levels of 5-HT7 receptor-expressing mucosal nerve fibers have been observed in the colon of IBS patients. A role for the 5-HT7 receptor in intestinal hyperalgesia was demonstrated in mouse models of visceral hypersensitivity, in which a novel, orally administered 5-HT7 receptor antagonist reduced intestinal pain levels.
Abnormalities occur in the gut flora of individuals who have IBS, such as reduced diversity, a decrease in bacteria belonging to the phylum Bacteroidota, and an increase in those belonging to the phylum Bacillota. The changes in gut flora are most profound in individuals who have diarrhoea-predominant IBS. Antibodies against common components (namely flagellin) of the commensal gut flora are a common occurrence in IBS affected individuals.
Chronic low-grade inflammation commonly occurs in IBS affected individuals with abnormalities found including increased enterochromaffin cells, intraepithelial lymphocytes, and mast cells resulting in chronic immune-mediated inflammation of the gut mucosa. IBS has been reported in greater quantities in multigenerational families with IBS than in the regular population. It is believed that psychological stress can induce increased inflammation and thereby cause IBS to develop in predisposed individuals.
Diagnosis
No specific laboratory or imaging tests can diagnose irritable bowel syndrome. Diagnosis should be based on symptoms, the exclusion of worrisome features, and the performance of specific investigations to rule out organic diseases that may present similar symptoms.
The recommendations for physicians are to minimize the use of medical investigations. The Rome criteria are typically used for diagnosis. They allow the diagnosis to be based only on symptoms, but no criteria based solely on symptoms is sufficiently accurate to diagnose IBS. Worrisome features include onset at greater than 50 years of age, weight loss, blood in the stool, iron-deficiency anemia, or a family history of colon cancer, celiac disease, or inflammatory bowel disease. The criteria for selecting tests and investigations also depends on the level of available medical resources.
The Rome IV criteria for diagnosing IBS include recurrent abdominal pain, on average, at least one day/week in the last three months, associated with additional stool- or defecation-related criteria. The algorithm may include additional tests to guard against misdiagnosis of other diseases as IBS. Such "red flag" symptoms that may indicate other diseases as well include weight loss, gastrointestinal bleeding, anemia, or nocturnal symptoms. However, red flag conditions may not always contribute to accuracy in diagnosis; for instance, as many as 31% of people with IBS have blood in their stool, many possibly from hemorrhoidal bleeding.
Investigations
Investigations are performed to exclude other conditions:
Stool microscopy and culture (to exclude infectious conditions)
Blood tests: full blood examination, liver function tests, erythrocyte sedimentation rate, and serological testing for coeliac disease
Abdominal ultrasound (to exclude gallstones and other biliary tract diseases)
Endoscopy and biopsies (to exclude peptic ulcer disease, coeliac disease, inflammatory bowel disease, and malignancies)
Hydrogen breath testing (to exclude fructose and lactose malabsorption)
Differential diagnosis
Colon cancer, inflammatory bowel disease, thyroid disorders (hyperthyroidism or hypothyroidism), and giardiasis can all feature abnormal defecation and abdominal pain. Less common causes of this symptom profile are carcinoid syndrome, microscopic colitis, bacterial overgrowth, and eosinophilic gastroenteritis; IBS is, however, a common presentation, and testing for these conditions would yield low numbers of positive results, so it is considered difficult to justify the expense. Conditions that may present similarly include celiac disease, bile acid malabsorption, colon cancer, and dyssynergic defecation.
Ruling out parasitic infections, lactose intolerance, small intestinal bacterial overgrowth, and celiac disease is recommended before a diagnosis of IBS is made. An upper endoscopy with small bowel biopsies is necessary to identify the presence of celiac disease. An ileocolonoscopy with biopsies is useful to exclude Crohn's disease and ulcerative colitis (Inflammatory bowel disease).
Some people, managed for years for IBS, may have non-celiac gluten sensitivity (NCGS). Gastrointestinal symptoms of IBS are clinically indistinguishable from those of NCGS, but the presence of any of the following non-intestinal manifestations suggest a possible NCGS: headache or migraine, "foggy mind", chronic fatigue, fibromyalgia, joint and muscle pain, leg or arm numbness, tingling of the extremities, dermatitis (eczema or skin rash), atopic disorders, allergy to one or more inhalants, foods or metals (such as mites, graminaceae, parietaria, cat or dog hair/dander, shellfish, or nickel), depression, anxiety, anemia, iron-deficiency anemia, folate deficiency, asthma, rhinitis, eating disorders, neuropsychiatric disorders (such as schizophrenia, autism, peripheral neuropathy, ataxia, attention deficit hyperactivity disorder) or autoimmune diseases. An improvement with a gluten-free diet of immune-mediated symptoms, including autoimmune diseases, once having reasonably ruled out celiac disease and wheat allergy, is another way to realize a differential diagnosis.
Misdiagnosis
People with IBS are at increased risk of being given inappropriate surgeries such as appendectomy, cholecystectomy, and hysterectomy due to being misdiagnosed as having other medical conditions. Some common examples of misdiagnosis include infectious diseases, coeliac disease, Helicobacter pylori infection, and non-protozoal parasites. The American College of Gastroenterology recommends all people with symptoms of IBS be tested for coeliac disease.
Bile acid malabsorption is also sometimes missed in people with diarrhea-predominant IBS. SeHCAT tests suggest around 30% of people with D-IBS have this condition, and most respond to bile acid sequestrants.
Comorbidities
Several medical conditions, or comorbidities, appear with greater frequency in people with IBS.
Neurological/psychiatric: A study of 97,593 individuals with IBS identified comorbidities such as headache, fibromyalgia, and depression. IBS occurs in 51% of people with chronic fatigue syndrome and 49% of people with fibromyalgia, and psychiatric disorders occur in 94% of people with IBS.
Channelopathy and muscular dystrophy: IBS and functional GI diseases are comorbidities of genetic channelopathies that cause cardiac conduction defects and neuromuscular dysfunction, and result also in alterations in GI motility, secretion, and sensation. Similarly, IBS and FBD are highly prevalent in myotonic muscle dystrophies. Digestive symptoms may be the first sign of dystrophic disease and may precede the musculo-skeletal features by up to 10 years.
Inflammatory bowel disease: IBS may be marginally associated with inflammatory bowel disease. Researchers have found some correlation between IBS and IBD, noting that people with IBD experience IBS-like symptoms when their IBD is in remission. A three-year study found that patients diagnosed with IBS were 16.3 times more likely to be diagnosed with IBD during the study period, although this is likely due to an initial misdiagnosis.
Abdominal surgery: People with IBS were at increased risk of having unnecessary gall bladder removal surgery not due to an increased risk of gallstones, but rather to abdominal pain, awareness of having gallstones, and inappropriate surgical indications. These people also are 87% more likely to undergo abdominal and pelvic surgery and three times more likely to undergo gallbladder surgery. Also, people with IBS were twice as likely to undergo hysterectomy.
Endometriosis: One study reported a statistically significant link between migraine headaches, IBS, and endometriosis.
Other chronic disorders: Interstitial cystitis may be associated with other chronic pain syndromes, such as irritable bowel syndrome and fibromyalgia. The connection between these syndromes is unknown.
Classification
IBS can be classified as diarrhea-predominant (IBS-D), constipation-predominant (IBS-C), with mixed/alternating stool pattern (IBS-M/IBS-A) or pain-predominant. In some individuals, IBS may have an acute onset and develop after an infectious illness characterized by two or more of: fever, vomiting, diarrhea, or positive stool culture. This post-infective syndrome has consequently been termed "post-infectious IBS" (IBS-PI).
Management
A number of treatments have been found to be effective, including fiber, talk therapy, antispasmodic and antidepressant medication, and peppermint oil.
Diet
FODMAP
FODMAPs are short-chain carbohydrates that are poorly absorbed in the small intestine. A 2018 systematic review found that although there is evidence of improved IBS symptoms with a low-FODMAP diet, the evidence is of very low quality. Symptoms most likely to improve on this type of diet include urgency, flatulence, bloating, abdominal pain, and altered stool output. One national guideline advises a low FODMAP diet for managing IBS when other dietary and lifestyle measures have been unsuccessful. The diet restricts various carbohydrates which are poorly absorbed in the small intestine, as well as fructose and lactose, which are similarly poorly absorbed in those with intolerances to them. Reduction of fructose and fructan has been shown to reduce IBS symptoms in a dose-dependent manner in people with fructose malabsorption and IBS.
FODMAPs are fermentable oligo-, di-, monosaccharides and polyols, which are poorly absorbed in the small intestine and subsequently fermented by the bacteria in the distal small and proximal large intestine. This is a normal phenomenon, common to everyone. The resultant production of gas potentially results in bloating and flatulence. Although FODMAPs can produce certain digestive discomfort in some people, not only do they not cause intestinal inflammation, but they help avoid it, because they produce beneficial alterations in the intestinal flora that contribute to maintaining the good health of the colon. FODMAPs are not the cause of irritable bowel syndrome nor other functional gastrointestinal disorders, but rather a person develops symptoms when the underlying bowel response is exaggerated or abnormal.
A low-FODMAP diet consists of restricting these carbohydrates in the diet. FODMAPs are trimmed globally, rather than individually, which is more successful than, for example, restricting only fructose and fructans (which are also FODMAPs), as is recommended for those with fructose malabsorption.
A low-FODMAP diet might help to improve short-term digestive symptoms in adults with irritable bowel syndrome, but its long-term follow-up can have negative effects because it causes a detrimental impact on the gut microbiota and metabolome. It should only be used for short periods of time and under the advice of a specialist. A low-FODMAP diet is highly restrictive in various groups of nutrients and can be impractical to follow in the long-term. More studies are needed to assess the true impact of this diet on health.
In addition, the use of a low-FODMAP diet without verifying the diagnosis of IBS may result in misdiagnosis of other conditions such as celiac disease. Since the consumption of gluten is suppressed or reduced with a low-FODMAP diet, the improvement of the digestive symptoms with this diet may not be related to the withdrawal of the FODMAPs, but of gluten, indicating the presence of unrecognized celiac disease, avoiding its diagnosis and correct treatment, with the consequent risk of several serious health complications, including various types of cancer.
Fiber
Soluble fiber supplementation (e.g., psyllium/ispagula husk) may be effective in improving symptoms. However soluble fiber does not appear to reduce pain. It acts as a bulking agent, and for many people with IBS-D, allows for a more consistent stool. For people with IBS-C, it seems to allow for a softer, moister, more easily passable stool.
However, insoluble fiber (e.g., bran) is not effective for IBS. In some people, insoluble fiber supplementation may aggravate symptoms.
Fiber might be beneficial in those who have a predominance of constipation. In people who have IBS-C, soluble fiber can reduce overall symptoms but will not reduce pain. The research supporting dietary fiber contains conflicting small studies complicated by the heterogeneity of types of fiber and doses used.
Physical activity
Physical activity can have beneficial effects on irritable bowel syndrome. In light of this, the latest British Society of Gastroenterology guidelines on the management of IBS state that all patients with IBS should be advised to take regular exercise (a strong recommendation, based on weak-certainty evidence), whereas the American College of Gastroenterology guidelines make a similar suggestion with a lower certainty of evidence. Physical activity also shows good adherence among patients and, consequently, may lead to a significant clinical benefit for symptoms of irritable bowel syndrome.
Medication
Medications that may be useful include antispasmodics such as dicyclomine and antidepressants. Both H1-antihistamines and mast cell stabilizers have shown efficacy in reducing pain associated with visceral hypersensitivity in IBS.
Serotonergic agents
A number of 5-HT3 antagonists and 5-HT4 agonists have been proposed clinically to treat diarrhea-predominant IBS and constipation-predominant IBS, respectively. However, severe side effects resulted in their withdrawal by the Food and Drug Administration, and they are now prescribed only under an emergency investigational drug protocol. Drugs targeting other 5-HT receptor subtypes, such as the 5-HT7 receptor, have yet to be developed.
Laxatives
For people who do not adequately respond to dietary fiber, osmotic laxatives such as polyethylene glycol, sorbitol, and lactulose can help avoid "cathartic colon" which has been associated with stimulant laxatives. Lubiprostone is a gastrointestinal agent used for the treatment of constipation-predominant IBS.
Antispasmodics
The use of antispasmodic drugs (e.g., anticholinergics such as hyoscyamine or dicyclomine) may help people who have cramps or diarrhea. A meta-analysis by the Cochrane Collaboration concludes that one out of seven people benefit from treatment with antispasmodics. Antispasmodics can be divided into two groups: neurotropics and musculotropics. Musculotropics, such as mebeverine, act directly at the smooth muscle of the gastrointestinal tract, relieving spasm without affecting normal gut motility. Since this action is not mediated by the autonomic nervous system, the usual anticholinergic side effects are absent. The antispasmodic otilonium may also be useful.
Discontinuation of proton pump inhibitors
Proton-pump inhibitors (PPIs) used to suppress stomach acid production may cause small intestinal bacterial overgrowth (SIBO) leading to IBS symptoms. Discontinuation of PPIs in selected individuals has been recommended as it may lead to an improvement or resolution of IBS symptoms.
Antidepressants
Evidence is conflicting about the benefit of antidepressants in IBS. Some meta-analyses have found a benefit, while others have not. There is good evidence that low doses of tricyclic antidepressants (TCAs) can be effective for IBS. With TCAs, about one in three people improve.
However, the evidence is less robust for the effectiveness of other antidepressant classes such as selective serotonin reuptake inhibitor antidepressants (SSRIs). Because of their serotonergic effect, SSRIs have been studied in IBS, especially for people who are constipation predominant. As of 2015, the evidence indicates that SSRIs do not help. Antidepressants are not effective for IBS in people with depression, possibly because lower doses of antidepressants than the doses used to treat depression are required for relief of IBS.
Other agents
Magnesium aluminum silicates and alverine citrate drugs can be effective for IBS.
Rifaximin may be useful as a treatment for IBS symptoms, including abdominal bloating and flatulence, although relief of abdominal distension is delayed. It is especially useful where small intestinal bacterial overgrowth is involved.
In individuals with IBS and low levels of vitamin D, supplementation is recommended. Some evidence suggests that vitamin D supplementation may improve symptoms of IBS, but further research is needed before it can be recommended as a specific treatment for IBS.
Psychological therapies
There is inconsistent evidence from studies with poor methodological quality that psychological therapies can be effective in the treatment of IBS. Preliminary research shows that psychotherapeutic interventions are correlated with reductions in both autonomic nervous system dysregulation and gastrointestinal symptoms. Reducing stress may also reduce the frequency and severity of IBS symptoms. Techniques that may be helpful include regular exercise, such as swimming, walking, or running.
Probiotics
Probiotics can be beneficial in the treatment of IBS; taking 10 billion to 100 billion beneficial bacteria per day is recommended for beneficial results. However, further research is needed on individual strains of beneficial bacteria for more refined recommendations. Probiotics have positive effects such as enhancing the intestinal mucosal barrier, providing a physical barrier, bacteriocin production (resulting in reduced numbers of pathogenic and gas-producing bacteria), reducing intestinal permeability and bacterial translocation, and regulating the immune system both locally and systemically among other beneficial effects. Probiotics may also have positive effects on the gut–brain axis by their positive effects countering the effects of stress on gut immunity and gut function.
A number of probiotics have been found to be effective, including Lactobacillus plantarum and Bifidobacteria infantis; but one review found that only Bifidobacteria infantis showed efficacy. B. infantis may have effects beyond the gut by reducing proinflammatory cytokine activity and elevating blood tryptophan levels, which may improve symptoms of depression. Some yogurt is made using probiotics that may help ease symptoms of IBS. A probiotic yeast called Saccharomyces boulardii has some evidence of effectiveness in the treatment of irritable bowel syndrome.
Certain probiotics have different effects on certain symptoms of IBS. For example, Bifidobacterium breve, B. longum, and Lactobacillus acidophilus have been found to alleviate abdominal pain. B. breve, B. infantis, L. casei, or L. plantarum species alleviated distension symptoms. B. breve, B. infantis, L. casei, L. plantarum, B. longum, L. acidophilus, L. bulgaricus, and Streptococcus salivarius ssp. thermophilus have all been found to affect flatulence levels. Most clinical studies show probiotics do not improve straining, sense of incomplete evacuation, stool consistency, fecal urgency, or stool frequency, although a few clinical studies did find some benefit of probiotic therapy. The evidence is conflicting for whether probiotics improve overall quality of life scores.
Probiotics may exert their beneficial effects on IBS symptoms via preserving the gut microbiota, normalisation of cytokine blood levels, improving the intestinal transit time, decreasing small intestine permeability, and by treating small intestinal bacterial overgrowth of fermenting bacteria. A fecal transplant does not appear useful as of 2019.
Herbal remedies
Peppermint oil appears useful. In a meta-analysis it was found to be superior to placebo for improvement of IBS symptoms, at least in the short term. An earlier meta-analysis suggested the results of peppermint oil were tentative as the number of people studied was small and blinding of those receiving treatment was unclear. Safety during pregnancy has not been established, however, and caution is required not to chew or break the enteric coating; otherwise, gastroesophageal reflux may occur as a result of lower esophageal sphincter relaxation. Occasionally, nausea and perianal burning occur as side effects. Iberogast, a multi-herbal extract, was found to be superior in efficacy to placebo. A comprehensive meta-analysis of twelve randomized trials concluded that peppermint oil is an effective therapy for adults with irritable bowel syndrome.
Research into cannabinoids as treatment for IBS is limited. GI propulsion, secretion, and inflammation in the gut are all modulated by the ECS (Endocannabinoid system), providing a rationale for cannabinoids as treatment candidates for IBS.
Only limited evidence exists for the effectiveness of other herbal remedies for IBS. As with all herbs, it is wise to be aware of possible drug interactions and adverse effects.
Alternative medicine
There are no benefits of acupuncture compared to placebo for IBS symptom severity or IBS-related quality of life.
Epidemiology
The prevalence of IBS varies by country and by the age range examined. Studies performed in various geographic regions have measured the percentage of the population reporting symptoms of IBS and the prevalence of IBS-like symptoms.
Gender
In Western countries, women are around two to three times more likely to be diagnosed with IBS and four to five times more likely to seek specialty care for it than men. However, women in East Asian countries are not more likely than men to have irritable bowel syndrome, and there are conflicting reports about the female predominance of the disease in Africa and other parts of Asia. People diagnosed with IBS are usually younger than 45 years old. Studies of females with IBS show symptom severity often fluctuates with the menstrual cycle, suggesting hormonal differences may play a role. Endorsement of gender-related traits has been associated with quality of life and psychological adjustment in IBS. The increase in gastrointestinal symptoms during menses or early menopause may be related to declining or low estrogen and progesterone, suggesting that estrogen withdrawal may play a role in IBS. Gender differences in healthcare-seeking may also play a role. Gender differences in trait anxiety may contribute to lower pain thresholds in women, putting them at greater risk for a number of chronic pain disorders. Finally, sexual trauma is a major risk factor for IBS, as are other forms of abuse. Because women are at higher risk of sexual abuse than men, sex-related risk of abuse may contribute to the higher rate of IBS in women.
History
The concept of an "irritable bowel" was introduced by P. W. Brown, first in The Journal of the Kansas Medical Society in 1947 and later in the Rocky Mountain Medical Journal in 1950. The term was used to categorize people who developed symptoms of diarrhea, abdominal pain, and constipation, but where no well-recognized infective cause could be found. Early theories suggested the irritable bowel was caused by a psychosomatic or mental disorder.
Society and culture
Economics
United States
The aggregate cost of irritable bowel syndrome in the United States has been estimated at $1.7–10 billion in direct medical costs, with an additional $20 billion in indirect costs, for a total of $21.7–30 billion. A study by a managed care company comparing medical costs for people with IBS to non-IBS controls identified a 49% annual increase in medical costs associated with a diagnosis of IBS. People with IBS incurred average annual direct costs of $5,049 and $406 in out-of-pocket expenses in 2007. A study of workers with IBS found that they reported a 34.6% loss in productivity, corresponding to 13.8 hours lost per 40 hour week. A study of employer-related health costs from a Fortune 100 company conducted with data from the 1990s found people with IBS incurred US$4527 in claims costs vs. $3276 for controls. A study on Medicaid costs conducted in 2003 by the University of Georgia College of Pharmacy and Novartis found IBS was associated with an increase of $962 in Medicaid costs in California, and $2,191 in North Carolina. People with IBS had higher costs for physician visits, outpatient visits, and prescription drugs. The study suggested the costs associated with IBS were comparable to those found for people with asthma.
Research
Individuals with IBS have been found to have decreased diversity and numbers of Bacteroidota microbiota. Preliminary research into the effectiveness of fecal microbiota transplant in the treatment of IBS has been very favourable, with a 'cure' rate of between 36 percent and 60 percent and remission of core IBS symptoms persisting at 9 and 19 months of follow-up. Treatment with probiotic strains of bacteria has been shown to be effective, though not all strains of microorganisms confer the same benefit, and adverse side effects have been documented in a minority of cases.
There is increasing evidence for the effectiveness of mesalazine (5-aminosalicylic acid) in the treatment of IBS. Mesalazine is a drug with anti-inflammatory properties that has been reported to significantly reduce immune-mediated inflammation in the gut of IBS-affected individuals, with therapy resulting in improved IBS symptoms as well as feelings of general wellness. It has also been observed that mesalazine therapy helps to normalise the gut flora, which is often abnormal in people who have IBS. The therapeutic benefits of mesalazine may be the result of improvements to epithelial barrier function. Treatment based on "abnormally" high IgG antibodies cannot be recommended.
Differences in visceral sensitivity and intestinal physiology have been noted in IBS. Mucosal barrier reinforcement in response to oral 5-HTP was absent in IBS compared to controls. IBS/IBD individuals are less often HLA DQ2/8 positive than individuals with upper functional gastrointestinal disease or healthy populations.
Efficacy of mast cell directed therapies in irritable bowel syndrome is an area of ongoing research.
In other species
A similar syndrome is found in rats (Rattus spp.). In rats a short-chain fatty acid receptor is involved, a free fatty acid receptor 2 subtype that is expressed in both enteroendocrine cells and mucosal mast cells. These cells then respond in an exaggerated way to the IBS rat's own large quantity of maldigestion products.
| Biology and health sciences | Specific diseases | Health |
210648 | https://en.wikipedia.org/wiki/Declarative%20programming | Declarative programming | In computer science, declarative programming is a programming paradigm—a style of building the structure and elements of computer programs—that expresses the logic of a computation without describing its control flow.
Many languages that apply this style attempt to minimize or eliminate side effects by describing what the program must accomplish in terms of the problem domain, rather than describing how to accomplish it as a sequence of the programming language primitives (the how being left up to the language's implementation). This is in contrast with imperative programming, which implements algorithms in explicit steps.
Declarative programming often considers programs as theories of a formal logic, and computations as deductions in that logic space. Declarative programming may greatly simplify writing parallel programs.
Common declarative languages include those of database query languages (e.g., SQL, XQuery), regular expressions, logic programming (e.g. Prolog, Datalog, answer set programming), functional programming, configuration management, and algebraic modeling systems.
Definition
Declarative programming is often defined as any style of programming that is not imperative. A number of other common definitions attempt to define it by simply contrasting it with imperative programming. For example:
A high-level program that describes what a computation should perform.
Any programming language that lacks side effects (or more specifically, is referentially transparent).
A language with a clear correspondence to mathematical logic.
These definitions overlap substantially.
Declarative programming is a non-imperative style of programming in which programs describe their desired results without explicitly listing commands or steps that must be performed. Functional and logic programming languages are characterized by a declarative programming style. In logic programming, programs consist of sentences expressed in logical form, and computation uses those sentences to solve problems, which are also expressed in logical form.
In a pure functional language, such as Haskell, all functions are without side effects, and state changes are only represented as functions that transform the state, which is explicitly represented as a first-class object in the program. Although pure functional languages are non-imperative, they often provide a facility for describing the effect of a function as a series of steps. Other functional languages, such as Lisp, OCaml and Erlang, support a mixture of procedural and functional programming.
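As a minimal illustration of this style (a sketch in Haskell; the names Balance and deposit are invented for the example), state is handled as an ordinary value, and an "update" is an ordinary function from the old state to a new one:

type Balance = Int

-- a pure update: it returns a new state instead of mutating anything
deposit :: Int -> Balance -> Balance
deposit amount balance = balance + amount

main :: IO ()
main = print (deposit 5 (deposit 10 0))   -- prints 15; the earlier values 0 and 10 are never altered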
Some logic programming languages, such as Prolog, and database query languages, such as SQL, while declarative in principle, also support a procedural style of programming.
Subparadigms
Declarative programming is an umbrella term that includes a number of better-known programming paradigms.
Constraint programming
Constraint programming states relations between variables in the form of constraints that specify the properties of the target solution. The set of constraints is solved by giving a value to each variable so that the solution is consistent with the maximum number of constraints. Constraint programming often complements other paradigms: functional, logical, or even imperative programming.
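A very small sketch of the flavour of constraint-style code, written here in Haskell as a brute-force list comprehension (the names are invented for the example, and real constraint solvers use propagation and search rather than plain enumeration):

-- the comprehension only states the constraints; how satisfying assignments
-- are found is left to the language's enumeration
solutions :: [(Int, Int)]
solutions = [ (x, y) | x <- [1 .. 9], y <- [1 .. 9], x + y == 10, x < y ]

main :: IO ()
main = print solutions   -- prints [(1,9),(2,8),(3,7),(4,6)]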
Domain-specific languages
Well-known examples of declarative domain-specific languages (DSLs) include the yacc parser generator input language, QML, the Make build specification language, Puppet's configuration management language, regular expressions, Datalog, answer set programming and a subset of SQL (SELECT queries, for example). DSLs have the advantage of being useful while not necessarily needing to be Turing-complete, which makes it easier for a language to be purely declarative.
Many markup languages such as HTML, MXML, XAML, XSLT or other user-interface markup languages are often declarative. HTML, for example, only describes what should appear on a webpage - it specifies neither control flow for rendering a page nor the page's possible interactions with a user.
Some software systems combine traditional user-interface markup languages (such as HTML) with declarative markup that defines what (but not how) the back-end server systems should do to support the declared interface. Such systems, typically using a domain-specific XML namespace, may include abstractions of SQL database syntax or parameterized calls to web services using representational state transfer (REST) and SOAP.
Functional programming
Functional programming languages such as Haskell, Scheme, and ML evaluate expressions via function application. Unlike the related but more imperative paradigm of procedural programming, functional programming places little emphasis on explicit sequencing. Instead, computations are characterised by various kinds of recursive higher-order function application and composition, and as such can be regarded simply as a set of mappings between domains and codomains. Many functional languages, including most of those in the ML and Lisp families, are not purely functional, and thus allow the introduction of stateful effects in programs.
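For example (a minimal Haskell sketch; sumOfSquares is an invented name), a computation can be expressed as a composition of higher-order mappings with no explicit sequencing:

-- a composition of two reusable building blocks; no loop counter, no mutation
sumOfSquares :: [Int] -> Int
sumOfSquares = sum . map (\x -> x * x)

main :: IO ()
main = print (sumOfSquares [1, 2, 3, 4])   -- prints 30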
Hybrid languages
Makefiles, for example, specify dependencies in a declarative fashion, but include an imperative list of actions to take as well. Similarly, yacc specifies a context free grammar declaratively, but includes code snippets from a host language, which is usually imperative (such as C).
Logic programming
Logic programming languages, such as Prolog, Datalog and answer set programming, compute by proving that a goal is a logical consequence of the program, or by showing that the goal is true in a model defined by the program. Prolog computes by reducing goals to subgoals, top-down using backward reasoning, whereas most Datalog systems compute bottom-up using forward reasoning. Answer set programs typically use SAT solvers to generate a model of the program.
Modeling
Models, or mathematical representations, of physical systems may be implemented in computer code that is declarative. The code contains a number of equations, not imperative assignments, that describe ("declare") the behavioral relationships. When a model is expressed in this formalism, a computer is able to perform algebraic manipulations to best formulate the solution algorithm. The mathematical causality is typically imposed at the boundaries of the physical system, while the behavioral description of the system itself is declarative or acausal. Declarative modeling languages and environments include Analytica, Modelica and Simile.
Examples
Lisp
Lisp is a family of programming languages loosely inspired by mathematical notation and Alonzo Church's lambda calculus. Some dialects, such as Common Lisp, are primarily imperative but support functional programming. Others, such as Scheme, are designed for functional programming.
In Scheme, the factorial function can be defined as follows:
(define (factorial n)
(if (= n 0)
1 ;;; 0! = 1
(* n (factorial (- n 1))))) ;;; n! = n*(n-1)!
This defines the factorial function using its recursive definition. In contrast, an imperative language would more typically define factorial as an iterative procedure that explicitly updates a counter and an accumulator.
In lisps and lambda calculus, functions are generally first-class citizens. Loosely, this means that functions can be inputs and outputs for other functions. This can simplify the definition of some functions.
For example, writing a function to output the first n square numbers in Racket can be done as follows:
(define (first-n-squares n)
(map
(lambda (x) (* x x)) ;;; A function mapping x -> x^2
(range n))) ;;; List of the first n non-negative integers
The map function accepts a function and a list; the output is a list of results of the input function on each element of the input list.
ML
ML (1973) stands for "Meta Language." ML is statically typed, and function arguments and return types may be annotated.
ML is not as bracket-centric as Lisp, and instead uses a wider variety of syntax to codify the relationship between code elements, rather than appealing to list ordering and nesting to express everything. The following is an application of times_10, a function whose definition is omitted here but which presumably just multiplies its argument by ten (along the lines of fun times_10 n = 10 * n;):
times_10 2
It returns "20 : int", that is, 20, a value of type int.
Like Lisp, ML is tailored to process lists, though all elements of a list must be the same type.
Prolog
Prolog (1972) stands for "PROgramming in LOGic." It was developed for natural language question answering, using SL resolution both to deduce answers to queries and to parse and generate natural language sentences.
The building blocks of a Prolog program are facts and rules. Here is a simple example:
cat(tom). % tom is a cat
mouse(jerry). % jerry is a mouse
animal(X) :- cat(X). % each cat is an animal
animal(X) :- mouse(X). % each mouse is an animal
big(X) :- cat(X). % each cat is big
small(X) :- mouse(X). % each mouse is small
eat(X,Y) :- mouse(X), cheese(Y). % each mouse eats each cheese
eat(X,Y) :- big(X), small(Y). % each big being eats each small being
Given this program, the query eat(tom,jerry) succeeds, while eat(jerry,tom) fails. Moreover, the query eat(X,jerry) succeeds with the answer substitution X=tom.
Prolog executes programs top-down, using SLD resolution to reason backwards, reducing goals to subgoals. In this example, it uses the last rule of the program to reduce the goal of answering the query eat(X,jerry) to the subgoals of first finding an X such that big(X) holds and then of showing that small(jerry) holds. It repeatedly uses rules to further reduce subgoals to other subgoals, until it eventually succeeds in unifying all subgoals with facts in the program. This backward reasoning, goal-reduction strategy treats rules in logic programs as procedures, and makes Prolog both a declarative and procedural programming language.
The broad range of Prolog applications is highlighted in the Year of Prolog Book, celebrating the 50th anniversary of Prolog.
Datalog
The origins of Datalog date back to the beginning of logic programming, but it was identified as a separate area around 1977. Syntactically and semantically, it is a subset of Prolog. But because it does not have compound terms, it is not Turing-complete.
Most Datalog systems execute programs bottom-up, using rules to reason forwards, deriving new facts from existing facts, and terminating when there are no new facts that can be derived, or when the derived facts unify with the query. In the above example, a typical Datalog system would first derive the new facts:
animal(tom).
animal(jerry).
big(tom).
small(jerry).
Using these facts, it would then derive the additional fact:
eat(tom, jerry).
It would then terminate, both because no new facts can be derived, and because the newly derived fact unifies with the query eat(X, jerry).
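This bottom-up evaluation loop can be sketched as a fixpoint computation. The toy Haskell program below is an illustration only, not a real Datalog engine: the rules of the example program are hard-coded in already-ground form and there is no unification.

import qualified Data.Set as Set

type Fact = String

-- one round of applying the (pre-grounded) rules of the example program
step :: Set.Set Fact -> Set.Set Fact
step fs = Set.fromList (concat
  [ [ "animal(tom)"    | "cat(tom)"     `Set.member` fs ]
  , [ "animal(jerry)"  | "mouse(jerry)" `Set.member` fs ]
  , [ "big(tom)"       | "cat(tom)"     `Set.member` fs ]
  , [ "small(jerry)"   | "mouse(jerry)" `Set.member` fs ]
  , [ "eat(tom,jerry)" | "big(tom)" `Set.member` fs, "small(jerry)" `Set.member` fs ]
  ])

-- keep going until a round adds no new facts (the fixpoint)
bottomUp :: Set.Set Fact -> Set.Set Fact
bottomUp fs
  | new `Set.isSubsetOf` fs = fs
  | otherwise               = bottomUp (Set.union fs new)
  where new = step fs

main :: IO ()
main = mapM_ putStrLn (Set.toList (bottomUp (Set.fromList ["cat(tom)", "mouse(jerry)"])))

Running it prints the two given facts together with the five derived facts listed above.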
Datalog has been applied to such problems as data integration, information extraction, networking, security, cloud computing and machine learning.
Answer Set Programming
Answer set programming (ASP) evolved in the late 1990s, based on the stable model (answer set) semantics of logic programming. Like Datalog, it is a subset of Prolog; and, because it does not have compound terms, it is not Turing-complete.
Most implementations of ASP execute a program by first "grounding" the program, replacing all variables in rules by constants in all possible ways, and then using a propositional SAT solver, such as the DPLL algorithm to generate one or more models of the program.
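The grounding step itself can be illustrated with a toy Haskell fragment (an illustration only; real grounders operate on parsed rules with many variables, not on strings containing a single variable X):

-- the constants of the example program (its Herbrand universe)
constants :: [String]
constants = ["tom", "jerry"]

-- replace every occurrence of the variable X in a rule template by each constant
groundRule :: String -> [String]
groundRule template =
  [ concatMap (\ch -> if ch == 'X' then c else [ch]) template | c <- constants ]

main :: IO ()
main = mapM_ putStrLn (groundRule "animal(X) :- cat(X).")
-- prints: animal(tom) :- cat(tom).   and   animal(jerry) :- cat(jerry).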
Its applications are oriented towards solving difficult search problems and knowledge representation.
| Technology | Software development: General | null |
210755 | https://en.wikipedia.org/wiki/Hydrogen%20chloride | Hydrogen chloride | The compound hydrogen chloride has the chemical formula HCl and as such is a hydrogen halide. At room temperature, it is a colorless gas, which forms white fumes of hydrochloric acid upon contact with atmospheric water vapor. Hydrogen chloride gas and hydrochloric acid are important in technology and industry. Hydrochloric acid, the aqueous solution of hydrogen chloride, is also commonly given the formula HCl.
Reactions
Hydrogen chloride is a diatomic molecule, consisting of a hydrogen atom H and a chlorine atom Cl connected by a polar covalent bond. The chlorine atom is much more electronegative than the hydrogen atom, which makes this bond polar. Consequently, the molecule has a large dipole moment with a negative partial charge (δ−) at the chlorine atom and a positive partial charge (δ+) at the hydrogen atom. In part because of its high polarity, HCl is very soluble in water (and in other polar solvents).
Upon contact, H2O and HCl combine to form hydronium cations (H3O+) and chloride anions (Cl−) through a reversible chemical reaction: HCl + H2O → H3O+ + Cl−
The resulting solution is called hydrochloric acid and is a strong acid. The acid dissociation or ionization constant, Ka, is large, which means HCl dissociates or ionizes practically completely in water. Even in the absence of water, hydrogen chloride can still act as an acid. For example, hydrogen chloride can dissolve in certain other solvents such as methanol, protonating the solvent: HCl + CH3OH → CH3OH2+ + Cl−
Hydrogen chloride can protonate molecules or ions and can also serve as an acid-catalyst for chemical reactions where anhydrous (water-free) conditions are desired.
Because of its acidic nature, hydrogen chloride is a corrosive substance, particularly in the presence of moisture.
Structure and properties
Frozen HCl undergoes a phase transition at . X-ray powder diffraction of the frozen material shows that the material changes from an orthorhombic structure to a cubic one during this transition. In both structures the chlorine atoms are in a face-centered array. However, the hydrogen atoms could not be located. Analysis of spectroscopic and dielectric data, and determination of the structure of DCl (deuterium chloride) indicates that HCl forms zigzag chains in the solid, as does HF (see figure on right).
The infrared spectrum of gaseous hydrogen chloride, shown on the left, consists of a number of sharp absorption lines grouped around 2886 cm−1 (wavelength ~3.47 μm). At room temperature, almost all molecules are in the ground vibrational state v = 0. Including anharmonicity, the vibrational energy can be written as E(v) = (v + 1/2)νe − (v + 1/2)²xeνe, where νe is the harmonic vibrational wavenumber and xe is the (dimensionless) anharmonicity constant.
To promote an HCl molecule from the v = 0 to the v = 1 state, we would expect to see an infrared absorption near νo = νe − 2xeνe ≈ 2886 cm−1. However, this absorption, which would correspond to the Q-branch, is not observed, because it is forbidden by symmetry. Instead, two sets of signals (the P- and R-branches) are seen, owing to a simultaneous change in the rotational state of the molecules. Because of quantum mechanical selection rules, only certain rotational transitions are permitted: the states are characterized by the rotational quantum number J = 0, 1, 2, 3, ..., and ΔJ can only take the values ±1.
The value of the rotational constant B is much smaller than the vibrational one νo, such that a much smaller amount of energy is required to rotate the molecule; for a typical molecule, this lies within the microwave region. However, the vibrational energy of HCl molecule places its absorptions within the infrared region, allowing a spectrum showing the rovibrational transitions of this molecule to be easily collected using an infrared spectrometer with a gas cell. The latter can even be made of quartz as the HCl absorption lies in a window of transparency for this material.
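As a rough illustration of how the P and R branches arise, in the rigid-rotor approximation (neglecting centrifugal distortion and the small change of B between the v = 0 and v = 1 states) the lines fall at νo + 2B(J + 1) for the R-branch and νo − 2BJ for the P-branch, leaving a gap at νo itself where the forbidden Q-branch would be. The numbers below are only indicative: νo is taken from the band centre quoted in the text, and B ≈ 10.6 cm−1 is an assumed round value for HCl used purely for this sketch.

# Approximate rovibrational line positions of HCl in the rigid-rotor picture.
nu0 = 2886.0   # cm^-1, band centre of the fundamental (from the text)
B = 10.6       # cm^-1, rotational constant (assumed round value for illustration)

r_branch = [nu0 + 2 * B * (J + 1) for J in range(3)]  # transitions J -> J + 1
p_branch = [nu0 - 2 * B * J for J in range(1, 4)]     # transitions J -> J - 1

print("R branch:", [round(line, 1) for line in r_branch])  # ~2907.2, 2928.4, 2949.6 cm^-1
print("P branch:", [round(line, 1) for line in p_branch])  # ~2864.8, 2843.6, 2822.4 cm^-1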
Naturally abundant chlorine consists of two isotopes, 35Cl and 37Cl, in a ratio of approximately 3:1. While the spring constants are nearly identical, the disparate reduced masses of H35Cl and H37Cl cause measurable differences in the rotational energy, thus doublets are observed on close inspection of each absorption line, weighted in the same ratio of 3:1.
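The size of the isotope splitting can be estimated from the reduced masses alone. The atomic masses below are approximate values assumed for this back-of-the-envelope sketch; since the rotational constant scales as 1/μ and the vibrational frequency as 1/√μ, the roughly 0.15% difference in reduced mass shifts the H37Cl lines by about 2 cm−1 relative to H35Cl.

# Reduced masses of the two HCl isotopologues (approximate atomic masses in u,
# assumed here only for illustration).
m_H, m_Cl35, m_Cl37 = 1.0078, 34.9689, 36.9659

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

mu35 = reduced_mass(m_H, m_Cl35)   # ~0.9796 u
mu37 = reduced_mass(m_H, m_Cl37)   # ~0.9811 u

# B scales as 1/mu; the vibrational frequency scales as 1/sqrt(mu).
print(f"B(H37Cl)/B(H35Cl)   ~ {mu35 / mu37:.4f}")           # ~0.9985
print(f"nu(H37Cl)/nu(H35Cl) ~ {(mu35 / mu37) ** 0.5:.5f}")  # ~0.99925, about 2 cm^-1 on 2886 cm^-1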
Production
Most hydrogen chloride produced on an industrial scale is used for hydrochloric acid production.
Historical routes
In the 17th century, Johann Rudolf Glauber from Karlstadt am Main, Germany used sodium chloride salt and sulfuric acid for the preparation of sodium sulfate in the Mannheim process, releasing hydrogen chloride. Joseph Priestley of Leeds, England prepared pure hydrogen chloride in 1772, and by 1808 Humphry Davy of Penzance, England had proved that the chemical composition included hydrogen and chlorine.
Direct synthesis
Hydrogen chloride is produced by combining chlorine and hydrogen:
Cl2 + H2 → 2 HCl
As the reaction is exothermic, the installation is called an HCl oven or HCl burner. The resulting hydrogen chloride gas is absorbed in deionized water, resulting in chemically pure hydrochloric acid. This reaction can give a very pure product, e.g. for use in the food industry.
The reaction can also be triggered by blue light.
Organic synthesis
The industrial production of hydrogen chloride is often integrated with the formation of chlorinated and fluorinated organic compounds, e.g., Teflon, Freon, and other CFCs, as well as chloroacetic acid and PVC. This production of hydrochloric acid is often integrated with its captive use on-site. In the chemical reactions, hydrogen atoms on the hydrocarbon are replaced by chlorine atoms, whereupon the released hydrogen atom recombines with the spare atom from the chlorine molecule, forming hydrogen chloride. Fluorination is a subsequent chlorine-replacement reaction, again producing hydrogen chloride:
RCl + HF → RF + HCl
The resulting hydrogen chloride is either reused directly or absorbed in water, resulting in hydrochloric acid of technical or industrial grade.
Laboratory methods
Small amounts of hydrogen chloride for laboratory use can be generated in an HCl generator by dehydrating hydrochloric acid with either sulfuric acid or anhydrous calcium chloride. Alternatively, HCl can be generated by the reaction of sulfuric acid with sodium chloride:
NaCl + H2SO4 → NaHSO4 + HCl
This reaction occurs at room temperature. Provided there is NaCl remaining in the generator and it is heated above 200 °C, the reaction proceeds further:
NaCl + NaHSO4 → Na2SO4 + HCl
For such generators to function, the reagents should be dry.
Hydrogen chloride can also be prepared by the hydrolysis of certain reactive chloride compounds such as phosphorus chlorides, thionyl chloride (SOCl2), and acyl chlorides. For example, cold water can be gradually dripped onto phosphorus pentachloride (PCl5) to give HCl:
PCl5 + H2O → POCl3 + 2 HCl
Applications
Most hydrogen chloride is consumed in the production of hydrochloric acid. It is also used in the production of vinyl chloride and many alkyl chlorides. Trichlorosilane, a precursor to ultrapure silicon, is produced by the reaction of hydrogen chloride and silicon at around 300 °C.
History
Around 900, the authors of the Arabic writings attributed to Jabir ibn Hayyan (Latin: Geber) and the Persian physician and alchemist Abu Bakr al-Razi (c. 865–925, Latin: Rhazes) were experimenting with sal ammoniac (ammonium chloride), which when it was distilled together with vitriol (hydrated sulfates of various metals) produced hydrogen chloride. It is possible that in one of his experiments, al-Razi stumbled upon a primitive method to produce hydrochloric acid. However, it appears that in most of these early experiments with chloride salts, the gaseous products were discarded, and hydrogen chloride may have been produced many times before it was discovered that it can be put to chemical use.
One of the first such uses was the synthesis of mercury(II) chloride (corrosive sublimate), whose production from the heating of mercury either with alum and ammonium chloride or with vitriol and sodium chloride was first described in the De aluminibus et salibus ("On Alums and Salts"), an eleventh- or twelfth-century Arabic text falsely attributed to Abu Bakr al-Razi and translated into Latin by Gerard of Cremona (1144–1187).
Another important development was the discovery by pseudo-Geber (in the De inventione veritatis, "On the Discovery of Truth", after c. 1300) that by adding ammonium chloride to nitric acid, a strong solvent capable of dissolving gold (i.e., aqua regia) could be produced.
After the discovery in the late sixteenth century of the process by which unmixed hydrochloric acid can be prepared, it was recognized that this new acid (then known as spirit of salt or acidum salis) released vaporous hydrogen chloride, which was called marine acid air. In the 17th century, Johann Rudolf Glauber used salt (sodium chloride) and sulfuric acid for the preparation of sodium sulfate, releasing hydrogen chloride gas (see production, above). In 1772, Carl Wilhelm Scheele also reported this reaction and is sometimes credited with its discovery. Joseph Priestley prepared hydrogen chloride in 1772, and in 1810 Humphry Davy established that it is composed of hydrogen and chlorine.
During the Industrial Revolution, demand for alkaline substances such as soda ash increased, and Nicolas Leblanc developed a new industrial-scale process for producing the soda ash. In the Leblanc process, salt was converted to soda ash, using sulfuric acid, limestone, and coal, giving hydrogen chloride as by-product. Initially, this gas was vented to air, but the Alkali Act of 1863 prohibited such release, so then soda ash producers absorbed the HCl waste gas in water, producing hydrochloric acid on an industrial scale. Later, the Hargreaves process was developed, which is similar to the Leblanc process except sulfur dioxide, water, and air are used instead of sulfuric acid in a reaction which is exothermic overall. In the early 20th century the Leblanc process was effectively replaced by the Solvay process, which did not produce HCl. However, hydrogen chloride production continued as a step in hydrochloric acid production.
Historical uses of hydrogen chloride in the 20th century include hydrochlorinations of alkynes in producing the chlorinated monomers chloroprene and vinyl chloride, which are subsequently polymerized to make polychloroprene (Neoprene) and polyvinyl chloride (PVC), respectively. In the production of vinyl chloride, acetylene () is hydrochlorinated by adding the HCl across the triple bond of the molecule, turning the triple into a double bond, yielding vinyl chloride.
The "acetylene process", used until the 1960s for making chloroprene, starts out by joining two acetylene molecules, and then adds HCl to the joined intermediate across the triple bond to convert it to chloroprene as shown here:
This "acetylene process" has been replaced by a process which adds to the double bond of ethylene instead, and subsequent elimination produces HCl instead, as well as chloroprene.
Safety
Hydrogen chloride forms corrosive hydrochloric acid on contact with water found in body tissue. Inhalation of the fumes can cause coughing, choking, inflammation of the nose, throat, and upper respiratory tract, and in severe cases, pulmonary edema, circulatory system failure, and death. Skin contact can cause redness, pain, and severe chemical burns. Hydrogen chloride may cause severe burns to the eye and permanent eye damage.
The U.S. Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have established occupational exposure limits for hydrogen chloride at a ceiling of 5 ppm (7 mg/m3), and compiled extensive information on hydrogen chloride workplace safety concerns.
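The two forms of the exposure limit are consistent with the usual ideal-gas conversion mg/m³ ≈ ppm × M / Vm, where M is the molar mass and Vm the molar volume of air; the figures below (M ≈ 36.46 g/mol, Vm ≈ 24.45 L/mol at 25 °C and 1 atm) are conventional values assumed for this illustration, not taken from the cited limits.

# Converting the 5 ppm ceiling for hydrogen chloride into mg/m^3.
M_HCl = 36.46   # g/mol, molar mass of HCl
V_m = 24.45     # L/mol, molar volume of an ideal gas at 25 degC and 1 atm (assumed)

ppm = 5
mg_per_m3 = ppm * M_HCl / V_m
print(f"{ppm} ppm HCl ~ {mg_per_m3:.1f} mg/m^3")  # ~7.5 mg/m^3, quoted as 7 mg/m^3 in the published limit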
| Physical sciences | Hydrogen compounds | Chemistry |
210782 | https://en.wikipedia.org/wiki/Tern | Tern | Terns are seabirds in the family Laridae, subfamily Sterninae, that have a worldwide distribution and are normally found near the sea, rivers, or wetlands. Terns are treated in eleven genera in a subgroup of the family Laridae, which also includes several genera of gulls and the skimmers (Rynchops). They are slender, lightly built birds with long, forked tails, narrow wings, long bills, and relatively short legs. Most species are pale grey above and white below with a contrasting black cap to the head, but the marsh terns, the black-bellied tern, the Inca tern, and some noddies have dark body plumage for at least part of the year. The sexes are identical in appearance, but young birds are readily distinguishable from adults. Terns have a non-breeding plumage, which usually involves a white forehead and much-reduced black cap.
Terns are long-lived birds and are relatively free from natural predators and parasites; most species are declining in numbers due directly or indirectly to human activities, including habitat loss, pollution, disturbance, and predation by introduced mammals. The Chinese crested tern is critically endangered and three other species are classed as endangered. International agreements provide a measure of protection, but adults and eggs of some species are still used for food in the tropics.
Description
Terns range in size from the least tern, at in length and weighing , to the Caspian tern at , . They are longer-billed, lighter-bodied, and more streamlined than gulls, and their long tails and long narrow wings give them an elegance in flight. Male and female plumages are identical, although the male can be 2–5% larger than the female and often has a relatively larger bill. Sea terns have deeply forked tails, and at least a shallow "V" is shown by all other species. The noddies (genus Anous) have unusual notched-wedge shaped tails, the longest tail feathers being the middle-outer, rather than the central or outermost. Although their legs are short, terns can run well. They rarely swim, despite having webbed feet, usually landing on water only to bathe.
The majority of sea terns have light grey or white body plumage as adults, with a black cap to the head. The legs and bill are various combinations of red, orange, yellow, or black depending on species. The pale plumage is conspicuous from a distance at sea, and may attract other birds to a good feeding area for these fish-eating species. When seen against the sky, the white underparts also help to hide the hunting bird from its intended prey. The Inca tern has mainly dark plumage, and three species that mainly eat insects, black tern, white-winged tern, and black-bellied tern, have black underparts in the breeding season. Three of the noddies (brown noddy, black noddy, and lesser noddy) have dark plumage with a pale head cap, while the other two noddies (blue noddy and grey noddy, both of which were formerly placed in the genus Procelsterna) have paler grey plumage. The reason for their dark plumage is unknown, but it has been suggested that in tropical areas, where food resources are scarce, the less conspicuous colouration makes it harder for other noddies to detect a feeding bird. Plumage type, especially the head pattern, is linked to the phylogeny of the terns, and the pale-capped, dark-bodied noddies are believed to have diverged earlier than the other genera from an ancestral white-headed gull, followed by the partially black-headed Onychoprion and Sternula groupings.
Juvenile terns typically have brown- or yellow-tinged upperparts, and the feathers have dark edges that give the plumage a scaly appearance. They have dark bands on the wings and short tails. In most species, the subsequent moult does not start until after migration, the plumage then becoming more like the adult, but with some retained juvenile feathers and a white forehead with only a partial dark cap. By the second summer, the appearance is very like the adult, and full mature plumage is usually attained by the third year. After breeding, terns moult into a winter plumage, typically showing a white forehead. Heavily worn or aberrant plumages such as melanism and albinism are much rarer in terns than in gulls.
Voice
Terns have a wide repertoire of vocalisations. For example, the common tern has a distinctive alarm, kee-yah, also used as a warning to intruders, and a shorter kyar, given as an individual takes flight in response to a more serious threat; this quietens the usually noisy colony while its residents assess the danger. Other calls include a down-slurred keeur given when an adult is approaching the nest with a fish, and a kip uttered during social contact. Parents and chicks can locate one another by call, and siblings also recognise each other's vocalisations from about the twelfth day after hatching, which helps to keep the brood together.
Vocal differences reinforce species separation between closely related birds such as the least and little terns, and can help humans distinguish similar species, such as common and arctic terns, since flight calls are unique to each species.
Taxonomy
The bird order Charadriiformes contains 18 coastal seabird and wader families. Within the order, the terns form a lineage with the gulls, and, less closely, with the skimmers, skuas, and auks. Early authors such as Conrad Gessner, Francis Willughby, and William Turner did not clearly separate terns from gulls, but Linnaeus recognised the distinction in his 1758 Systema Naturae, placing the gulls in the genus Larus and the terns in Sterna. He gave Sterna the description rostrum subulatum, "awl-shaped bill", referring to the long, pointed bills typical of this group of birds, a feature that distinguishes them from the thicker-billed gulls. Behaviour and morphology suggest that the terns are more closely related to the gulls than to the skimmers or skuas, and although Charles Lucien Bonaparte created the family Sternidae for the terns in 1838, for many years they were considered to be a subfamily, Sterninae, of the gull family, Laridae. Relationships between various tern species, and between the terns and the other Charadriiformes, were formerly difficult to resolve because of a poor fossil record and the misidentification of some finds.
The terns were historically treated as a separate family, Sternidae, but following genetic research in the early twenty-first century they are now generally placed as the subfamily Sterninae within the Laridae. Most terns were formerly treated as belonging to one large genus, Sterna, with just a few dark species placed in other genera; in one 1959 paper, only the noddies and the Inca tern were excluded from Sterna. A recent analysis of DNA sequences supported the splitting of Sterna into several smaller genera. One study of part of the cytochrome b gene sequence found a close relationship between terns and a group of waders in the suborder Thinocori. These results are in disagreement with other molecular and morphological studies, and have been interpreted as showing either a large degree of molecular convergent evolution between the terns and these waders, or the retention of an ancient genotype.
Research in 2007 had suggested that the noddies were not terns at all, but were basal to all the other genera in Laridae, a taxonomy that was followed by the IOC World Bird List for several years up to 2023, but more comprehensive analysis has now shown that the noddies are basal to only the other terns, not the whole family; this has now been followed by the IOC World Bird List version 14.1 in 2024.
Etymology
The word "stearn" was used for these birds in Old English as early as the eighth century, and appears in the poem The Seafarer, written in the ninth century or earlier. Variants such as "tearn" occurred by the eleventh century, although the older form lingered on in Norfolk dialect for several centuries. As now, the term was used for the inland black tern as well as the marine species. Some authorities consider "tearn" and similar forms to be variants of "stearn", while others derive the English words from Scandinavian equivalents such as Danish and Norwegian terne or Swedish tärna, and ultimately from Old Norse þerna. Linnaeus adopted "stearn" or "sterna" (which the naturalist William Turner had used in 1544 as a Latinisation of an English word, presumably "stern", for the black tern) or a North Germanic equivalent for his genus name Sterna.
Species
The cladogram shows the relationships between the tern genera, and the currently recognised species, based on mitochondrial DNA studies, are listed below:
In addition to extant species, the fossil record includes a Miocene palaeospecies, Sterna milne-edwardsii.
The birds in the genus Anous are known as noddies, the Chlidonias species are the marsh terns, and all other species comprise the sea terns.
Distribution and habitat
Terns have a worldwide distribution, breeding on all continents including Antarctica. The northernmost and southernmost breeders are the Arctic tern and Antarctic tern respectively. Many terns breeding in temperate zones are long-distance migrants, and the Arctic tern sees more annual daylight than any other animal as it migrates from its northern breeding grounds to Antarctic waters, a return journey of more than . A common tern that hatched in Sweden and was found dead five months later on Stewart Island, New Zealand, must have flown at least . Actual flight distances are, of course, much greater than the shortest possible route. Arctic terns from Greenland were shown by radio geolocation to average on their annual migrations, while another from the Farne Islands in Northumberland tagged 'G82' covered a staggering 96,000 km in just 10 months from the end of one breeding season to the start of the next, travelling not just the length of the Atlantic Ocean and the width of the Indian Ocean, but also half way across the South Pacific to the boundary between the Ross and Amundsen Seas before returning back west.
Most terns breed on open sandy or rocky areas on coasts and islands. The yellow-billed, large-billed, and black-fronted terns breed only on rivers, and common, least and little terns also sometimes use inland locations. The marsh terns, Trudeau's tern and some Forster's terns nest in inland marshes. The black noddy and the white tern nest above ground level on cliffs or in trees. Migratory terns move to the coast after breeding, and most species winter near land, although some marine species, like the Aleutian tern, may wander far from land. The sooty tern is entirely oceanic when not breeding, and healthy young birds are not seen on land for up to five years after fledging until they return to breed. They lack waterproof plumage, so they cannot rest on the sea. Where they spend the years prior to breeding is unknown.
Behaviour
The terns are birds of open habitats that typically breed in noisy colonies and lay their eggs on bare ground with little or no nest material. Marsh terns construct floating nests from the vegetation in their wetland habitats, and a few species build simple nests in trees, on cliffs or in crevices. The white tern, uniquely, lays its single egg on a bare tree branch. Depending on the species, one to three eggs make up the clutch. Most species feed on fish caught by diving from flight, but the marsh terns are insect-eaters, and some large terns will supplement their diet with small land vertebrates. Many terns are long-distance migrants, and the Arctic tern may see more daylight in a year than any other animal.
Breeding
Terns are normally monogamous, although trios or female-female pairings have been observed in at least three species. Most terns breed annually and at the same time of year, but some tropical species may nest at intervals shorter than 12 months or asynchronously. Most terns become sexually mature when aged three, although some small species may breed in their second year. Some large sea terns, including the sooty and bridled terns, are four or older when they first breed. Terns normally breed in colonies, and are site-faithful if their habitat is sufficiently stable. A few species nest in small or dispersed groups, but most breed in colonies of up to a few hundred pairs, often alongside other seabirds such as gulls or skimmers. Large tern species tend to form larger colonies, which in the case of the sooty tern can contain up to two million pairs. Large species nest very close together and sit tightly, making it difficult for aerial predators to land among them. Smaller species are less closely packed and mob intruders. Peruvian and Damara terns have small dispersed colonies and rely on the cryptic plumage of the eggs and young for protection.
The male selects a territory, which he defends against conspecifics, and re-establishes a pair bond with his mate or attracts a new female if necessary. Courtship involves ritualised flight and ground displays, and the male often presents a fish to his partner. Most species have little or no nest, laying the eggs onto bare ground, but Trudeau's tern, Forster's tern and the marsh terns construct floating nests from the vegetation in their wetland habitats. Black and lesser noddies build nests of twigs, feathers and excreta on tree branches, and brown, blue, and grey noddies make rough platforms of grass and seaweed on cliff ledges, in cavities or on other rocky surfaces. The Inca tern nests in crevices, caves and disused burrows, such as that of a Humboldt penguin. The white tern is unique in that it lays its single egg on a bare tree branch.
Tropical species usually lay just one egg, but two or three is typical in cooler regions if there is an adequate food supply. The time taken to complete the clutch varies, but for temperate species incubation takes 21–28 days. The eggs of most gulls and terns are brown with dark splotches, so they are difficult for predators to spot on the beach. The precocial chicks fledge about four weeks after hatching; tropical species take longer because of the poorer food supply. Both parents incubate the eggs and feed the chicks, although the female does more incubating and less fishing than her partner. Young birds migrate with the adults. Terns are generally long-lived birds, with individuals typically returning for 7–10 breeding seasons. Maximum known ages include 34 years for an Arctic tern and 32 for a sooty tern. Although several other species are known to live in captivity for up to 20 years, their greatest recorded ages are underestimates because the birds can outlive their rings. Interbreeding between tern species is rare, and involves closely related species when it occurs. Hybrids recorded include common tern with roseate, Sandwich with lesser-crested, and black with white-winged.
Feeding
Most terns hunt fish by diving, often hovering first, and the particular approach technique used can help to distinguish similar species at a distance. Sea terns often hunt in association with porpoises or predatory fish, such as bluefish, tuna or bonitos, since these large marine animals drive the prey to the surface. Sooty terns feed at night as the fish rise to the surface, and are believed to sleep on the wing since they become waterlogged easily. Terns of several species will feed on invertebrates, following the plough or hunting on foot on mudflats. The marsh terns normally catch insects in the air or pick them off the surface of fresh water. Other species will sometimes use these techniques if the opportunity arises. An individual tern's foraging efficiency increases with its age.
The gull-billed tern is an opportunist predator, taking a wide variety of prey from marine, freshwater and terrestrial habitats. Depending on what is available it will eat small crabs, fish, crayfish, grasshoppers and other large insects, lizards and amphibians. Warm-blooded prey includes mice and the eggs and chicks of other beach-breeding birds; least terns, little terns and members of its own species may be victims. The greater crested tern will also occasionally catch unusual vertebrate species such as agamid lizards and green sea turtle hatchlings, and follows trawlers for discards.
The eyes of terns cannot accommodate under water, so they rely on accurate sighting from the air before they plunge-dive. Like other seabirds that feed at the surface or dive for food, terns have red oil droplets in the cones of their retinas; birds that have to look through an air/water interface have more deeply coloured carotenoid pigments in the oil drops than other species. The pigment also improves visual contrast and sharpens distance vision, especially in hazy conditions, and helps terns to locate shoals of fish, although it is uncertain whether they are sighting the phytoplankton on which the fish feed, or other feeding birds. The red colouring reduces ultraviolet sensitivity, which in any case is an adaptation more suited to terrestrial feeders like the gulls, and this protects the eye from UV damage.
Predators and parasites
The inaccessibility of many tern colonies gave them a measure of protection from mammalian predators, especially on islands, but introduced species brought by humans can seriously affect breeding birds. These can be predators such as foxes, raccoons, cats and rats, or animals that destroy the habitat, including rabbits, goats and pigs. Problems arise not only on formerly mammal-free islands, as in New Zealand, but also where an alien carnivore, such as the American mink in Scotland, presents an unfamiliar threat.
Adult terns may be hunted by owls and raptors, and their chicks and eggs may be taken by herons, crows or gulls. Less obvious nest predators include ruddy turnstones in the Arctic, and gull-billed terns in little tern colonies. Adults may be robbed of their catch by avian kleptoparasites such as frigatebirds, skuas, other terns or large gulls.
External parasites include chewing lice of the genus Saemundssonia, feather lice and fleas such as Ceratophyllus borealis. Lice are often host specific, and the closely related common and Arctic terns carry quite different species. Internal parasites include the crustacean Reighardia sternae, and tapeworms such as Ligula intestinalis and members of the genera Diphyllobothrium and Schistocephalus. Terns are normally free of blood parasites, unlike gulls that often carry Haemoproteus species. An exception is the brown noddy, which sometimes harbours protozoa of that genus. In 1961 the common tern was the first wild bird species identified as being infected with avian influenza, the H5N3 variant being found in an outbreak involving South African birds. Several species of terns have been implicated as carriers of West Nile virus.
Relationships with humans
Terns and their eggs have long been eaten by humans and island colonies were raided by sailors on long voyages since the eggs or large chicks were an easily obtained source of protein. Eggs are still illegally harvested in southern Europe, and adults of wintering birds are taken as food in West Africa and South America. The roseate tern is significantly affected by this hunting, with adult survival 10% lower than would otherwise be expected. In the West Indies, the eggs of roseate and sooty terns are believed to be aphrodisiacs, and are disproportionately targeted by egg collectors. Tern skins and feathers have long been used for making items of clothing such as capes and hats, and this became a large-scale activity in the second half of the nineteenth century when it became fashionable to use feathers in hatmaking. This trend started in Europe but soon spread to the Americas and Australia. White was the preferred colour, and sometimes wings or entire birds were used.
Terns have sometimes benefited from human activities, following the plough or fishing boats for easy food supplies, although some birds get trapped in nets or swallow plastic. Fishermen looked for feeding tern flocks, since the birds could lead them to fish shoals. Overfishing of small fish such as sand eels can lead to steep declines in the colonies relying on these prey items. More generally, the loss or disruption to tern colonies caused by human activities has caused declines in many species. Pollution has been a problem in some areas, and in the 1960s and 1970s DDT caused egg loss through thinning of the shells. In the 1980s, organochlorides caused severe declines in the Great Lakes area of the US. Because of their sensitivity to pollutants, terns are sometimes used as indicators of contamination levels.
Habitat enhancements used to increase the breeding success of terns include floating nest platforms for black, common and Caspian terns, and artificial islands created for a number of different species. More specialised interventions include providing nest boxes for roseate terns, which normally nest in the shelter of tallish vegetation, and using artificial eelgrass mats to encourage common terns to nest in areas not vulnerable to flooding.
Conservation status
A number of terns face serious threats, and the Chinese crested tern is classed as "critically endangered" by BirdLife International. It has a population of fewer than 50 birds and a breeding range of just . It is declining due to egg collection, human disturbance and the loss of coastal wetlands in China. Three other species are categorised as "endangered", with declining populations of less than 10,000 birds. The South Asian black-bellied tern is threatened by habitat loss, egg collecting for food, pollution and predation. In New Zealand, the black-fronted tern is facing a rapid fall in numbers due to predation by introduced mammals and Australian magpies. Disturbance by cattle and sheep and by human activities is also a factor. The Peruvian tern was initially damaged by the collapse of anchoveta stocks in 1972, but breeding colonies have subsequently been lost due to building, disturbance and pollution in their coastal wetlands.
The Australasian fairy tern is described as "vulnerable". Disturbance by humans, dogs and vehicles, predation by introduced species and inappropriate water level management in South Australia are the main reasons for its decline. Five species are "near threatened", indicating less severe concerns or only potential vulnerability. The elegant tern is so categorised because 95% of the population breeds on one island, Isla Rasa in the Gulf of California, and the Kerguelen tern has a population of less than 5,000 adults breeding on small and often stormy islands in the southern Indian Ocean. Three species, the Inca, Damara, and river terns, are expected to decline in the future due to habitat loss and disturbance. Some tern subspecies are endangered, including the California least tern and the Easter Island race of the grey noddy.
Most tern species are declining in numbers due to the loss or disturbance of breeding habitat, pollution and increased predation. Gull populations have increased over the last century because of reduced persecution and the availability of food from human activities, and terns have been forced out of many traditional nesting areas by the larger birds. A few species are defying the trend and showing local increases, including the Arctic tern in Scandinavia, Forster's tern around the Great Lakes, the Sandwich tern in eastern North America and its yellow-billed subspecies, the Cayenne tern, in the Caribbean.
Terns are protected by international legislation such as the Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA) and the US-Canada Migratory Bird Treaty Act of 1918. Parties to the AEWA agreement are required to engage in a wide range of conservation strategies described in a detailed action plan. The plan is intended to address key issues such as species and habitat conservation, management of human activities, research, education, and implementation. The North American legislation is similar, although there is a greater emphasis on protection.
| Biology and health sciences | Charadriiformes | null |
210792 | https://en.wikipedia.org/wiki/Skua | Skua | The skuas are a group of predatory seabirds with seven species forming the genus Stercorarius, the only genus in the family Stercorariidae. The three smaller skuas, the Arctic skua, the long-tailed skua, and the pomarine skua, are called jaegers in North American English.
The English word "skua" comes from the Faroese name for the great skua, , with the island of Skúvoy renowned for its colony of that bird. The general Faroese term for skuas is . The word "jaeger" or is German for "hunter". The genus name Stercorarius is Latin and means "of dung"; the food disgorged by other birds when pursued by skuas was once thought to be excrement.
Skuas nest on the ground in temperate, Antarctic, and Arctic regions, and are long-distance migrants. They have even been sighted at the South Pole.
Biology and habits
Outside the breeding season, skuas take fish, offal, and carrion. Many practice kleptoparasitism, which comprises up to 95% of the feeding methods of wintering skuas, by chasing gulls, terns and other seabirds to steal their catches, regardless of the size of the species attacked (up to three times heavier than the attacking skua). Larger species, such as the great skua, regularly kill and eat adult seabirds, such as puffins and gulls and have been observed killing birds as large as a grey heron. On the breeding grounds, the three, more slender northern breeding species commonly eat lemmings. Those species that breed in the southern oceans largely feed on fish that can be caught near their colonies. The eggs and chicks of other seabirds, primarily penguins, are an important food source for most skua species during the nesting season.
In the southern oceans and Antarctica region, some skua species (especially the south polar skua) will readily scavenge carcasses at breeding colonies of both penguins and pinnipeds. Skuas will also kill live penguin chicks and sick or injured adult penguins. In these areas, the skuas will often forfeit their catches to the considerably larger and very aggressive giant petrels. Skuas have also been observed to directly pilfer milk from the elephant seal's teats.
Skuas are medium to large birds, typically with grey or brown plumage, often with white markings on the wings. The skuas range in size from the long-tailed skua, Stercorarius longicauda, at , to the brown skua, Stercorarius antarcticus, at . On average, a skua is about long, and across the wings. They have longish bills with a hooked tip, and webbed feet with sharp claws. They look like large dark gulls, but have a fleshy cere above the upper mandible.
The skuas are strong, acrobatic fliers. They are generally aggressive in disposition. Potential predators approaching their nests will be quickly attacked by the parent birds, which usually target the heads of intruders – a practice known as 'divebombing'.
Taxonomy
The genus Stercorarius was introduced by the French zoologist Mathurin Jacques Brisson in 1760 with the parasitic jaeger (Stercorarius parasiticus) as the type species.
Skuas are related to gulls, waders, auks, and skimmers. In the three smaller species, all nesting exclusively in the Holarctic, breeding adults have the two central tail feathers obviously elongated, and at least some adults have white on the underparts and pale yellow on the neck. These characteristics are not shared by the larger species, all native to the Southern Hemisphere except for the great skua. Therefore, the skuas are often split into two genera, with only the smaller species retained in Stercorarius, and the large species placed in Catharacta. However, based on genetics, behavior, and feather lice, the overall relationship among the species is best expressed by placing all in a single genus. The pomarine and great skuas' mitochondrial DNA (inherited from the mother) is in fact more closely related to each other than it is to either Arctic or long-tailed skuas, or to the Southern Hemisphere species. Thus, hybridization must have played a considerable role in the evolution of the diversity of Northern Hemisphere skuas.
Species
The genus contains seven species:
| Biology and health sciences | Charadriiformes | Animals |
210799 | https://en.wikipedia.org/wiki/Sunbittern | Sunbittern | The sunbittern (Eurypyga helias) is a bittern-like bird of tropical regions of the Americas, and the sole member of the family Eurypygidae (sometimes spelled Eurypigidae) and genus Eurypyga. It is found in Central and South America, and has three subspecies. The sunbittern shows both morphological and molecular similarities with the kagu (Rhynochetos jubatus) of New Caledonia, indicating a Gondwanan origin, both species being placed in the clade Eurypygiformes.
Taxonomy
The sunbittern was traditionally placed in the Gruiformes, but this was always considered preliminary. Altogether, the bird is most similar to another bird that was provisionally placed in the Gruiformes, the kagu (Rhynochetos jubatus). Molecular studies seem to confirm that the kagu and sunbittern are each other's closest living relatives and have a similar wing display. They are probably not Gruiformes (though the proposed Metaves are just as weakly supported). Altogether, the two species seem to form a minor Gondwanan lineage which could also include the extinct adzebills and/or the mesites, and is of unclear relation to the Gruiformes proper. Notably, the kagu and mesites also have powder down.
An indeterminate fossil eurypygid has been documented from the Green River Formation of Wyoming, USA. This specimen, known from a full skeleton, is the oldest and only known fossil of the group, and suggests that eurypygids had a much more northerly range in the past. The specimen has been figured in several studies, and was given the unofficial name "Eoeurypyga olsoni" in a 2003 dissertation, but it remains formally unnamed.
Subspecies
The sunbittern was formerly treated as two species (E. helias and E. major), but now they are treated as a single species with considerable variation between the subspecies. The three subspecies are recognised on the basis of plumage characters and size. The three subspecies are allopatric.
E. h. helias – Amazonian sunbittern
E. h. major – northern sunbittern
E. h. meridionalis – foothill sunbittern
Description
The bird has a generally subdued coloration, with fine linear patterns of black, grey and brown. Its remiges however have vividly colored middle webs, which with wings fully spread show bright eyespots in red, yellow, and black. These are shown to other sunbitterns in courtship and threat displays, or used to startle potential predators. Male and female adult sunbitterns can be differentiated by small differences in the feather patterns of the throat and head. Like some other birds, the sunbittern has powder down.
The sunbittern has a long and pointed bill that is black above, and a short hallux as in shorebirds and rails. In the South American subspecies found in lowlands east of the Andes, the upperparts are mainly brown, and the legs and lower mandible are orange-yellow. The two other subspecies are greyer above, and their legs and bill are sometimes redder.
Distribution and habitat
The sunbittern's range extends from Guatemala to Brazil. The nominate race, E. h. helias, is found east of the Andes in lowland tropical South America, from the Orinoco basin, through the Amazon basin and Pantanal. The subspecies E. h. meridionalis has a more restricted distribution, being found along the East Andean slope in south-central Peru, in the lower subtropical zone at altitudes of . The final subspecies, E. h. major, is found at various altitudes from southern Guatemala through Central America and the Chocó to western Ecuador. This subspecies may also be present in southern Mexico. It has been traditionally reported from the Atlantic slope of Chiapas, but no specimens are known and there have been no recent records.
The species is found in the humid Neotropical forests, generally with an open understorey and near rivers, streams, ponds or lagoons.
Behaviour and ecology
They are cryptic birds that, when threatened, display their large wings, which exhibit a pattern resembling eyes.
Feeding
The sunbittern consumes a wide range of animal prey. Insects form an important part of the diet, with cockroaches, dragonfly larvae, flies, katydids, water beetles and moths being taken. Other invertebrate prey includes crabs, spiders, shrimps and earthworms. They will also take vertebrate prey including fish, tadpoles, toads and frogs, eels and lizards.
Sunbitterns are one of 12 species of birds in five families that have been described as fishing using baits or lures to attract prey to within striking distance. This type of behaviour falls within the common definition of tool use. In sunbitterns this behaviour has only been observed in captive birds so far.
Breeding
Sunbitterns start nesting in the early wet season and before it starts they make flight displays high in the forest canopy. They build open nests in trees, and lay two eggs with blotched markings. The young are precocial, but remain in the nest for several weeks after hatching.
| Biology and health sciences | Basics | Animals |
210820 | https://en.wikipedia.org/wiki/Phasianidae | Phasianidae | The Phasianidae are a family of heavy, ground-living birds, which includes pheasants, partridges, junglefowl, chickens, turkeys, Old World quail, and peafowl. The family includes many of the most popular gamebirds. The family includes 185 species divided into 54 genera. It was formerly broken up into two subfamilies, the Phasianinae and the Perdicinae. However, this treatment is now known to be paraphyletic and polyphyletic, respectively, and more recent evidence supports breaking it up into two subfamilies: Rollulinae and Phasianinae, with the latter containing multiple tribes within two clades. The New World quail (Odontophoridae) and guineafowl (Numididae) were formerly sometimes included in this family, but are now typically placed in families of their own; conversely, grouse and turkeys, formerly often treated as distinct families (Tetraonidae and Meleagrididae, respectively), are now known to be deeply nested within Phasianidae, so they are now included in the present family.
Description
Phasianids are terrestrial. They range in weight from in the case of the king quail to in the case of the Indian peafowl. If turkeys are included, rather than classified as a separate family, then the considerably heavier wild turkey is capable of reaching a maximum weight of more than . Length in this taxonomic family can vary from in the king quail up to (including the elongated train) in the green peafowl, thus exceeding even the true parrots in the range of lengths found within a single family of birds. Generally, sexual dimorphism is greater in larger-sized birds, with males tending to be larger than females. They are generally plump, with broad, relatively short wings and powerful legs. Many have a spur on each leg, most prominently the junglefowl (including chickens), pheasants, turkeys, and peafowl. Some, such as quails, partridges, and grouse, have reduced spurs or none at all. A few have two spurs on each of their legs instead of one, including peacock-pheasants and spurfowl. The bill is short and compact, particularly in species that dig deep in the earth for food, such as the Mearns quail. Males of the bigger galliform species often boast brightly coloured plumage, as well as facial ornaments such as combs, wattles, and/or crests.
Distribution and habitat
The Phasianidae are mostly an Old World family, with a distribution that includes most of Europe and Asia (except the far north), all of Africa except the driest deserts, and south into much of eastern Australia and (formerly) New Zealand. The Meleagridini (turkeys) are native to the New World, while the Tetraonini (grouse) are circumpolar; both of these are members of Phasianinae. The greatest diversity of species is in Southeast Asia and Africa. The Congo peacock is specific to the African Congo.
Overall, Rollulinae is restricted to the tropics of East and Southeast Asia and the mountains of Tanzania, Phasianinae have a circumpolar range in the temperate zones of both Eurasia and North America (but also range into the tropics of east and southeast Asia), and Pavoninae have a wide range across Africa, Eurasia, and Australasia in both temperate and tropical zones.
The family is generally sedentary and resident, although some members of the group, such as ptarmigans and Old World quail, undertake long migrations. Several species in the family have been widely introduced around the world, particularly pheasants, which have been brought to Europe, Australia, and the Americas specifically for hunting purposes. Captive populations of peafowl, domestic chickens, and turkeys have also escaped or been released and become feral.
Behaviour and ecology
The phasianids have a varied diet, with foods taken ranging from purely vegetarian diets of seeds, leaves, fruits, tubers, and roots, to small animals including insects, insect grubs, and even small reptiles. Most species either specialise in feeding on plant matter or are predatory, although the chicks of most species are insectivorous.
In addition to the variation in diet, a considerable amount of variation exists in breeding strategies among the Phasianidae. Compared to birds in general, a large number of species do not engage in monogamy (the typical breeding system of most birds). The francolins of Africa and some partridges are reportedly monogamous, but polygamy has been reported in the pheasants and junglefowl, some quail, and the breeding displays of peacocks have been compared to those of a lek. Nesting usually occurs on the ground; only the tragopans nest higher up in trees or stumps of bushes. Nests can vary from mounds of vegetation to slight scrapes in the ground. As many as 20 eggs can be laid in the nest, although 7-12 are the more usual numbers, with smaller numbers in tropical species. Incubation times can range from 14–30 days depending on the species, and is almost always done solely by the hen, although a few involve the male partaking in caring for the eggs and chicks, like the willow ptarmigan and bobwhite quail.
Relationship with humans
The red junglefowl of Southeast Asia is the wild ancestor of the domesticated chicken, the most important bird in agriculture, and the wild turkey similarly is the ancestor of the domestic turkey. Several species of pheasants and partridges are extremely important to humans. Ring-necked pheasants, several partridge and quail species, and some francolins have been widely introduced and managed as game birds for hunting. Several species are threatened by human activities.
Systematics and evolution
The clade Phasianidae is the largest within the Galliformes, comprising 185 species divided into 54 genera. This group includes the pheasants and partridges, junglefowl (including chickens), quail, and peafowl. Turkeys and grouse have also been recognized as having their origins in the pheasant- and partridge-like birds.
Until the early 1990s, this family was broken up into two subfamilies: the Phasianinae, including pheasants, tragopans, junglefowls, and peafowls; and the Perdicinae, including partridges, Old World quails, and francolins. Molecular phylogenies have shown that these two subfamilies are not each monophyletic, but actually constitute only one lineage with one common ancestor. For example, some partridges (genus Perdix) are more closely affiliated to pheasants, whereas Old World quails and partridges from the genus Alectoris are closer to junglefowls.
The earliest fossil records of phasianids date to the late Oligocene epoch, about 30 million years ago.
Recent genera
Taxonomy and ordering is based on Kimball et al., 2021, which was accepted by the International Ornithological Congress. Tribes and subfamily names are based on the 4th edition of the Howard and Moore Complete Checklist of the Birds of the World. Genera without a tribe are considered to belong to tribe incertae sedis.
Subfamily Rollulinae
Xenoperdix Dinesen et al., 1994 (forest partridges)
Caloperdix Blyth, 1861 (ferruginous partridge)
Rollulus Bonnaterre, 1791 (crested partridges)
Melanoperdix Jerdon, 1864 (black partridge)
Arborophila Hodgson, 1837 (hill partridges)
Subfamily Phasianinae
Phasianinae "Erectile clade"
Lerwa Hodgson, 1837 (snow partridge)
Ithaginis Wagler, 1832 (blood pheasant)
Tribe Lophophorini
Tragopan Cuvier, 1829 non Gray 1841 (tragopans)
Tetraophasis Elliot, 1871 (monal-partridges)
Lophophorus Temminck, 1813 non Agassiz 1846 (monals)
Pucrasia Gray, 1841 (koklass pheasant)
Tribe Tetraonini
Meleagris Linnaeus, 1758 (turkeys)
Bonasa Stephens, 1819 (ruffed grouse)
Tetrastes Keyserling & Blasius, 1840 (hazel grouse)
Centrocercus Swainson, 1832 (sage-grouse)
Dendragapus Elliot, 1864 (blue grouse)
Tympanuchus Gloger, 1841 (prairie-chickens and sharp-tailed grouse)
Lagopus Brisson, 1760 (ptarmigans)
Falcipennis Elliot, 1864 (Siberian grouse)
Canachites Stejneger, 1885 (spruce grouse)
Tetrao Linnaeus, 1758 (capercaillies)
Lyrurus Swainson, 1832 (black grouse)
Rhizothera Gray, 1841 (long-billed partridges)
Perdix Brisson, 1760 (true partridges)
Tribe Phasianini
Syrmaticus Wagler, 1832 (long-tailed pheasants)
Chrysolophus Gray, 1834 (ruffed pheasants)
Phasianus Linnaeus, 1758 (true pheasants)
Catreus Cabanis, 1851 (cheer pheasant)
Crossoptilon Hodgson, 1838 (eared pheasants)
Lophura Fleming, 1822 non Gray, 1827 non Walker, 1856 (gallopheasants)
Phasianinae "Nonerectile clade"
Tribe Pavonini
Rheinardia Maingonnat, 1882 (crested arguses)
Argusianus Rafinesque, 1815 (great argus)
Afropavo Chapin, 1936 (African peafowl)
Pavo Linnaeus, 1758 (Asiatic peafowl)
Polyplectron Temminck, 1807 (peacock-pheasants)
Galloperdix Blyth, 1845 (Indian spurfowls)
Tropicoperdix Blyth, 1859 (chestnut-necklaced and green-legged partridges)
Haematortyx Sharpe, 1879 (crimson-headed partridge)
Tribe Gallini
Bambusicola Gould, 1863 (bamboo partridges)
Gallus Brisson, 1760 (junglefowl, including the domestic chicken)
Peliperdix Bonaparte, 1856 (Latham's francolin)
Ortygornis Reichenbach, 1852 (certain francolins)
Francolinus Stephens, 1819 (certain francolins)
Campocolinus Crowe et al., 2020 (certain francolins)
Scleroptila Blyth, 1852 (certain francolins)
Tribe Coturnicini
Tetraogallus Gray, 1832 (snowcocks)
Ammoperdix Gould, 1851 (sand and see-see partridges)
Synoicus Bosc, 1792 (certain quails)
Margaroperdix Reichenbach, 1853 (Madagascar partridge)
Coturnix Garsault, 1764 (typical Old World quails)
Alectoris Kaup, 1829 (rock partridges)
Perdicula Hodgson, 1837 (bush-quails)
Ophrysia Bonaparte, 1856 (Himalayan quail)
Pternistis Wagler, 1832 (partridge-francolins; African spurfowls)
Past taxonomy
This is the paraphyletic former ordering of Phasianidae, which primarily grouped genera based on appearance and body plans.
Subfamily Perdicinae Horsfield, 1821
Xenoperdix Dinesen et al., 1994 (forest partridges)
Caloperdix Blyth, 1861
Rollulus Bonnaterre, 1791 (crested partridges)
Melanoperdix Jerdon, 1864
Arborophila Hodgson, 1837 (hill partridges)
Rhizothera Gray, 1841
Lerwa Hodgson, 1837
Tropicoperdix Blyth, 1859
Ammoperdix Gould 1851 (see-see and sand partridges)
Synoicus Bosc 1792
Margaroperdix Reichenbach 1853
Coturnix Garsault 1764 (typical Old World quails)
Tetraogallus Gray 1832 (snowcocks)
Alectoris Kaup 1829 (rock partridges)
Pternistis Wagler 1832 (partridge-francolins; African spurfowls)
Ophrysia Bonaparte 1856
Perdicula Hodgson 1837 (bush-quails)
Bambusicola Gould 1863 (bamboo partridges)
Scleroptila Blyth 1852
Peliperdix Bonaparte 1856
Francolinus Stephens 1819 (true francolins)
Ortygornis Reichenbach, 1852
Campocolinus Crowe et al 2020
Perdix Brisson, 1760 (true partridges)
Haematortyx Sharpe, 1879
Galloperdix Blyth, 1845 (Indian spurfowls)
Tetraophasis Elliot, 1871 (monal-partridges)
Subfamily Meleagridinae
Meleagris Linnaeus, 1758 (turkeys)
Subfamily Phasianinae (pheasants, peafowl, junglefowl, monals, and tragopans)
Polyplectron Temminck, 1807 (peacock-pheasants)
Gallus Brisson, 1760 (junglefowl, including the domestic chicken)
Ithaginis Wagler, 1832
Pucrasia Gray, 1841 (koklass pheasant)
Tragopan Cuvier, 1829 non Gray 1841 (tragopans)
Lophophorus Temminck, 1813 non Agassiz, 1846 (monals)
Rheinardia Maingonnat 1882
Argusianus Rafinesque 1815 (argus pheasants)
Afropavo Chapin, 1936 (African peafowl)
Pavo Linnaeus, 1758 (Asiatic peafowl)
Syrmaticus Wagler, 1832 (long-tailed pheasants)
Phasianus Linnaeus, 1758 (true pheasants)
Chrysolophus Gray, 1834 (ruffed pheasants)
Lophura Fleming, 1822 non Gray, 1827 non Walker, 1856 (gallopheasants)
Catreus Cabanis, 1851
Crossoptilon Hodgson, 1838 (eared pheasants)
Subfamily Tetraoninae (grouse)
Bonasa Stephens, 1819 (ruffed grouse)
Tetrastes Keyserling & Blasius, 1840 (hazel grouse)
Centrocercus Swainson 1832 (sage-grouse)
Dendragapus Elliot, 1864 (blue grouse)
Tympanuchus Gloger, 1841 (prairie-chickens and sharp-tailed grouse)
Lagopus Brisson, 1760 (ptarmigans)
Falcipennis Elliot, 1864 (Siberian grouse)
Canachites Stejneger, 1885 (spruce grouse)
Tetrao Linnaeus, 1758 (capercaillies)
Lyrurus Swainson, 1832 (black grouse)
Fossil genera
Extinct genus assignment follows Mikko's Phylogeny Archive and the Paleofile.com website.
†"Alectoris" pliocaena Tugarinov, 1940
†Bantamyx Kuročkin, 1982
†Centuriavis Ksepka, Early, Dzikiewicz & Balanoff, 2022
†Diangallus Hou, 1985
†“Gallus” beremendensis Jánossy, 1976
†“Gallus” europaeus Harrison, 1978
†Lophogallus Zelenkov & Kuročkin, 2010
†Megalocoturnix Sánchez Marco, 2009
†Miophasianus Brodkorb, 1952 [Miophasianus Lambrecht 1933 nomen nudum ; Miogallus Lambrecht 1933]
†Palaeocryptonyx Depéret, 1892 [Chauvireria Boev 1997; Pliogallus Tugarinov 1940b non Gaillard 1939; Lambrechtia Janossy, 1974 ]
†Palaeortyx Milne-Edwards, 1869 [Palaeoperdix Milne-Edwards, 1869]
†Panraogallus Li et al., 2018
†Plioperdix Kretzoi, 1955 [Pliogallus Tugarinov 1940 non Gaillard 1939]
†Rustaviornis Burchak-Abramovich & Meladze, 1972
†Schaubortyx Brodkorb, 1964
†Shandongornis Yeh, 1997
†Shanxiornis Wang et al., 2006
†Tologuica Zelenkov & Kuročkin, 2009
Tribe Tetraonini (grouse)
†Palaealectoris Wetmore, 1930
†Proagriocharis Martin & Tate, 1970
†Rhegminornis Wetmore, 1943
Phylogeny
Cladogram based on a 2021 study by De Chen and collaborators that sequenced DNA flanking ultra-conserved elements. The extinct Himalayan quail (genus Ophrysia) was not included in the study. The species numbers and the inclusion of the genera Canachites, Ortygornis, Campocolinus and Synoicus follows the list maintained by Frank Gill, Pamela Rasmussen and David Donsker on behalf of the International Ornithologists' Union.
| Biology and health sciences | Galliformes | Animals |
210845 | https://en.wikipedia.org/wiki/Woodpecker | Woodpecker | Woodpeckers are part of the bird family Picidae, which also includes the piculets, wrynecks and sapsuckers. Members of this family are found worldwide, except for Australia, New Guinea, New Zealand, Madagascar and the extreme polar regions. Most species live in forests or woodland habitats, although a few species are known that live in treeless areas, such as rocky hillsides and deserts, and the Gila woodpecker specialises in exploiting cacti.
Members of this family are chiefly known for their characteristic behaviour. They mostly forage for insect prey on the trunks and branches of trees, and often communicate by drumming with their beaks, producing a reverberatory sound that can be heard at some distance. Some species vary their diet with fruits, birds' eggs, small animals, tree sap, human scraps, and carrion. They usually nest and roost in holes that they excavate in tree trunks, and their abandoned holes are of importance to other cavity-nesting birds. They sometimes come into conflict with humans when they make holes in buildings or feed on fruit crops, but perform a useful service by their removal of insect pests on trees.
The Picidae are one of nine living families in the order Piciformes, the others being barbets (comprising three families), toucans, toucan-barbets, and honeyguides, which (along with woodpeckers) comprise the clade Pici, and the jacamars and puffbirds in the clade Galbuli. DNA sequencing has confirmed the sister relationships of these two groups. The family Picidae includes about 240 species arranged in 35 genera. Almost 20 species are threatened with extinction due to loss of habitat or habitat fragmentation, with one, the Bermuda flicker, being extinct and a further two possibly being so.
General characteristics
Woodpeckers include the tiny piculets, the smallest of which appears to be the bar-breasted piculet at in length and a weight of . Some of the largest woodpeckers can be more than in length. The largest surviving species is the great slaty woodpecker, which weighs on average and up to , and measures , but the extinct imperial woodpecker, at , and ivory-billed woodpecker, around and , were probably both larger.
The plumage of woodpeckers varies from drab to conspicuous. The colours of many species are based on olive and brown and some are pied, suggesting a need for camouflage; others are boldly patterned in black, white, and red, and many have a crest or tufted feathers on their crowns. Woodpeckers tend to be sexually dimorphic, but differences between the sexes are generally small; exceptions to this are Williamson's sapsucker and the orange-backed woodpecker, which differ markedly. The plumage is moulted fully once a year apart from the wrynecks, which have an additional partial moult before breeding.
Woodpeckers, piculets, and wrynecks all possess characteristic zygodactyl feet, consisting of four toes, the first (hallux) and the fourth facing backward and the second and third facing forward. This foot arrangement is good for grasping the limbs and trunks of trees. Members of this family can walk vertically up tree trunks, which is beneficial for activities such as foraging for food or nest excavation. In addition to their strong claws and feet, woodpeckers have short, strong legs. This is typical of birds that regularly forage on trunks. Exceptions are the black-backed woodpecker and the American and Eurasian three-toed woodpeckers, which have only three toes on each foot. The tails of all woodpeckers, except the piculets and wrynecks, are stiffened, and when the bird perches on a vertical surface, the tail and feet work together to support it.
Woodpeckers have strong bills that they use for drilling and drumming on trees, and long, sticky tongues for extracting food (insects and larvae). Woodpecker bills are typically longer, sharper, and stronger than the bills of piculets and wrynecks, but their morphology is very similar. The bill's chisel-like tip is kept sharp by the pecking action in birds that regularly use it on wood. The beak consists of three layers: an outer sheath called the rhamphotheca, made of scales formed from keratin proteins; an inner layer of bone, which has a large cavity and mineralised collagen fibers; and a middle layer of porous bone which connects the two other layers.
Furthermore, the tongue bone (or hyoid bone) of the woodpecker is very long, and winds around the skull through a special cavity, thereby cushioning the brain. Combined, this anatomy helps the beak absorb mechanical stress. Species of woodpecker and flicker that use their bills in soil or for probing as opposed to regular hammering tend to have longer and more decurved bills. Due to their smaller bill size, many piculets and wrynecks forage in decaying wood more often than woodpeckers. Their long, sticky tongues, which possess barbs, aid these birds in grabbing and extracting insects from deep within a hole in a tree. The tongue was reported to be used to spear grubs, but more detailed studies published in 2004 have shown that the tongue instead wraps around the prey before being pulled out.
Many of the foraging, breeding, and signaling behaviors of woodpeckers involve drumming and hammering using their bills. To prevent brain damage from the rapid and repeated powerful impacts, woodpeckers have a number of physical features that protect their brains. These include a relatively small and smooth brain, narrow subdural space, little cerebrospinal fluid surrounding it to prevent it from moving back and forth inside the skull during pecking, the orientation of the brain within the skull (which maximises the contact area between the brain and the skull) and the short duration of contact. The skull consists of strong but compressible, sponge-like bone, which is most concentrated in the forehead and the back of the skull. Another anatomical adaptation of woodpeckers is the enormously elongated hyoid bone which subdivides, passes on either side of the spinal column and wraps around the brain case, before ending in the right nostril cavity. It plays the role of a safety belt.
Computer simulations have shown that 99.7% of the energy generated in pecking is stored in the form of strain energy, which is distributed throughout the bird's body, with only a small remaining fraction of the energy going into the brain. The pecking also causes the woodpecker's skull to heat up, which is part of the reason why they often peck in short bursts with brief breaks in between, giving the head some time to cool. During the millisecond before contact with wood, a thickened nictitating membrane closes, protecting the eye from flying debris. These membranes also prevent the retina from tearing. Their nostrils are also protected; they are often slit-like and have special feathers to cover them. Woodpeckers are capable of repeated pecking on a tree at high decelerations on the order of 1,000 g.
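As a rough illustrative calculation (the head mass and impact speed below are assumed round figures, not values from the cited simulations), the stated energy partition implies that only a tiny amount of each peck's energy reaches the brain:

\[
E_\text{peck} = \tfrac{1}{2} m v^2 \approx \tfrac{1}{2}\,(0.01\ \text{kg})(6\ \text{m/s})^2 \approx 0.18\ \text{J},
\qquad
E_\text{brain} \approx (1 - 0.997)\,E_\text{peck} \approx 5 \times 10^{-4}\ \text{J},
\]

so even a long bout of drumming delivers well under a joule of mechanical energy to the brain.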
Some large woodpeckers such as Dryocopus have a fast, direct form of flight, but the majority of species have a typical undulating flight pattern consisting of a series of rapid flaps followed by a swooping glide. Many birds in the genus Melanerpes have distinctive, rowing wing-strokes while the piculets engage in short bursts of rapid direct flight.
Distribution, habitat, and movements
Global distribution
Woodpeckers have a mostly cosmopolitan distribution, although they are absent from Australasia, Madagascar, and Antarctica. They are also absent from some of the world's oceanic islands, although many insular species are found on continental islands. The true woodpeckers, subfamily Picinae, are distributed across the entire range of the family. The Picumninae piculets have a pantropical distribution, with species in Southeast Asia, Africa, and the Neotropics, with the greatest diversity being in South America. The second piculet subfamily, the Sasiinae, contains the African piculet and two species in the genus Sasia that are found in Southeast Asia. The wrynecks (Jynginae) are found exclusively in the Old World, with the two species occurring in Europe, Asia, and Africa.
Most woodpeckers are sedentary, but a few examples of migratory species are known, such as the rufous-bellied woodpecker, yellow-bellied sapsucker, and Eurasian wryneck, which breeds in Europe and west Asia and migrates to the Sahel in Africa in the winter. More northerly populations of Lewis's woodpecker, northern flicker, Williamson's sapsucker, red-breasted sapsucker, and red-naped sapsucker all move southwards in the fall in North America. Most woodpecker movements can be described as dispersive, such as when young birds seek territories after fledging, or eruptive, to escape harsh weather conditions. Several species are altitudinal migrants, for example the grey-capped pygmy woodpecker, which moves to lowlands from hills during winter. The woodpeckers that do migrate do so during the day.
Habitat requirements
Overall, woodpeckers are arboreal birds of wooded habitats. They reach their greatest diversity in tropical rainforests, but occur in almost all suitable habitats, including woodlands, savannahs, scrublands, and bamboo forests. Even grasslands and deserts have been colonised by various species. These habitats are more easily occupied where at least a few trees exist or, in the case of desert species like the Gila woodpecker, where tall cacti are available for nesting. Some are specialists and are associated with coniferous or deciduous woodlands, or even, like the acorn woodpecker, with individual tree genera (oaks in this case). Other species are generalists and are able to adapt to forest clearance by exploiting secondary growth, plantations, orchards, and parks. In general, forest-dwelling species need rotting or dead wood on which to forage.
Several species are adapted to spending a portion of their time feeding on the ground, and a very small minority have abandoned trees entirely and nest in holes in the ground. The ground woodpecker is one such species, inhabiting the rocky and grassy hills of South Africa, and the Andean flicker is another.
The Swiss Ornithological Institute has set up a monitoring program to record breeding populations of woodland birds. This has shown that deadwood is an important habitat requirement for the black woodpecker, great spotted woodpecker, middle spotted woodpecker, lesser spotted woodpecker, European green woodpecker, and Eurasian three-toed woodpecker. Populations of all these species increased by varying amounts from 1990 to 2008. During this period, the amount of deadwood in the forest increased and the range of the white-backed woodpecker enlarged as it extended eastwards. With the exception of the green and middle-spotted woodpeckers, the increase in the amount of deadwood is likely to be the major factor explaining the population increase of these species.
Behavior
Most woodpeckers live solitary lives, but their behavior ranges from highly antisocial species that are aggressive towards their own kind, to species that live in groups. Solitary species defend such feeding resources as a termite colony or fruit-laden tree, driving away other conspecifics and returning frequently until the resource is exhausted. Aggressive behaviors include bill pointing and jabbing, head shaking, wing flicking, chasing, drumming, and vocalizations. Ritual actions do not usually result in contact, and birds may "freeze" for a while before they resume their dispute. The colored patches may be flaunted, and in some instances, these antagonistic behaviors resemble courtship rituals.
Group-living species tend to be communal group breeders. In addition to these species, a number of species may join mixed-species foraging flocks with other insectivorous birds, although they tend to stay at the edges of these groups. Joining these flocks allows woodpeckers to decrease their anti-predator vigilance and increase their feeding rate. Woodpeckers are diurnal, roosting at night inside holes and crevices. In many species the roost will become the nest-site during the breeding season, but in some species they have separate functions; the grey-and-buff woodpecker makes several shallow holes for roosting which are quite distinct from its nesting site. Most birds roost alone and will oust intruders from their chosen site, but the Magellanic woodpecker and acorn woodpecker are cooperative roosters.
Drumming
Drumming is a form of nonvocal communication used by most species of woodpeckers, and involves the bill being repeatedly struck on a hard surface with great rapidity. After a pause, the drum roll is repeated, with each species having a pattern that is unique in the number of beats in the roll, the length of the roll, the length of the gap between rolls, and the cadence. The drumming is mainly a territorial call, equivalent to the song of a passerine. Woodpeckers choose a surface that resonates, such as a hollow tree, and may use man-made structures such as gutters and downpipes. Drumming serves for the mutual recognition of conspecifics and plays a part in courtship rituals. Individual birds are thought to be able to distinguish the drumming of their mates and those of their neighbors. Drumming can be reliably used to distinguish between multiple species in a region, even if those species are phenotypically similar. Cadence (or the mean number of drum beats per second) is heavily conserved within species. Comparative analyses within species between distant geographic populations have shown that cadence is heavily conserved across species' respective ranges, indicating that there likely are not 'dialects' as seen in passerine song. Drumming in woodpeckers is controlled by a set of nuclei in the forebrain that closely resemble the brain regions that underlie song learning and production in many songbirds. A 2023 study revealed a strong association between extractive foraging and relative brain size across the Family Picidae, indicating that a larger brain does not necessarily result in more powerful drumming abilities, but is implicated in foraging behaviors, as the act of sensing and retrieving wood-boring larvae from woody substrates likely requires an increase in sensory and motor control capabilities.
Calls
Woodpeckers do not have such a wide range of songs and calls as do passerine birds, and the sounds they make tend to be simpler in structure. Calls produced include brief, high-pitched notes, trills, rattles, twittering, whistling, chattering, nasal churrs, screams, and wails. These calls are used by both sexes in communication and are related to the circumstances of the occasion; these include courtship, territorial disputes, and alarm calls. Each species has its own range of calls, which tend to be in the 1.0 to 2.5 kHz range for efficient transmission through forested environments. Mated couples may exchange muted, low-pitched calls, and nestlings often issue noisy begging calls from inside their nest cavity. The wrynecks have a more musical song, and in some areas, the song of the newly arrived Eurasian wryneck is considered to be the harbinger of spring. The piculets either have a song consisting of a long, descending trill, or a descending series of two to six (sometimes more) individual notes, and this song alerts ornithologists to the presence of the birds, as they are easily overlooked.
Diet and feeding
Most woodpecker species feed on insects and other invertebrates living under bark and in wood, but overall, the family is characterized by its dietary flexibility, with many species being both highly omnivorous and opportunistic. The diet includes ants, termites, beetles and their larvae, caterpillars, spiders, other arthropods, bird eggs, nestlings, small rodents, lizards, fruit, nuts, and sap. Many insects and their grubs are taken from living and dead trees by excavation. The bird may hear sounds from inside the timber indicating where creating a hole would be productive. Crustaceans, molluscs, and carrion may be eaten by some species, including the great spotted woodpecker, and bird feeders are visited for suet and domestic scraps.
Other means are also used to garner prey. Some species, such as the red-naped sapsucker, sally into the air to catch flying insects, and many species probe into crevices and under bark, or glean prey from leaves and twigs. The rufous woodpecker specialises in attacking the nests of arboreal ants, and the buff-spotted woodpecker feeds on and nests in termite mounds. Other species, such as the wrynecks and the Andean flicker, feed wholly or partly on the ground.
Ecologically, woodpeckers help to keep trees healthy by keeping them from suffering mass infestations. The family is noted for its ability to acquire wood-boring grubs from the trunks and branches, whether the timber is alive or dead. Having hammered a hole into the wood, the bird extracts the prey with its long, barbed tongue. Woodpeckers consume beetles that burrow into trees, removing as many as 85% of emerald ash borer larvae from individual ash trees.
The ability to excavate allows woodpeckers to obtain tree sap, an important source of food for some species. Most famously, the sapsuckers (genus Sphyrapicus) feed in this fashion, but the technique is not restricted to these, and others such as the acorn woodpecker and white-headed woodpecker also feed on sap. The technique was once thought to be restricted to the New World, but Old World species, such as the Arabian woodpecker and great spotted woodpecker, also feed in this way.
Breeding
All members of the family Picidae nest in cavities, nearly always in the trunks and branches of trees, well away from the foliage. Where possible, an area of rotten wood surrounded by sound timber is used. Where trees are in short supply, the gilded flicker and ladder-backed woodpecker excavate holes in cactus, and the Andean flicker and ground woodpecker dig holes in earth banks. The campo flicker sometimes chooses termite mounds, the rufous woodpecker prefers to use ants' nests in trees and the bamboo woodpecker specialises in bamboos. Woodpeckers also excavate nest holes in residential and commercial structures and wooden utility poles.
Woodpeckers and piculets excavate their own nests, but wrynecks do not, and need to find pre-existing cavities. A typical nest has a round entrance hole that just fits the bird, leading to an enlarged vertical chamber below. No nesting material is used, apart from some wood chips produced during the excavation; other wood chips are liberally scattered on the ground, thus providing visual evidence of the site of the nest. Many species of woodpeckers excavate one hole per breeding season, sometimes after multiple attempts. Excavation takes around a month, and abandoned holes are used by other cavity-nesting birds and mammals that are unable to excavate their own holes.
Cavities are in great demand for nesting by other cavity nesters, so woodpeckers face competition for the nesting sites they excavate from the moment the hole becomes usable. This may come from other species of woodpecker, or other cavity-nesting birds such as swallows and starlings. Woodpeckers may aggressively harass potential competitors, and also use other strategies to reduce the chance of being usurped from their nesting sites; for example, the red-crowned woodpecker digs its nest in the underside of a small branch, which reduces the chance that a larger species will take it over and expand it.
Members of Picidae are typically monogamous, with a few species breeding cooperatively and some polygamy reported in a few others. Polyandry, where a female raises two broods with two separate males, has also been reported in the West Indian woodpecker. Another unusual social system is that of the acorn woodpecker, which is a polygynandrous cooperative breeder where groups of up to 12 individuals breed and help to raise the young. Young birds from previous years may stay behind to help raise the group's young, and studies have found reproductive success for the group goes up with group size, but individual success goes down. Birds may be forced to remain in groups due to a lack of habitat to which to disperse.
A pair works together to help build the nest, incubate the eggs, and raise their altricial young. In most species, though, the male does most of the nest excavation and takes the night shift while incubating the eggs. A clutch usually consists of two to five round, white eggs. Since these birds are cavity nesters, their eggs do not need to be camouflaged and the white color helps the parents to see them in dim light. The eggs are incubated for about 11–14 days before they hatch. About 18–30 days are then needed before the chicks are fully fledged and ready to leave the nest. In most species, soon after this, the young are left to fend for themselves, exceptions being the various social species, and the Hispaniolan woodpecker, where adults continue to feed their young for several months. In general, cavity nesting is a successful strategy and a higher proportion of young is reared than is the case with birds that nest in the open. In Africa, several species of honeyguide are brood parasites of woodpeckers.
Systematics and evolutionary history
The Picidae are just one of nine living families in the order Piciformes. Other members of this group, such as the jacamars, puffbirds, barbets, toucans, and honeyguides, have traditionally been thought to be closely related to the woodpecker family (true woodpeckers, piculets, wrynecks, and sapsuckers). The clade Pici (woodpeckers, barbets, toucans, and honeyguides) is well supported and shares a zygodactyl foot with the Galbuli (puffbirds and jacamars). More recently, several DNA sequence analyses have confirmed that Pici and Galbuli are sister groups.
The phylogenetic relationship between the woodpeckers and the eight other families in the order Piciformes is shown in the cladogram below. The number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
The name Picidae for the family was introduced by English zoologist William Elford Leach in a guide to the contents of the British Museum published in 1819. The phylogeny has been updated according to new knowledge about convergence patterns and evolutionary history. Most notably, the relationship of the Picinae genera has been largely clarified, and the Antillean piculet was found to be a surviving offshoot of protowoodpeckers. Genetic analysis supports the monophyly of the Picidae, which seem to have originated in the Old World, but the geographic origin of the Picinae is unclear. The Picumninae are recovered as paraphyletic. Morphological and behavioural characters, in addition to DNA evidence, highlight the genus Hemicircus as the sister group of all remaining true woodpeckers, and support a sister-group relationship between the true woodpecker tribes Dendropicini and Malarpicini.
The evolutionary history of this group is not well documented, but the known fossils allow some preliminary conclusions; the earliest known modern picids were piculet-like forms of the Late Oligocene, about 25 million years ago (Mya). By that time, however, the group was already present in the Americas and Europe, and they actually may have evolved much earlier, maybe as early as the Early Eocene (50 Mya). The modern subfamilies appear to be rather young by comparison; until the mid-Miocene (10–15 Mya), all picids seem to have been small or mid-sized birds similar to a mixture between a piculet and a wryneck. A feather enclosed in fossil amber from the Dominican Republic, dated to about 25 Mya, however, seems to indicate that the Nesoctitinae were already a distinct lineage by then.
Stepwise adaptations for drilling, tapping, and climbing head first on vertical surfaces have been suggested. The last common ancestor of woodpeckers (Picidae) was incapable of climbing up tree trunks or excavating nest cavities by drilling with its beak. The first adaptations for drilling (including reinforced rhamphotheca, frontal overhang, and processus dorsalis pterygoidei) evolved in the ancestral lineage of piculets and true woodpeckers. Additional adaptations for drilling and tapping (enlarged condylus lateralis of the quadrate and fused lower mandible) evolved in the ancestral lineage of true woodpeckers (excepting Hemicircus). The inner rectrix pairs became stiffened, and the pygostyle lamina was enlarged in the ancestral lineage of true woodpeckers (Hemicircus included), which facilitated climbing head first up tree limbs. Excepting the genus Hemicircus, the tail feathers were further transformed for specialized support, the pygostyle disc became greatly enlarged, and the ectropodactyl toe arrangement evolved. These latter characters may have facilitated enormous increases in body size in some lineages.
Prehistoric representatives of the extant Picidae genera are treated in the genus articles. An enigmatic form based on a coracoid, found in Pliocene deposits of New Providence in the Bahamas, has been described as Bathoceleus hyphalus and probably also is a woodpecker.
The following cladogram is based on the comprehensive molecular phylogenetic study of the woodpeckers published in 2017 together with the list of bird species maintained by Frank Gill, Pamela Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC). The Cuban green woodpecker in the monotypic genus Xiphidiopicus was not included in the study. The relative positions of Picumninae, Sasiinae and Picinae in the cladogram are uncertain. In the 2017 study the results depended upon which of two different statistical procedures were used to analyse the DNA sequence data. One method found that Sasiinae was sister to Picinae (as shown below), the other method found that Sasiinae was sister to a clade containing both Picumninae and Picinae.
List of genera
The woodpecker family Picidae contains 37 genera. For more detail, see list of woodpecker species.
Family: Picidae
Subfamily: Jynginae – wrynecks
Jynx (2 species)
Subfamily: Picumninae – piculets
Picumnus – piculets (25 species)
Subfamily: Sasiinae
Verreauxia – African piculet
Sasia – Asian piculets (2 species)
Subfamily: Picinae – true woodpeckers
Tribe Nesoctitini
Nesoctites – monotypic: Antillean piculet
Tribe Hemicircini
Hemicircus – 2 species
Tribe Picini
Micropternus – monotypic: rufous woodpecker
Meiglyptes – 4 species
Gecinulus – 3 species
Dinopium – 5 species (flamebacks)
Picus – 14 species
Chrysophlegma – 3 species
Pardipicus – 2 species
Geocolaptes – monotypic: ground woodpecker
Campethera – 11 species
Mulleripicus – 4 species
Dryocopus – 6 species
Celeus – 13 species
Piculus – 7 species
Colaptes – 14 species
Tribe Campephilini
Campephilus – 12 species
Blythipicus – 2 species
Reinwardtipicus – monotypic: orange-backed woodpecker
Chrysocolaptes – 10 species (flamebacks)
Tribe Melanerpini
Sphyrapicus – 4 species (sapsuckers)
Melanerpes – 24 species
Picoides – 3 species
Yungipicus – 7 species
Leiopicus – monotypic: yellow-crowned woodpecker
Dendrocoptes – 3 species
Chloropicus – 3 species
Dendropicos – 12 species
Dendrocopos – 12 species
Dryobates – 5 species
Leuconotopicus – 6 species
Veniliornis – 14 species
Xiphidiopicus – monotypic: Cuban green woodpecker
Incertae sedis fossils
Genus: †Palaeopicus (Late Oligocene of France)
†Picidae gen. et sp. indet. (Middle Miocene of New Mexico, US)
†Picidae gen. et sp. indet. (Late Miocene of Gargano Peninsula, Italy)
Genus: †Palaeonerpes (Ogallala Early Pliocene of Hitchcock County, US) – possibly dendropicine
Genus: †Pliopicus (Early Pliocene of Kansas, US) – possibly dendropicine
cf. Colaptes DMNH 1262 (Early Pliocene of Ainsworth, US) – malarpicine?
Relationship with humans
In general, humans consider woodpeckers in a favourable light; they are viewed as interesting birds and fascinating to watch as they drum or forage, but their activities are not universally appreciated. Many woodpecker species are known to excavate holes in buildings, fencing, and utility poles, creating health and/or safety issues for affected structures. Such activity is very difficult to discourage and can be costly to repair.
Woodpeckers also drum on various reverberatory structures on buildings such as gutters, downspouts, chimneys, vents, and aluminium sheeting. Drumming is a less-forceful type of pecking that serves to establish territory and attract mates. Houses with shingles or wooden boarding are also attractive as possible nesting or roosting sites, especially when close to large trees or woodland. Several exploratory holes may be made, especially at the junctions of vertical boards or at the corners of tongue-and-groove boarding. The birds may also drill holes in houses as they forage for insect larvae and pupae hidden behind the woodwork.
Woodpeckers sometimes cause problems when they raid fruit crops, but their foraging activities are mostly beneficial as they control forest insect pests such as the woodboring beetles that create galleries behind the bark and can kill trees. They also eat ants, which may be tending sap-sucking pests such as mealybugs, as is the case with the rufous woodpecker in coffee plantations in India. Woodpeckers can serve as indicator species, demonstrating the quality of the habitat. Their hole-making abilities make their presence in an area an important part of the ecosystem, because these cavities are used for breeding and roosting by many bird species that are unable to excavate their own holes, as well as being used by various mammals and invertebrates.
The spongy bones of the woodpecker's skull and the flexibility of its beak, both of which provide protection for the brain when drumming, have provided inspiration to engineers; a flight recorder ("black box") must survive intact when a plane falls from the sky, and modelling the casing of the black box on the woodpecker's anatomy has increased the device's resistance to damage 60-fold. The design of protective helmets is another field being influenced by the study of woodpeckers.
One of the accounts of the founding of Rome, preserved in the work known as Origo Gentis Romanae (of unknown authorship), refers to a legend of a woodpecker bringing food to the boys Romulus and Remus during the time they were abandoned in the wild, thus enabling them to survive and play their part in history.
Popular culture
Woody Woodpecker is an animated character that appeared in theatrical short films produced between 1940 and 1972.
The Pokémon Pikipek was introduced in the seventh-generation games Pokémon Sun and Moon. In addition to being a visual homage to a pileated woodpecker, entries in the game's Pokédex encyclopedia describe the small Flying-type as analogous to its real-world counterpart. Its later forms (called "evolutions" in the series) Trumbeak and Toucannon resemble a honeyguide and toucan, respectively, perhaps as a tongue-in-cheek reference to the phylogenetic relationship woodpeckers share with these Piciformes families.
Status and conservation
In a global survey of the risk of extinction faced by the various bird families, woodpeckers were the only bird family to have significantly fewer species at risk than would be expected.
Nevertheless, several woodpeckers are under threat as their habitats are destroyed. Being woodland birds, deforestation and clearance of land for agriculture and other purposes can reduce populations dramatically. Some species adapt to living in plantations and secondary growth, or to open countryside with forest remnants and scattered trees, but some do not. A few species have even flourished when they have adapted to man-made habitats. There are few conservation projects directed primarily at woodpeckers, but they benefit whenever their habitat is conserved. The red-cockaded woodpecker has been the focus of much conservation effort in the southeastern United States, with artificial cavities being constructed in the longleaf pines they favour as nesting sites.
Two species of woodpecker in the Americas, the critically endangered ivory-billed woodpecker and the imperial woodpecker, classified as extinct in the wild, are believed by some authorities to be extinct altogether, though possible but disputed ongoing sightings of ivory-billed woodpeckers have been made in the United States and a small population may survive in Cuba. Another critically endangered species is the Okinawa woodpecker from Japan, with a single declining population of a few hundred birds. It is threatened by deforestation, golf course, dam, and helipad construction, road building, and agricultural development.
Brain impact research
Anatomy
Woodpeckers possess many sophisticated shock-absorption mechanisms that help protect them from head injury. Micro-CT scans show that plate-like spongy bone is unevenly distributed in the skull, concentrated in the forehead and occiput but sparse in other regions. Along with the long hyoid bone acting as a “safety belt”, the woodpecker has upper and lower beak parts of unequal length, which drastically reduces strain compared with a beak of equal lengths. Models have shown that around 99% of the pecking force is converted to strain energy and stored in the body, while about 1% reaches the head. The head also has several features that reduce strain on the brain, and small portions of the energy are dissipated as heat, which is why the pecks are always intermittent. Others dispute that shock absorption in the head reduces the force of pecking, pointing instead to adaptations within the brain itself.
Tau protein accumulation is associated with chronic traumatic encephalopathy (CTE), and thus has been studied in sports where athletes suffer repeated concussions. Tau is important as it helps hold together and stabilize brain neurons. Woodpeckers' brains share similarities with the brains of humans with CTE, showing most build-up in the frontal and temporal lobes. It is not yet known whether these accumulations are pathological or the result of behavioral changes. More research is being done on the subject, and the woodpecker is a suitable animal model for such studies. The orientation of the brain within the skull increases the area of contact during pecking, reducing stress on the brain, and the brain's small size also helps, given the accelerations involved.
Mechanical properties
A straight-line pecking trajectory was once theorized to be the reason woodpeckers do not injure themselves, on the assumption that rotational (centripetal) forces are the cause of concussion; however, woodpeckers do not always peck in straight lines, so they both produce and withstand such forces. Laboratory tests show that woodpecker cranial bone has a significantly higher Young's modulus and ultimate strength than that of other birds of similar size. The cranial bone has a high mineral density, with thick plate-like structures and numerous closely spaced trabeculae, all of which may lead to lower deformation while pecking.
The jaw apparatus has also been studied for its cushioning effect. When the same impact is applied to the beak and to the forehead, the forehead experiences an impact force 1.72 times that of the beak, because the contact time is 3.25 ms at the forehead and 4.9 ms at the beak. This follows from the impulse-momentum relation, in which impulse is the integral of force over time, as sketched below. The quadrate bone and its joints play an important role in extending impact time, which decreases the impact load on brain tissue.
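A minimal sketch of that reasoning, assuming the same momentum change in both cases (an idealisation, not a result from the study): the impulse-momentum relation gives

\[
J = \int F\,\mathrm{d}t \approx \bar{F}\,\Delta t = \Delta p
\quad\Rightarrow\quad
\frac{\bar{F}_\text{forehead}}{\bar{F}_\text{beak}} \approx \frac{\Delta t_\text{beak}}{\Delta t_\text{forehead}} = \frac{4.9\ \text{ms}}{3.25\ \text{ms}} \approx 1.5,
\]

which is of the same order as the quoted factor of 1.72; the remaining difference reflects the actual shape of the measured force pulses rather than the simple averages used here.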
Bio-inspired ideas
Beams
Bio-inspired honeycomb sandwich beams (BHSBs) are modelled on the woodpecker's skull; the goal is a beam that can withstand continuous impacts without needing replacement. The outer layer of a BHSB is carbon fiber-reinforced plastic (CFRP), mimicking the high-strength beak. Next is a rubber core, analogous to the hyoid bone, for absorbing and spreading impact, followed by a second core layer of aluminum honeycomb that is porous and light like the woodpecker's spongy bone, for impact cushioning. The final layer, like the first, is CFRP and acts as the skull bone. Compared with conventional beams, bio-inspired honeycomb sandwich beams reduced the damaged area by 50–80% and carried only 5 to 40% of the stress levels in the bottom layer, while having an impact-resistance efficiency 1.65 to 16.22 times higher.
| Biology and health sciences | Piciformes | null |
210880 | https://en.wikipedia.org/wiki/Crater%20%28constellation%29 | Crater (constellation) | Crater is a small constellation in the southern celestial hemisphere. Its name is the latinization of the Greek krater, a type of cup used to water down wine. One of the 48 constellations listed by the second-century astronomer Ptolemy, it depicts a cup that has been associated with the god Apollo and is perched on the back of Hydra the water snake.
There is no star brighter than third magnitude in the constellation. Its two brightest stars, Delta Crateris of magnitude 3.56 and Alpha Crateris of magnitude 4.07, are ageing orange giant stars that are cooler and larger than the Sun. Beta Crateris is a binary star system composed of a white giant star and a white dwarf. Seven star systems have been found to host planets. A few notable galaxies, including Crater 2 and NGC 3981, and a famous quasar lie within the borders of the constellation.
Mythology
In the Babylonian star catalogues dating from at least 1100 BC, the stars of Crater were possibly incorporated with those of the crow Corvus in the Babylonian Raven (MUL.UGA.MUSHEN). British scientist John H. Rogers observed that the adjoining constellation Hydra signified Ningishzida, the god of the underworld in the Babylonian compendium MUL.APIN. He proposed that Corvus and Crater (along with the water snake Hydra) were death symbols and marked the gate to the underworld. Corvus and Crater also featured in the iconography of Mithraism, which is thought to have been of middle-eastern origin before spreading into Ancient Greece and Rome.
Crater is identified with a story from Greek mythology in which a crow or raven serves Apollo, and is sent to fetch water, but it delays its journey as it finds some figs and waits for them to ripen before eating them. Finally, it retrieves the water in a cup and takes back a water snake, blaming it for drinking the water. According to the myth, Apollo saw through the fraud, and angrily cast the crow, cup, and snake into the sky. The three constellations were arranged in such a way that the crow was prevented from drinking from the cup, and hence seen as a warning against sinning against the gods.
Phylarchus wrote of a different origin for Crater. He told how the city of Eleusa near Troy was beset by plague. Its ruler Demiphon consulted an oracle which decreed that a maiden should be sacrificed each year. Demiphon declared that he would choose a maiden by lottery, but he did not include his own daughters. One nobleman, Mastusius, objected, whereupon the angry Demiphon sacrificed Mastusius's daughter. Later, Mastusius killed Demiphon's daughters and tricked the ruler into drinking a cup containing a mixture of their blood and wine. Upon finding out the deed, the king ordered Mastusius and the cup to be thrown into the sea. Crater signifies the cup.
In other cultures
In Chinese astronomy, the stars of Crater are located within the constellation of the Vermilion Bird of the South (南方朱雀, Nán Fāng Zhū Què). They depict, along with some stars from Hydra, Yi, the Red Bird's wings. Yi also denotes the 27th lunar mansion. Alternatively, Yi depicts a heroic bowman, his bow composed of other stars in Hydra. In the Society Islands, Crater was recognized as a constellation called Moana-'ohu-noa-'ei-ha'a-moe-hara ("vortex-ocean-in-which-to-lose-crime").
Characteristics
Covering 282.4 square degrees and hence 0.685% of the sky, Crater ranks 53rd of the 88 constellations in area. It is bordered by Leo and Virgo to the north, Corvus to the east, Hydra to the south and west, and Sextans to the northwest. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Crt". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of six segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −6.66° and −25.20°. Its position in the southern celestial hemisphere means that the whole constellation is visible to observers south of 65°N.
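The quoted fraction follows directly from the area of the whole celestial sphere, which spans \(4\pi\) steradians:

\[
4\pi \left(\frac{180}{\pi}\right)^2 \approx 41{,}253\ \text{square degrees},
\qquad
\frac{282.4}{41{,}253} \approx 0.00685 = 0.685\%.
\]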
Features
Stars
The German cartographer Johann Bayer used the Greek letters alpha through lambda to label the most prominent stars in the constellation. Bode added more, though only Psi Crateris remains in use. John Flamsteed gave 31 stars in Crater and the segment of Hydra immediately below Crater Flamsteed designations, naming the resulting constellation Hydra et Crater. Most of these stars lie in Hydra. The three brightest stars—Delta, Alpha, and Gamma Crateris—form a triangle located near the brighter star Nu Hydrae in Hydra. Within the constellation's borders, there are 33 stars brighter than or equal to apparent magnitude 6.5.
Delta Crateris is the brightest star in Crater at magnitude 3.6. Located 163 ± 4 light-years away, it is an orange giant star of spectral type K0III that is 1.0–1.4 times as massive as the Sun. An ageing star, it has cooled and expanded to times the Sun's radius. It is radiating as much power as the Sun from its outer envelope at an effective temperature of . Traditionally called Alkes "the cup", and marking the base of the cup is Alpha Crateris, an orange-hued star of magnitude 4.1, that is 141 ± 2 light-years from the Sun. With an estimated mass 1.75 ± 0.24 times that of the Sun, it has exhausted its core hydrogen and expanded to 13.2 ± 0.55 times the Sun's diameter, shining with 69 times its luminosity and an effective temperature of around 4600 K.
With a magnitude of 4.5, Beta Crateris is a binary star system, consisting of a white-hued giant star of spectral type A1III and a white dwarf of spectral type DA1.4, 296 ± 8 light-years from the Sun. Much smaller than the primary, the white dwarf cannot be seen as a separate object, even by the Hubble Space Telescope. Gamma Crateris is a double star, resolvable in small amateur telescopes. The primary is a white main sequence star of spectral type A7V, that is an estimated 1.81 times as massive as the Sun, while the secondary—of magnitude 9.6—has 75% the Sun's mass, and is likely an orange dwarf. The two stars take at least 1150 years to orbit each other. The system is 85.6 ± 0.8 light-years away from the Sun.
Epsilon and Zeta Crateris mark the Cup's rim. The largest naked eye star in the constellation, Epsilon Crateris is an evolved K-type giant star with a stellar classification of K5 III. It has about the same mass as the Sun, but has expanded to 44.7 times the Sun's radius. The star is radiating 391 times the solar luminosity. It is 366 ± 8 light-years distant from the Sun. Zeta Crateris is a binary star system. The primary, component A, is a magnitude 4.95 evolved giant star with a stellar classification of G8 III. It is a red clump star that is generating energy through the fusion of helium at its core. Zeta Crateris has expanded to 13 times the radius of the Sun, and shines with 157 times the luminosity of the Sun. The secondary, component B, is a magnitude 7.84 star. Zeta Crateris is a confirmed member of the Sirius supercluster and is a candidate member of the Ursa Major Moving Group, a collection of stars that share a similar motion through space and may have at one time been members of the same open cluster. The system is located 326 ± 9 light-years from the Sun.
Variable stars are popular targets for amateur astronomers. Their observations provide valuable contributions to understanding star behaviour. Located near Alkes is the red-hued R Crateris, a semiregular variable star of type SRb and a spectral classification of M7. It ranges from magnitude 9.8 to 11.2 over an optical period of 160 days. It is 770 ± 40 light-years distant from the Sun. TT Crateris is a cataclysmic variable; a binary system composed of a white dwarf around as massive as the Sun in close orbit with an orange dwarf of spectral type K5V. The two orbit each other every 6 hours 26 minutes. The white dwarf strips matter off its companion, forming an accretion disk which periodically ignites and erupts. The star system has a magnitude of 15.9 when quiescent, brightening to 12.7 in outburst. SZ Crateris is a magnitude 8.5 BY Draconis type variable star. It is a nearby star system located about 42.9 ± 1.0 light-years from the Sun, and is a member of the Ursa Major Moving Group.
HD 98800, also known as TV Crateris, is a quadruple star system around 7–10 million years old, made up of two pairs of stars in close orbit. One pair has a debris disk that contains dust and gas orbiting them both. Spanning the distance between 3 and 5 astronomical units from the stars, it is thought to be a protoplanetary disk. DENIS-P J1058.7-1548 is a brown dwarf less than 5.5% as massive as the Sun. With a surface temperature of between 1700 and 2000 K, it is cool enough for clouds to form. Variations in its brightness in visible and infrared spectra suggest it has some form of atmospheric cloud cover.
HD 96167 is a star 1.31 ± 0.09 times as massive as the Sun that has most likely exhausted its core hydrogen and begun expanding and cooling into a yellow subgiant, with a diameter 1.86 ± 0.07 times that of the Sun and 3.4 ± 0.2 times its luminosity. Analysis of its radial velocity revealed a planet with a minimum mass 68% that of Jupiter, which takes 498.9 ± 1.0 days to complete an orbit. With the orbital separation varying between 0.38 and 2.22 astronomical units, the orbit is highly eccentric. The stellar system is 279 ± 1 light-years away from the Sun. HD 98649 is a yellow main sequence star, classified as G4V, that has the same mass and diameter as the Sun but only 86% of its luminosity. In 2012, a long-period (4951-day) planetary companion, at least 6.8 times as massive as Jupiter, was discovered by the radial velocity method. Its orbit was calculated to be highly eccentric, swinging out to 10.6 astronomical units from its star, making the planet a candidate for direct imaging. BD-10°3166 is a metal-rich orange main sequence star of spectral type K3.0V, 268 ± 10 light-years distant from the Sun. It was found to have a hot Jupiter-type planet with a minimum mass 48% that of Jupiter, which takes only 3.49 days to complete an orbit. WASP-34 is a sun-like star of spectral type G5V that has 1.01 ± 0.07 times the mass and 0.93 ± 0.12 times the diameter of the Sun. It has a planet 0.59 ± 0.01 times as massive as Jupiter that takes 4.317 days to complete an orbit. The system is 432 ± 3 light-years distant from the Sun.
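As a rough consistency check (an illustration added here, not a calculation from the source), Kepler's third law links the quoted separations, stellar mass, and orbital period of the HD 96167 planet. The extreme separations give a semi-major axis and implied period of

\[
a \approx \frac{0.38 + 2.22}{2}\ \text{AU} = 1.30\ \text{AU},
\qquad
P \approx \sqrt{\frac{a^3}{M_\star/M_\odot}}\ \text{yr} = \sqrt{\frac{1.30^3}{1.31}}\ \text{yr} \approx 1.3\ \text{yr} \approx 470\ \text{days},
\]

within about 5% of the measured 498.9-day period, the residual being attributable to rounding in the quoted minimum and maximum separations.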
Deep-sky objects
The Crater 2 dwarf is a satellite galaxy of the Milky Way, located approximately 380,000 light-years from the Sun. NGC 3511 is a spiral galaxy seen nearly edge-on, of magnitude 11.0, located 2° west of Beta Crateris. Located 11' away is NGC 3513, a barred spiral galaxy. NGC 3981 is a spiral galaxy with two wide and perturbed spiral arms. It is a member of the NGC 4038 Group, which, along with NGC 3672 and NGC 3887, are part of a group of 45 galaxies known as the Crater Cloud within the Virgo Supercluster.
RX J1131 is a quasar located 6 billion light-years away from the Sun. The black hole at the center of the quasar was the first black hole to have its spin directly measured. GRB 011211 was a gamma-ray burst (GRB) detected on December 11, 2001. The burst lasted 270 seconds, making it the longest burst that had ever been detected by the X-ray astronomy satellite BeppoSAX up to that point. GRB 030323 lasted 26 seconds and was detected on 23 March 2003.
Meteor showers
The Eta Craterids are a faint meteor shower that takes place between 11 and 22 January, peaking around January 16 and 17, near Eta Crateris.
| Physical sciences | Other | Astronomy |