| id | url | title | text | topic | section | sublist |
|---|---|---|---|---|---|---|
581888 | https://en.wikipedia.org/wiki/Luminous%20flux | Luminous flux | In photometry, luminous flux or luminous power is the measure of the perceived power of light. It differs from radiant flux, the measure of the total power of electromagnetic radiation (including infrared, ultraviolet, and visible light), in that luminous flux is adjusted to reflect the varying sensitivity of the human eye to different wavelengths of light.
Units
The SI unit of luminous flux is the lumen (lm). One lumen is defined as the luminous flux of light produced by a light source that emits one candela of luminous intensity over a solid angle of one steradian.
In other systems of units, luminous flux may have units of power.
Weighting
The luminous flux accounts for the sensitivity of the eye by weighting the power at each wavelength with the luminosity function, which represents the eye's response to different wavelengths. The luminous flux is a weighted sum of the power at all wavelengths in the visible band. Light outside the visible band does not contribute. The ratio of the total luminous flux to the radiant flux is called the luminous efficacy. This model of human visual brightness perception is standardized by the CIE and ISO.
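In symbols, the photopic weighting described above is a standard integral over the spectral radiant flux; the form below uses the conventional 683 lm/W constant and the CIE luminosity function V(λ):

```latex
% Luminous flux \Phi_v from the spectral radiant flux \Phi_{e,\lambda} (W/nm)
% V(\lambda): CIE photopic luminosity function, dimensionless, peaking at 555 nm
\Phi_v = 683\ \tfrac{\mathrm{lm}}{\mathrm{W}} \int_0^{\infty} V(\lambda)\, \Phi_{e,\lambda}(\lambda)\, \mathrm{d}\lambda
```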
Context
Luminous flux is often used as an objective measure of the useful light emitted by a light source, and is typically reported on the packaging for light bulbs, although it is not always prominent. Consumers commonly compare the luminous flux of different light bulbs since it provides an estimate of the apparent amount of light the bulb will produce, and a lightbulb with a higher ratio of luminous flux to consumed power is more efficient.
Luminous flux is not used to compare brightness, as this is a subjective perception which varies according to the distance from the light source and the angular spread of the light from the source.
Measurement
Luminous flux of artificial light sources is typically measured using an integrating sphere, or a goniophotometer outfitted with a photometer or a spectroradiometer.
Relationship to luminous intensity
Luminous flux (in lumens) is a measure of the total amount of light a lamp puts out. The luminous intensity (in candelas) is a measure of how bright the beam in a particular direction is. If a lamp has a 1 lumen bulb and the optics of the lamp are set up to focus the light evenly into a 1 steradian beam, then the beam would have a luminous intensity of 1 candela. If the optics were changed to concentrate the beam into 1/2 steradian then the source would have a luminous intensity of 2 candela. The resulting beam is narrower and brighter, however the luminous flux remains the same.
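A minimal numerical sketch of this relationship (the function name is illustrative; the lamp values are taken only from the example above):

```python
def luminous_intensity(flux_lumens: float, beam_solid_angle_sr: float) -> float:
    """Average luminous intensity (candela) of a beam carrying the whole flux
    uniformly within the given solid angle: I = Phi / Omega."""
    return flux_lumens / beam_solid_angle_sr

# The lamp example from the text: 1 lm focused into 1 sr, then into 0.5 sr.
print(luminous_intensity(1.0, 1.0))  # 1.0 cd
print(luminous_intensity(1.0, 0.5))  # 2.0 cd -- narrower, brighter beam, same flux
```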
| Physical sciences | Optics | Physics |
582127 | https://en.wikipedia.org/wiki/Antenna%20tuner | Antenna tuner | An antenna tuner, a matchbox, transmatch, antenna tuning unit (ATU), antenna coupler, or feedline coupler is a device connected between a radio transmitter or receiver and its antenna to improve power transfer between them by matching the impedance of the radio to the antenna's feedline. Antenna tuners are particularly important for use with transmitters. Transmitters feed power into a resistive load, very often 50 ohms, the design value at which the transmitter delivers its optimum power output, efficiency, and low distortion. If the load seen by the transmitter departs from this design value due to improper tuning of the antenna/feedline combination, the power output will change, distortion may occur, and the transmitter may overheat.
ATUs are a standard part of almost all radio transmitters; they may be a circuit included inside the transmitter itself or a separate piece of equipment connected between the transmitter and the antenna. In transmitters in which the antenna is mounted separately from the transmitter and connected to it by a transmission line (feedline), there may be a second ATU (or matching network) at the antenna to match the impedance of the antenna to the transmission line. In low power transmitters with attached antennas, such as cell phones and walkie-talkies, the ATU is fixed to work with the antenna. In high power transmitters like radio stations, the ATU is adjustable to accommodate changes in the antenna or transmitter, and adjusting the ATU to match the transmitter to the antenna is an important procedure done after any changes to these components have been made. This adjustment is done with an instrument called an SWR meter.
In radio receivers ATUs are not so important, because in the low frequency part of the radio spectrum the signal to noise ratio (SNR) is dominated by atmospheric noise. It does not matter if the impedance of the antenna and receiver are mismatched so some of the incoming power from the antenna is reflected and does not reach the receiver, because the signal can be amplified to make up for it. However in high frequency receivers the receiver's SNR is dominated by noise in the receiver's front end, so it is important that the receiving antenna is impedance-matched to the receiver to give maximum signal amplitude in the front end stages, to overcome noise.
Overview
An antenna's impedance is different at different frequencies. An antenna tuner matches a radio with a fixed impedance (typically 50 Ohms for modern transceivers) to the combination of the feedline and the antenna; it is useful when the impedance seen at the input end of the feedline is unknown, complex, or otherwise different from the transceiver's. Coupling through an ATU allows the use of one antenna on a broad range of frequencies. However, despite its name, an antenna 'tuner' actually matches the transmitter only to the complex impedance reflected back to the input end of the feedline. If both tuner and transmission line were lossless, tuning at the transmitter end would indeed produce a match at every point in the transmitter-feedline-antenna system. However, in practical systems feedline losses limit the ability of the antenna 'tuner' to match the antenna or change its resonant frequency.
If the loss of power is very low in the line carrying the transmitter's signal into the antenna, a tuner at the transmitter end can produce a worthwhile degree of matching and tuning for the antenna and feedline network as a whole. With lossy feedlines (such as commonly used 50 Ohm coaxial cable) maximum power transfer only occurs if matching is done at both ends of the line.
If there is still a high SWR (multiple reflections) in the feedline beyond the ATU, any loss in the feedline is multiplied several times by the transmitted waves reflecting back and forth between the tuner and the antenna, heating the wire instead of sending out a signal. Even with a matching unit at both ends of the feedline – the near ATU matching the transmitter to the feedline and the remote ATU matching the feedline to the antenna – losses in the circuitry of the two ATUs will reduce power delivered to the antenna. Therefore, operating an antenna far from its design frequency and compensating with a transmatch between the transmitter and the feedline is not as efficient as using a resonant antenna with a matched-impedance feedline, nor as efficient as a matched feedline from the transmitter to a remote antenna tuner attached directly to the antenna.
Broad band matching methods
Transformers, autotransformers, and baluns are sometimes incorporated into the design of narrow band antenna tuners and antenna cabling connections. They will all usually have little effect on the resonant frequency of either the antenna or the narrow band transmitter circuits, but can widen the range of impedances that the antenna tuner can match, and/or convert between balanced and unbalanced cabling where needed.
Ferrite transformers
Solid-state power amplifiers operating from 1–30 MHz typically use one or more wideband transformers wound on ferrite cores. MOSFETs and bipolar junction transistors are designed to operate into a low impedance, so the transformer primary typically has a single turn, while the 50 Ohm secondary will have 2 to 4 turns. This feedline system design has the advantage of reducing the retuning required when the operating frequency is changed. A similar design can match an antenna to a transmission line; for example, many TV antennas have a 300 Ohm impedance and feed the signal to the TV via a 75 Ohm coaxial line. A small ferrite core transformer makes the broad band impedance transformation. This transformer neither needs nor is capable of adjustment. For receive-only use in a TV the small SWR variation with frequency is not a major problem.
It should be added that many ferrite based transformers perform a balanced to unbalanced transformation along with the impedance change. When the balanced to unbalanced function is present these transformers are called a balun (otherwise an unun). The most common baluns have either a 1:1 or a 1:4 impedance transformation.
Autotransformers
There are several designs for impedance matching using an autotransformer, which is a single-wire transformer with different connection points or taps spaced along the windings. They are distinguished mainly by their impedance transform ratio (1:1, 1:4, 1:9, etc., the square of the winding ratio), and whether the input and output sides share a common ground, or are matched from a cable that is grounded on one side (unbalanced) to an ungrounded (usually balanced) cable. When autotransformers connect balanced and unbalanced lines they are called baluns, just as two-winding transformers. When two differently-grounded cables or circuits must be connected but the grounds kept independent, a full, two-winding transformer with the desired ratio is used instead.
The circuit pictured at the right has three identical windings wrapped in the same direction around either an "air" core (for very high frequencies) or ferrite core (for middle, or low frequencies). The three equal windings shown are wired for a common ground shared by two unbalanced lines (so this design is called an unun), and can be used as 1:1, 1:4, or 1:9 impedance match, depending on the tap chosen. (The same windings could be connected differently to make a balun instead.)
For example, if the right-hand side is connected to a resistive load of 10 Ohms, the user can attach a source at any of the three ungrounded terminals on the left side of the autotransformer to get a different impedance. Notice that on the left side, the line with more windings measures greater impedance for the same 10 Ohm load on the right.
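For an ideal autotransformer, the impedance seen at a tap scales with the square of the turns ratio; the sketch below reproduces the 10 Ohm example numerically (the helper function is hypothetical, not from the source):

```python
def impedance_at_tap(load_ohms: float, tap_turns: int, load_turns: int) -> float:
    """Impedance seen at a tap of an ideal autotransformer.
    The impedance transformation ratio is the square of the turns ratio."""
    return load_ohms * (tap_turns / load_turns) ** 2

# 10-ohm load across one winding; source attached at 1, 2 or 3 windings.
for turns in (1, 2, 3):
    print(turns, "windings ->", impedance_at_tap(10.0, turns, 1), "ohms")
# 1 -> 10.0, 2 -> 40.0, 3 -> 90.0  (the 1:1, 1:4 and 1:9 ratios mentioned above)
```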
Narrow band design
The "narrow-band" methods described below cover a very much smaller span of frequencies, by comparison with the broadband methods described above.
Antenna matching methods that use transformers tend to cover a wide range of frequencies. A single, typical, commercially available balun can cover frequencies from 3.5–30.0 MHz, or nearly the entire shortwave radio band. Matching to an antenna using a cut segment of transmission line (described below) is perhaps the most efficient of all matching schemes in terms of electrical power, but typically can only cover a range of about 3.5–3.7 MHz – a very small range indeed, compared to a broadband balun. Antenna coupling or feedline matching circuits are also narrowband for any single setting, but can be re-tuned more conveniently. However they are perhaps the least efficient in terms of power-loss (aside from having no impedance matching at all!).
Transmission line antenna tuning methods
The insertion of a special section of transmission line, whose characteristic impedance differs from that of the main line, can be used to match the main line to the antenna. An inserted line with the proper impedance and connected at the proper location can perform complicated matching effects with very high efficiency, but spans a very limited frequency range.
The simplest example of this method is the quarter-wave impedance transformer formed by a section of mismatched transmission line. If a quarter-wavelength of 75 Ohm coaxial cable is linked to a 50 Ohm load, the SWR in the 75 Ohm quarter wavelength of line can be calculated as 75 Ω / 50 Ω = 1.5; the quarter-wavelength of line transforms the mismatched impedance to 112.5 Ohms (75 Ohms × 1.5 = 112.5 Ohms). Thus this inserted section matches a 112.5 Ohm antenna to a 50 Ohm main line.
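The arithmetic above is an instance of the quarter-wave transformer relation Zin = Z0²/Zload; a brief illustrative sketch:

```python
def quarter_wave_input_impedance(z0_ohms: float, z_load_ohms: float) -> float:
    """Impedance seen looking into a lossless quarter-wavelength line of
    characteristic impedance z0 terminated in a resistive load z_load."""
    return z0_ohms ** 2 / z_load_ohms

print(quarter_wave_input_impedance(75.0, 50.0))   # 112.5 ohms, as in the example
```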
A coaxial transformer of this kind is a useful way to match 50 to 75 Ohms using the same general method. The theoretical basis is discussed by its inventor, and a wider application of the method is found here: Branham, P. (1959). A Convenient Transformer for Matching Co-axial Lines. Geneva: CERN.
A second common method is the use of a stub: A shorted, or open section of line is connected in parallel with the main line. With coax this is done using a 'T'-connector. The length of the stub and its location can be chosen so as to produce a matched line below the stub, regardless of the complex impedance or SWR of the antenna itself. The J-pole antenna is an example of an antenna with a built-in stub match.
Basic lumped circuit matching using the L network
The basic circuit required when lumped capacitances and inductors are used is shown below. This circuit is important in that many automatic antenna tuners use it, and also because more complex circuits can be analyzed as groups of L-networks.
This is called an L network not because it contains an inductor (in fact, some L-networks consist of two capacitors), but because the two components are at right angles to each other, having the shape of a rotated and sometimes reversed English letter 'L'. The 'T' ("Tee") network and the π ("Pi") network also have a shape similar to the English and Greek letters they are named after.
This basic network is able to act as an impedance transformer. If the output is terminated in a load of resistance Rload and reactance j Xload, while the input is to be attached to a source which has a resistance of Rsource and reactance of j Xsource, then, for the simplest case of a purely resistive source and load (Xsource = Xload = 0),

XL = √(Rsource × Rload − Rsource²)

and

XC = (Rsource × Rload) / XL .
In this example circuit, XL and XC can be swapped. All the ATU circuits below create this network, which exists between systems with different impedances.
For instance, if the source has a resistive impedance of 50 Ω and the load has a resistive impedance of 1000 Ω:

XL = √(50 Ω × 1000 Ω − (50 Ω)²) ≈ 217.94 Ω

XC = (50 Ω × 1000 Ω) / 217.94 Ω ≈ 229.42 Ω

If the frequency is 28 MHz, then, since XL = 2πfL,

L = XL / (2πf) = 217.94 Ω / (2π × 28 MHz) ≈ 1.24 µH,

and since XC = 1/(2πfC),

C = 1 / (2πf XC) = 1 / (2π × 28 MHz × 229.42 Ω) ≈ 24.8 pF.
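Assuming a purely resistive source and load, as in this example, the reactances and component values can be computed as in the sketch below (function name and structure are illustrative):

```python
import math

def l_network(r_source: float, r_load: float, freq_hz: float):
    """Series-L / shunt-C L-network matching a low resistive source to a
    higher resistive load (series element on the low-impedance side)."""
    q = math.sqrt(r_load / r_source - 1.0)
    x_l = q * r_source             # series inductive reactance, ohms
    x_c = r_load / q               # shunt capacitive reactance, ohms
    inductance = x_l / (2 * math.pi * freq_hz)       # henries
    capacitance = 1 / (2 * math.pi * freq_hz * x_c)  # farads
    return x_l, x_c, inductance, capacitance

x_l, x_c, L, C = l_network(50.0, 1000.0, 28e6)
print(f"XL = {x_l:.2f} ohm, XC = {x_c:.2f} ohm")     # ~217.94 and ~229.42 ohm
print(f"L  = {L*1e6:.2f} uH,  C = {C*1e12:.1f} pF")  # ~1.24 uH and ~24.8 pF
```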
Theory and practice
A parallel network, consisting of a resistive element (1000 Ω) and a reactive element (−j 229.415 Ω), will have the same impedance and power factor as a series network consisting of resistive (50 Ω) and reactive elements (−j 217.94 Ω).
By adding another element in series (which has a reactive impedance of +j 217.94 Ω), the impedance is 50 Ω (resistive).
Types of L networks and their use
The L-network can have eight different configurations, six of which are shown here. The two missing configurations are the same as the bottom row, but with the parallel element (wires vertical) on the right side of the series element (wires horizontal), instead of on the left, as shown.
In the discussion of the diagrams that follows, the in connector comes from the transmitter or "source"; the out connector goes to the antenna or "load".
The general rule (with some exceptions, described below) is that the series element of an L-network goes on the side with the lowest impedance.
So, for example, the three circuits in the left column and the two in the bottom row, which have the series (horizontal) element on the out side, are generally used for stepping up from a low-impedance input (transmitter) to a high-impedance output (antenna), similar to the example analyzed in the section above. The top two circuits in the right column, with the series (horizontal) element on the in side, are generally useful for stepping down from a higher input to a lower output impedance.
The general rule only applies to loads that are mainly resistive, with very little reactance. In cases where the load is highly reactive – such as an antenna fed with a signal whose frequency is far away from any resonance – the opposite configuration may be required: far from resonance, the bottom two step-down (high-in to low-out) circuits would instead be used for a step up (low in to a high out that is mostly reactance).
The low- and high-pass versions of the four circuits shown in the top two rows use only one inductor and one capacitor. Normally, the low-pass would be preferred with a transmitter, in order to attenuate harmonics, but the high-pass configuration may be chosen if the components are more conveniently obtained, or if the radio already contains an internal low-pass filter, or if attenuation of low frequencies is desirable – for example when a local AM station broadcasting on a medium frequency may be overloading a high frequency receiver.
The Low R, high C circuit is shown feeding a short vertical antenna, such as would be the case for a compact, mobile antenna or otherwise on frequencies below an antenna's lowest natural resonant frequency. Here the inherent capacitance of a short, random wire antenna is so high that the L-network is best realized with two inductors, instead of aggravating the problem by using a capacitor.
The Low R, high L circuit is shown feeding a small loop antenna. Below resonance this type of antenna has so much inductance, that more inductance from adding a coil would make the reactance even worse. Therefore, the L-network is composed of two capacitors.
An L-network is the simplest circuit that will achieve the desired transformation; for any one given antenna and frequency, once a circuit is selected from the eight possible configurations (of which six are shown above) only one set of component values will match the in impedance to the out impedance. In contrast, the circuits described below all have three or more components, and hence have many more choices for inductance and capacitance that will produce an impedance match. The radio operator must experiment, test, and use judgement to choose among the many adjustments that produce the same impedance match.
Antenna system losses
Loss in Antenna tuners
Every means of impedance match will introduce some power loss. This will vary from a few percent for a transformer with a ferrite core, to 50% or more for a complex ATU that is improperly tuned or working at the limits of its tuning range.
With the narrow band tuners, the L-network has the lowest loss, partly because it has the fewest components, but mainly because it necessarily operates at the lowest possible Q for a given impedance transformation. With the L-network, the loaded Q is not adjustable, but is fixed midway between the source and load impedances. Since most of the loss in practical tuners will be in the coil, choosing either the low-pass or high-pass network may reduce the loss somewhat.
The L-network using only capacitors will have the lowest loss, but this network only works where the load impedance is very inductive, making it a good choice for a small loop antenna. Inductive impedance also occurs with straight-wire antennas used at frequencies slightly above a resonant frequency, where the antenna is too long – for example, between a quarter and a half wave long at the operating frequency. However, problematic straight-wire antennas are typically too short for the frequency in use.
With the high-pass T-network, the loss in the tuner can vary from a few percent – if tuned for lowest loss – to over 50% if the tuner is not properly adjusted. Using the maximum available capacitance will give less loss than if one simply tunes for a match without regard for the settings. This is because using more capacitance means using fewer inductor turns, and the loss is mainly in the inductor.
With the SPC tuner the losses will be somewhat higher than with the T-network, since the added capacitance across the inductor will shunt some reactive current to ground which must be cancelled by additional current in the inductor. The trade-off is that the effective inductance of the coil is increased, thus allowing operation at lower frequencies than would otherwise be possible.
If additional filtering is desired, the inductor can be deliberately set to larger values, thus providing a partial band pass effect. Either the high-pass T, low-pass π, or the SPC tuner can be adjusted in this manner. The additional attenuation at harmonic frequencies can be increased significantly with only a small percentage of additional loss at the tuned frequency.
When adjusted for minimum loss, the SPC tuner will have better harmonic rejection than the high-pass T due to its internal tank circuit. Either type is capable of good harmonic rejection if a small additional loss is acceptable. The low-pass π has exceptional harmonic attenuation at any setting, including the lowest-loss.
ATU location
An ATU will be inserted somewhere along the line connecting the radio transmitter or receiver to the antenna. The antenna feedpoint is usually high in the air (for example, a dipole antenna) or far away (for example, an end-fed random wire antenna). A transmission line, or feedline, must carry the signal between the transmitter and the antenna. The ATU can be placed anywhere along the feedline: at the transmitter, at the antenna, or somewhere in between.
Antenna tuning is best done as close to the antenna as possible to minimize loss, increase bandwidth, and reduce voltage and current on the transmission line. Also, when the information being transmitted has frequency components whose wavelength is a significant fraction of the electrical length of the feed line, distortion of the transmitted information will occur if there are standing waves on the line. Analog TV and FM stereo broadcasts are affected in this way. For those modes, matching at the antenna is required.
When possible, an automatic or remotely-controlled tuner in a weather-proof case at or near the antenna is convenient and makes for an efficient system. With such a tuner, it is possible to match a wide range of antennas, including stealth antennas. (SGC World: Smart Tuners for Stealth Antennas.)
When the ATU must be located near the radio for convenient adjustment, any significant SWR will increase the loss in the feedline. For that reason, when using an ATU at the transmitter, low-loss, high-impedance feedline is a great advantage (open-wire line, for example). A short length of low-loss coaxial line is acceptable, but with longer lossy lines the additional loss due to SWR becomes very high.
It is very important to remember that when matching the transmitter to the line, as is done when the ATU is near the transmitter, there is no change in the SWR in the feedline. The backlash currents reflected from the antenna are retro-reflected by the ATU – usually several times between the two – and so are invisible on the transmitter-side of the ATU. The result of the multiple reflections is compounded loss, higher voltage or higher currents, and narrowed bandwidth, none of which can be corrected by the ATU.
Standing wave ratio
It is a common misconception that a high standing wave ratio (SWR) per se causes loss. A well-adjusted ATU feeding an antenna through a low-loss line may have only a small percentage of additional loss compared with an intrinsically matched antenna, even with a high SWR (4:1, for example). An ATU sitting beside the transmitter just re-reflects energy reflected from the antenna ("backlash current") back yet again along the feedline to the antenna ("retro-reflection"). High losses arise from RF resistance in the feedline and antenna, and those multiple reflections due to high SWR cause feedline losses to be compounded.
Using low-loss, high-impedance feedline with an ATU results in very little loss, even with multiple reflections. However, if the feedline-antenna combination is 'lossy' then an identical high SWR may lose a considerable fraction of the transmitter's power output. High impedance lines – such as most parallel-wire lines – carry power mostly as high voltage rather than high current, and current alone determines the power lost to line resistance. So despite high SWR, very little power is lost in high-impedance line compared low-impedance line – typical coaxial cable, for example. For that reason, radio operators can be more casual about using tuners with high-impedance feedline.
Without an ATU, the SWR from a mismatched antenna and feedline can present an improper load to the transmitter, causing distortion and loss of power or efficiency with heating and/or burning of the output stage components. Modern solid state transmitters will automatically reduce power when high SWR is detected, so some solid-state power stages only produce weak signals if the SWR rises above 1.5 to 1. Were it not for that problem, even the losses from an SWR of 2:1 could be tolerated, since only 11 percent of transmitted power would be reflected and 89 percent sent out through to the antenna. So the main loss of output power with high SWR is due to the transmitter "backing off" its output when challenged with backlash current.
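The 2:1 figures quoted above follow from the standard relation between SWR and the reflection coefficient; a short illustrative sketch:

```python
def reflected_power_fraction(swr: float) -> float:
    """Fraction of forward power reflected at a mismatch with the given SWR.
    |Gamma| = (SWR - 1) / (SWR + 1); reflected fraction = |Gamma|^2."""
    gamma = (swr - 1.0) / (swr + 1.0)
    return gamma ** 2

for swr in (1.5, 2.0, 3.0):
    print(f"SWR {swr}:1 -> {reflected_power_fraction(swr):.1%} reflected")
# SWR 2:1 -> ~11.1% reflected, ~88.9% delivered, matching the figures above
```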
Tube transmitters and amplifiers usually have an adjustable output network that can feed mismatched loads up to perhaps 3:1 SWR without trouble. In effect the built-in π-network of the transmitter output stage acts as an ATU. Further, since tubes are electrically robust (even though mechanically fragile), tube-based circuits can tolerate very high backlash current without damage.
Broadcast Applications
AM broadcast transmitters
One of the oldest applications for antenna tuners is in AM and shortwave broadcasting transmitters. AM transmitters usually use a vertical antenna (tower) which can be from 0.20 to 0.68 wavelengths long. At the base of the tower an ATU is used to match the antenna to the 50 Ohm transmission line from the transmitter. The most commonly used circuit is a T-network, using two series inductors with a shunt capacitor between them. When multiple towers are used the ATU network may also provide for a phase adjustment so that the currents in each tower can be phased relative to the others to produce a desired pattern. These patterns are often required by law to include nulls in directions that could produce interference as well as to increase the signal in the target area. Adjustment of the ATUs in a multitower array is a complex and time consuming process requiring considerable expertise.
High-power shortwave transmitters
For international shortwave broadcast transmitters (50 kW and above), frequent antenna tuning is done as part of frequency changes, which may be required on a seasonal or even a daily basis. Modern shortwave transmitters typically include built-in impedance-matching circuitry for SWR up to 2:1, and can adjust their output impedance within 15 seconds.
The matching networks in transmitters sometimes incorporate a balun, or an external one can be installed at the transmitter in order to feed a balanced line. Balanced transmission lines of 300 Ohms or more were more-or-less standard for all shortwave transmitters and antennas in the past, even among amateurs. Most shortwave broadcasters have continued to use high-impedance feeds even after the advent of automatic impedance matching.
The most commonly used shortwave antennas for international broadcasting are the HRS antenna (curtain array), which covers a 2 to 1 frequency range, and the log-periodic antenna, which covers up to an 8 to 1 frequency range. Within that range, the SWR will vary, but is usually kept below 1.7 to 1 – within the range of SWR that can be tuned by the antenna matching built into many modern transmitters. Hence, when feeding these antennas, a modern transmitter will be able to tune itself as needed to match at any frequency.
Automatic antenna tuning
Automatic antenna tuning is used in flagship mobile phones, transceivers for amateur radio, and in land mobile, marine, and tactical HF radio transceivers.
Each antenna tuning system (AT) shown in the figure has an "antenna port", which is directly or indirectly coupled to an antenna, and another port, referred to as the "radio port" (or "user port"), for transmitting and/or receiving radio signals through the AT and the antenna. Each AT shown in the figure is a single-antenna-port (SAP) AT, but a multiple-antenna-port (MAP) AT may be needed for MIMO radio transmission.
Several control schemes can be used in a radio transceiver or transmitter to automatically adjust an antenna tuner (AT).
The control schemes are based on one of the two configurations, (a) and (b), shown in the diagram. For both configurations, the transmitter comprises:
antenna
antenna tuner / matching network (AT)
sensing unit (SU)
control unit (CU)
transmitter and signal processing unit (TSPU)
The TSPU incorporates all the parts of the transmitter and signal processing chain not otherwise shown in the diagram.
The TX port of the TSPU delivers a test signal. The SU delivers to the TSPU one or more output signals indicating the response to the test signal of one or more electrical variables (such as voltage, current, or incident/forward voltage), sensed at the radio port in the case of configuration (a), or at the antenna port in the case of configuration (b). Note that neither configuration (a) nor (b) is ideal, since the line between the antenna and the AT attenuates the SWR seen by the sensor; the response to a test signal is most accurately assessed at or near the antenna feedpoint.
| Control scheme | Configuration | Extremum-seeking? |
|---|---|---|
| Type 0 | n/a | n/a |
| Type 1 | (a) | No |
| Type 2 | (a) | Yes |
| Type 3 | (b) | No |
| Type 4 | (b) | Yes |
Broydé & Clavelier (2020) distinguish five types of antenna tuner control schemes, as follows:
Type 0 designates the open-loop AT control schemes that do not use any SU, the adjustment being typically only based on previous knowledge programmed for each operating frequency
Type 1 and type 2 control schemes use configuration (a)
type 2 uses extremum-seeking control
type 1 does not seek an extreme
Type 3 and type 4 control schemes use configuration (b)
type 4 uses extremum-seeking control
type 3 does not seek an extreme
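As an illustration only, the sketch below shows one possible extremum-seeking (type 2 or type 4) adjustment loop that steps two tuner settings to minimize the SWR reported by the sensing unit; the function names, the measurement callback, and the coordinate-search strategy are all assumptions made for the sketch, not features of any particular product or of the cited control schemes.

```python
def extremum_seeking_tune(measure_swr, l_steps, c_steps, max_passes=5):
    """Coordinate-descent search over discrete inductor/capacitor settings.
    measure_swr(l_idx, c_idx) is assumed to key the transmitter with a test
    signal and return the SWR reported by the sensing unit (SU)."""
    l_idx, c_idx = 0, 0
    best = measure_swr(l_idx, c_idx)
    for _ in range(max_passes):
        improved = False
        # Try stepping each setting up and down, keeping any improvement.
        for dl, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            li, ci = l_idx + dl, c_idx + dc
            if 0 <= li < l_steps and 0 <= ci < c_steps:
                swr = measure_swr(li, ci)
                if swr < best:
                    best, l_idx, c_idx, improved = swr, li, ci, True
        if not improved:          # local minimum of measured SWR reached
            break
    return l_idx, c_idx, best
```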
The control schemes may be compared as regards:
use of closed-loop or open-loop control (or both)
measurements used
ability to mitigate the effects of the electromagnetic characteristics of the surroundings
aim / goal
accuracy and speed
dependence on use of a particular model of AT or CU
| Technology | Broadcasting | null |
582180 | https://en.wikipedia.org/wiki/Wakefulness | Wakefulness | Wakefulness is a daily recurring brain state and state of consciousness in which an individual is conscious and engages in coherent cognitive and behavioral responses to the external world.
Being awake is the opposite of being asleep, in which most external inputs to the brain are excluded from neural processing.
Effects upon the brain
The longer the brain has been awake, the greater the synchronous firing rates of cerebral cortex neurons. After sustained periods of sleep, both the speed and synchronicity of the neurons firing are shown to decrease.
Another effect of wakefulness is the reduction of glycogen held in the astrocytes, which supply energy to the neurons. Studies have shown that one of sleep's underlying functions is to replenish this glycogen energy source.
Maintenance by the brain
Wakefulness is produced by a complex interaction between multiple neurotransmitter systems arising in the brainstem and ascending through the midbrain, hypothalamus, thalamus and basal forebrain. The posterior hypothalamus plays a key role in the maintenance of the cortical activation that underlies wakefulness. Several systems originating in this part of the brain control the shift from wakefulness into sleep and sleep into wakefulness. Histamine neurons in the tuberomammillary nucleus and the adjacent posterior hypothalamus project to the entire brain and are the most wake-selective system so far identified in the brain. Another key system is that provided by the orexin (also known as hypocretin) projecting neurons. These exist in areas adjacent to histamine neurons and, like them, project widely to most brain areas and associate with arousal. Orexin deficiency has been identified as responsible for narcolepsy.
Research suggests that orexin and histamine neurons play distinct, but complementary roles in controlling wakefulness with orexin being more involved with wakeful behavior and histamine with cognition and activation of cortical EEG.
It has been suggested the fetus is not awake, with wakefulness occurring in the newborn due to the stress of being born and the associated activation of the locus coeruleus.
| Biology and health sciences | Ethology | Biology |
582473 | https://en.wikipedia.org/wiki/Oogenesis | Oogenesis | Oogenesis or ovogenesis is the differentiation of the ovum (egg cell) into a cell competent to further develop when fertilized. It is developed from the primary oocyte by maturation. Oogenesis is initiated in the embryonic stage.
Oogenesis in non-human mammals
In mammals, the first part of oogenesis starts in the germinal epithelium, which gives rise to the development of ovarian follicles, the functional unit of the ovary.
Oogenesis consists of several sub-processes: oocytogenesis, ootidogenesis, and finally maturation to form an ovum (oogenesis proper). Folliculogenesis is a separate sub-process that accompanies and supports all three oogenetic sub-processes.
Oogonium —(Oocytogenesis)—> Primary Oocyte —(Meiosis I)—> First Polar body (Discarded afterward) + Secondary oocyte —(Meiosis II)—> Second Polar Body (Discarded afterward) + Ovum
Oocyte meiosis, important to all animal life cycles yet unlike all other instances of animal cell division, occurs completely without the aid of spindle-coordinating centrosomes.
The creation of oogonia
The creation of oogonia traditionally does not belong to oogenesis proper, but, instead, to the common process of gametogenesis, which, in the female human, begins with the processes of folliculogenesis, oocytogenesis, and ootidogenesis. Oogonia enter meiosis during embryonic development, becoming oocytes. Meiosis begins with DNA replication and meiotic crossing over. It then stops in early prophase.
Maintenance of meiotic arrest
Mammalian oocytes are maintained in meiotic prophase arrest for a very long time—months in mice, years in humans. Initially, the arrest is due to lack of sufficient cell cycle proteins to allow meiotic progression. However, as the oocyte grows, these proteins are synthesized, and meiotic arrest becomes dependent on cyclic AMP. The cyclic AMP is generated by the oocyte by adenylyl cyclase in the oocyte membrane. The adenylyl cyclase is kept active by a constitutively active G-protein-coupled receptor known as GPR3 and a G-protein, Gs, also present in the oocyte membrane.
Maintenance of meiotic arrest also depends on the presence of a multilayered complex of cells, known as a follicle, that surrounds the oocyte. Removal of the oocyte from the follicle causes meiosis to progress in the oocyte. The cells that comprise the follicle, known as granulosa cells, are connected to each other by proteins known as gap junctions, that allow small molecules to pass between the cells. The granulosa cells produce a small molecule, cyclic GMP, that diffuses into the oocyte through the gap junctions. In the oocyte, cyclic GMP prevents the breakdown of cyclic AMP by the phosphodiesterase PDE3, and thus maintains meiotic arrest. The cyclic GMP is produced by the guanylyl cyclase NPR2.
Reinitiation of meiosis and stimulation of ovulation by luteinizing hormone
As follicles grow, they acquire receptors for luteinizing hormone, a pituitary hormone that reinitiates meiosis in the oocyte and causes ovulation of a fertilizable egg. Luteinizing hormone acts on receptors in the outer layers of granulosa cells of the follicle, causing a decrease in cyclic GMP in the granulosa cells. Because the granulosa cells and oocyte are connected by gap junctions, cyclic GMP also decreases in the oocyte, causing meiosis to resume. Meiosis then proceeds to second metaphase, where it pauses again until fertilization. Luteinizing hormone also stimulates gene expression leading to ovulation.
Human oogenesis
Oogenesis
Oogenesis starts with the process of developing primary oocytes, which occurs via the transformation of oogonia into primary oocytes, a process called oocytogenesis. From one single oogonium, only one mature oocyte will arise, along with three other cells called polar bodies. Oocytogenesis is complete either before or shortly after birth.
Number of primary oocytes
It is commonly believed that, when oocytogenesis is complete, no additional primary oocytes are created, in contrast to the male process of spermatogenesis, where gametocytes are continuously created. In other words, primary oocytes reach their maximum development at ~20 weeks of gestational age, when approximately seven million primary oocytes have been created; however, at birth, this number has already been reduced to approximately 1–2 million per ovary. At puberty, the number of oocytes decreases even more, to about 60,000 to 80,000 per ovary, and only about 500 mature oocytes will be produced during a woman's life; the others undergo atresia (degeneration). Two publications have challenged the belief that a finite number of oocytes is set around the time of birth, reporting the generation of new oocytes in adult mammalian ovaries by putative germ cells in bone marrow and peripheral blood. The renewal of ovarian follicles from germline stem cells (originating from bone marrow and peripheral blood) has been reported in the postnatal mouse ovary. In contrast, DNA clock measurements do not indicate ongoing oogenesis during human females' lifetimes.
Thus, further experiments are required to determine the true dynamics of small follicle formation.
Ootidogenesis
The succeeding phase of ootidogenesis occurs when the primary oocyte develops into an ootid. This is achieved by the process of meiosis. In fact, a primary oocyte is, by its biological definition, a cell whose primary function is to divide by the process of meiosis.
However, although this process begins at prenatal age, it stops at prophase I. In late fetal life, all oocytes, still primary oocytes, have halted at this stage of development, called the dictyate. After menarche, these cells then continue to develop, although only a few do so every menstrual cycle.
Meiosis I
Meiosis I of ootidogenesis begins during embryonic development, but halts in the diplotene stage of prophase I until puberty. The mouse oocyte in the dictyate (prolonged diplotene) stage actively repairs DNA damage, whereas DNA repair is not detectable in the pre-dictyate (leptotene, zygotene and pachytene) stages of meiosis. For those primary oocytes that continue to develop in each menstrual cycle, however, synapsis occurs and tetrads form, enabling chromosomal crossover to occur. As a result of meiosis I, the primary oocyte has now developed into the secondary oocyte.
Meiosis II
Immediately after meiosis I, the haploid secondary oocyte initiates meiosis II. However, this process is also halted at the metaphase II stage until fertilization, if such should ever occur. If the egg is not fertilized, it is disintegrated and released (menstruation) and the secondary oocyte does not complete meiosis II (and does not become an ovum). When meiosis II has completed, an ootid and another polar body have now been created. The polar body is small in size.
Ovarian cycle
The ovarian cycle is divided into several phases:
Folliculogenesis: Synchronously with ootidogenesis, the ovarian follicle surrounding the ootid develops from a primordial follicle to a preovulatory one. The primary follicle takes four months to become preantral, two months to become antral, and then passes to a mature (Graafian) follicle. In the primary follicle, the cells lining the oocyte go from flat to cuboidal and begin to proliferate, increasing the metabolic activity of the oocyte and follicular cells, which release the glycoproteins and proteoglycans that will form the zona pellucida. In the preantral secondary follicle, internal and external theca cells begin to form. Aromatase, produced by follicular cells, transforms androgens produced by the inner theca into estrogens under the stimulation of FSH. LH stimulates theca cells to produce androgens. In the antral follicle, there is an antrum containing follicular fluid, which contains estrogen, allowing the passage from the antral follicle to the Graafian follicle. The growing antrum displaces the oocyte, which becomes eccentric; the oocyte remains surrounded by the zona pellucida and by follicular cells that form the cumulus oophorus, the innermost of which are called corona radiata cells. At this stage, the oocyte produces cortical granules containing acid glycoproteins.
Dominant follicle selection: The follicle with more FSH receptors will be favored, while the death of the other follicles (the 3–10 antral follicles that enter this phase each month) is simultaneously induced. Estrogen at low concentrations inhibits further production of FSH by the pituitary gland through negative feedback, so the follicles left behind accumulate androgens, rather than estrogens, in the follicular antrum.
Graafian follicle: Estrogen at high concentrations induces LH release, with the peak of LH called the LH surge, which initiates the stages that will lead to follicle rupture. LH receptors also appear on follicular cells, which stimulate the oocyte to become a secondary oocyte, arrested in metaphase II, waiting for fertilization. LH also stimulates cumulus oophorus cells to release progesterone.
Ovulation: bursting of the follicle, with release of the oocyte together with the zona pellucida and corona radiata cells. The lining membrane of the ovary is thinned where the follicle bursts, and the oocyte and the cells attached to it emerge from the stigma. The oocyte is collected by the uterine tube, where fertilization can take place in the ampulla.
Formation of the corpus luteum: From the remaining structures of the follicle, the corpus luteum is formed. At first there is a clot, which is then replaced by loose connective tissue; the cells that form solid cords are the follicular (granulosa) cells, which become granulosa lutein cells, and the cells of the theca interna, which become theca lutein cells. The corpus luteum increases the concentration of progesterone, and its activity is constantly stimulated by LH. If the egg is not fertilized, the corpus luteum degenerates (corpus albicans); if the egg implants, the corpus luteum remains until the third month of pregnancy, when its function is taken over by the placenta (production of progesterone and estrogen). The LH needed to keep the corpus luteum alive is then replaced by human chorionic gonadotropin.
Uterine cycle
The uterine cycle occurs in parallel with the ovarian cycle and is driven by estrogen and progesterone. The endometrium is formed by a simple columnar epithelium with simple tubular uterine glands and underlying connective tissue; it has a superficial functional layer (divided into a spongy layer and a compact layer) and a deeper basal layer, which is always maintained. It presents four phases:
Proliferative phase: From the 5th to the 14th day of the ovarian cycle, it is conditioned by estrogens. The functional layer of the uterus is restored, with mitotic division of the basal layer.
Secretory phase: from the 14th to the 27th day of the ovarian cycle, influenced by the progesterone produced by the corpus luteum. Cells become hypertrophic, and the tubular glands begin to produce glycogen.
Ischemic phase: days 27 to 28, the beginning of the menstrual phase.
Regressive or desquamative phase: from day 1 to day 5, the spiral arteries undergo ischemia and the functional layer detaches.
If, instead, there is fertilization, the uterine mucosa is modified to accommodate the fertilized egg, and the secretory phase is maintained.
Maturation into ovum
Both polar bodies disintegrate at the end of Meiosis II, leaving only the ootid, which then eventually undergoes maturation into a mature ovum.
The function of forming polar bodies is to discard the extra haploid sets of chromosomes that have resulted as a consequence of meiosis.
In vitro maturation
In vitro maturation (IVM) is the technique of letting ovarian follicles mature in vitro. It can potentially be performed before an IVF. In such cases, ovarian hyperstimulation is not essential. Rather, oocytes can mature outside the body prior to IVF. Hence, no (or at least a lower dose of) gonadotropins have to be injected in the body. Immature eggs have been grown until maturation in vitro at a 10% survival rate, but the technique is not yet clinically available. With this technique, cryopreserved ovarian tissue could possibly be used to make oocytes that can directly undergo in vitro fertilization.
In vitro oogenesis
By definition, in vitro oogenesis means recapitulating mammalian oogenesis and producing fertilizable oocytes in vitro. It is a complex process involving several different cell types, precise follicular cell-oocyte reciprocal interactions, a variety of nutrients and combinations of cytokines, and precise growth factors and hormones depending on the developmental stage. In 2016, two papers published by Morohaku et al. and Hikabe et al. reported in vitro procedures that appear to reproduce these conditions efficiently, allowing the production, completely in a dish, of a relatively large number of oocytes that are fertilizable and capable of giving rise to viable offspring in the mouse. This technique could mainly benefit cancer patients, whose ovarian tissue is today cryopreserved for preservation of fertility. As an alternative to autologous transplantation, the development of culture systems that support oocyte development from the primordial follicle stage represents a valid strategy to restore fertility. Over time, many studies have been conducted with the aim of optimizing the characteristics of ovarian tissue culture systems and of better supporting the three main phases: 1) activation of primordial follicles; 2) isolation and culture of growing preantral follicles; 3) removal from the follicle environment and maturation of oocyte cumulus complexes. While complete oocyte in vitro development has been achieved in the mouse, with the production of live offspring, the goal of obtaining oocytes of sufficient quality to support embryo development has not been completely reached in higher mammals, despite decades of effort.
Ovarian aging
BRCA1 and ATM proteins are employed in repair of DNA double-strand breaks during meiosis. These proteins appear to have a critical role in resisting ovarian aging. However, homologous recombinational repair of DNA double-strand breaks mediated by BRCA1 and ATM weakens with age in oocytes of humans and other species. Women with BRCA1 mutations have lower ovarian reserves and experience earlier menopause than women without these mutations. Even in women without specific BRCA1 mutations, ovarian aging is associated with depletion of ovarian reserves leading to menopause, but at a slower rate than in those with such mutations. Since older premenopausal women ordinarily have normal progeny, their capability for meiotic recombinational repair appears to be sufficient to prevent deterioration of their germline despite the reduction in ovarian reserve. DNA damages may arise in the germline during the decades-long period in humans between early oocytogenesis and the stage of meiosis in which homologous chromosomes are effectively paired (dictyate stage). It has been suggested that such DNA damages may be removed, in large part, by mechanisms dependent on chromosome pairing, such as homologous recombination.
Oogenesis in non-mammals
Some algae and the oomycetes produce eggs in oogonia. In the brown alga Fucus, all four egg cells survive oogenesis, which is an exception to the rule that generally only one product of female meiosis survives to maturity.
In plants, oogenesis occurs inside the female gametophyte via mitosis. In many plants such as bryophytes, ferns, and gymnosperms, egg cells are formed in archegonia. In flowering plants, the female gametophyte has been reduced to an eight-celled embryo sac within the ovule inside the ovary of the flower. Oogenesis occurs within the embryo sac and leads to the formation of a single egg cell per ovule.
In Ascaris, the oocyte does not even begin meiosis until the sperm touches it, in contrast to mammals, where meiosis is completed during the estrous cycle.
In female Drosophila flies, genetic recombination occurs during meiosis. This recombination is associated with formation of DNA double-strand breaks and the repair of these breaks.
The repair process leads to crossover recombinants as well as at least three times as many noncrossover recombinants (e.g. arising by gene conversion without crossover).
| Biology and health sciences | Animal reproduction | Biology |
582702 | https://en.wikipedia.org/wiki/Quasistatic%20process | Quasistatic process | In thermodynamics, a quasi-static process, also known as a quasi-equilibrium process (from Latin quasi, meaning ‘as if’), is a thermodynamic process that happens slowly enough for the system to remain in internal physical (but not necessarily chemical) thermodynamic equilibrium. An example of this is quasi-static expansion of a mixture of hydrogen and oxygen gas, where the volume of the system changes so slowly that the pressure remains uniform throughout the system at each instant of time during the process. Such an idealized process is a succession of physical equilibrium states, characterized by infinite slowness.
Only in a quasi-static thermodynamic process can we exactly define intensive quantities (such as pressure, temperature, specific volume, specific entropy) of the system at any instant during the whole process; otherwise, since no internal equilibrium is established, different parts of the system would have different values of these quantities, so a single value per quantity may not be sufficient to represent the whole system. In other words, when an equation for a change in a state function contains P or T, it implies a quasi-static process.
Relation to reversible process
While all reversible processes are quasi-static, most authors do not require a general quasi-static process to maintain equilibrium between system and surroundings and avoid dissipation, which are defining characteristics of a reversible process. For example, quasi-static compression of a system by a piston subject to friction is irreversible; although the system is always in internal thermal equilibrium, the friction ensures the generation of dissipative entropy, which goes against the definition of reversibility. Any engineer would remember to include friction when calculating the dissipative entropy generation.
An example of a quasi-static process that is not idealizable as reversible is slow heat transfer between two bodies at two finitely different temperatures, where the heat transfer rate is controlled by a poorly conductive partition between the two bodies. In this case, no matter how slowly the process takes place, the state of the composite system consisting of the two bodies is far from equilibrium, since thermal equilibrium for this composite system requires that the two bodies be at the same temperature. Nevertheless, the entropy change for each body can be calculated using the Clausius equality for reversible heat transfer.
PV-work in various quasi-static processes
Constant pressure: isobaric processes, W = P (V2 − V1)
Constant volume: isochoric processes, W = 0
Constant temperature: isothermal processes, where P (pressure) varies with V (volume) via P V = n R T = constant, so W = n R T ln(V2 / V1)
Polytropic processes, where P V^n = constant, so W = (P1 V1 − P2 V2) / (n − 1)
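A short numerical sketch of these work expressions for an ideal gas (the function names and the example values are illustrative):

```python
import math

R = 8.314  # J/(mol K), molar gas constant

def work_isobaric(p, v1, v2):
    """W = p * (v2 - v1) for a quasi-static constant-pressure process."""
    return p * (v2 - v1)

def work_isochoric():
    """No volume change, so no PV-work is done."""
    return 0.0

def work_isothermal(n_mol, temp_k, v1, v2):
    """W = n R T ln(v2 / v1), using p = nRT/V along the path."""
    return n_mol * R * temp_k * math.log(v2 / v1)

def work_polytropic(p1, v1, p2, v2, n):
    """W = (p1 v1 - p2 v2) / (n - 1) for p V^n = constant, with n != 1."""
    return (p1 * v1 - p2 * v2) / (n - 1.0)

# Example: 1 mol of ideal gas expanding isothermally at 300 K from 10 L to 20 L
print(work_isothermal(1.0, 300.0, 0.010, 0.020))  # ~1729 J done by the gas
```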
| Physical sciences | Thermodynamics | Physics |
582770 | https://en.wikipedia.org/wiki/Particle%20number | Particle number | In thermodynamics, the particle number (symbol ) of a thermodynamic system is the number of constituent particles in that system. The particle number is a fundamental thermodynamic property which is conjugate to the chemical potential. Unlike most physical quantities, the particle number is a dimensionless quantity, specifically a countable quantity. It is an extensive property, as it is directly proportional to the size of the system under consideration and thus meaningful only for closed systems.
A constituent particle is one that cannot be broken into smaller pieces at the scale of energy k T involved in the process (where k is the Boltzmann constant and T is the temperature). For example, in a thermodynamic system consisting of a piston containing water vapour, the particle number is the number of water molecules in the system. The meaning of constituent particles, and thereby of particle numbers, is thus temperature-dependent.
Determining the particle number
The concept of particle number plays a major role in theoretical considerations. In situations where the actual particle number of a given thermodynamical system needs to be determined, mainly in chemistry, it is not practically possible to measure it directly by counting the particles. If the material is homogeneous and has a known amount of substance n expressed in moles, the particle number N can be found by the relation

N = n NA,

where NA is the Avogadro constant.
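A trivial sketch of this relation (the constant is the exact 2019 SI value of the Avogadro constant):

```python
AVOGADRO = 6.02214076e23  # mol^-1, exact in the 2019 SI

def particle_number(amount_mol: float) -> float:
    """N = n * N_A for a homogeneous sample of known amount of substance."""
    return amount_mol * AVOGADRO

print(particle_number(0.5))  # ~3.011e23 particles in half a mole
```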
Particle number density
A related intensive system parameter is the particle number density (or particle number concentration, PNC), a volumetric number density obtained by dividing the particle number of a system by its volume. This parameter is often denoted by the lower-case letter n.
In quantum mechanics
In quantum mechanical processes, the total number of particles may not be preserved. The concept is therefore generalized to the particle number operator, that is, the observable that counts the number of constituent particles. In quantum field theory, the particle number operator (see Fock state) is conjugate to the phase of the classical wave (see coherent state).
In air quality
One measure of air pollution used in air quality standards is the atmospheric concentration of particulate matter. This measure is usually expressed in μg/m3 (micrograms per cubic metre). In the current EU emission norms for cars, vans, and trucks and in the upcoming EU emission norm for non-road mobile machinery, particle number measurements and limits are defined, commonly referred to as PN, with units [#/km] or [#/kWh]. In this case, PN expresses a quantity of particles per unit distance (or work).
| Physical sciences | Thermodynamics | Physics |
582780 | https://en.wikipedia.org/wiki/Standard%20atmosphere%20%28unit%29 | Standard atmosphere (unit) | The standard atmosphere (symbol: atm) is a unit of pressure defined as 101,325 Pa. It is sometimes used as a reference pressure or standard pressure. It is approximately equal to Earth's average atmospheric pressure at sea level.
History
The standard atmosphere was originally defined as the pressure exerted by a 760 mm column of mercury at 0 °C and standard gravity (gn = 9.80665 m/s2). It was used as a reference condition for physical and chemical properties, and the definition of the centigrade temperature scale set 100 °C as the boiling point of water at this pressure. In 1954, the 10th General Conference on Weights and Measures (CGPM) adopted standard atmosphere for general use and affirmed its definition of being precisely equal to 1,013,250 dynes per square centimetre (101,325 Pa). This defined pressure in a way that is independent of the properties of any particular substance. In addition, the CGPM noted that there had been some misapprehension that the previous definition (from the 9th CGPM) "led some physicists to believe that this definition of the standard atmosphere was valid only for accurate work in thermometry."
In chemistry and in various industries, the reference pressure referred to in standard temperature and pressure was commonly 101.325 kPa (1 atm) prior to 1982, but standards have since diverged; in 1982, the International Union of Pure and Applied Chemistry recommended that for the purposes of specifying the physical properties of substances, standard pressure should be precisely 100 kPa (1 bar).
Pressure units and equivalencies
A pressure of 1 atm can also be stated as:
≈ 1.0332 kgf/cm2
≈ 10.332 m H2O
≈ 760 mmHg
≈ 29.92 inHg
≈ 406.8 in H2O
≈ 2116.2 pounds-force per square foot (lbf/ft2)
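These equivalencies can be reproduced from the pascal values of the individual units; the sketch below uses conventional conversion factors (water-column units taken at 4 °C and standard gravity) and is illustrative only:

```python
# Conversion factors from pascals to the units listed above (conventional values).
PA_PER_UNIT = {
    "kgf/cm2": 98066.5,
    "m H2O":   9806.65,
    "mmHg":    133.322387415,
    "inHg":    3386.389,
    "in H2O":  249.0889,
    "lbf/ft2": 47.880259,
}

ATM_PA = 101325.0  # 1 standard atmosphere in pascals
for unit, pa in PA_PER_UNIT.items():
    print(f"1 atm ≈ {ATM_PA / pa:.4g} {unit}")
# ≈ 1.033 kgf/cm2, 10.33 m H2O, 760 mmHg, 29.92 inHg, 406.8 in H2O, 2116 lbf/ft2
```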
The notation ata has been used to indicate an absolute pressure measured in either standard atmospheres (atm) or technical atmospheres (at).
| Physical sciences | Energy, power, force and pressure | null |
582844 | https://en.wikipedia.org/wiki/VGA%20connector | VGA connector | The Video Graphics Array (VGA) connector is a standard connector used for computer video output. Originating with the 1987 IBM PS/2 and its VGA graphics system, the 15-pin connector went on to become ubiquitous on PCs, as well as many monitors, projectors and HD television sets.
Other connectors have been used to carry VGA-compatible signals, such as mini-VGA or BNC, but "VGA connector" typically refers to this design.
Devices continue to be manufactured with VGA connectors, although newer digital interfaces such as DVI, HDMI and DisplayPort are increasingly displacing VGA, and many modern computers and other devices do not include it.
Physical design
The VGA connector is a three-row, 15-pin D-subminiature connector referred to variously as DE-15, HD-15 or, commonly but imprecisely, DB-15 (HD). DE-15 is the accurate nomenclature under the proprietary D-sub specifications: an "E" size D-sub connector, with 15 pins in three rows.
Predecessor and early variant
The standard 15-pin VGA connector was derived from the earlier DE-9 connector, which used the same "E" D-shell size (hence the "DB-9" misnomer for that connector, whose "DB" carried over to the new DE-15 connector as well; see above). Though IBM always used DE-15 connectors for their Video Graphics Array hardware, several VGA clone hardware makers initially did not. Instead, some early VGA hardware, both monitors and VGA cards, used a DE-9 connector for VGA, just like what had been in use for MDA, CGA, Hercules, and EGA. This 9-pin variant of the then-emerging de-facto standard 15-pin connector omitted several pins, which was considered acceptable, because the autodetection features supported by those pins only evolved over time, and prior to Windows 95, there was no user expectation of graphics cards and displays being fully plug and play. DE-9 VGA connectors were generally compatible with each other, and adaptors to the DE-15 standard were available. Ultimately all VGA hardware makers switched to standard DE-15 connectors, relegating the early variant to relative obscurity.
Electrical design
All VGA connectors carry analog RGBHV (red, green, blue, horizontal sync, vertical sync) video signals. Modern connectors also include VESA DDC pins, for identifying attached display devices.
In both its modern and original variants, VGA utilizes multiple scan rates, so attached devices such as monitors are multisync by necessity.
The VGA interface includes no affordances for hot swapping, the ability to connect or disconnect the output device during operation, although in practice this can be done and usually does not cause damage to the hardware or other problems. The VESA DDC specification does, however, include a standard for hot-swapping.
PS/2 signaling
In the original IBM VGA implementation, refresh rates were limited to two vertical frequencies (60 and 70 Hz), which were communicated to the monitor using combinations of different-polarity H and V sync signals.
Some pins on the connector were also different: pin 9 was keyed by plugging the female connector hole, and four pins carried the monitor ID.
With the implementation of the VESA DDC specification, several of the monitor ID pins were reassigned for use by DDC signaling, and the key pin was replaced with a +5 V DC output per the DDC spec. Devices that comply with the DDC host system standard provide +5 V, from 50 mA to 1 A.
PS/55 signaling
The IBM PS/55 Display Adapter redefined pin 9 as "+12V", which signals the monitor to turn on when the system unit is powered on.
EDID
In order to advertise display capabilities, VESA introduced a scheme redefining VGA connector pins 9, 12, and 15 as a serial bus for a Display Data Channel (DDC).
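As an illustration of how the data carried on these DDC pins is typically used, the sketch below decodes the first fields of a 128-byte EDID block. It is a hedged example: the byte layout (a fixed 8-byte header followed by a packed three-letter manufacturer ID) follows the public EDID 1.x structure, and the sample bytes are invented for illustration.

# Hedged sketch: decode the header and manufacturer ID of an EDID 1.x block.
def parse_edid(edid: bytes) -> str:
    # Full EDID 1.x blocks are 128 bytes; only the first 16 are needed here.
    assert len(edid) >= 16, "need at least the first 16 bytes"
    if edid[0:8] != bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00]):
        raise ValueError("not an EDID 1.x block")
    # Bytes 8-9: manufacturer ID, three 5-bit letters packed big-endian.
    raw = (edid[8] << 8) | edid[9]
    letters = [(raw >> shift) & 0x1F for shift in (10, 5, 0)]
    return "".join(chr(ord("A") - 1 + code) for code in letters)

# Invented sample: the fixed header plus a manufacturer ID, padded to 16 bytes.
sample = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00,
                0x10, 0xAC, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
print(parse_edid(sample))  # 0x10AC decodes to "DEL"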
Cable quality
The same VGA cable can be used with a variety of supported VGA resolutions, ranging from 320×400px @70 Hz, or 320×480px @60 Hz (12.6 MHz of signal bandwidth) to 1280×1024px (SXGA) @85 Hz (160 MHz) and up to 2048×1536px (QXGA) @85 Hz (388 MHz).
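The bandwidth figures quoted above can be roughly reproduced from the pixel rate plus a blanking allowance. A minimal Python sketch (the ~43% blanking overhead is an assumed factor chosen to fit the quoted numbers, not a value from any VGA specification):

# Hedged sketch: estimate analog VGA signal bandwidth from mode timings.
def approx_bandwidth_mhz(width, height, refresh_hz, blanking_overhead=1.43):
    # Pixel clock ≈ visible pixels per frame × refresh rate × blanking factor.
    return width * height * refresh_hz * blanking_overhead / 1e6

for mode in [(320, 400, 70), (1280, 1024, 85), (2048, 1536, 85)]:
    print(mode, f"≈ {approx_bandwidth_mhz(*mode):.0f} MHz")
# prints roughly 13, 159 and 382 MHz, close to the 12.6, 160 and 388 MHz above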
There are no standards defining the quality required for each resolution, but higher-quality cables typically contain coaxial wiring and insulation that make them thicker.
While shorter VGA cables are less likely to introduce significant signal degradation, good-quality cable should not suffer from signal crosstalk (whereby signals in one wire induce unwanted currents in adjacent wires) even at greater lengths.
Ghosting occurs when impedance mismatches cause signals to be reflected. A correctly impedance-matched cable (75 ohm) should prevent this; however, ghosting with long cables may be caused by equipment with incorrect signal termination or by passive cable splitters rather than by the cables themselves.
Alternative connectors
Some high-end monitors and video cards used multiple BNC connectors instead of a single standard VGA connector, providing a higher quality connection with less crosstalk by utilising five separate 75 ohm coaxial cables. The use of BNC RGB video cables predates VGA in other markets and industries.
Within a 15-pin connector, the red, green, and blue signals (pins 1, 2, 3) cannot be shielded from each other, so crosstalk is possible within the 15-pin interconnect. BNC prevents crosstalk by maintaining full coaxial shielding through the circular connectors, but the connectors are very large and bulky. The requirement to press and turn the plug shell to disconnect requires access space around each connector to allow grasping of each BNC plug shell. Supplementary signals such as DDC are typically not supported with BNC.
Some laptops and other portable devices in the early to mid-2000s contained a two-row mini-VGA connector that is much smaller than the three-row DE-15 connector, as well as five separate BNC connectors.
Adapters
Various adapters can be purchased to convert VGA to other connector types. One common variety is a DVI to VGA adapter, which is possible because many DVI interfaces also carry VGA-compatible analog signals. Adapting from HDMI or DisplayPort to VGA without an active converter is not possible because those connectors don't output analog signals.
For conversions to and from digital formats like HDMI or DisplayPort, a scan converter is required; for VGA output to interfaces with different signaling, more complex converters may be used. Most of them need an external power source to operate and are inherently lossy. However, many modern displays are still made with multiple inputs including VGA, in which case adapters are not necessary.
VGA can also be adapted to SCART in some cases, because the signals are electrically compatible if the correct sync rates are set by the host PC. Many modern graphics adapters can modify their signal in software, including refresh rate, sync length, polarity and number of blank lines. Particular issues include interlace support and the use of the resolution 720×576 in PAL countries. Under these restrictive conditions, a simple circuit to combine the VGA separate synchronization signals into SCART composite sync may suffice.
Extenders
A VGA extender is an electronic device that increases the signal strength from a VGA port, most often from a computer. They are often used in schools, businesses, and homes when multiple monitors are being run off one VGA port, or if the cable between the monitor and the computer will be excessively long (often pictures appear blurry or have minor artifacts if the cable runs too far without an extender). VGA extenders are sometimes called VGA boosters.
| Technology | User interface | null |
583598 | https://en.wikipedia.org/wiki/Oxygen%20cycle | Oxygen cycle | Oxygen cycle refers to the movement of oxygen through the atmosphere (air), biosphere (plants and animals) and the lithosphere (the Earth’s crust). The oxygen cycle demonstrates how free oxygen is made available in each of these regions, as well as how it is used. The oxygen cycle is the biogeochemical cycle of oxygen atoms between different oxidation states in ions, oxides, and molecules through redox reactions within and between the spheres/reservoirs of the planet Earth. The word oxygen in the literature typically refers to the most common oxygen allotrope, elemental/diatomic oxygen (O2), as it is a common product or reactant of many biogeochemical redox reactions within the cycle. Processes within the oxygen cycle are considered to be biological or geological and are evaluated as either a source (O2 production) or sink (O2 consumption).
Oxygen is one of the most common elements on Earth and represents a large portion of each main reservoir. By far the largest reservoir of Earth's oxygen is within the silicate and oxide minerals of the crust and mantle (99.5% by weight). The Earth's atmosphere, hydrosphere, and biosphere together hold less than 0.05% of the Earth's total mass of oxygen. Besides O2, additional oxygen atoms are present in various forms spread throughout the surface reservoirs in the molecules of biomass, H2O, CO2, HNO3, NO, NO2, CO, H2O2, O3, SO2, H2SO4, MgO, CaO, Al2O3, SiO2, and PO4.
Atmosphere
The atmosphere is 21% oxygen by volume, which equates to a total of roughly 34 × 10^18 mol of oxygen. Other oxygen-containing molecules in the atmosphere include ozone (O3), carbon dioxide (CO2), water vapor (H2O), and sulphur and nitrogen oxides (SO2, NO, N2O, etc.).
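That figure can be sanity-checked with a back-of-the-envelope estimate. A minimal Python sketch (the total atmospheric mass and the mean molar mass of dry air are assumed textbook values, not figures from this article):

# Hedged sketch: rough estimate of the moles of O2 in the atmosphere.
ATMOSPHERE_MASS_KG = 5.15e18      # approximate total mass of the atmosphere
MEAN_MOLAR_MASS_KG = 28.97e-3     # mean molar mass of dry air, kg/mol
O2_MOLE_FRACTION = 0.21           # oxygen fraction by volume ≈ mole fraction

total_mol_air = ATMOSPHERE_MASS_KG / MEAN_MOLAR_MASS_KG
o2_mol = O2_MOLE_FRACTION * total_mol_air
print(f"≈ {o2_mol:.1e} mol of O2")  # ≈ 3.7e19 mol, i.e. ~37 × 10^18 mol

The result is the same order of magnitude as, and within roughly 10% of, the figure quoted above.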
Biosphere
The biosphere is 22% oxygen by volume, present mainly as a component of organic molecules (CxHxNxOx) and water.
Hydrosphere
The hydrosphere is 33% oxygen by volume, present mainly as a component of water molecules, with dissolved molecules including free oxygen and carbonic acids (HxCO3).
Lithosphere
The lithosphere is 46.6% oxygen by volume, present mainly as silica minerals (SiO2) and other oxide minerals.
Sources and sinks
While there are many abiotic sources and sinks for O2, the presence of the profuse concentration of free oxygen in modern Earth's atmosphere and ocean is attributed to O2 production from the biological process of oxygenic photosynthesis in conjunction with a biological sink known as the biological pump and a geologic process of carbon burial involving plate tectonics. Biology is the main driver of O2 flux on modern Earth, and the evolution of oxygenic photosynthesis by bacteria, which is discussed as part of the Great Oxygenation Event, is thought to be directly responsible for the conditions permitting the development and existence of all complex eukaryotic metabolism.
Biological production
The main source of atmospheric free oxygen is photosynthesis, which produces sugars and free oxygen from carbon dioxide and water:
6 CO2 + 6 H2O + energy → C6H12O6 + 6 O2
Photosynthesizing organisms include the plant life of the land areas, as well as the phytoplankton of the oceans. The tiny marine cyanobacterium Prochlorococcus was discovered in 1986 and accounts for up to half of the photosynthesis of the open oceans.
Abiotic production
An additional source of atmospheric free oxygen comes from photolysis, whereby high-energy ultraviolet radiation breaks down atmospheric water and nitrous oxide into component atoms. The free hydrogen and nitrogen atoms escape into space, leaving O2 in the atmosphere:
2 H2O + energy → 4 H + O2
2 N2O + energy → 4 N + O2
Biological consumption
The main way free oxygen is lost from the atmosphere is via respiration and decay, mechanisms in which animal life and bacteria consume oxygen and release carbon dioxide.
Capacities and fluxes
The following tables offer estimates of oxygen cycle reservoir capacities and fluxes.
These numbers are based primarily on estimates from Walker, J. C. G. More recent research indicates that ocean life (marine primary production) is actually responsible for more than half the total oxygen production on Earth.
Table 2: Annual gain and loss of atmospheric oxygen (units of 10^10 kg O2 per year)
Ozone
The presence of atmospheric oxygen has led to the formation of ozone (O3) and the ozone layer within the stratosphere:
O + O2 → O3
The ozone layer is extremely important to modern life as it absorbs harmful ultraviolet radiation:
O3 + ultraviolet radiation → O2 + O
| Physical sciences | Earth science basics: General | Earth science |
583678 | https://en.wikipedia.org/wiki/Amitriptyline | Amitriptyline | Amitriptyline, sold under the brand name Elavil among others, is a tricyclic antidepressant primarily used to treat major depressive disorder, and a variety of pain syndromes such as neuropathic pain, fibromyalgia, migraine and tension headaches. Due to the frequency and prominence of side effects, amitriptyline is generally considered a second-line therapy for these indications.
The most common side effects are dry mouth, drowsiness, dizziness, constipation, and weight gain. Glaucoma, liver toxicity and abnormal heart rhythms are rare but serious side effects. Blood levels of amitriptyline vary significantly from one person to another, and amitriptyline interacts with many other medications potentially aggravating its side effects.
Amitriptyline was discovered in the late 1950s by scientists at Merck and approved by the US Food and Drug Administration (FDA) in 1961. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 87th most commonly prescribed medication in the United States, with more than 7 million prescriptions.
Medical uses
Amitriptyline is indicated for the treatment of major depressive disorder, neuropathic pain, and for the prevention of migraine and chronic tension headache. It can be used for the treatment of nocturnal enuresis in children older than 6 after other treatments have failed.
Depression
Amitriptyline is effective for depression, but it is rarely used as a first-line antidepressant due to its higher toxicity in overdose and generally poorer tolerability. It can be tried for depression as a second-line therapy, after the failure of other treatments. For treatment-resistant adolescent depression or for cancer-related depression amitriptyline is no better than placebo; however, the number of treated patients in both studies was small. It is sometimes used for the treatment of depression in Parkinson's disease, but supporting evidence for that is lacking.
Pain
Amitriptyline alleviates painful diabetic neuropathy. It is recommended by a variety of guidelines as a first or second-line treatment. It is as effective for this indication as gabapentin or pregabalin but less well tolerated. Amitriptyline is as effective at relieving pain as duloxetine. Combination treatment of amitriptyline and pregabalin offers additional pain relief for people whose pain is not adequately controlled with one medication and is usually safe. Amitriptyline in certain formulations may also induce the level of sciatic-nerve blockade needed for local anesthesia therein. Here, it has been demonstrated to be of superior potency to bupivacaine, a customary long-acting local anesthetic.
Low doses of amitriptyline moderately improve sleep disturbances and reduce pain and fatigue associated with fibromyalgia. It is recommended for fibromyalgia accompanied by depression by Association of the Scientific Medical Societies in Germany and as a second-line option for fibromyalgia, with exercise being the first line option, by European League Against Rheumatism. Combinations of amitriptyline and fluoxetine or melatonin may reduce fibromyalgia pain better than either medication alone.
There is some (low-quality) evidence that amitriptyline may reduce pain in cancer patients. It is recommended only as a second-line therapy for non-chemotherapy-induced neuropathic or mixed neuropathic pain if opioids did not provide the desired effect.
Moderate evidence exists in favor of amitriptyline use for atypical facial pain. Amitriptyline is ineffective for HIV-associated neuropathy.
In multiple sclerosis, it is frequently used to treat painful paresthesias in the arms and legs (e.g., burning sensations, pins and needles, stabbing pains) caused by damage to the pain-regulating pathways of the brain and spinal cord.
Headache
Amitriptyline is probably effective for the prevention of periodic migraine in adults. Amitriptyline is similar in efficacy to venlafaxine and topiramate but carries a higher burden of adverse effects than topiramate. For many patients, even very small doses of amitriptyline are helpful, which may allow for minimization of side effects. Amitriptyline is not significantly different from placebo when used for the prevention of migraine in children.
Amitriptyline may reduce the frequency and duration of chronic tension headache, but it is associated with worse adverse effects than mirtazapine. Overall, amitriptyline is recommended for tension headache prophylaxis, along with lifestyle advice, which should include avoidance of analgesia and caffeine.
Other indications
Amitriptyline is effective for the treatment of irritable bowel syndrome; however, because of its side effects, it should be reserved for select patients for whom other agents do not work. There is insufficient evidence to support its use for abdominal pain in children with functional gastrointestinal disorders.
Tricyclic antidepressants decrease the frequency, severity, and duration of cyclic vomiting syndrome episodes. Amitriptyline, as the most commonly used of them, is recommended as a first-line agent for its therapy.
Amitriptyline may improve pain and urgency intensity associated with bladder pain syndrome and can be used in the management of this syndrome. Amitriptyline can be used in the treatment of nocturnal enuresis in children. However, its effect is not sustained after the treatment ends. Alarm therapy gives better short- and long-term results.
In the US, amitriptyline is commonly used in children with ADHD as an adjunct to stimulant medications, without any evidence or guideline supporting this practice. Many physicians in the UK (and also in the US) commonly prescribe amitriptyline for insomnia; however, Cochrane reviewers were not able to find any randomized controlled studies that would support or refute this practice. Similarly, a major systematic review and network meta-analysis of medications for the treatment of insomnia published in 2022 found little evidence to inform the use of amitriptyline for insomnia. The well-known sedating effects of amitriptyline, however, offer some understanding of, and arguable justification for, this practice. It may function similarly to doxepin in this regard, although the evidence for doxepin is more robust. Trimipramine may be a more novel alternative given its tendency not to suppress but rather to brighten REM sleep.
Contraindications and precautions
The known contraindications of amitriptyline are:
History of myocardial infarction
History of arrhythmias, particularly any degree of heart block
Coronary artery disease
Porphyria
Severe liver disease (such as cirrhosis)
Being under six years of age
Patients who are taking monoamine oxidase inhibitors (MAOIs) or have taken them within the last 14 days
Amitriptyline should be used with caution in patients with epilepsy, impaired liver function, pheochromocytoma, urinary retention, prostate enlargement, hyperthyroidism, and pyloric stenosis.
In patients with the rare condition of shallow anterior chamber of eyeball and narrow anterior chamber angle, amitriptyline may provoke attacks of acute glaucoma due to dilation of the pupil. It may aggravate psychosis, if used for depression with schizophrenia. It may precipitate the switch to mania in those with bipolar disorder.
CYP2D6 poor metabolizers should avoid amitriptyline due to increased side effects. If it is necessary to use it, half dose is recommended. Amitriptyline can be used during pregnancy and lactation when SSRIs have been shown not to work.
Side effects
The most frequent side effects, occurring in 20% or more of users, are dry mouth, drowsiness, dizziness, constipation, and weight gain (on average 1.8 kg). Other common side effects are headache, vision problems (amblyopia, blurred vision), tachycardia, increased appetite, tremor, fatigue/asthenia/feeling slowed down, and dyspepsia.
A less common side effect of amitriptyline is urination problems (8.7%).
Amitriptyline can increase suicidal thoughts and behavior in people under the age of 24 and the US FDA required a boxed warning to be added to the prescription label. Amitriptyline-associated sexual dysfunction (occurring at a frequency of 6.9%) seems to be mostly confined to males with depression and is expressed predominantly as erectile dysfunction and low libido disorder, with lesser frequency of ejaculatory and orgasmic problems. The rate of sexual dysfunction in males treated for indications other than depression and in females is not significantly different from placebo.
Liver test abnormalities occur in 10–12% of patients on amitriptyline, but are usually mild, asymptomatic, and transient, with consistently elevated alanine transaminase in 3% of all patients. The increases of the enzymes above the 3-fold threshold of liver toxicity are uncommon, and cases of clinically apparent liver toxicity are rare; nevertheless, amitriptyline is placed in the group of antidepressants with greater risks of hepatic toxicity.
Amitriptyline prolongs the QT interval. This prolongation is relatively small at therapeutic doses but becomes severe in overdose.
Overdose
The symptoms and the treatment of an overdose are largely the same as for the other TCAs, including the presentation of serotonin syndrome and adverse cardiac effects. The British National Formulary notes that amitriptyline can be particularly dangerous in overdose, thus it and other TCAs are no longer recommended as first-line therapy for depression. The treatment of overdose is mostly supportive as no specific antidote for amitriptyline overdose is available. Activated charcoal may reduce absorption if given within 1–2 hours of ingestion. If the affected person is unconscious or has an impaired gag reflex, a nasogastric tube may be used to deliver the activated charcoal into the stomach. ECG monitoring for cardiac conduction abnormalities is essential, and if one is found, close monitoring of cardiac function is advised. Body temperature should be regulated with measures such as heating blankets if necessary. Cardiac monitoring is advised for at least five days after the overdose. Benzodiazepines are recommended to control seizures. Dialysis is of no use due to the high degree of protein binding with amitriptyline.
Interactions
Since amitriptyline and its active metabolite nortriptyline are primarily metabolized by cytochromes CYP2D6 and CYP2C19 (see its pharmacology), the inhibitors of these enzymes are expected to exhibit pharmacokinetic interactions with amitriptyline. According to the prescribing information, the interaction with CYP2D6 inhibitors may increase the plasma level of amitriptyline. However, the results in the other literature are inconsistent: the co-administration of amitriptyline with the potent CYP2D6 inhibitor paroxetine does increase the plasma levels of amitriptyline two-fold and of the main active metabolite nortriptyline 1.5-fold, but combination with the less potent CYP2D6 inhibitors thioridazine or levomepromazine does not affect the levels of amitriptyline and increases nortriptyline by about 1.5-fold. A case of clinically significant interaction with the potent CYP2D6 inhibitor terbinafine has been reported.
Fluvoxamine, a potent inhibitor of CYP2C19 and other cytochromes, increases the level of amitriptyline two-fold while slightly decreasing the level of nortriptyline. Similar changes occur with cimetidine, a moderate inhibitor of CYP2C19 and other cytochromes: amitriptyline level increases by about 70%, while nortriptyline decreases by 50%. The CYP3A4 inhibitor ketoconazole elevates amitriptyline level by about a quarter. On the other hand, cytochrome P450 inducers such as carbamazepine and St. John's Wort decrease the levels of both amitriptyline and nortriptyline.
Oral contraceptives may increase the blood level of amitriptyline by as high as 90%. Valproate moderately increases the levels of amitriptyline and nortriptyline through an unclear mechanism.
The prescribing information warns that the combination of amitriptyline with monoamine oxidase inhibitors may cause potentially lethal serotonin syndrome; however, this has been disputed. The prescribing information cautions that some patients may experience a large increase in amitriptyline concentration in the presence of topiramate. However, other literature states that there is little or no interaction: in a pharmacokinetic study topiramate only increased the level of amitriptyline by 20% and nortriptyline by 33%.
Amitriptyline counteracts the antihypertensive action of guanethidine. When given with amitriptyline, other anticholinergic agents may result in hyperpyrexia or paralytic ileus. Co-administration of amitriptyline and disulfiram is not recommended due to the potential for the development of toxic delirium. Amitriptyline causes an unusual type of interaction with the anticoagulant phenprocoumon during which great fluctuations of the prothrombin time have been observed.
Pharmacology
Pharmacodynamics
Amitriptyline inhibits serotonin transporter (SERT) and norepinephrine transporter (NET). It is metabolized to nortriptyline, a stronger norepinephrine reuptake inhibitor, further augmenting amitriptyline's effects on norepinephrine reuptake (see table in this section).
Amitriptyline additionally acts as a potent inhibitor of the serotonin 5-HT2A, 5-HT2C, the α1A-adrenergic, the histamine H1 and the M1-M5 muscarinic acetylcholine receptors (see table in this section).
Amitriptyline is a non-selective blocker of multiple ion channels, in particular, the voltage-gated sodium channels Nav1.3, Nav1.5, Nav1.6, Nav1.7, and Nav1.8, and the voltage-gated potassium channels Kv7.2/Kv7.3, Kv7.1, Kv7.1/KCNE1, and hERG.
Mechanism of action
Inhibition of serotonin and norepinephrine transporters by amitriptyline results in interference with neuronal reuptake of serotonin and norepinephrine. Since the reuptake process is important physiologically in terminating transmitting activity, this action may potentiate or prolong the activity of serotonergic and adrenergic neurons and is believed to underlie the antidepressant activity of amitriptyline.
Inhibition of norepinephrine reuptake leads to an increased concentration of norepinephrine in the posterior gray column of the spinal cord, which appears to be mostly responsible for the analgesic action of amitriptyline. The increased level of norepinephrine increases the basal activity of alpha-2 adrenergic receptors, which mediate an analgesic effect by increasing gamma-aminobutyric acid transmission among spinal interneurons. The blocking effect of amitriptyline on sodium channels may also contribute to its efficacy in pain conditions.
Pharmacokinetics
Amitriptyline is readily absorbed from the gastrointestinal tract (90–95%). Absorption is gradual with the peak concentration in blood plasma reached after about 4 hours. Extensive metabolism on the first pass through the liver leads to average bioavailability of about 50% (45%-53%). Amitriptyline is metabolized mostly by CYP2C19 into nortriptyline and by CYP2D6 leading to a variety of hydroxylated metabolites, with the principal one among them being (E)-10-hydroxynortriptyline (see metabolism scheme), and to a lesser degree, by CYP3A4.
Nortriptyline, the main active metabolite of amitriptyline, is an antidepressant in its own right. Nortriptyline reaches a 10% higher level in the blood plasma than the parent drug amitriptyline and a 40% greater area under the curve, and its action is an important part of the overall action of amitriptyline.
Another active metabolite is (E)-10-hydroxynortriptyline, which is a norepinephrine uptake inhibitor four times weaker than nortriptyline. (E)-10-hydroxynortriptyline blood level is comparable to that of nortriptyline, but its cerebrospinal fluid level, which is a close proxy for the brain concentration of a drug, is twice as high as nortriptyline's. Based on this, (E)-10-hydroxynortriptyline was suggested to significantly contribute to the antidepressant effects of amitriptyline.
Blood levels of amitriptyline and nortriptyline and pharmacokinetics of amitriptyline in general, with clearance difference of up to 10-fold, vary widely between individuals. Variability of the area under the curve in steady state is also high, which makes a slow upward titration of the dose necessary.
In the blood, amitriptyline is 96% bound to plasma proteins; nortriptyline is 93–95% bound, and (E)-10-hydroxynortriptyline is about 60% bound. Amitriptyline has an elimination half-life of 21 hours, nortriptyline of 23–31 hours, and (E)-10-hydroxynortriptyline of 8–10 hours. Within 48 hours, 12−80% of amitriptyline is eliminated in the urine, mostly as metabolites; 2% of the unchanged drug is excreted in the urine. Elimination in the feces apparently has not been studied.
Therapeutic levels of amitriptyline range from 75 to 175 ng/mL (270–631 nM), or 80–250 ng/mL of both amitriptyline and its metabolite nortriptyline.
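The equivalence between the mass and molar concentrations quoted above follows from the molar mass of amitriptyline (about 277.4 g/mol, an assumed textbook value rather than a figure stated in this article). A minimal Python sketch:

# Hedged sketch: convert a plasma level in ng/mL to nmol/L (nM).
MOLAR_MASS_G_PER_MOL = 277.4  # approximate molar mass of amitriptyline (C20H23N)

def ng_per_ml_to_nM(ng_per_ml: float) -> float:
    # ng/mL × 1000 / (g/mol) gives nmol/L, since ng/mL equals µg/L.
    return ng_per_ml * 1000.0 / MOLAR_MASS_G_PER_MOL

for level in (75, 175):
    print(f"{level} ng/mL ≈ {ng_per_ml_to_nM(level):.0f} nM")
# prints ≈ 270 nM and ≈ 631 nM, matching the therapeutic range quoted above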
Pharmacogenetics
Since amitriptyline is primarily metabolized by CYP2D6 and CYP2C19, genetic variations within the genes coding for these enzymes can affect its metabolism, leading to changes in the concentrations of the drug in the body. Increased concentrations of amitriptyline may increase the risk for side effects, including anticholinergic and nervous system adverse effects, while decreased concentrations may reduce the drug's efficacy.
Individuals can be categorized into different types of CYP2D6 or CYP2C19 metabolizers depending on which genetic variations they carry. These metabolizer types include poor, intermediate, extensive, and ultrarapid metabolizers. Most individuals (about 77–92%) are extensive metabolizers, and have "normal" metabolism of amitriptyline. Poor and intermediate metabolizers have reduced metabolism of the drug as compared to extensive metabolizers; patients with these metabolizer types may have an increased probability of experiencing side effects. Ultrarapid metabolizers use amitriptyline much faster than extensive metabolizers; patients with this metabolizer type may have a greater chance of experiencing pharmacological failure.
The Clinical Pharmacogenetics Implementation Consortium recommends avoiding amitriptyline in patients who are CYP2D6 ultrarapid or poor metabolizers, due to the risk of a lack of efficacy and side effects, respectively. The consortium also recommends considering an alternative drug not metabolized by CYP2C19 in patients who are CYP2C19 ultrarapid metabolizers. A reduction in the starting dose is recommended for patients who are CYP2D6 intermediate metabolizers and CYP2C19 poor metabolizers. If the use of amitriptyline is warranted, therapeutic drug monitoring is recommended to guide dose adjustments. The Dutch Pharmacogenetics Working Group also recommends selecting an alternative drug or monitoring plasma concentrations of amitriptyline in patients who are CYP2D6 poor or ultrarapid metabolizers, and selecting an alternative drug or reducing initial dose in patients who are CYP2D6 intermediate metabolizers.
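The dosing guidance paraphrased above can be summarised as a simple lookup. The sketch below is illustrative only: it restates this article's paraphrase of the CPIC recommendations for CYP2D6 phenotypes, is not clinical software, and the function name and structure are invented for the example.

# Hedged sketch: map a CYP2D6 metabolizer phenotype to the action described above.
CYP2D6_GUIDANCE = {
    "poor": "avoid amitriptyline; consider an alternative drug",
    "intermediate": "consider a reduced starting dose",
    "extensive": "standard dosing (normal metabolism)",
    "ultrarapid": "avoid amitriptyline; consider an alternative drug",
}

def cyp2d6_recommendation(phenotype: str) -> str:
    # Unknown phenotypes fall back to therapeutic drug monitoring, per the text.
    return CYP2D6_GUIDANCE.get(phenotype.lower(),
                               "phenotype not recognised; use therapeutic drug monitoring")

print(cyp2d6_recommendation("intermediate"))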
Chemistry
Amitriptyline is a highly lipophilic molecule having an octanol-water partition coefficient (pH 7.4) of 3.0, while the log P of the free base was reported as 4.92. Solubility of the free base amitriptyline in water is 14 mg/L. Amitriptyline is prepared by reacting dibenzosuberone with 3-(dimethylamino)propylmagnesium chloride and then heating the resulting intermediate product with hydrochloric acid to eliminate water.
History
Amitriptyline was first developed by the American pharmaceutical company Merck in the late 1950s. In 1958, Merck approached several clinical investigators proposing to conduct clinical trials of amitriptyline for schizophrenia. One of these researchers, Frank Ayd, instead suggested using amitriptyline for depression. Ayd treated 130 patients and, in 1960, reported that amitriptyline had antidepressant properties similar to those of imipramine, the only other tricyclic antidepressant known at the time. Following this, the US Food and Drug Administration approved amitriptyline for depression in 1961.
In Europe, due to a quirk of the patent law at the time allowing patents only on the chemical synthesis but not on the drug itself, Roche and Lundbeck were able to independently develop and market amitriptyline in the early 1960s.
According to research by David Healy, a historian of psychopharmacology, amitriptyline became a much bigger-selling drug than its precursor imipramine because of two factors. First, amitriptyline has a much stronger anxiolytic effect. Second, Merck conducted a marketing campaign raising clinicians' awareness of depression as a clinical entity.
Society and culture
In the 2021 film The Many Saints of Newark, amitriptyline (referred to by the brand name Elavil) is part of the plot line of the movie.
Names
Amitriptyline is the English and French generic name of the drug and its INN, BAN, and DCF, while amitriptyline hydrochloride is its USAN, USP, BANM, and JAN. Its generic name in Spanish and Italian and its DCIT are amitriptilina, in German is Amitriptylin, and in Latin is amitriptylinum. The embonate salt is known as amitriptyline embonate, which is its BANM, or as amitriptyline pamoate unofficially.
Prescription trends
Between 1998 and 2017, along with imipramine, amitriptyline was the most commonly prescribed first antidepressant for children aged 5–11 years in England. It was also the most prescribed antidepressant (along with fluoxetine) for 12- to 17-year-olds.
Research
The few randomized controlled trials investigating amitriptyline efficacy in eating disorders have been discouraging.
| Biology and health sciences | Psychiatric drugs | Health |
583832 | https://en.wikipedia.org/wiki/Safety%20razor | Safety razor | A safety razor is a shaving implement with a protective device positioned between the edge of the blade and the skin. The initial purpose of these protective devices was to reduce the level of skill needed for injury-free shaving, thereby reducing the reliance on professional barbers.
Protective devices for razors have existed since at least the 1700s: a circa 1762 invention by French cutler Jean-Jacques Perret added a protective guard to a regular straight razor. The first known occurrence of the term "safety razor" is found in a patent from 1880 for a razor in the basic contemporary configuration with a handle in which a removable blade is placed (although this form predated the patent).
Safety razors were popularized in the 1900s by King Camp Gillette's invention, the double-edge safety razor. While other safety razors of the time used blades that required stropping before use and after a time had to be honed by a cutler, Gillette's razor used a disposable blade with two sharpened edges. Gillette's invention became the predominant style of razor during and after the First World War, when the U.S. Army began issuing Gillette shaving kits to its servicemen.
Since their introduction in the 1970s, cartridge razors and disposable razors – where the blades are embedded in plastic – have become the predominant types of safety razors. In 2010, Procter & Gamble stated that almost a billion men were shaving with double-edge razors.
History
Early designs
The first step towards a safer-to-use razor was the guard razor – also called a straight safety razor – which added a protective guard to a regular straight razor. The first such razor was most likely invented by French cutler Jean-Jacques Perret circa 1762. The invention was inspired by the joiner's plane and was essentially a straight razor with its blade surrounded by a wooden sleeve. The earliest razor guards had comb-like teeth and could only be attached to one side of a razor; a reversible guard was one of the first improvements made to guard razors.
The basic form of a razor, "the cutting blade of which is at right angles with the handle, and resembles somewhat the form of a common hoe", was first described in a patent application in 1847 by William S. Henson. This also covered a "comb tooth guard or protector" which could be attached both to the hoe form and to a conventional straight razor.
The first attested use of the term "safety razor" is in a patent application for "new and useful improvements in Safety-Razors", filed in May 1880 by Frederic and Otto Kampfe of Brooklyn, New York, and issued the following month. This differed from the Henson design in distancing the blade from the handle by interposing "a hollow metallic blade-holder having a preferably removable handle and a flat plate in front, to which the blade is attached by clips and a pivoted catch, said plate having bars or teeth at its lower edge, and the lower plate having an opening, for the purpose set forth", which is to "insure a smooth bearing for the plate upon the skin, while the teeth or bars will yield sufficiently to allow the razor to sever the hair without danger of cutting the skin." The Kampfe Brothers produced razors under their own name following the 1880 patent and improved the design in a series of subsequent patents. These models were manufactured under the "Star Safety Razor" brand.
A third pivotal innovation was a safety razor using a disposable double-edge blade for which King Camp Gillette submitted a patent application in 1901 and was granted in 1904. The Gillette Safety Razor Company was awarded a contract to supply the American troops in World War I with double-edge safety razors as part of their standard field kits (delivering a total of 3.5 million razors and 32 million blades for them). The returning soldiers were permitted to keep that part of their equipment and therefore retained their new shaving habits. The subsequent consumer demand for replacement blades put the shaving industry on course toward its present form with Gillette as a dominant force. Prior to the introduction of the disposable blade, users of safety razors still needed to strop and hone the edges of their blades. These are not trivial skills (honing frequently being left to a professional) and remained a barrier to the ubiquitous adopting of the "be your own barber" ideal.
Single-edge razors
The first safety razors used a single-edge blade that was essentially a long segment of a straight razor. A flat blade that could be used alternately with this "wedge" was first illustrated in a patent issued in 1878, serving as a close prototype for the single-edge blade in its present form. New single-edge razors were developed and used side by side with double-edge razors for decades. The largest manufacturers were the American Safety Razor Company with its "Ever-Ready" series, and the Gem Cutlery Company with its "Gem" models. Although these brands of single-edge razors are no longer in production, they are readily available in antique trade, and compatible modern designs are being made. Blades for them are still being manufactured both for shaving and technical purposes.
A second popular single-edge design is the "Injector" razor developed and placed on the market by Schick Razors in the 1920s. This uses narrow blades stored in an injector device with which they are inserted directly into the razor, so that the user never needs to handle the blade. The injector blade was the first to depart from the rectangular dimensions shared by the wedge, standard single-edge, and double-edge blades. The injector, itself, was also the first device intended to reduce the risk of injury from handling blades. The Gillette blade dispenser released in 1947 had the same purpose. The narrow injector blade, as well as the form of the injector razor, also strongly influenced the corresponding details of the subsequently developed cartridge razors. Both injector blades and injector safety razors are still available on the market, from antique stock as well as modern manufacture. The injector blades have also inspired a variety of specialised blades for professional barber use, some of which have been re-adopted for shaving by modern designs.
Until the 1960s, razor blades were made of carbon steel. These were extremely prone to rusting and forced users to change blades frequently. In 1962, the British company Wilkinson Sword began to sell blades made of stainless steel, whose edge did not corrode nearly so quickly and could be used far longer. Wilkinson quickly captured U.S., British and European markets. As a result, American Safety Razor, Gillette and Schick were driven to produce stainless steel blades to compete. Today, almost all razor blades are stainless steel, although carbon steel blades remain in limited production for lower income markets. Because Gillette held a patent on stainless blades but had not acted on it, the company was accused of exploiting customers by forcing them to buy the rust-prone blade.
The risk of injury from handling razor blades was further reduced in 1970 when Wilkinson released its "Bonded Shaving System", which embedded a single blade in a disposable polymer plastic cartridge. A flurry of competing models soon followed with everything from one to six blades, with many cartridge blade razors also having disposable handles. Cartridge blade razors are sometimes considered to be a generic category of their own and not a variety of safety razor. The similarities between single-edge cartridge blade razors and the classic injector razor do, however, provide equal justification for treating both categories contiguously.
In 1974, Bic introduced the disposable razor. Instead of being a razor with a disposable blade, the entire razor was manufactured to be disposable. Gillette's response was the Good News disposable razor which was launched on the US market in 1976 before the Bic disposable was made available on that market. Shortly thereafter, Gillette modified the Good News construction to add an aloe strip above the razor, resulting in the Good News Plus. The purported benefit of the aloe strip is to ease any discomfort felt on the face while shaving.
In direct response to Wilkinson's Bonded cartridge, during the following year Gillette introduced the twin-blade Trac II. They claimed that research showed the tandem action of the two blades to give a closer shave than a single blade, because of a "hysteresis" effect: in addition to cutting the hair, the first blade is also supposed to pull it out of the follicle, so that the second blade can cut it again before it fully retracts. The extent to which this is of practical consequence has, however, been questioned.
Recent changes
Gillette introduced the first triple-blade cartridge razor, the Mach3, in 1998, and later upgraded the Sensor cartridge to the Sensor3 by adding a third blade. Schick/Wilkinson responded to the Mach3 with the Quattro, the first four-blade cartridge razor. These innovations are marketed with the message that they help consumers achieve the best shave as easily as possible. Another impetus for the sale of multiple-blade cartridges is that they have high profit margins. With manufacturers frequently updating their shaving systems, consumers can become locked into buying their proprietary cartridges, for as long as the manufacturer continues to make them. Subsequent to introducing the higher-priced Mach3 in 1998, Gillette's blade sales realized a 50% increase, and profits increased in an otherwise mature market.
The marketing of increasing numbers of blades in a cartridge has been parodied since the 1970s. The debut episode of Saturday Night Live in 1975 included a parody advertisement for the Triple Trac Razor, shortly after the first two-blade cartridge for men's razors was advertised. Mad magazine announced the "Trac 76", arranged as a chain of cartridges with a handle on each end. In the early 1990s, the (Australian) Late Show ran a skit on a "Gillette 3000" with 16 blades and 75 lubricating strips, supposedly developed in conjunction with NASA scientists - "The first blade distracts the hair...". The 16 January 1999 episode of Mad TV ran a parody commercial advertising the "Spishak Mach 20" with blades that variously "cut(s) away that pesky second layer of skin" and "gently smooth(s) out the jawbone" culminating in a blade that "destroys the part of the brain responsible for hair growth." In 2004, a satirical article in The Onion entitled "Fuck Everything, We're Doing Five Blades" predicted the release of five-blade cartridges, two years before their commercial introduction. South Korean manufacturer Dorco released their own six-blade cartridge in 2012, and later released a seven-blade cartridge.
Gillette has also produced powered variants of the Mach3 (M3Power, M3Power Nitro) and Fusion (Fusion Power and Fusion Power Phantom) razors. These razors accept a single AAA battery which is used to produce vibration in the razor; this action was purported to raise hair up and away from the skin prior to being cut. These claims were ruled in an American court as "unsubstantiated and inaccurate".
Design
Safety razors originally had an edge protected by a comb patterned on various types of protective guards that had been affixed to open-blade straight razors during the preceding decades.
Lifespan
To maintain their cutting action, razor blades can be stropped using an old strip of denim. Twinplex also sold a blade stropper which was used to extend the life of vintage carbon steel blades.
Safety razor blades are usually made of razor steel which is a low chromium stainless steel which can be made extremely sharp, but corrodes relatively easily. Safety razor blade life may be extended by drying the blades after use. Salts from human skin also tend to corrode the blades, but washing and carefully drying them can greatly extend their life.
Disposable safety razor blades can be sharpened using various methods. There are commercial devices intended for this duty (Razormate, RazorPit, Blade Buddy, etc.).
Variants
Double-edged razors
Double-edge (DE) safety razors remain a popular alternative to proprietary cartridge razors, and usually offer significantly lower total cost of ownership since they are not marketed under the "razor and blades business model". Double-edge razors are still designed and produced in many countries, and in 2010, Procter & Gamble estimated that almost a billion men were shaving with double-edge razors. Better known manufacturers include Edwin Jagger, Feather, iKon, Lord, Mühle, Merkur, and Weishi, with several of them producing razors that are marketed under other brands. Often different models of razors within a brand share the same razor-head designs, differing primarily in the color, length, texture, material(s), and weight of the handles. Three-piece razors generally have interchangeable handles, and some companies specialize in manufacturing custom or high-end replacement handles. The butterfly safety razor utilizes a twist-to-open mechanism head to make changing the blade easy and convenient. Variations in razor head designs include straight safety bar (SB), open comb (OC, toothed) bar, adjustable razors, and slant bar razors. The slant bar was a common design in Germany in which the blade is slightly angled and curved along its length to make for a slicing action and a more rigid cutting edge.
A primary functional difference between double-edge razors and modern cartridge razors is that DE razor heads come in a wide array of aggression levels (where greater aggression commonly means less protection from the blade).
| Biology and health sciences | Hygiene products | Health |
583901 | https://en.wikipedia.org/wiki/Combined%20cycle%20power%20plant | Combined cycle power plant | A combined cycle power plant is an assembly of heat engines that work in tandem from the same source of heat, converting it into mechanical energy. On land, when used to make electricity the most common type is called a combined cycle gas turbine (CCGT) plant, which is a kind of gas-fired power plant. The same principle is also used for marine propulsion, where it is called a combined gas and steam (COGAS) plant. Combining two or more thermodynamic cycles improves overall efficiency, which reduces fuel costs.
The principle is that after completing its cycle in the first engine, the working fluid (the exhaust) is still hot enough that a second subsequent heat engine can extract energy from the heat in the exhaust. Usually the heat passes through a heat exchanger so that the two engines can use different working fluids.
By generating power from multiple streams of work, the overall efficiency can be increased by 50–60%; that is, from an overall system efficiency of, say, 34% for a simple cycle to as much as 64% net, under specified conditions, for a combined cycle.
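A short worked equation makes the gain concrete. For an idealised arrangement in which the bottoming cycle is driven entirely by the heat rejected by the topping cycle, the combined efficiency is (a standard textbook relation, not a formula stated in this article):
\eta_{\mathrm{comb}} = \eta_{1} + \eta_{2} - \eta_{1}\,\eta_{2}
Two stages of roughly 40% each would then give \eta_{\mathrm{comb}} = 0.40 + 0.40 - 0.16 = 0.64, i.e. about 64%, consistent with the figures quoted above.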
Historical cycles
Historically successful combined cycles have used mercury vapour turbines, magnetohydrodynamic generators and molten carbonate fuel cells, with steam plants for the low temperature "bottoming" cycle. Very low temperature bottoming cycles have been too costly due to the very large sizes of equipment needed to handle the large mass flows and small temperature differences. However, in cold climates it is common to sell hot power plant water for hot water and space heating. Vacuum-insulated piping can let this utility reach as far as 90 km. The approach is called "combined heat and power" (CHP).
In stationary and marine power plants, a widely used combined cycle has a large gas turbine (operating by the Brayton cycle). The turbine's hot exhaust powers a steam power plant (operating by the Rankine cycle). This is a combined cycle gas turbine (CCGT) plant. These achieve a best-of-class real (see below) thermal efficiency of around 64% in base-load operation. In contrast, a single cycle steam power plant is limited to efficiencies from 35 to 42%. Many new power plants utilize CCGTs. Stationary CCGTs burn natural gas or synthesis gas from coal. Ships burn fuel oil.
Multiple stage turbine or steam cycles can also be used, but CCGT plants have advantages for both electricity generation and marine power. The gas turbine cycle can often start very quickly, which gives immediate power. This avoids the need for separate expensive peaker plants, or lets a ship maneuver. Over time the secondary steam cycle will warm up, improving fuel efficiency and providing further power.
In November 2013, the Fraunhofer Institute for Solar Energy Systems ISE assessed the levelised cost of energy for newly built power plants in the German electricity sector. They gave costs of between €78 and €100/MWh for CCGT plants powered by natural gas. In addition, the capital cost of combined cycle power is relatively low, at around $1000/kW, making it one of the cheapest types of generation to install.
Basic combined cycle
The thermodynamic cycle of the basic combined cycle consists of two power plant cycles. One is the Joule or Brayton cycle which is a gas turbine cycle and the other is the Rankine cycle which is a steam turbine cycle. The cycle 1-2-3-4-1 which is the gas turbine power plant cycle is the topping cycle. It depicts the heat and work transfer process taking place in the high temperature region.
The cycle a-b-c-d-e-f-a, which is the Rankine steam cycle, takes place at a lower temperature and is known as the bottoming cycle. Transfer of heat energy from high temperature exhaust gas to water and steam takes place in a waste heat recovery boiler in the bottoming cycle. During the constant pressure process 4-1, the exhaust gases from the gas turbine reject heat. The feed water, wet steam and superheated steam absorb some of this heat in the processes a-b, b-c and c-d.
Steam generators
The steam power plant takes its input heat from the high temperature exhaust gases from a gas turbine power plant. The steam thus generated can be used to drive a steam turbine. The waste heat recovery boiler (WHRB) has three sections: economiser, evaporator and superheater.
Cheng cycle
The Cheng cycle is a simplified form of combined cycle where the steam turbine is eliminated by injecting steam directly into the combustion turbine. This has been used since the mid 1970s and allows recovery of waste heat with less total complexity, but at the loss of the additional power and redundancy of a true combined cycle system. It has no additional steam turbine or generator, and therefore it cannot be used as a backup or supplementary power. It is named after American professor D. Y. Cheng who patented the design in 1976.
Design principles
The efficiency of a heat engine, the fraction of input heat energy that can be converted to useful work, is limited by the temperature difference between the heat entering the engine and the exhaust heat leaving the engine.
In a thermal power station, water is the working medium. High pressure steam requires strong, bulky components. High temperatures require expensive alloys made from nickel or cobalt, rather than inexpensive steel. These alloys limit practical steam temperatures to 655 °C while the lower temperature of a steam plant is fixed by the temperature of the cooling water. With these limits, a steam plant has a fixed upper efficiency of 35–42%.
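These temperature limits translate into an efficiency ceiling via the Carnot relation (a standard result; the 655 °C ≈ 928 K figure is taken from the steam limit above, while the cooling-water temperature of about 25 °C ≈ 298 K is assumed here for illustration):
\eta_{\max} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}} \approx 1 - \frac{298\ \mathrm{K}}{928\ \mathrm{K}} \approx 0.68
Real steam plants fall well short of this ideal limit, which is consistent with the 35–42% range quoted above.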
An open circuit gas turbine cycle has a compressor, a combustor and a turbine. For gas turbines the amount of metal that must withstand the high temperatures and pressures is small, and lower quantities of expensive materials can be used. In this type of cycle, the input temperature to the turbine (the firing temperature), is relatively high (900 to 1,400 °C). The output temperature of the flue gas is also high (450 to 650 °C). This is therefore high enough to provide heat for a second cycle which uses steam as the working fluid (a Rankine cycle).
In a combined cycle power plant, the heat of the gas turbine's exhaust is used to generate steam by passing it through a heat recovery steam generator (HRSG) with a live steam temperature between 420 and 580 °C. The condenser of the Rankine cycle is usually cooled by water from a lake, river, sea or cooling towers. This temperature can be as low as 15 °C.
Typical size
Plant size is important in the cost of the plant. The larger plant sizes benefit from economies of scale (lower initial cost per kilowatt) and improved efficiency.
For large-scale power generation, a typical set would be a 270 MW primary gas turbine coupled to a 130 MW secondary steam turbine, giving a total output of 400 MW. A typical power station might consist of between 1 and 6 such sets.
Gas turbines for large-scale power generation are manufactured by at least four separate groups – General Electric, Siemens, Mitsubishi-Hitachi, and Ansaldo Energia. These groups are also developing, testing and/or marketing gas turbine sizes in excess of 300 MW (for 60 Hz applications) and 400 MW (for 50 Hz applications). Combined cycle units are made up of one or more such gas turbines, each with a waste heat steam generator arranged to supply steam to a single or multiple steam turbines, thus forming a combined cycle block or unit. Combined cycle block sizes offered by three major manufacturers (Alstom, General Electric and Siemens) can range anywhere from 50 MW to well over 1300 MW with costs approaching $670/kW.
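For a rough sense of scale, the per-kilowatt figures above translate directly into total plant cost. A minimal Python sketch (purely illustrative arithmetic using the numbers quoted in this section):

# Hedged sketch: approximate capital cost of a combined cycle block.
def block_capital_cost_usd(block_mw: float, usd_per_kw: float) -> float:
    # Convert MW to kW, then multiply by the specific cost.
    return block_mw * 1000.0 * usd_per_kw

# A typical 400 MW set (270 MW gas turbine + 130 MW steam turbine) at ~$670/kW.
print(f"${block_capital_cost_usd(400, 670) / 1e6:.0f} million")  # ≈ $268 million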
Unfired boiler
The heat recovery boiler is item 5 in the COGAS figure shown above. Hot gas turbine exhaust enters the super heater, then passes through the evaporator and finally through the economiser section as it flows out from the boiler. Feed water comes in through the economizer and then exits after having attained saturation temperature in the water or steam circuit. Finally it flows through the evaporator and super heater. If the temperature of the gases entering the heat recovery boiler is higher, then the temperature of the exiting gases is also high.
Dual pressure boiler
In order to remove the maximum amount of heat from the gases exiting the high temperature cycle, a dual pressure boiler is often employed. It has two water/steam drums. The low-pressure drum is connected to the low-pressure economizer or evaporator. The low-pressure steam is generated in the low temperature zone of the turbine exhaust gases. The low-pressure steam is supplied to the low-temperature turbine. A super heater can be provided in the low-pressure circuit.
Some part of the feed water from the low-pressure zone is transferred to the high-pressure economizer by a booster pump. This economizer heats up the water to its saturation temperature. This saturated water goes through the high-temperature zone of the boiler and is supplied to the high-pressure turbine.
Supplementary firing
The HRSG can be designed to burn supplementary fuel after the gas turbine. Supplementary burners are also called duct burners. Duct burning is possible because the turbine exhaust gas (flue gas) still contains some oxygen. Temperature limits at the gas turbine inlet force the turbine to use excess air, above the optimal stoichiometric ratio to burn the fuel. Often in gas turbine designs part of the compressed air flow bypasses the burner in order to cool the turbine blades. The turbine exhaust is already hot, so a regenerative air preheater is not required as in a conventional steam plant. However, a fresh air fan blowing directly into the duct permits a duct-burning steam plant to operate even when the gas turbine cannot.
Without supplementary firing, the thermal efficiency of a combined cycle power plant is higher. But more flexible plant operations make a marine CCGT safer by permitting a ship to operate with equipment failures. A flexible stationary plant can make more money. Duct burning raises the flue temperature, which increases the quantity or temperature of the steam (e.g. to 84 bar and 525 °C). This improves the efficiency of the steam cycle. Supplementary firing lets the plant respond to fluctuations of electrical load, because duct burners can have very good efficiency with partial loads. It can enable higher steam production to compensate for the failure of another unit. Also, coal can be burned in the steam generator as an economical supplementary fuel.
Supplementary firing can raise exhaust temperatures from 600 °C (GT exhaust) to 800 or even 1000 °C. Supplemental firing does not raise the efficiency of most combined cycles. For single boilers it can raise the efficiency if fired to 700–750 °C; for multiple boilers however, the flexibility of the plant should be the major attraction.
"Maximum supplementary firing" is the condition when the maximum fuel is fired with the oxygen available in the gas turbine exhaust.
Combined cycle advanced Rankine subatmospheric reheating
Fuel for combined cycle power plants
Combined cycle plants are usually powered by natural gas, although fuel oil, synthesis gas or other fuels can be used. The supplementary fuel may be natural gas, fuel oil, or coal. Biofuels can also be used. Integrated solar combined cycle power stations combine the energy harvested from solar radiation with another fuel to cut fuel costs and environmental impact (See: ISCC section). Many next generation nuclear power plants can use the higher temperature range of a Brayton top cycle, as well as the increase in thermal efficiency offered by a Rankine bottoming cycle.
Where the extension of a gas pipeline is impractical or cannot be economically justified, electricity needs in remote areas can be met with small-scale combined cycle plants using renewable fuels. Instead of natural gas, these gasify and burn agricultural and forestry waste, which is often readily available in rural areas.
Managing low-grade fuels in turbines
Gas turbines burn mainly natural gas and light oil. Crude oil, residual, and some distillates contain corrosive components and as such require fuel treatment equipment. In addition, ash deposits from these fuels result in gas turbine deratings of up to 15%. They may still be economically attractive fuels however, particularly in combined-cycle plants.
Sodium and potassium are removed from residual, crude and heavy distillates by a water washing procedure. A simpler and less expensive purification system will do the same job for light crude and light distillates. A magnesium additive system may also be needed to reduce the corrosive effects if vanadium is present. Fuels requiring such treatment must have a separate fuel-treatment plant and a system of accurate fuel monitoring to assure reliable, low-maintenance operation of gas turbines.
Hydrogen
Xcel Energy is going to build two natural gas power plants in the Midwestern United States that can mix 30% hydrogen with the natural gas. Intermountain Power Plant is being retrofitted to a natural gas/hydrogen power plant that can run on 30% hydrogen as well, and is scheduled to run on pure hydrogen by 2045. However others think low-carbon hydrogen should be used for things which are harder to decarbonize, such as making fertilizer, so there may not be enough for electricity generation.
Configuration
Combined-cycle systems can have single-shaft or multi-shaft configurations. Also, there are several configurations of steam systems.
The most fuel-efficient power generation cycles use an unfired heat recovery steam generator (HRSG) with modular pre-engineered components. These unfired steam cycles are also the lowest in initial cost, and they are often part of a single shaft system that is installed as a unit.
Supplementary-fired and multishaft combined-cycle systems are usually selected for specific fuels, applications or situations. For example, cogeneration combined-cycle systems sometimes need more heat, or higher temperatures, and electricity is a lower priority. Multishaft systems with supplementary firing can provide a wider range of temperatures or heat to electric power. Systems burning low quality fuels such as brown coal or peat might use relatively expensive closed-cycle helium turbines as the topping cycle to avoid even more expensive fuel processing and gasification that would be needed by a conventional gas turbine.
A typical single-shaft system has one gas turbine, one steam turbine, one generator and one heat recovery steam generator (HRSG). The gas turbine and steam turbine are both coupled in tandem to a single electrical generator on a single shaft. This arrangement is simpler to operate, smaller, with a lower startup cost.
Single-shaft arrangements can have less flexibility and reliability than multi-shaft systems. With some expense, there are ways to add operational flexibility. Most often, the operator wants to run the gas turbine as a peaking plant; in such plants, the steam turbine's shaft can be disconnected with a synchro-self-shifting (SSS) clutch, for start-up or for simple-cycle operation of the gas turbine. A less common set of options enables more heat output or standalone operation of the steam turbine to increase reliability: duct burning, perhaps with a fresh-air blower in the duct, and a clutch on the gas turbine side of the shaft.
A multi-shaft system usually has only one steam system for up to three gas turbines. Having only one large steam turbine and heat sink has economies of scale and can have lower cost operations and maintenance. A larger steam turbine can also use higher pressures, for a more efficient steam cycle. However, a multi-shaft system is about 5% higher in initial cost.
The overall plant size and the associated number of gas turbines required can also determine which type of plant is more economical. A collection of single shaft combined cycle power plants can be more costly to operate and maintain, because there are more pieces of equipment. However, it can save interest costs by letting a business add plant capacity as it is needed.
Multiple-pressure reheat steam cycles are applied to combined-cycle systems with gas turbines with exhaust gas temperatures near 600 °C. Single- and multiple-pressure non-reheat steam cycles are applied to combined-cycle systems with gas turbines that have exhaust gas temperatures of 540 °C or less. Selection of the steam cycle for a specific application is determined by an economic evaluation that considers a plant's installed cost, fuel cost and quality, duty cycle, and the costs of interest, business risks, and operations and maintenance.
Efficiency
By combining both gas and steam cycles, high input temperatures and low output temperatures can be achieved.
Because both cycles are powered by the same fuel source, their efficiencies compound: heat rejected by the gas turbine becomes the heat input of the steam cycle.
So, a combined cycle plant has a thermodynamic cycle that operates between the gas-turbine's high firing temperature and the waste heat temperature from the condensers of the steam cycle.
This large range means that the Carnot efficiency of the cycle is high.
The actual efficiency, while lower than the Carnot efficiency, is still higher than that of either plant on its own.
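A minimal numeric sketch of this reasoning follows. The gas-turbine and steam-cycle efficiencies and the firing temperature below are illustrative assumptions, not values from this article; only the 15 °C condenser temperature comes from the text above.

eta_gt = 0.40                              # assumed gas-turbine (Brayton) efficiency
eta_st = 0.35                              # assumed steam-cycle (Rankine) efficiency on the recovered heat
eta_cc = eta_gt + eta_st * (1.0 - eta_gt)  # assumes the steam cycle runs on the heat rejected by the gas turbine
print(eta_cc)                              # 0.61, i.e. about 61%, in line with the figures quoted below

T_hot = 1500.0 + 273.15    # K, assumed gas-turbine firing temperature (~1500 °C)
T_cold = 15.0 + 273.15     # K, condenser cooling temperature mentioned above
print(1.0 - T_cold / T_hot)  # ~0.84, the Carnot limit for this temperature range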
The electric efficiency of a combined cycle power station, if calculated as electric energy produced as a percentage of the lower heating value of the fuel consumed, can be over 60% when operating new, i.e. unaged, and at continuous output which are ideal conditions.
As with single cycle thermal units, combined cycle units may also deliver low temperature heat energy for industrial processes, district heating and other uses. This is called cogeneration and such power plants are often referred to as a combined heat and power (CHP) plant.
In general, combined cycle efficiencies in service are over 50% on a lower heating value and Gross Output basis.
Most combined cycle units, especially the larger units, have peak, steady-state efficiencies on the LHV basis of 55 to 59%.
A limitation of combined cycles is that efficiency is reduced when not running at continuous output. During start-up, the steam (bottoming) cycle takes time to come online, so efficiency is initially much lower until it is running, which can take an hour or more.
Fuel heating value
Heat engine efficiency can be based on the fuel Higher Heating Value (HHV), including latent heat of vaporisation that would be recuperated in condensing boilers, or the Lower Heating Value (LHV), excluding it.
The HHV of methane is about 11% higher than its LHV.
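As a hedged worked example (the heating values below are approximate textbook figures for methane, not taken from this article), converting an efficiency quoted on the LHV basis to the HHV basis only requires the ratio of the two heating values:

LHV = 50.0           # MJ/kg, approximate lower heating value of methane (assumed)
HHV = 55.5           # MJ/kg, approximate higher heating value of methane (~11% higher)
eta_lhv = 0.60       # an efficiency quoted on the LHV basis
eta_hhv = eta_lhv * LHV / HHV
print(round(eta_hhv, 3))   # ~0.54, matching the roughly 60% LHV / 54% HHV pairing cited later in the Competition section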
Boosting efficiency
Efficiency of the turbine is increased when combustion can run hotter, so the working fluid expands more. Therefore, efficiency is limited by whether the first stage of turbine blades can survive higher temperatures. Cooling and materials research are continuing. A common technique, adopted from aircraft, is to pressurise hot-stage turbine blades with coolant. This is also bled-off in proprietary ways to improve the aerodynamic efficiencies of the turbine blades. Different vendors have experimented with different coolants. Air is common but steam is increasingly used. Some vendors might now utilize single-crystal turbine blades in the hot section, a technique already common in military aircraft engines.
The efficiency of CCGT and GT can also be boosted by pre-cooling combustion air. This increases its density, also increasing the expansion ratio of the turbine. This is practised in hot climates and also has the effect of increasing power output. It is achieved by evaporative cooling of water using a moist matrix placed in the turbine's inlet, or by using ice storage air conditioning. The latter has the advantage of greater improvements due to the lower temperatures available. Furthermore, ice storage can be used as a means of load control or load shifting, since ice can be made during periods of low power demand and, potentially in the future, during periods of high availability of other resources such as renewables.
Combustion technology is a proprietary but very active area of research, because fuels, gasification and carburation all affect fuel efficiency. A typical focus is to combine aerodynamic and chemical computer simulations to find combustor designs that assure complete fuel burn up, yet minimize both pollution and dilution of the hot exhaust gases. Some combustors inject other materials, such as air or steam, to reduce pollution by reducing the formation of nitrates and ozone.
Another active area of research is the steam generator for the Rankine cycle. Typical plants already use a two-stage steam turbine, reheating the steam between the two stages. When the heat-exchangers' thermal conductivity can be improved, efficiency improves. As in nuclear reactors, tubes might be made thinner (e.g. from stronger or more corrosion-resistant steel). Another approach might use silicon carbide sandwiches, which do not corrode.
There is also some development of modified Rankine cycles. Two promising areas are ammonia/water mixtures, and turbines that utilize supercritical carbon dioxide.
Modern CCGT plants also need software that is precisely tuned to every choice of fuel, equipment, temperature, humidity and pressure. When a plant is improved, the software becomes a moving target. CCGT software is also expensive to test, because actual time is limited on the multimillion-dollar prototypes of new CCGT plants. Testing usually simulates unusual fuels and conditions, but validates the simulations with selected data points measured on actual equipment.
Competition
There is active competition to reach higher efficiencies.
Research aimed at raising turbine inlet temperatures has led to even more efficient combined cycles.
Nearly 60% LHV efficiency (54% HHV efficiency) was reached in the Baglan Bay power station, using a GE H-technology gas turbine with a NEM 3 pressure reheat boiler, using steam from the heat recovery steam generator (HRSG) to cool the turbine blades.
In May 2011 Siemens AG announced they had achieved a 60.75% efficiency with a 578 megawatt SGT5-8000H gas turbine at the Irsching Power Station.
Chubu Electric's 405 MW 7HA plant in Nishi-ku, Nagoya, is expected to have 62% gross combined cycle efficiency.
On April 28, 2016, the plant run by Électricité de France in Bouchain was certified by Guinness World Records as the world's most efficient combined cycle power plant at 62.22%. It uses a General Electric 9HA, which claimed 41.5% simple cycle efficiency and 61.4% in combined cycle mode, with a gas turbine output of 397 MW to 470 MW and a combined output of 592 MW to 701 MW. Its firing temperature is between , and its overall pressure ratio is 21.8 to 1.
In December 2016, Mitsubishi claimed a LHV efficiency of greater than 63% for some members of its J Series turbines.
In December 2017, GE claimed 64% in its latest 826 MW HA plant, up from 63.7%. They said this was due to advances in additive manufacturing and combustion. Their press release said that they planned to achieve 65% by the early 2020s.
Integrated gasification combined cycle (IGCC)
An integrated gasification combined cycle, or IGCC, is a power plant using synthesis gas (syngas). Syngas can be produced from a number of sources, including coal and biomass. The system uses gas and steam turbines, the steam turbine operating from the heat left over from the gas turbine. This process can raise electricity generation efficiency to around 50%.
Integrated solar combined cycle (ISCC)
An Integrated Solar Combined Cycle (ISCC) is a hybrid technology in which a solar thermal field is integrated within a combined cycle plant. In ISCC plants, solar energy is used as an auxiliary heat supply, supporting the steam cycle, which results in increased generation capacity or a reduction of fossil fuel use.
Thermodynamic benefits are that daily steam turbine startup losses are eliminated.
Major factors limiting the load output of a combined cycle power plant are the allowed pressure and temperature transients of the steam turbine, the heat recovery steam generator's waiting times to establish the required steam chemistry conditions, and the warm-up times for the balance of plant and the main piping system. Those limitations also influence the fast start-up capability of the gas turbine by requiring waiting times, and waiting gas turbines consume gas. The solar component, if the plant is started after sunrise, or even before if there is heat storage, allows the steam to be preheated to the required conditions. That is, the plant is started faster and with less consumption of gas before achieving operating conditions. Economic benefits are that the solar components' costs are 25% to 75% of those of a Solar Energy Generating Systems plant with the same collector surface.
The first such system to come online was the Archimede combined cycle power plant, Italy in 2010, followed by Martin Next Generation Solar Energy Center in Florida, and in 2011 by the Kuraymat ISCC Power Plant in Egypt, Yazd power plant in Iran, Hassi R'mel in Algeria, Ain Beni Mathar in Morocco. In Australia CS Energy's Kogan Creek and Macquarie Generation's Liddell Power Station started construction of a solar Fresnel boost section (44 MW and 9 MW), but the projects never became active.
Bottoming cycles
In most successful combined cycles, the bottoming cycle for power is a conventional steam Rankine cycle.
It is already common in cold climates (such as Finland) to drive community heating systems from a steam power plant's condenser heat. Such cogeneration systems can yield theoretical efficiencies above 95%.
Bottoming cycles producing electricity from the steam condenser's heat exhaust are theoretically possible, but conventional turbines are uneconomically large. The small temperature differences between condensing steam and outside air or water require very large movements of mass to drive the turbines.
Although not reduced to practice, a vortex of air can concentrate the mass flows for a bottoming cycle. Theoretical studies of the Vortex engine show that if built at scale it is an economical bottoming cycle for a large steam Rankine cycle power plant.
Combined cycle hydrogen power plant
A combined cycle hydrogen power plant is a power plant that uses hydrogen as fuel in a combined cycle. A green hydrogen combined cycle power plant is only about 40% efficient overall, after electrolysis and reburning for electricity, but is considered a viable option for longer-term energy storage compared with battery storage. Natural gas power plants could be converted to hydrogen power plants with minimal renovation, or could burn a mix of natural gas and hydrogen.
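The "about 40%" figure can be reproduced with a back-of-the-envelope round-trip calculation. The component efficiencies below are assumptions for illustration, not values from this article:

eta_electrolysis = 0.70   # assumed efficiency of turning electricity into hydrogen
eta_ccgt = 0.60           # assumed efficiency of burning the hydrogen in a combined cycle
round_trip = eta_electrolysis * eta_ccgt
print(round(round_trip, 2))   # 0.42, i.e. roughly 40% electricity-to-electricity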
Retrofitting natural gas power plants
Natural gas power plants could be designed with a transition to hydrogen in mind by having wider inlet pipes to the burner to increase flow rates, because hydrogen is less dense than natural gas, and by using suitable materials, because hydrogen can cause hydrogen embrittlement.
Limitations
Current electrolysis plants are not capable of providing the scale of hydrogen needed to supply a large power plant. On-site electrolysis may be needed, and storing large amounts of hydrogen could take up a lot of space if it is stored as compressed rather than liquid hydrogen. Hydrogen embrittlement could occur in pipelines, but 316L stainless steel pipelines can handle compressed hydrogen above 50 bar, the pressure at which compressed natural gas is piped, or wider pipelines could be built for hydrogen. Polyethylene or fiber-reinforced polymer pipelines could also be used.
Nitrogen oxides
When hydrogen is burned as a fuel, no carbon dioxide is produced, but more nitrogen oxides (NOx) are produced because of hydrogen's higher flame temperature. A selective catalytic reduction process could be implemented to break the NOx down into nitrogen and water. The exhaust from burning hydrogen is water vapor, which could be used as a diluent to lower the high flame temperature that creates the nitrogen oxides.
Corrosion
Corrosion of the turbine from the water vapor from the hydrogen flame could reduce plant life or parts may need to be replaced more often.
Fuel handling
Hydrogen is the smallest and lightest element and can leak more easily at connection points and joints. Hydrogen diffuses quickly, which mitigates the risk of explosions. A hydrogen flame is also not as visible as a standard flame.
Transition to a renewable power grid
Wind and solar power are variable renewable energy sources that are not as consistent as base load generation. Hydrogen could help integrate renewables by capturing excess energy through electrolysis when they produce too much, and filling the gaps with that energy when they produce less.
| Technology | Power generation | null |
584096 | https://en.wikipedia.org/wiki/Myriapoda | Myriapoda | Myriapods () are the members of subphylum Myriapoda, containing arthropods such as millipedes and centipedes. The group contains about 13,000 species, all of them terrestrial.
Although molecular evidence and similar fossils suggest a diversification in the Cambrian Period, the oldest known fossil record of myriapods dates to between the Late Silurian and Early Devonian, with Pneumodesmus preserving the earliest known evidence of air-breathing on land. Other early myriapod fossil species from a similar time period include Kampecaris obanensis and Archidesmus sp. The phylogenetic classification of myriapods is still debated.
The scientific study of myriapods is myriapodology, and those who study myriapods are myriapodologists.
Anatomy
Myriapods have a single pair of antennae and, in most cases, simple eyes. Exceptions are the two classes of symphylans and pauropods, the millipede order Polydesmida and the centipede order Geophilomorpha, which are all eyeless. The house centipedes (Scutigera) on the other hand, have large and well-developed compound eyes. The mouthparts lie on the underside of the head, with an "epistome" and labrum forming the upper lip, and a pair of maxillae forming the lower lip. A pair of mandibles lie inside the mouth. Myriapods breathe through spiracles that connect to a tracheal system similar to that of insects. There is a long tubular heart that extends through much of the body, but usually few, if any, blood vessels.
Malpighian tubules excrete nitrogenous waste into the digestive system, which typically consists of a simple tube. Although the ventral nerve cord has a ganglion in each segment, the brain is relatively poorly developed.
During mating, male myriapods produce a packet of sperm, or spermatophore, which they must transfer to the female externally; this process is often complex and highly developed. The female lays eggs which hatch as much-shortened versions of the adults, with only a few segments and as few as three pairs of legs. With the exception of the two centipede orders Scolopendromorpha and Geophilomorpha, which have epimorphic development (all body segments are formed embryonically), the young add additional segments and limbs as they repeatedly moult to reach the adult form.
The process of adding new segments during postembryonic growth is known as anamorphosis, of which there are three types: euanamorphosis, hemianamorphosis, and teloanamorphosis. In euanamorphosis, every moult is followed by the addition of new segments, even after sexual maturity is reached; in hemianamorphosis, new segments are added until a certain stage, and further moults happen without the addition of segments; and in teloanamorphosis, the addition of new segments stops once the adult form is reached, after which no further moults occur.
Ecology
Myriapods are most abundant in moist forests, where they fulfill an important role in breaking down decaying plant material, although a few live in grasslands, semi-arid habitats or even deserts. A very small percentage of species are littoral (found along the sea shore). The majority are detritivorous, with the exception of centipedes, which are chiefly nocturnal predators.
A few species of centipedes and millipedes are able to produce light and are therefore bioluminescent. Pauropodans and symphylans are small, sometimes microscopic animals that resemble centipedes superficially and live in soils. Millipedes differ from the other groups in having their body segments fused into pairs, giving the appearance that each segment bears two pairs of legs, while the other three groups have a single pair of legs on each body segment.
Although not generally considered dangerous to humans, many millipedes produce noxious secretions (often containing benzoquinones) which in rare cases can cause temporary blistering and discolouration of the skin. Large centipedes, however, can bite humans, and although the bite may cause intense pain and discomfort, fatalities are extremely rare.
Classification
There has been much debate as to which arthropod group is most closely related to the Myriapoda. Under the Mandibulata hypothesis, Myriapoda is the sister taxon to Pancrustacea, a group comprising the Crustacea and Hexapoda (insects and their close relatives). Under the Atelocerata hypothesis, Hexapoda is the closest, whereas under the Paradoxopoda hypothesis, Chelicerata is the closest. This last hypothesis, although supported by few, if any, morphological characters, is supported by a number of molecular studies.
A 2020 study found numerous characters of the eye and preoral region suggesting that the closest relatives to crown myriapods are the extinct Euthycarcinoids. There are four classes of extant myriapods, Chilopoda (centipedes), Diplopoda, Pauropoda and Symphyla, containing a total of around 12,000 species. While each of these groups of myriapods is believed to be monophyletic, relationships among them are less certain.
Centipedes
Centipedes make up the class Chilopoda. They are fast, predatory and venomous, hunting mostly at night. There are around 3,300 species, ranging from the diminutive Nannarrup hoffmani (less than 12 mm in length) to the giant Scolopendra gigantea, which may exceed 30 cm in length.
Millipedes
Millipedes form the class Diplopoda. Most millipedes are slower than centipedes, and feed on leaf litter and detritus. Except for the first segment, called the collum, which bears no appendages, and the next three segments, which have a single pair of legs each, they are distinguished by the fusion of each pair of body segments into a single unit, giving the appearance of having two pairs of legs per segment. It is also common for the sternites, pleurites and tergites to fuse into rigid armour rings. The males produce aflagellate sperm cells, unlike the rest of the myriapods, which produce flagellated sperm. Around 12,000 species have been described, which may represent less than a tenth of the true global millipede diversity. Although the name "millipede" is a compound word formed from the Latin roots millia ("thousand") and pes (gen. pedis) ("foot"), millipedes typically have between 36 and 400 legs. In 2021, however, Eumillipes persephone was described, the first species known to have 1,000 or more legs, possessing 1,306 of them. Pill millipedes are much shorter, and are capable of rolling up into a ball, like pillbugs.
Symphyla
Symphylans, or garden centipedes, are closely related to centipedes and millipedes. They are only a few millimetres long, and have 6 to 12 pairs of legs, depending on their life stage. Their eggs, which are white and spherical and covered with small hexagonal ridges, are laid in batches of 4 to 25 at a time, and usually take up to 40 days to hatch. There are about 200 species worldwide.
Pauropoda
Pauropoda is another small group of small myriapods. They are typically 0.5–2.0 mm long and live in the soil on all continents except Antarctica. Over 700 species have been described. They are believed to be the sister group to millipedes, and have the dorsal tergites fused across pairs of segments, similar to the more complete fusion of segments seen in millipedes.
Arthropleuridea
Arthropleurideans were ancient myriapods that are now extinct, known from the late Silurian to the Permian. The most famous members are from the genus Arthropleura, a giant, probably herbivorous animal that could reach roughly 2.5 m in length, though the group also includes much smaller species. Arthropleuridea was historically considered a distinct class of myriapods, but since 2000 scientific consensus has viewed the group as a subset of millipedes, although the relationship of arthropleurideans to other millipedes and to each other is debated.
Myriapod relationships
A variety of groupings (clades) of the myriapod classes have been proposed, some of which are mutually exclusive, and all of which represent hypotheses of evolutionary relationships. Traditional relationships supported by morphological similarities (anatomical or developmental similarities) are challenged by newer relationships supported by molecular evidence (including DNA sequence and amino acid similarities).
Dignatha (also called Collifera) is a clade consisting of millipedes and pauropods, and is supported by morphological similarities including the presence of a gnathochilarium (a modified jaw and plate apparatus) and a collum, a legless segment behind the head.
Trignatha (also called Atelopoda) is a grouping of centipedes and symphylans, united by similarities of mouthparts.
Edafopoda is a grouping of symphylans and pauropodans that is supported by shared genetic sequences, yet conflicts with Dignatha and Trignatha.
Pectinopoda consists of millipedes and centipedes, a classification that also supports Edafopoda.
Progoneata is a group encompassing millipedes, pauropods and symphylans while excluding centipedes. Shared features include reproductive openings (gonopores) behind the second body segment, and sensory hairs (trichobothria) with a bulb-like swelling. It is compatible with either Dignatha or Edafopoda.
| Biology and health sciences | Myriapoda | null |
584598 | https://en.wikipedia.org/wiki/Acidobacteriota | Acidobacteriota | Acidobacteriota is a phylum of Gram-negative bacteria. Its members are physiologically diverse and ubiquitous, especially in soils, but are under-represented in culture.
Description
Members of this phylum are physiologically diverse, and can be found in a variety of environments including soil, decomposing wood, hot springs, oceans, caves, and metal-contaminated soils. The members of this phylum are particularly abundant in soil habitats representing up to 52% of the total bacterial community. Environmental factors such as pH and nutrients have been seen to drive Acidobacteriota dynamics. Many Acidobacteriota are acidophilic, including the first described member of the phylum, Acidobacterium capsulatum.
There is much that is unknown about Acidobacteria, both in their form and their function, making this a growing field of microbiology. Some of this uncertainty can be attributed to the difficulty of growing these bacteria in the laboratory. There has been recent success in propagation using low concentrations of nutrients combined with high amounts of CO2, yet progress is still quite slow. These new methods have allowed only approximately 30% of subdivisions to have documented species.
Additionally, many of the samples sequenced do not have taxonomic names as they have not yet been fully characterized. This area of study is a very current topic, and scientific understanding is expected to grow and change as new information comes to light.
Other notable species are Holophaga foetida, Geothrix fermentans, Acanthopleuribacter pedis and Bryobacter aggregatus.
Since they have only recently been discovered and the large majority have not been cultured, the ecology and metabolism of these bacteria is not well understood. However, these bacteria may be an important contributor to ecosystems, since they are particularly abundant within soils. Members of subdivisions 1, 4, and 6 are found to be particularly abundant in soils.
As well as their natural soil habitat, unclassified subdivision 2 Acidobacteriota have also been identified as a contaminant of DNA extraction kit reagents, which may lead to their erroneous appearance in microbiota or metagenomic datasets.
Members of subdivision 1 have been found to dominate in low pH conditions. Additionally, Acidobacteriota from acid mine drainage have been found to be more adapted to acidic pH conditions (pH 2-3) compared to Acidobacteriota from soils, potentially due to cell specialization and enzyme stability.
The G+C content of Acidobacteria genomes is consistent within their subdivisions - above 60% for group V fragments and roughly 10% lower for group III fragments.
The majority of Acidobacteriota are considered aerobes. There are some Acidobacteriota that are considered anaerobes within subdivision 8 and subdivision 23. It has been found that some strains of Acidobacteriota originating from soils have the genomic potential to respire oxygen at atmospheric and sub-atmospheric concentrations.
Members of the Acidobacteriota phylum have been considered oligotrophic bacteria due to high abundances in low organic carbon environments. However, the variation in this phylum may indicate that they may not have the same ecological strategy.
History
The first species of this phylum, Acidobacterium capsulatum, was discovered in 1991. However, Acidobacteriota were not recognized as a distinct clade until 1997, and were not recognized as a phylum until 2012. The first genome was sequenced in 2006.
Subdivisions
In an effort to further classify Acidobacteria, 16S rRNA gene regions were sequenced from many different strains. These sequences lead to the formation of subdivisions within the phyla. Today, there are 26 accepted subdivisions recognized in the Ribosomal Database Project.
Much of this variety comes from populations of acidobacteria found in soils contaminated with uranium. Therefore, most of the known species in this phylum are concentrated in a few of the subdivisions, the largest being subdivision 1. Most of these microbes are aerobes, and they are all heterotrophic. Subdivision 1 contains 11 of the known genera, in addition to the majority of the species that have been cultivated thus far.
Within the 22 known genera, there are 40 conclusive species. The genera are divided amongst subdivisions 3, 4, 8, 10, 23, and 1. As the Acidobacteria are a developing area of microbiology, it is hypothesized that these numbers will change drastically with further study.
Metabolism
Carbon
Some members of subdivision 1 are able to use D-glucose, D-xylose, and lactose as carbon sources, but are unable to use fucose or sorbose. Members of subdivision 1 also contain enzymes such as galactosidases used in the breakdown of sugars. Members of subdivision 4 have been found to use chitin as a carbon source.
Despite the presence of genetic information generally known to encode for carbohydrate processing machinery in various genera of Acidobacteria, several experimental studies have demonstrated the inability to break down various polysaccharides.
Cellulose is the main component of plant cell walls and a seemingly opportune resource for carbon. However, only a single species across all subdivisions has been shown to process it, Telmatobacter bradus from subdivision 1. Scientists note that it is much too early in their understanding of the field to draw conclusions about carbon processing in Acidobacteria, but believe that xylan degradation (a polysaccharide primarily found in the secondary cell wall of plants) currently appears to be the most universal carbon breakdown ability.
Researchers believe that an additional factor in the lack of understanding of carbon degradation by acidobacteria may stem from the present limited ability to provide adequate cultivation conditions. To study the natural behavior of these bacteria, they must grow and live in a controlled, observable environment. If such a habitat cannot be provided, recorded data cannot reliably report on the activity of the microbes in question. Therefore, the inconsistencies between genome sequence based predictions and observed carbon processes may be explained by present study methods.
Nitrogen
There has been no clear evidence that Acidobacteriota are involved in nitrogen-cycle processes such as nitrification, denitrification, or nitrogen fixation. However, Geothrix fermentans was shown to be able to reduce nitrate and contains the norB gene. The norB gene was also identified in Koribacter versatilis and Solibacter usitatus. In addition, the presence of the nirA gene has been observed in members of subdivision 1. Additionally, to date, all described genomes directly take up ammonium via ammonium channel transporter family genes. Acidobacteriota can use both inorganic and organic nitrogen as their nitrogen sources.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature and National Center for Biotechnology Information.
| Biology and health sciences | Gram-negative bacteria | Plants |
584732 | https://en.wikipedia.org/wiki/Chlamydiota | Chlamydiota | The Chlamydiota (synonym Chlamydiae) are a bacterial phylum and class whose members are remarkably diverse, including pathogens of humans and animals, symbionts of ubiquitous protozoa, and marine sediment forms not yet well understood. All of the Chlamydiota that humans have known about for many decades are obligate intracellular bacteria; in 2020 many additional Chlamydiota were discovered in ocean-floor environments, and it is not yet known whether they all have hosts. Historically it was believed that all Chlamydiota had a peptidoglycan-free cell wall, but studies in the 2010s demonstrated a detectable presence of peptidoglycan, as well as other important proteins.
Among the Chlamydiota, all of the ones long known to science grow only by infecting eukaryotic host cells. They are as small as or smaller than many viruses. They are ovoid in shape and stain Gram-negative. They are dependent on replication inside the host cells; thus, some species are termed obligate intracellular pathogens and others are symbionts of ubiquitous protozoa. Most intracellular Chlamydiota are located in an inclusion body or vacuole. Outside cells, they survive only as an extracellular infectious form.
These Chlamydiota can grow only where their host cells grow, and develop according to a characteristic biphasic developmental cycle. Therefore, clinically relevant Chlamydiota cannot be propagated in bacterial culture media in the clinical laboratory. They are most successfully isolated while still inside their host cells.
Of various Chlamydiota that cause human disease, the two most important species are Chlamydia pneumoniae, which causes a type of pneumonia, and Chlamydia trachomatis, which causes chlamydia. Chlamydia is the most common bacterial sexually transmitted infection in the United States, and 2.86 million chlamydia infections are reported annually.
History
Chlamydia-like disease affecting the eyes of people was first described in ancient Chinese and Egyptian manuscripts. A modern description of chlamydia-like organisms was provided by Halberstaedter and von Prowazek in 1907.
Chlamydial isolates cultured in the yolk sacs of embryonating eggs were obtained from a human pneumonitis outbreak in the late 1920s and early 1930s, and by the mid-20th century, isolates had been obtained from dozens of vertebrate species. The term chlamydia (a cloak) appeared in the literature in 1945, although other names continued to be used, including Bedsonia, Miyagawanella, ornithosis-, TRIC-, and PLT-agents. In 1956, Chlamydia trachomatis was first cultured by Tang Fei-fan, though they were not yet recognized as bacteria.
Nomenclature
In 1966, Chlamydiota were recognized as bacteria and the genus Chlamydia was validated. The order Chlamydiales was created by Storz and Page in 1971. The class Chlamydiia was recently validly published. Between 1989 and 1999, new families, genera, and species were recognized. The phylum Chlamydiae was established in Bergey's Manual of Systematic Bacteriology. By 2006, genetic data for over 350 chlamydial lineages had been reported. Discovery of ocean-floor forms reported in 2020 involves new clades. In 2022 the phylum was renamed Chlamydiota.
Taxonomy and molecular signatures
The Chlamydiota currently contain 14 genera, eight of which are validly named. The phylum presently consists of two orders (Chlamydiales, Parachlamydiales) and nine families within a single class (Chlamydiia). Only four of these families are validly named (Chlamydiaceae, Parachlamydiaceae, Simkaniaceae, Waddliaceae), while five are described as families (Clavichlamydiaceae, Criblamydiaceae, Parilichlamydiaceae, Piscichlamydiaceae, and Rhabdochlamydiaceae).
The Chlamydiales order as recently described contains the families Chlamydiaceae, and the Clavichlamydiaceae, while the new Parachlamydiales order harbors the remaining seven families. This proposal is supported by the observation of two distinct phylogenetic clades that warrant taxonomic ranks above the family level. Molecular signatures in the form of conserved indels (CSIs) and proteins (CSPs) have been found to be uniquely shared by each separate order, providing a means of distinguishing each clade from the other and supporting the view of shared ancestry of the families within each order. The distinctness of the two orders is also supported by the fact that no CSIs were found among any other combination of families.
Molecular signatures have also been found that are exclusive for the family Chlamydiaceae. The Chlamydiaceae originally consisted of one genus, Chlamydia, but in 1999 was split into two genera, Chlamydophila and Chlamydia. The genera have since 2015 been reunited where species belonging to the genus Chlamydophila have been reclassified as Chlamydia species.
However, CSIs and CSPs have been found specifically for Chlamydophila species, supporting their distinctness from Chlamydia, perhaps warranting additional consideration of two separate groupings within the family. CSIs and CSPs have also been found that are exclusively shared by all Chlamydia that are further indicative of a lineage independent from Chlamydophila, supporting a means to distinguish Chlamydia species from neighbouring Chlamydophila members.
Phylogenetics
The Chlamydiota form a unique bacterial evolutionary group that separated from other bacteria about a billion years ago, and can be distinguished by the presence of several CSIs and CSPs. The species from this group can be distinguished from all other bacteria by the presence of conserved indels in a number of proteins and by large numbers of signature proteins that are uniquely present in different Chlamydiae species.
Reports have varied as to whether the Chlamydiota are related to the Planctomycetota or Spirochaetota. Genome sequencing, however, indicates that 11% of the genes in Protochlamydia amoebophila UWE25 and 4% in the Chlamydiaceae are most similar to chloroplast, plant, and cyanobacterial genes. Cavalier-Smith has postulated that the Chlamydiota fall into the clade Planctobacteria in the larger clade Gracilicutes. However, phylogeny and shared presence of CSIs in proteins that are lineage-specific indicate that the Verrucomicrobiota are the closest free-living relatives of these parasitic organisms. Comparison of ribosomal RNA genes has provided a phylogeny of known strains within Chlamydiota.
Human pathogens and diagnostics
Three species of Chlamydiota that commonly infect humans are described:
Chlamydia trachomatis, which causes the eye-disease trachoma and the sexually transmitted infection chlamydia
Chlamydophila pneumoniae, which causes a form of pneumonia
Chlamydophila psittaci, which causes psittacosis
The unique physiological status of the Chlamydiota including their biphasic lifecycle and obligation to replicate within a eukaryotic host has enabled the use of DNA analysis for chlamydial diagnostics. Horizontal transfer of genes is evident and complicates this area of research. In one extreme example, two genes encoding histone-like H1 proteins of eukaryotic origin have been found in the prokaryotic genome of C. trachomatis, an obligate intracellular pathogen.
Phylogeny
Taxonomy
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI)
"Similichlamydiales" Pallen, Rodriguez-R & Alikhan 2022 [Hat2]
Family "Piscichlamydiaceae" Horn 2010
Family "Parilichlamydiaceae" Stride et al. 2013 ["Similichlamydiaceae" Pallen, Rodriguez-R & Alikhan 2022]
Order Chlamydiales Storz & Page 1971
Family "Actinochlamydiaceae" Steigen et al. 2013
Family "Criblamydiaceae" Thomas, Casson & Greub 2006
Family Chlamydiaceae Rake 1957 ["Clavichlamydiaceae" Horn 2011]
Family Parachlamydiaceae Everett, Bush & Andersen 1999
Family Rhabdochlamydiaceae Corsaro et al. 2009
Family Simkaniaceae Everett, Bush & Andersen 1999
Family Waddliaceae Rurangirwa et al. 1999
| Biology and health sciences | Gram-negative bacteria | Plants |
584887 | https://en.wikipedia.org/wiki/Optical%20coating | Optical coating | An optical coating is one or more thin layers of material deposited on an optical component such as a lens, prism or mirror, which alters the way in which the optic reflects and transmits light. These coatings have become a key technology in the field of optics. One type of optical coating is an anti-reflective coating, which reduces unwanted reflections from surfaces, and is commonly used on spectacle and camera lenses. Another type is the high-reflector coating, which can be used to produce mirrors that reflect greater than 99.99% of the light that falls on them. More complex optical coatings exhibit high reflection over some range of wavelengths, and anti-reflection over another range, allowing the production of dichroic thin-film filters.
Types of coating
The simplest optical coatings are thin layers of metals, such as aluminium, which are deposited on glass substrates to make mirror surfaces, a process known as silvering. The metal used determines the reflection characteristics of the mirror; aluminium is the cheapest and most common coating, and yields a reflectivity of around 88%-92% over the visible spectrum. More expensive is silver, which has a reflectivity of 95%-99% even into the far infrared, but suffers from decreasing reflectivity (<90%) in the blue and ultraviolet spectral regions. Most expensive is gold, which gives excellent (98%-99%) reflectivity throughout the infrared, but limited reflectivity at wavelengths shorter than 550 nm, resulting in the typical gold colour.
By controlling the thickness and density of metal coatings, it is possible to decrease the reflectivity and increase the transmission of the surface, resulting in a half-silvered mirror. These are sometimes used as "one-way mirrors".
The other major type of optical coating is the dielectric coating (i.e. using materials with a different refractive index to the substrate). These are constructed from thin layers of materials such as magnesium fluoride, calcium fluoride, and various metal oxides, which are deposited onto the optical substrate. By careful choice of the exact composition, thickness, and number of these layers, it is possible to tailor the reflectivity and transmitivity of the coating to produce almost any desired characteristic. Reflection coefficients of surfaces can be reduced to less than 0.2%, producing an antireflection (AR) coating. Conversely, the reflectivity can be increased to greater than 99.99%, producing a high-reflector (HR) coating. The level of reflectivity can also be tuned to any particular value, for instance to produce a mirror that reflects 90% and transmits 10% of the light that falls on it, over some range of wavelengths. Such mirrors are often used as beamsplitters, and as output couplers in lasers. Alternatively, the coating can be designed such that the mirror reflects light only in a narrow band of wavelengths, producing an optical filter.
The versatility of dielectric coatings leads to their use in many scientific optical instruments (such as lasers, optical microscopes, refracting telescopes, and interferometers) as well as consumer devices such as binoculars, spectacles, and photographic lenses.
Dielectric layers are sometimes applied over top of metal films, either to provide a protective layer (as in silicon dioxide over aluminium), or to enhance the reflectivity of the metal film. Metal and dielectric combinations are also used to make advanced coatings that cannot be made any other way. One example is the so-called "perfect mirror", which exhibits high (but not perfect) reflection, with unusually low sensitivity to wavelength, angle, and polarization.
Antireflection coatings
Antireflection coatings are used to reduce reflection from surfaces. Whenever a ray of light moves from one medium to another (such as when light enters a sheet of glass after travelling through air), some portion of the light is reflected from the surface (known as the interface) between the two media.
A number of different effects are used to reduce reflection. The simplest is to use a thin layer of material at the interface, with an index of refraction between those of the two media. The reflection is minimized when
n1 = √(n0 ns),
where n1 is the index of the thin layer, and n0 and ns are the indices of the two media (the incident medium and the substrate). The optimum refractive indices for multiple coating layers at angles of incidence other than 0° are given by Moreno et al. (2005).
Such coatings can reduce the reflection for ordinary glass from about 4% per surface to around 2%. These were the first type of antireflection coating known, having been discovered by Lord Rayleigh in 1886. He found that old, slightly tarnished pieces of glass transmitted more light than new, clean pieces due to this effect.
Practical antireflection coatings rely on an intermediate layer not only for its direct reduction of reflection coefficient, but also use the interference effect of a thin layer. If the layer's thickness is controlled precisely such that it is exactly one-quarter of the wavelength of the light in the layer (a quarter-wave coating), the reflections from the front and back sides of the thin layer will destructively interfere and cancel each other.
In practice, the performance of a simple one-layer interference coating is limited by the fact that the reflections only exactly cancel for one wavelength of light at one angle, and by difficulties finding suitable materials. For ordinary glass (n≈1.5), the optimum coating index is n≈1.23. Few useful substances have the required refractive index. Magnesium fluoride (MgF2) is often used, since it is hard-wearing and can be easily applied to substrates using physical vapour deposition, even though its index is higher than desirable (n=1.38). With such coatings, reflection as low as 1% can be achieved on common glass, and better results can be obtained on higher index media.
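A minimal numeric sketch of the figures above, for a hypothetical MgF2 coating on ordinary glass (the 550 nm design wavelength and the residual-reflectance formula for a quarter-wave layer are assumptions for illustration, not taken from this article):

import math

n_air, n_glass, n_mgf2 = 1.0, 1.52, 1.38
print(math.sqrt(n_air * n_glass))    # ~1.23, the ideal single-layer index for glass

wavelength = 550e-9                  # assumed design wavelength (green light)
print(wavelength / (4 * n_mgf2))     # ~1.0e-7 m, i.e. ~100 nm quarter-wave physical thickness

# residual reflectance of a quarter-wave layer at the design wavelength:
# R = ((n0*ns - n1^2) / (n0*ns + n1^2))^2
r = (n_air * n_glass - n_mgf2 ** 2) / (n_air * n_glass + n_mgf2 ** 2)
print(r ** 2)                        # ~0.013, i.e. roughly 1% per surface, as described above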
Further reduction is possible by using multiple coating layers, designed such that reflections from the surfaces undergo maximum destructive interference. By using two or more layers, broadband antireflection coatings which cover the visible range (400-700 nm) with maximum reflectivities of less than 0.5% are commonly achievable. Reflection in narrower wavelength bands can be as low as 0.1%. Alternatively, a series of layers with small differences in refractive index can be used to create a broadband antireflective coating by means of a refractive index gradient.
High-reflection coatings
High-reflection (HR) coatings work the opposite way to antireflection coatings. The general idea is usually based on a periodic layer system composed of two materials, one with a high index, such as zinc sulfide (n=2.32) or titanium dioxide (n=2.4), and one with a low index, such as magnesium fluoride (n=1.38) or silicon dioxide (n=1.49). This periodic system significantly enhances the reflectivity of the surface in a certain wavelength range called the band-stop, whose width is determined only by the ratio of the two indices used (for quarter-wave systems), while the maximum reflectivity increases to almost 100% as the number of layers in the stack grows. The thicknesses of the layers are generally quarter-wave (which yields the broadest high-reflection band compared with non-quarter-wave systems composed of the same materials), this time designed so that reflected beams constructively interfere with one another to maximize reflection and minimize transmission. The best of these coatings, built up from deposited lossless dielectric materials on perfectly smooth surfaces, can reach reflectivities greater than 99.999% (over a fairly narrow range of wavelengths). Common HR coatings can achieve 99.9% reflectivity over a broad wavelength range (tens of nanometers in the visible spectrum range).
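The growth of reflectivity with the number of quarter-wave pairs can be sketched with the standard quarter-wave transformer rule, in which each quarter-wave layer of index n transforms the optical admittance Y behind it into n²/Y. The ZnS/MgF2/glass indices follow the values quoted above; the pair counts and the glass substrate index are assumptions for illustration.

def quarter_wave_stack_reflectance(n_high, n_low, n_substrate, pairs, n_ambient=1.0):
    """Reflectance at the design wavelength of an ambient|(HL)^pairs|substrate stack."""
    y = n_substrate
    for _ in range(pairs):
        y = n_low ** 2 / y    # low-index quarter-wave layer (closer to the substrate)
        y = n_high ** 2 / y   # high-index quarter-wave layer (closer to the ambient)
    r = (n_ambient - y) / (n_ambient + y)
    return r ** 2

for pairs in (2, 4, 8):
    print(pairs, round(quarter_wave_stack_reflectance(2.32, 1.38, 1.52, pairs), 5))
# reflectivity climbs from ~72% (2 pairs) to ~96% (4 pairs) to ~99.9% (8 pairs)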
As for AR coatings, HR coatings are affected by the incidence angle of the light. When used away from normal incidence, the reflective range shifts to shorter wavelengths, and becomes polarization dependent. This effect can be exploited to produce coatings that polarize a light beam.
By manipulating the exact thickness and composition of the layers in the reflective stack, the reflection characteristics can be tuned to a particular application, and may incorporate both high-reflective and anti-reflective wavelength regions. The coating can be designed as a long- or short-pass filter, a bandpass or notch filter, or a mirror with a specific reflectivity (useful in lasers). For example, the dichroic prism assembly used in some cameras requires two dielectric coatings, one long-wavelength pass filter reflecting light below 500 nm (to separate the blue component of the light), and one short-pass filter to reflect red light, above 600 nm wavelength. The remaining transmitted light is the green component.
Extreme ultraviolet coatings
In the EUV portion of the spectrum (wavelengths shorter than about 30 nm) nearly all materials absorb strongly, making it difficult to focus or otherwise manipulate light in this wavelength range. Telescopes such as TRACE or EIT that form images with EUV light use multilayer mirrors that are constructed of hundreds of alternating layers of a high-mass metal such as molybdenum or tungsten, and a low-mass spacer such as silicon, vacuum deposited onto a substrate such as glass. Each layer pair is designed to have a thickness equal to half the wavelength of light to be reflected. Constructive interference between scattered light from each layer causes the mirror to reflect EUV light of the desired wavelength as would a normal metal mirror in visible light. Using multilayer optics it is possible to reflect up to 70% of incident EUV light (at a particular wavelength chosen when the mirror is constructed).
Transparent conductive coatings
Transparent conductive coatings are used in applications where it is important that the coating conduct electricity or dissipate static charge. Conductive coatings are used to protect the aperture from electromagnetic interference, while dissipative coatings are used to prevent the build-up of static electricity. Transparent conductive coatings are also used extensively to provide electrodes in situations where light is required to pass, for example in flat panel display technologies and in many photoelectrochemical experiments. A common substance used in transparent conductive coatings is indium tin oxide (ITO). ITO is not very optically transparent, however. The layers must be thin to provide substantial transparency, particularly at the blue end of the spectrum. Using ITO, sheet resistances of 20 to 10,000 ohms per square can be achieved. An ITO coating may be combined with an antireflective coating to further improve transmittance. Other TCOs (Transparent Conductive Oxides) include AZO (Aluminium doped Zinc Oxide), which offers much better UV transmission than ITO.
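The trade-off between transparency and conductivity follows from the sheet resistance relation R_s = ρ/t (resistivity divided by film thickness): thinner films are more transparent but more resistive. A small sketch using a typical literature value for ITO resistivity (an assumption, not a figure from this article):

rho_ito = 2e-4                 # ohm*cm, assumed typical ITO resistivity
for thickness_nm in (20, 100, 300):
    thickness_cm = thickness_nm * 1e-7
    print(thickness_nm, "nm ->", round(rho_ito / thickness_cm, 1), "ohms per square")
# roughly 100, 20 and 6.7 ohms per square respectively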
A special class of transparent conductive coatings applies to infrared films for theater-air military optics where IR transparent windows need to have (Radar) stealth (Stealth technology) properties. These are known as RAITs (Radar Attenuating / Infrared Transmitting) and include materials such as boron doped DLC (Diamond-like carbon).
Phase correction coatings
The multiple internal reflections in roof prisms cause a polarization-dependent phase-lag of the transmitted light, in a manner similar to a Fresnel rhomb. This must be suppressed by multilayer phase-correction coatings applied to one of the roof surfaces to avoid unwanted interference effects and a loss of contrast in the image. Dielectric phase-correction prism coatings are applied in a vacuum chamber with perhaps 30 different superimposed vapour-deposited coating layers, making it a complex production process.
In a roof prism without a phase-correcting coating, s-polarized and p-polarized light each acquire a different geometric phase as they pass through the upper prism. When the two polarized components are recombined, interference between the s-polarized and p-polarized light results in a different intensity distribution perpendicular to the roof edge as compared to that along the roof edge. This effect reduces contrast and resolution in the image perpendicular to the roof edge, producing an inferior image compared to that from a porro prism erecting system. This roof edge diffraction effect may also be seen as a diffraction spike perpendicular to the roof edge generated by bright points in the image. In technical optics, such a phase is also known as the Pancharatnam phase, and in quantum physics an equivalent phenomenon is known as the Berry phase.
This effect can be seen in the elongation of the Airy disk in the direction perpendicular to the crest of the roof as this is a diffraction from the discontinuity at the roof crest.
The unwanted interference effects are suppressed by vapour-depositing a special dielectric coating known as a phase-compensating coating on the roof surfaces of the roof prism. This phase-correction coating, or P-coating, on the roof surfaces was developed in 1988 by Adolf Weyrauch at Carl Zeiss. Other manufacturers soon followed, and since then phase-correction coatings have been used across the board in medium- and high-quality roof prism binoculars. This coating corrects for the difference in geometric phase between s- and p-polarized light so both have effectively the same phase shift, preventing image-degrading interference.
From a technical point of view, the phase-correction coating layer does not correct the actual phase shift, but rather the partial polarization of the light that results from total reflection. Such a correction can only be made exactly for a selected wavelength and a specific angle of incidence; however, it is possible to approximately correct a roof prism for polychromatic light by superimposing several layers. In this way, since the 1990s, roof prism binoculars have also achieved resolution values that were previously only achievable with porro prisms. The presence of a phase-correction coating can be checked on unopened binoculars using two polarization filters.
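The polarization-dependent phase lag that the coating compensates can be illustrated with the textbook phase shifts for total internal reflection (the same physics as the Fresnel rhomb mentioned earlier). The sketch below is generic; the refractive index and incidence angle are illustrative assumptions rather than values for any particular prism or coating.

```python
import math

def tir_phase_shifts(n_rel, theta_deg):
    """Phase shifts (radians) of s- and p-polarized light on total internal
    reflection, for relative index n_rel = n_outside / n_glass (< 1) and an
    angle of incidence beyond the critical angle."""
    theta = math.radians(theta_deg)
    root = math.sqrt(math.sin(theta) ** 2 - n_rel ** 2)
    delta_s = 2 * math.atan(root / math.cos(theta))
    delta_p = 2 * math.atan(root / (n_rel ** 2 * math.cos(theta)))
    return delta_s, delta_p

# Illustrative values only: BK7-like glass (n = 1.52) against air,
# 56 degrees incidence (the critical angle is about 41 degrees).
d_s, d_p = tir_phase_shifts(1.0 / 1.52, 56.0)
print(math.degrees(d_p - d_s))  # relative s/p phase lag, in degrees
```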
Fano-resonant optical coatings
Fano-resonant optical coatings (FROCs) represent a new category of optical coatings. FROCs exhibit the photonic Fano resonance by coupling a broadband nanocavity, which serves as the continuum, with a narrowband Fabry–Perot nanocavity, representing the discrete state. The interference between these two resonances manifests as an asymmetric Fano-resonance line-shape. FROCs are considered a separate category of optical coatings because they exhibit optical properties that cannot be reproduced using other optical coatings. In particular, semi-transparent FROCs act as beam-splitting filters that reflect and transmit the same color, a property that cannot be achieved with transmission filters, dielectric mirrors, or semi-transparent metals.
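For illustration, the asymmetric Fano line-shape mentioned above can be computed from the standard textbook formula. The sketch below is not a model of any specific FROC stack; the resonance energy, linewidth, and asymmetry parameter are arbitrary illustrative values.

```python
import numpy as np

def fano_lineshape(energy, e_res, gamma, q):
    """Normalized Fano profile: (q + eps)^2 / (1 + eps^2),
    with reduced energy eps = 2 * (E - E_res) / Gamma.

    q controls the asymmetry: |q| >> 1 approaches a Lorentzian peak,
    while q = 0 gives a symmetric dip (anti-resonance)."""
    eps = 2.0 * (energy - e_res) / gamma
    return (q + eps) ** 2 / (1.0 + eps ** 2)

# Illustrative parameters only (arbitrary units).
energies = np.linspace(1.0, 3.0, 500)
profile = fano_lineshape(energies, e_res=2.0, gamma=0.2, q=1.5)
print(profile.min(), profile.max())  # shows the characteristic dip-peak asymmetry
```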
FROCs exhibit remarkable structural coloring properties, as they can produce colors across a wide color gamut with both high brightness and high purity. Moreover, the dependence of color on the angle of incident light can be controlled through the dielectric cavity material, making FROCs adaptable for applications requiring either angle-independent or angle-dependent coloring. This includes decorative purposes and anti-counterfeit measures.
FROCs have been used as both monolithic spectrum splitters and selective solar absorbers, which makes them suitable for hybrid solar-thermal energy generation. They can be designed to reflect specific wavelength ranges, aligning with the energy band gap of photovoltaic cells, while absorbing the remaining solar spectrum. This enables higher photovoltaic efficiency at elevated optical concentrations by reducing the photovoltaic cell's temperature. The reduced temperature also increases the cell's lifetime. Additionally, their low infrared emissivity minimizes thermal losses, increasing the system's overall optothermal efficiency.
Sources
Hecht, Eugene. Chapter 9, Optics, 2nd ed. (1990), Addison Wesley.
I. Moreno, et al., "Thin-film spatial filters", Optics Letters, 30, 914–916 (2005).
C. Clark, et al., "Two-color Mach 3 IR coating for TAMD systems", Proc. SPIE, vol. 4375, p. 307–314 (2001).
| Technology | Optical components | null |
584946 | https://en.wikipedia.org/wiki/Toronto%20subway | Toronto subway | The Toronto subway is a rapid transit system serving Toronto and the neighbouring city of Vaughan in Ontario, Canada, operated by the Toronto Transit Commission (TTC). The subway system is a rail network consisting of three heavy-capacity rail lines operating predominantly underground. Three new lines are under construction: two light rail lines (one running mostly underground, the other running mostly at-grade) and one heavy rail line (running both underground and on elevated guideways).
In 1954, the TTC opened Canada's first underground rail line, then known as the "Yonge subway", under Yonge Street between Union Station and Eglinton Avenue with 12 stations. As of 2024, the network encompasses 70 stations and of route. In , the system had a ridership of , or about per weekday as of , making it the second-busiest rapid transit system in Canada in terms of daily ridership, behind the Montreal Metro. There are 60 stations under construction as part of three new lines, two light rail lines and one subway line, and two extensions to existing lines.
Overview
There are three operating rapid transit lines in Toronto:
Line 1 Yonge–University is the longest and busiest rapid transit line in the system. It opened as the Yonge subway in 1954 with a length of , and since then has grown to a length of . The modern line is U-shaped, having two northern terminals, at Vaughan Metropolitan Centre and Finch, and its southern end at Union station in downtown Toronto.
Line 2 Bloor–Danforth, opened in 1966, runs parallel to Bloor Street and Danforth Avenue between Kipling station in Etobicoke and Kennedy station in Scarborough. Construction has started on a three-stop extension of Line 2 northeastward from Kennedy station to Sheppard Avenue and McCowan via Scarborough City Centre.
Line 4 Sheppard opened in 2002 running under Sheppard Avenue East eastwards from Sheppard–Yonge station on Line 1 to Don Mills station; it is the shortest rapid transit line in Toronto at a length of and the only one without any open sections.
Three new lines are under construction: two light rail lines and one subway line.
Line 5 Eglinton (also known as the Eglinton Crosstown LRT) is an under-construction light rail line along Eglinton Avenue, planned to run from Kennedy station in the east to Mount Dennis station in the west. The line will have 25 stations, 15 of which will be underground, while the remaining ten will be at-grade stops located in the road's median. Construction began in 2011. The line was expected to be completed in 2024 at a cost of approximately $12 billion, though it has since been delayed.
An extension of Line 5 westward to Renforth station is also under construction. The extension will have seven stations, four of which will be underground and two of which will be elevated. Construction began in 2022, and is scheduled for completion in the 2030s.
Line 6 Finch West (also known as the Finch West LRT) is an under-construction, 18-stop light rail line travelling from Finch West station on Line 1 Yonge–University to the North Campus of Humber College, located mainly in the median of Finch Avenue. Construction on Line 6 began in 2019. It was scheduled for completion within the first half of 2024, with an estimated cost of $1.2 billion, though it has since been delayed.
Ontario Line is an under-construction subway line from Exhibition station to Science Centre station, providing a second rapid transit line through the Financial District and downtown core. The project evolved from the long-planned Downtown Relief Line, first proposed in the mid-1980s. The line is scheduled for completion in 2031 at a cost of $17 to $19 billion. Upon opening, the plan is to reassign the "Line 3" moniker formerly used by Line 3 Scarborough to the Ontario Line.
Until July 2023, the TTC operated an elevated light metro service:
Line 3 Scarborough, originally known as the Scarborough RT, was an elevated medium-capacity (light metro) rail line serving the city's eponymous suburban district. It opened in 1985, running from Kennedy station to McCowan station via . It was the only rapid transit line in Toronto to use Intermediate Capacity Transit System (ICTS) technology. Because of maintenance difficulties (along with the Line 2 subway extension into Scarborough), Line 3 was to be decommissioned on November 19, 2023. However, it was decommissioned approximately four months early due to a derailment on July 24, 2023. Bus service replaced Line 3 and is scheduled to continue until the extension of Line 2 to Scarborough City Centre opens in 2030.
History
Timeline of openings and closings
Line 1 Yonge–University
Canada's first subway, the Yonge subway, opened in 1954 with a length of . The line ran under or parallel to Yonge Street between Eglinton Avenue and Union station. It replaced the Yonge streetcar line, Canada's first streetcar line. In 1963, the line was extended northwards from Union station under University Avenue to Bloor Street, where it would later connect with the Bloor–Danforth subway (opened in 1966) at the double-deck St. George station. In 1974, the Yonge Street portion of the line was extended from Eglinton station north to Finch station. The Spadina segment of the line was constructed north from St. George station initially to Wilson station in 1978, and in 1996 to Downsview station, renamed Sheppard West in 2017. Part of the Spadina segment runs in the median of Allen Road – an expressway formerly known as the Spadina Expressway – and crosses over Highway 401 on overpasses. Six decades of extensions gave the line a U-shaped route running from its two northern terminals (Finch and Vaughan Metropolitan Centre stations) and looping on its southern end at Union station. The latest extension from Sheppard West to opened on December 17, 2017, making the line long, over five times its original length.
Line 2 Bloor–Danforth
Opened in 1966, the Bloor–Danforth subway runs east–west under or near Bloor Street and Danforth Avenue. It replaced the Bloor streetcar line (which also served Danforth Avenue). Initially, the subway line ran between Keele station and Woodbine station. In 1968, the line was extended west to Islington station and east to Warden station, and in 1980, it was further extended west to Kipling station and east to Kennedy station.
Line 3 Scarborough
Opened in 1985, Line 3 (originally the Scarborough RT) was a light metro line running from Kennedy station to McCowan station. The TTC started to construct the line to use Canadian Light Rail Vehicles. However, the TTC was forced to convert to the Intermediate Capacity Transit System technology because the provincial government threatened to cut funding to the TTC if it did not. This line was never extended, and in July 2023, the line was shut down pending its dismantling due to a derailment that resulted in injuries. It is set to be replaced with an extension of Line 2 to Sheppard Avenue and McCowan Road via Scarborough Town Centre.
Line 4 Sheppard
Opened in 2002, the Sheppard subway runs under Sheppard Avenue from Sheppard–Yonge station to Don Mills station. The line was under construction when a change in provincial government threatened to terminate the project, but Mel Lastman, the last mayor of the former City of North York (today part of Toronto), used his influence to save the project. Despite the construction of many high-rise residential buildings along the line since its opening, ridership remains low resulting in a subsidy of $10 per ride. The line was intended to be extended to Scarborough Centre station, but because of the low ridership and the cost of tunnelling, there was a plan to extend rapid transit eastwards from Don Mills station via a surface light rail line, the Sheppard East LRT. However, in April 2019, Premier Doug Ford announced that the provincial government would extend Line 4 Sheppard to McCowan Road at some unspecified time in the future, thus replacing the proposed Sheppard East LRT. Line 4 Sheppard is also the only subway line in Toronto not to have any open sections.
Line 5 Eglinton
Metrolinx is funding the Line 5 Eglinton, a light rail line along Eglinton Avenue. From Mount Dennis in the west to Brentcliffe Road (east of Laird Drive), the line will run almost entirely underground where Eglinton Avenue is generally four to five lanes wide. From east of Brentcliffe Road to Kennedy station, the line will operate on the surface in a reserved median in the middle of Eglinton Avenue, where the street is at least six lanes wide. Building on the surface instead of tunnelling reduces the cost of construction on the eastern end of the line. The average speed of the line is expected to be ; as a comparison, the average speed of the heavy-rail Line 2 Bloor–Danforth is . The Eglinton line originated from Transit City, a plan sponsored by then–Toronto mayor David Miller, to expedite transit improvement by building several light rail lines through the lower density parts of the city. Of the light rail lines proposed, only the Eglinton and Finch West lines are under construction . Line 5 was expected to be completed in 2024, though it has since been delayed.
Line 6 Finch West
Line 6 Finch West, also known as the "Finch West LRT", is an under-construction line being built by Mosaic Transit Group along Finch Avenue. It is to be operated by the Toronto Transit Commission and was also part of the Transit City proposal announced on March 16, 2007. The 18-stop line is to extend from Finch West station on Line 1 Yonge–University to the north campus of Humber Polytechnic (formerly Humber College). The line is forecast to carry about 14.6 million rides a year, or 40,000 a day, by 2031. Construction on this line began in 2019. It was scheduled for completion in the first half of 2024, with an estimated cost of $1.2 billion, though it has since been delayed.
Ontario Line
Ontario Line is an under-construction subway line from Exhibition station to Science Centre station, providing a second rapid transit line through the Financial District and downtown core. Although a subway line along Queen Street was first proposed in the early 1900s, the Downtown Relief Line was first proposed in the mid-1980s. The Ontario Line project extends further west and north than previous proposals to serve more of the city. The line is scheduled for completion in 2031 at a cost of $17 to $19 billion. Upon opening, the plan is for the line to take the "Line 3" moniker formerly used by Line 3 Scarborough.
Major incidents
On March 27, 1963, there was an electrical short in a subway car's motor. The driver decided to continue operating the train, despite visible smoke in the affected car, until the train reached Union station. This decision resulted in the destruction of six subway cars and extensive damage to the tunnel and signal lines west of Union station. Following this incident, safety procedures involving electrical malfunctions and fires in subway trains were revised to improve safety and reduce the likelihood of a similar incident occurring.
On October 14, 1976, arson caused the destruction of four subway cars and damage to Christie station, resulting in the closure of part of the Bloor–Danforth line for three days, and the bypassing of Christie station for some time afterwards for repairs.
On August 11, 1995, the TTC suffered the deadliest subway accident in Canadian history, known as the Russell Hill accident, on the Yonge–University line south of St. Clair West station. Halfway between St. Clair West and Dupont stations, a southbound Line 1 subway train hit the rear of a stationary train ahead of it. Three people died and 100 other people were injured, some of them seriously. This led to a major reorganization at the TTC, with more focus on maintaining a "state of good repair" (i.e., an increased emphasis on safety and maintenance of existing TTC capital/services) and less on expansion.
On July 24, 2023, the last car of a train on Line 3 Scarborough derailed south of Ellesmere station. There were 45 people on board, with five injuries reported. The TTC closed the line while the cause of the accident, which was not immediately apparent, was investigated. Though the TTC planned to close Line 3 in November 2023, it announced on August 24 that the line would not reopen.
Operations and procedures
Terminal station reversals and short turns
The heavy-rail subway lines were built in multiple segments with multiple crossovers. These are typically used for reversals at terminal stations, and allow arriving and departing trains to cross to and from the station's farside platform. They are also used for short turning trains at some through stations in order to accommodate emergency and planned service suspensions. Planned service suspensions generally occur on weekends for planned maintenance activities that are impractical to perform overnight. There is only one regular short turn service that occurs during the morning rush hour on Line 1 Yonge–University when some northbound trains short turn at Glencairn station.
On Line 3 Scarborough, light metro trains were not able to switch direction except at the ends of the line as there were no intermediate crossovers between the two termini. Thus, no short turns on Line 3 were possible.
Door operation
The heavy-rail subway lines use either a one- or two-person crew. With two-person train operation, an on-board train guard at the rear of the train is responsible for opening and closing the subway car doors and making sure no one is trapped in a door as the train leaves a station. From the subway's inception in 1954 to 1991, the train guard notified patrons that the subway car doors were closing with two short blasts from a whistle. With one-person train operation (OPTO), one person operates the train as well as the doors. The TTC notes that modern technology now allows one person to safely operate the train and close the doors, and that OPTO is in use in many major cities with large subway systems such as the London Underground, the Paris Metro, the Chicago "L" and the Montreal Metro.
Initially, all the heavy-rail subway lines (1, 2 and 4) used two-person train operation. On October 9, 2016, Line 4 Sheppard was converted to OPTO. On August 1, 2021, the TTC tested OPTO on a portion of Line 1 on Sundays only. Effective November 21, 2021, the TTC introduced OPTO seven days per week on Line 1 between Vaughan Metropolitan Centre and St. George stations. Between St. George and Finch stations, the TTC continued using two-person train operation until the full conversion of the line to OPTO on November 20, 2022. From its opening in 1985 to its close in 2023, trains on Line 3 Scarborough were operated by one person.
According to a 2020 survey conducted by the Amalgamated Transit Union Local 113, two-thirds of Torontonians surveyed opposed the TTC's plan to eliminate the train guard on Line 1, and three-quarters of Torontonians disapproved of the fact that the public was not consulted when train guards were removed from Line 4's daily operations in 2016, citing safety concerns, among other issues, as key reasons motivating their response.
In 1991, as a result of lawsuits, electronic chimes, in the form of a descending arpeggiated major triad, and a flashing pair of orange lights above the doorway, added for the hearing impaired, were tested and gradually introduced system-wide during the 1990s. The Toronto Rocket trains use the same door chimes and flashing orange lights as the older trains do, and also play the additional voice announcement, "Please stand clear of the doors". Those chimes have become synonymous with the TTC and Toronto in general, to the point that the CBC Radio One local afternoon show, Here and Now, includes them in its theme music.
Entering a station
There are several basic procedures that need to be completed once a train has entered a station. On TTC's Line 2, several symbols of different colours are installed on the station wall for the crew to use as a reference in positioning the train in the platform. A red circle, located at the train exit end of the platform, should be directly in front of the train operator's cab window when the train is aligned properly. A green triangle, located at the opposite end of the platform, is provided as a reference to the train guard that shows that the train is correctly aligned. Before opening the train doors, the guard lowers the cab window and points their finger out the window toward the green triangle when the cab is lined up with the triangle. If the train is not lined up properly, the guard is not permitted to open the doors.
To operate the doors, the guard is first required to insert and turn a key. This action provides system control to the door control panel. The doors are then opened by pushing buttons. After the doors are opened, the guard is required to stick their head out the cab window to observe passengers boarding and exiting. The train doors remain open for at least 15 seconds.
When the guard determines that boarding is complete, the doors are closed. Electronic chimes and flashing lights are turned on, then the automated announcement "please stand clear of the doors" is played over the train's public address system, and finally the doors are closed. The chimes provide a clear notification and warning to passengers that the doors are closing and are played before the automated announcement is played, because such announcements may not be heard when the station is crowded.
After the doors are closed, the guard provides a signal to the train operator that the train can proceed; the signal takes the form of a green light that turns on inside the operating cab once the doors are closed. The guard is instructed to visually observe the platform while the train departs the station to ensure that no passengers are being dragged along by the train. The distance for this visual inspection is typically three car lengths. An orange triangle installed on the station wall indicates the location where the guard may stop observing the platform and pull their head back into the cab.
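As a purely illustrative sketch (not TTC operating software; the function names, print-based cues, and parameters are assumptions made for this example), the ordering of the door-closing and departure steps described above could be modelled like this:

```python
def cue(message):
    """Stand-in for the physical cues, announcements and signals (illustrative only)."""
    print(message)

def close_doors_and_depart(car_lengths_to_observe=3):
    """Illustrative ordering of the close-and-depart sequence: warning cues
    come first, the spoken announcement second, the physical door closure
    third, and only then is the proceed signal given."""
    cue("chimes + flashing orange lights")
    cue("announcement: Please stand clear of the doors")
    cue("doors closing")
    cue("green proceed light to the train operator")
    # The guard watches the platform until the train has moved the required
    # distance (typically three car lengths, marked by an orange triangle).
    for car in range(1, car_lengths_to_observe + 1):
        cue(f"observing platform, car length {car}")

close_doors_and_depart()
```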
Platform markers
All staffed subway operations must verify that the train is properly berthed before the doors are opened. At each subway platform, a set of three platform markers are affixed onto the platform wall. The train operator and guard use them to position the train.
The current platform markers used for Lines 1, 2, and 4 are as follows:
Circular red disk (Lines 1, 2, and 4)—This marker is typically mounted on the station platform wall to assist the train operator in positioning the train in the station. When the operator's window is aligned with the red disk, the train is properly berthed in the station.
Green triangle (Lines 1 and 2)—This marker is typically mounted on the station platform wall to indicate to the guard, who is positioned in the trailing car, that it is safe to open the doors. When the guard's window is aligned with this marker, the guard must confirm the stop position by physically pointing to the green triangle. If the guard cannot see the green triangle, they are not permitted to open the train doors.
Orange triangle (Lines 1 and 2)—This marker is typically mounted on the station platform wall to assist the guard, who is positioned in the trailing car, to observe the platform for the required distance as the train is moving to exit the station. When the guard sees this triangle, they can cease observations. The distance between the green and orange triangles is typically the length of three rail cars.
Prior to 2017, when subway guards operated the doors from the fifth car instead of the trailing car in the T1 trains on Line 2, different platform markers were used. The following markers have now fallen into disuse as a result of a March 2017 policy change that required all guards to work from the trailing car on Line 2:
Circular green disk (Line 2)—This marker was mounted on the station platform wall in front of the guard's window in the fifth car from the lead unit. It indicated to the guard that the train was properly berthed. The guard was required to point to the circle before opening the doors to confirm the stop position.
Circular orange disk (Line 2)—This marker was mounted on the station platform wall to indicate to the guard when they could cease train departure platform observations. At this point, the guard closed the cab window.
Service frequency
During rush hour, up to 65 trains are on Line 1 simultaneously, 45 trains on Line 2, and 4 trains on Line 4. During non-rush hour periods, there are 30–46 trains on Line 1 at any one time.
On weekdays and Saturdays, subway service runs from approximately 6:00am to 1:30am; Sunday service begins at 8:00am. Start times on holidays may vary.
Station announcements
On January 8, 1995, train operators began to announce each stop over the train's speaker system as a result of pressure from advocacy groups for the visually impaired, but announcements were sporadic until the TTC began to enforce the policy circa 2005. Later, automated announcements were implemented under further pressure from the advocacy groups. All Toronto subway trains use an automated system to announce each station, which is played twice over the speaker system: when the train departs a station (e.g. "The next station is: Dufferin, Dufferin station") and when it arrives at the following station (e.g. "Arriving at: Dufferin, Dufferin station"). In addition, the TTC's Toronto Rocket subway trains provide visible and audible automatic stop announcements. Unlike the other trains, the Toronto Rocket trains also announce connections to other TTC subway lines, such as "Change for Line 2", and terminus stations ("This is a terminal station") where applicable. They also announce, except at terminus stations, which side the train doors will open on at each stop, based on the direction of train travel.
Winter operations
Switches and power rails are vulnerable to malfunction under extreme winter conditions such as heavy snow or freezing rain. During such events, the TTC runs "storm trains" overnight along subway lines to keep power rails clear of ice. The TTC also has trains to apply an anti-freeze to the power rail once freezing rain starts.
These precautions were also used on Line 3 Scarborough, which used two power rails. After reviewing operations during the winter of 2018–2019, the TTC decided to change its procedures for Line 3. Thus, about two hours before an expected storm, the TTC would decide whether to shut down Line 3 and replace it with bus service. Just before the storm of February 2, 2022, the TTC replaced all Line 3 trains with 25 buses.
To keep switches in the yards from freezing, crews use switch heaters and manually monitor them to ensure they stay in working order during winter storms. Workcars are run as storm trains within the yards to prevent ice from building up on the power rails. The TTC stores subway trains in tunnels along main lines rather than in exterior yards.
Stations
The Toronto subway has 70 stations across three lines. Most stations are named for the nearest major arterial road crossed by the line in question. A few are named for major landmarks, such as shopping centres or transportation hubs, served by the station. The stations along the University Avenue section of Line 1 Yonge–University, in particular, are named entirely for landmarks and public institutions and for major churches. All trains, except for short turns, stop at every station along their route and run the entire length of their line from terminus to terminus. Nearly all stations outside the central business district have terminals for local TTC bus routes and streetcar routes situated within their fare-paid areas. All regular TTC bus and streetcar routes permit free transfers both to and from connecting subway lines.
By December 23, 2016, Presto card readers had been installed in at least one priority subway station entrance across the TTC network. Throughout 2017 and into mid-2018, the remaining subway station entrances that still use legacy turnstiles (which were retrofitted with Presto readers between 2010 and 2015) and the "floor-to-ceiling" revolving turnstiles (found in automatic/secondary entrances, which do not have Presto readers on them) were replaced by the new Presto-equipped "glass-paddle" fare gates.
Accessibility
Most of the Toronto subway system was built before wheelchair access was a requirement under the Ontarians with Disabilities Act (ODA). However, all subway stations built since 1996 are equipped with elevators, and seventy percent (56 of 75) of Toronto's subway stations are now accessible following upgrade works to add elevators, wide fare gates, and access doors to the station. The figures include the stations on the closed Line 3 Scarborough.
In 2021, the TTC planned to make all of its stations accessible by 2025. By comparison, the Montreal Metro plans for all stations to be accessible by 2038, the Chicago "L" plans for all stations to be accessible in the 2030s, and the New York City Subway plans for 95 percent of stations to be accessible by 2055.
All TTC trains offer level boarding for customers with wheelchairs and other accessibility needs, with priority seating and dedicated wheelchair areas onboard each train.
Cleanliness
The May 2010 TTC cleanliness audit of subway stations found that none of them met the transit agency's highest standard for cleanliness and general state of repair. Only 21 stations scored in the 70- to 80-percent range on the TTC's cleanliness scale, a range described as "Ordinary Tidiness", while 45 fell in the 60- to 70-percent range, achieving what the commission describes as "Casual Inattentiveness". The May audit was the third in a series of comprehensive assessments that began in 2009. The commission announced a "Cleaning Blitz" that would add 30 new temporary cleaners for the latter part of 2010 to address major issues, and has other action plans that include more full-time cleaners and new, more effective ways of addressing station cleanliness.
The TTC implemented stricter cleanliness protocols during the COVID-19 pandemic.
Design and public art
According to a 1991 CBC report, "aesthetics weren't really a priority" on Toronto's subway system, describing stations as "a series of bathrooms without plumbing". Since that time, Toronto's subway system has had over 40 pieces installed in various subway stations. More art appeared as new stations were built and older ones were renovated.
In 2004, USA Today said of the Sheppard subway line: "Despite the remarkable engineering feats of this metro, known as Sheppard Subway, [it is] the art covering walls, ceilings, and platforms of all five stations that stands out. Each station is 'a total art experience where artists have created imaginative environments, uniquely expressing themes of community, location, and heritage' through panoramic landscapes and ceramic wall murals."
Internet and mobile phone access
Wireless service implementation
In 2012, the TTC awarded a contract to BAI Communications Canada to design, build and maintain a cellular and Wi-Fi system along Toronto subway lines. BAI agreed to pay $25 million to the TTC over a 20-year period for the exclusive rights to provide the service. BAI in turn would sell access to the cellular system to other carriers.
On December 13, 2013, Wi-Fi Internet access was launched at and St. George stations. The ad-supported service (branded as "TConnect") was provided by BAI Canada. The TTC and BAI Canada planned to offer TConnect at all underground stations. Commuters had to view a video advertisement to gain access to the Internet. It was expected that all of the 70 subway stations would have service by 2017, as well as the six stations along the Line 1 extension to Vaughan. From early December 2015 to late January 2016, users of TConnect were required to authenticate using a Twitter account, with Twitter's Canadian operations sponsoring the TConnect Wi-Fi network. Users of the network could sign in to enable an automatic Wi-Fi connection for 30 days. This arrangement was resumed on an optional basis from July 2016 to early December 2016. By August 2017, Wi-Fi was available at all existing stations and would be available in all future stations.
On June 17, 2015, the TTC announced that Wind Mobile (later rebranded Freedom Mobile) customers would be able to access cellular connectivity at some TTC subway stations. Service was initially between Bloor–Yonge and St. George stations on Line 1, and between Bloor–Yonge and Spadina stations on Line 2. Other carriers declined to use the BAI cellular system because of the price BAI was asking for access.
In April 2023, Rogers Communications took over BAI Communications and honoured existing access for Freedom Mobile customers. In August 2023, Rogers implemented 5G wireless service at all the TTC's downtown stations and within the tunnels between them. In September 2023, the federal government imposed new licence conditions requiring that cellphone and data services be available on the entire subway network by the end of 2026 and that all carriers, including Telus and Bell, have access to it. On October 2, 2023, Bell and Telus offered their cellular customers access to the subway's 5G system.
By November 2023, wireless service had been expanded to all TTC stations and to the tunnels between Sheppard West and Vaughan Metropolitan Centre stations, but only for Rogers and Freedom customers. Bell and Telus customers continued to have wireless service at a limited number of stations. In December 2023, Telus and Bell reached a deal with Rogers to provide their customers the same subway wireless services as Rogers and Freedom customers.
Rogers and the TTC decided to end TConnect, the free public Wi-Fi service, on December 27, 2024, due to low usage and the cost of upgrading it.
Current wireless services
Rogers 5G wireless service is available in all subway stations for customers of Rogers, Freedom Mobile, Telus and Bell, but service access between stations is limited. 5G wireless service is available in open sections, as well as between Bloor–Yonge and Dupont stations on Line 1, and between and Keele stations on Line 2. 5G service is also available in the tunnels between Sheppard West and Vaughan Metropolitan Centre stations. Wireless service is available to customers of Rogers, Freedom Mobile, Bell and Telus (including flanker brands of these companies such as Koodo and Virgin Plus). This wireless service is not free; users require a subscription from one of the four aforementioned carriers, given the lack of subsidized wireless plans in Ontario.
Naming
The TTC considers several factors when naming stations and stops on its subway and LRT lines. It considers local landmarks, the cross streets of the station, distinct communities of the past and present in the vicinity of the station, names of other stations in the system, and the grade of the station.
Metrolinx uses five criteria for naming stations and stops. These are:
Simplicity
Names must be logical and relevant to the area the station is built in
Names should be relevant for the life of the station
Names should help passengers locate themselves within the region
Uniqueness
Metrolinx will use the word "stop" in place of "station" at 10 of the 25 stations along the first phase of Line 5, particularly those that are not grade-separated.
Rolling stock
The following table shows the vehicle type by line:
Heavy rail stock
Line 1 Yonge–University and Line 4 Sheppard operate using the newest version of Toronto's subway cars, the Toronto Rocket, while Line 2 Bloor–Danforth uses the older T1 subway trains.
The TTC's original G-series cars were manufactured by the Gloucester Railway Carriage and Wagon Company. All subsequent heavy-rail subway cars were manufactured by Bombardier Transportation or one of its predecessors (Montreal Locomotive Works, Hawker Siddeley, and UTDC). All cars starting with the Hawker Siddeley H series in 1965 have been built in Bombardier's Thunder Bay, Ontario, plant. The final H4 subway cars were retired on January 27, 2012. This was followed by the retirement of the H5 subway cars, which had their final in-service trip on June 14, 2013, and the H6 retirement, which followed one year later with a final run on June 20, 2014.
Following the introduction of the Toronto Rocket trains on Lines 1 and 4, all the T1 trains were moved to Line 2. The T1s were expected to last until 2026. By the end of 2019, the TTC had proposed an overhaul to extend the T1 fleet's life by 10 years at an estimated cost of $100 million. By mid-2020, the TTC had started the design phase for a new generation of subway trains to replace the T1 fleet on Line 2 Bloor–Danforth. In late 2021, the TTC expected that the new trains would be introduced between 2026 and 2030, at an estimated cost of $1.6 billion. On October 13, 2022, the TTC issued a request for proposals to construct 480 new subway cars (80 six-car train sets) of a design different from the T1 and Toronto Rocket fleet for delivery between 2027 and 2033. The TTC plans to overhaul the T1 fleet if newer trains cannot be delivered in time.
The Ontario Line will use smaller train sets and a smaller gauge than those used on the Toronto subway system. By using driverless trains with automatic train control (ATC), Metrolinx expects the line to be as frequent as the existing subway lines despite using smaller, lighter trains. In conjunction with ATC, stations will have platform-edge doors for safety, also allowing riders to exit and enter trains more quickly. The trains will be manufactured by Hitachi Rail, similar to trains in Copenhagen or Rome.
Light metro stock
Line 3 Scarborough used 28 S-series trains built by the Urban Transportation Development Corporation (UTDC) in Millhaven, Ontario. These Intermediate Capacity Transit System (ICTS) trains were Mark I models, similar in design to the original trains found on the Vancouver SkyTrain and the Detroit People Mover. These were the original vehicles on the line and were in service from the line's opening in 1985 to its closure in 2023. Because of the trains' age, they were refurbished for operation and initially intended to last until the extension of Line 2 Bloor–Danforth was built. In February 2021, the TTC announced plans to accelerate the retirement of Line 3, intending to close it in 2023. This was due to delays in planning and construction of the Line 2 extension (which was then projected to open in 2030 at the earliest) along with the increasing difficulty of performing critical maintenance work on the trains. Following an initial temporary closure owing to a derailment in July 2023, the TTC decided in August 2023 not to reopen the line. The TTC proposed selling some of these trains to the Detroit People Mover, which uses a similar technology.
Light rail stock
Metrolinx plans to use 76 Bombardier Flexity Freedom low-floor, light-rail vehicles for Line 5 Eglinton; however, 44 Alstom Citadis Spirit vehicles may be used if Bombardier is unable to deliver the Flexity Freedom on time. Such a substitution would require modifications to Line 5, especially the maintenance facility, as the Citadis Spirit is longer than the Flexity Freedom. Metrolinx intends to use 17 Citadis Spirit vehicles on Line 6 Finch West instead of the Flexity Freedom.
Technology
The heavy rail and light metro lines have some characteristics in common: Such lines are fully isolated from road traffic and pedestrians; the station platforms are covered, and the trains are boarded through many doors from high platforms within a fare-paid zone separated by faregates.
In contrast, the surface portions of the light rail lines (Lines 5 and 6) will fit into the street environment. Light-rail tracks will be laid on the surface within reserved lanes in the middle of the street, and cross street intersections at grade. Surface stations will have simple, low-level platforms. However, like heavy rail and light metro, passengers will be able to board and alight the light rail trains by multiple doors.
Line 3 Scarborough, a light metro, used a more complex technology than heavy rail, which a TTC document describes as follows:
Track is the 5 rail system on direct fixation and car is powered by an induction or "reaction rail" situated between the running rails at the same top of rail elevation. There are two side contacting power rails +300V and −300V respectively situated a distance of about 14 in. from the closest gauge line of one running rail.
Signals
Heavy rail
Fixed-block signalling has been used on the Toronto subway since the opening of Toronto's first subway in 1954 and was the first signalling system used on Lines 2 and 4. As of 2022, Lines 2 and 4 use fixed-block signalling, but Line 1 no longer does. Fixed-block signalling uses automatic signalling to prevent rear-end train collisions, while interlocking signals are used to prevent collisions from conflicting movements on track crossovers.
Automatic train control (ATC) has been implemented along the entire length of Line 1. In 2009, the TTC awarded a contract to Alstom to upgrade the signalling system of the existing section of Line 1, as well as equip its extension into Vaughan, with moving block–based communications-based train control (CBTC) by 2012. The estimated cost to implement ATC on Line 1 was $562 million, $424 million of which was funded by Metrolinx. The first section of the "Urbalis 400" ATC system on Line 1 entered revenue service on December 17, 2017, between Sheppard West and Vaughan stations, in conjunction with the opening of the Toronto–York Spadina subway extension (TYSSE) project.
The benefits of ATC on Line 1 are:
a reduced headway between trains from 2.5 minutes to 2 minutes during rush hours, allowing a 25 percent increase in the number of trains that can operate (a worked example follows this list)
fewer signal-related delays relative to the old fixed-block system
more efficient use of electricity, thus reducing operational costs
allowing single-track, bidirectional operation for trains in passenger service, albeit with reduced frequency, to allow for off-hour maintenance of the opposite track
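The 25 percent figure quoted in the first point above follows directly from the headway arithmetic; the short sketch below uses only the 2.5-minute and 2-minute headways given in the list, with everything else purely illustrative.

```python
def trains_per_hour(headway_minutes):
    """Maximum line throughput implied by a uniform headway."""
    return 60.0 / headway_minutes

before = trains_per_hour(2.5)     # 24 trains per hour
after = trains_per_hour(2.0)      # 30 trains per hour
print((after - before) / before)  # -> 0.25, i.e. the quoted 25 percent increase
```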
The TTC has plans to convert Line 2 to ATC by 2030, subject to the availability of funding.
Light metro
Line 3 Scarborough was equipped with automatic train control from the outset, using the same SelTrac IS system as Vancouver's SkyTrain, meaning it could be operated autonomously. However, the TTC opted to equip each S-series train with an operator on board for door monitoring.
The future Ontario Line will use automatic train control with driverless trains. Its stations will be equipped with platform screen doors.
Light rail
When completed, Line 5 Eglinton will use Bombardier Transportation's Cityflo 650 CBTC automatic train control on the underground section of the line between Laird station and Mount Dennis station, along with the Eglinton Maintenance and Storage Facility adjacent to Mount Dennis station.
Track
Lines 1, 2 and 4, the heavy-rail lines, run on tracks built to the Toronto gauge of , the same gauge used on the Toronto streetcar system. According to rail historians John F. Bromley and Jack May, the reason that the Yonge subway was built to the streetcar gauge was that between 1954 and 1965, subway bogies were maintained at the Hillcrest Complex, where the streetcar gauge was used for shop tracks. The Davisville Carhouse was not equipped to perform such heavy maintenance, and the bogies would be loaded onto a specially built track trailer for shipment between Davisville and Hillcrest. This practice ceased with the opening of the shops at the Greenwood Yard in 1965.
Line 3 Scarborough used standard-gauge tracks, as the ICTS design for the line did not allow for the interchange of rail equipment between the traditional subway system and Line 3. When its ICTS vehicles needed anything more than basic service (which could be carried out at the McCowan Yard), they were carried by truck to the Greenwood Subway Yard.
The Line 5 Eglinton and Line 6 Finch West LRT lines will be constructed with standard-gauge tracks. The projects are receiving a large part of their funding from the Ontario provincial transit authority Metrolinx and, to ensure a better price for purchasing vehicles, it wanted to have a degree of commonality with other similar projects within Ontario. The Ontario Line subway will similarly be built to standard gauge.
Facilities
The subway system has the following yards to provide storage, maintenance and cleaning for rolling stock. All yards are located above ground.
In the second quarter of 2018, the City of Toronto moved to expropriate Canadian Pacific Railway's disused Obico Yard at 30 Newbridge Road / 36 North Queen Street in Etobicoke for use as a potential future yard at the western end of Line 2 Bloor–Danforth. The yard is situated immediately to the southwest of Kipling station, the western terminus of Line 2.
Safety
There are several safety systems for use by passengers in emergencies:
Emergency alarms (formerly "Passenger assistance alarms"): Located throughout all subway trains. When the yellow strip is pressed, an audible alarm is activated within the car and a notification is sent to the train crew and the Transit Control Centre, which in turn dispatches a tiered response. An orange light is activated on the outside of the car with the alarm for emergency personnel to see where the problem is.
Emergency power cut devices: Marked by a blue light, located at both ends of each subway platform. Used to cut DC traction power in the event a person falls or is observed at track level, or in any emergency where train movement into the station would be dangerous. These devices cut power in both directions for approximately one station each way.
Emergency stopping mechanisms (PGEV: passenger/guard emergency valve): Located at each end of each subway car (with the exception of the Toronto Rocket trains). Activates the emergency brakes of the vehicle, stopping it in its current location (for use in extreme emergencies, such as persons trapped in doors as the train departs a station, doors opening in the tunnel, derailments, etc.).
Passenger intercoms: Located on subway platforms and near or in elevators in stations. Used to inform the station collector of security or life-safety issues.
Automated external defibrillators (AEDs): Located in several subway stations near collector booths. For use in the event someone suffers cardiac arrest.
Public telephones: Located in various locations in all stations, and at the Designated Waiting Areas (DWAs) on each subway platform. Emergency calls can be made to 911 toll free. Phones located at the DWAs also include a "Crisis Link" button that connects callers, free of charge, to a 24-hour crisis line in the event that they are contemplating self-harm.
Stations with high platforms have a crawl space under the platform edge which the TTC recommends that a person who has fallen onto the track use to avoid an oncoming train. Lying flat between the two rails is not recommended due to shallow clearances. The platform edge has a yellow strip behind which passengers should wait to avoid a fall.
Stations do not have platform screen doors, a feature which for Lines 1, 2 and 4 would require station modification, automatic train control (ATC) and a $1.35-billion investment which is not funded. ATC is needed to stop trains at a precise position along the platform to line up train doors with platform doors. ATC has been activated along the entire length of Line 1; thus, it would be possible to install platform screen doors along Line 1. The future Ontario Line will be built to operate with ATC and will feature platform doors from its opening. The benefits of platform doors would be:
Blocking those attempting suicide or trespassers from the tracks: it takes 70 to 90 minutes to resume operations each time there is a personal injury at track level
Eliminating fires from debris falling on the tracks and the third rail
Allowing trains to enter crowded stations at speed, thus speeding up service along the line
The light-rail Line 5 Eglinton will use a guideway intrusion detection system (GIDS) to detect trespassers on the tracks on the underground sections of the line. When GIDS detects a trespasser on the tracks, it will issue an audio warning to the trespasser, provide live CCTV video to central control, and automatically stop the train without driver intervention. Each station will be equipped with multiple GIDS scanners along the station platform. There will also be GIDS scanners at each tunnel portal. In addition, there will be scanners within the yellow tactile strips along the platform edge to issue an audio warning if a person steps on it before the train has arrived.
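A minimal sketch of the GIDS-style response logic described above (illustrative only; the function names and stand-in callbacks are assumptions for the example, not Metrolinx's actual control software):

```python
def respond_to_intrusion(location, play_audio, send_video, stop_train):
    """Illustrative intrusion response: warn the trespasser, stream live video
    to central control, and halt the approaching train without driver input."""
    play_audio(location, "Warning: please leave the track area immediately")
    send_video(location)
    stop_train(location)

# Tiny usage example with stand-in callbacks.
respond_to_intrusion(
    "station platform scanner 3",
    play_audio=lambda loc, msg: print(f"[{loc}] audio warning: {msg}"),
    send_video=lambda loc: print(f"CCTV feed from {loc} to central control"),
    stop_train=lambda loc: print(f"automatic brake command issued near {loc}"),
)
```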
A trial program began in 2008 with Toronto EMS and has been expanded and made permanent, with paramedics on hand at several stations during peak hours: Spadina and Bloor–Yonge (morning peak: 7am–10am) and Union and Eglinton (evening peak: 2pm–6pm).
By September 2023, the TTC was making naloxone available at each subway station so that designated trained TTC staff could attempt to rescue anyone having a drug overdose. Kits containing naloxone nasal spray would be stored at station collector booths. TTC special constables would carry naloxone.
Training
Subway operators begin their training at Hillcrest with a virtual reality mockup of a Toronto Rocket car. The simulator consists of the operator cab with full functions, a door and partial interior of a subway car. The simulator is housed in a simulated subway tunnel. Construction of a new subway training centre is underway at the Wilson Complex, as part of the Toronto Rocket subway car program.
Expansion plans
Provincially supported projects
On April 10, 2019, Ontario premier Doug Ford announced rapid transit–related projects that the Province of Ontario would support with either committed or future financing. One such project is the Ontario Line, a proposed rapid transit line that has succeeded the Relief Line proposal. Initially, the project was projected to be completed in 2027, but this was later pushed back to 2030. A groundbreaking ceremony for the Ontario Line was held on March 27, 2022.
The Line 5 West Extension to Pearson Airport is a proposal to extend Line 5 Eglinton from its terminus at Mount Dennis station west along Eglinton Avenue West to the proposed Pearson Transit Hub in Mississauga. In April 2019, Ford said that he would commit funds for this proposal.
The Yonge North Subway Extension (YNSE) is a proposal to extend Line 1 Yonge–University north along Yonge Street from Finch station, the existing terminus of Line 1, to near Highway 7 in Richmond Hill. There would be new stations at Steeles Avenue, Clark Avenue, between Highway 7 and Highway 407 near Langstaff GO Station and Richmond Hill Centre Terminal (dubbed "Bridge station"), and High Tech Road. The extension was proposed in the province's 2007 MoveOntario 2020 plan. A major problem with this proposal was that Line 1 was at capacity, and the TTC said in 2016 that the proposed Relief Line and SmartTrack would both need to be in service before opening the YNSE. In 2020, a preliminary agreement was signed between the Ontario provincial government and York Region that anticipated the completion of the extension by approximately 2030.
The Scarborough Subway Extension (SSE) is a proposal to replace Line 3 Scarborough with an eastward extension of Line 2 Bloor–Danforth. On October 8, 2013, Toronto City Council conducted a debate on whether to replace Line 3 with a light rail line or a subway extension. In 2014, the city council voted to extend Line 2 to Scarborough City Centre, which would result in the closure of Line 3. The SSE would be long and add one new station to Line 2 at Scarborough Town Centre. TTC and city staff finalized the precise route of the SSE in early 2017. In 2019, the Government of Ontario proposed a modified version of the proposal now known as the Line 2 East Extension (L2EE). The L2EE is long and adds three new stations, rather than one. The proposed completion deadline for the project is between 2029 and 2030.
The Line 4 East Extension to McCowan is a proposal to extend Line 4 Sheppard east along Sheppard Avenue East to McCowan Road, where it will connect with the Scarborough Subway Extension. Doug Ford said in April 2019 that he would commit funds related to this proposal.
Other active proposals
The Eglinton East LRT is a City of Toronto proposal to construct an LRT line (separate from Line 5 Eglinton) from Kennedy station east to Malvern. This proposal was originally part of the cancelled Scarborough–Malvern LRT in Transit City. It would have stations at Eglinton GO and Guildwood GO, as well as the University of Toronto Scarborough campus.
Inactive proposals
The Jane LRT is a proposed LRT line that would begin at Jane station on Line 2 and proceed north to Pioneer Village station on Line 1. While initially part of the cancelled Transit City plan, the Jane LRT is part of the 2018–2022 TTC Corporate Plan and tentatively referred to as Line 8.
The Line 4 West Extension to Sheppard West station is a proposal that would extend Line 4 Sheppard west along Sheppard Avenue West to Sheppard West station, where it would link to Line 1 Yonge–University. It is currently listed as an "unfunded future rapid transportation project" in the City of Toronto's 2013 Feeling Congested? report.
The Line 6 East Extension to Finch station is a proposal that would extend Line 6 Finch West east along Finch Avenue West to Finch station, where it would link up with Line 1 Yonge–University. In March 2010, the Ontario government eliminated the proposed section of line between Finch West station and Finch station because of budget constraints. This section of the line was part of the original Transit City proposal. In 2013, this plan was revived as an "unfunded future rapid transit project" in the City of Toronto's Feeling Congested? report, meaning this extension may be constructed sometime in the future. The extension was later shown in the 2018–2022 TTC Corporate Plan with no timeline for completion.
Along with a proposal to extend Line 6 to Finch station, there was another proposal that would have extended the line farther to Don Mills station, where it would have provided a connection to Line 4 Sheppard. In May 2009, Metrolinx proposed that the line be extended from Finch station along Finch Avenue East and Don Mills Road into Don Mills station to connect with the Sheppard East LRT and create a seamless crosstown LRT line in northern Toronto to parallel the Eglinton Crosstown LRT (later designated Line 5 Eglinton) in central Toronto. The TTC said that a planning study would have commenced in 2010.
The Line 6 West Extension to Pearson Airport is a proposal that would extend Line 6 Finch West west to Pearson Airport, where it would provide a link to Line 5 Eglinton. In 2009, the TTC studied the feasibility of potential routings for a future westward extension of the Etobicoke–Finch West LRT to the vicinity of the Woodbine Live development, Woodbine Centre, and Pearson International Airport. This extension was later reclassified as a future transit project as described in the 2013 Feeling Congested? report by the City of Toronto. Metrolinx revealed in January 2020 that they would study a possible connection to the Pearson Transit Hub at Pearson Airport.
Abandoned plans
The Queen subway line was a subway line first proposed in 1911. When Line 1 was first built, a roughed-in station was included under Queen station, with the intention that the Queen subway would be the city's second subway line. The route of the Queen subway line is included in the routes for both the Relief Line and the Ontario Line proposals.
The Eglinton West line was a proposed subway line in the late 1980s on which construction began in the early 1990s. It was cancelled after the election of Mike Harris as premier of Ontario. Much of its planned route is included in Line 5 Eglinton.
One proposed expansion of Line 2 Bloor–Danforth into Mississauga included eight potential stations stretching west from Kipling station to Mississauga City Centre, retrofitting some existing GO Transit stations. The plan was for the subway stations to open in 2011. Mississauga mayor Hazel McCallion and the Regional Municipality of Peel did not support the project.
The Relief Line was a proposed heavy-rail subway line running from Pape station south to Queen Street East and then west to the vicinity of Toronto City Hall. The proposal included intermediate stations at Sherbourne Street, Sumach Street, Broadview Avenue, and another near Gerrard Square. In January 2016, alignment options and possible stations were still being studied, and the project was unfunded. Construction was expected to take about ten years to complete. As early as 2008, Metrolinx chair Rob MacIsaac expressed the intent to construct the Relief Line to prevent overcrowding along Line 1. Toronto City Council also expressed support for this plan. In April 2019, the Government of Ontario under Doug Ford announced that the Ontario Line would be built instead of the Relief Line. As a result, TTC and City of Toronto staff suspended further planning work on the Relief Line in June 2019.
Transit City
The Sheppard East LRT was a proposed light rail line running east from Don Mills station to Morningside Avenue in Scarborough. The line was to be long with 25 surface stations and one underground connection at Don Mills station on Line 4 Sheppard. Construction of the Sheppard East LRT was to start upon completion of Line 6 Finch West. However, in July 2016, the Toronto Star reported the Sheppard LRT had been deferred indefinitely. In April 2019, Premier Doug Ford announced that the provincial government would extend Line 4 Sheppard to McCowan Road at some unspecified time in the future, replacing the proposed Sheppard East LRT.
The Don Mills LRT was a proposed LRT line that would have headed north from Pape station along Don Mills to Don Mills station. Its route was later incorporated into the Relief Line and Ontario Line proposals.
| Technology | Canada | null |
584992 | https://en.wikipedia.org/wiki/Argentinosaurus | Argentinosaurus | Argentinosaurus (meaning "lizard from Argentina") is a genus of giant sauropod dinosaur that lived during the Late Cretaceous period in what is now Argentina. Although it is only known from fragmentary remains, Argentinosaurus is one of the largest known land animals of all time, perhaps the largest, measuring long and weighing . It was a member of Titanosauria, the dominant group of sauropods during the Cretaceous. It is regarded by many paleontologists as the largest dinosaur ever, and perhaps the longest animal ever, though neither claim is yet supported by conclusive evidence.
The first Argentinosaurus bone was discovered in 1987 by a farmer on his farm near the city of Plaza Huincul. A scientific excavation of the site led by the Argentine palaeontologist José Bonaparte was conducted in 1989, yielding several back vertebrae and parts of a sacrum—fused vertebrae between the back and tail vertebrae. Additional specimens include a complete femur (thigh bone) and the shaft of another. Argentinosaurus was named by Bonaparte and the Argentine palaeontologist Rodolfo Coria in 1993; the genus contains a single species, A. huinculensis. The generic name Argentinosaurus means "Argentine lizard", and the specific name huinculensis refers to its place of discovery, Plaza Huincul.
The fragmentary nature of Argentinosaurus remains makes their interpretation difficult. Arguments revolve around the position of the recovered vertebrae within the vertebral column and the presence of accessory articulations between the vertebrae that would have strengthened the spine. A computer model of the skeleton and muscles estimated this dinosaur had a maximum speed of 7 km/h (5 mph) with a pace, a gait where the fore and hind limb of the same side of the body move simultaneously. The fossils of Argentinosaurus were recovered from the Huincul Formation, which was deposited in the middle Cenomanian to early Turonian ages (about 96 to 92 million years ago) and contains a diverse dinosaur fauna including the giant theropod Mapusaurus.
Discovery
The first Argentinosaurus bone, which is now thought to be a fibula (calf bone), was discovered in 1987 by Guillermo Heredia on his farm "Las Overas" about east of Plaza Huincul, in Neuquén Province, Argentina. Heredia, initially believing he had discovered petrified logs, informed the local museum, the Museo Carmen Funes, whose staff members excavated the bone and stored it in the museum's exhibition room. In early 1989, the Argentine palaeontologist José F. Bonaparte initiated a larger excavation of the site involving palaeontologists of the Museo Argentino de Ciencias Naturales, yielding a number of additional elements from the same individual. The individual, which later became the holotype of Argentinosaurus huinculensis, is catalogued under the specimen number MCF-PVPH 1.
Separating fossils from the very hard rock in which the bones were encased required the use of pneumatic hammers. The additional material recovered included seven dorsal vertebrae (vertebrae of the back), the underside of the sacrum (fused vertebrae between the dorsal and tail vertebrae) including the first to fifth and some sacral ribs, and a part of a dorsal rib (rib from the flank). These finds were also incorporated into the collection of the Museo Carmen Funes.
Bonaparte presented the new find in 1989 at a scientific conference in San Juan. The formal description was published in 1993 by Bonaparte and the Argentine palaeontologist Rodolfo Coria, with the naming of a new genus and species, Argentinosaurus huinculensis. The generic name means "Argentine lizard", while the specific name refers to the town Plaza Huincul. Bonaparte and Coria described the limb bone discovered in 1987 as an eroded tibia (shin bone), although the Uruguayan palaeontologist Gerardo Mazzetta and colleagues reidentified this bone as a left fibula in 2004. In 1996, Bonaparte referred (assigned) a complete femur (thigh bone) from the same locality to the genus, which was put on exhibit at the Museo Carmen Funes. This bone was deformed by front-to-back crushing during fossilization. In their 2004 study, Mazzetta and colleagues mentioned an additional femur that is housed in the La Plata Museum under the specimen number MLP-DP 46-VIII-21-3. Though not as strongly deformed as the complete femur, it preserves only the shaft and lacks its upper and lower ends. Both specimens belonged to individuals equivalent in size to the holotype individual. As of 2019, however, it was still uncertain whether any of these femora belonged to Argentinosaurus.
Description
Size
Argentinosaurus is among the largest known land animals, although its exact size is difficult to estimate because of the incompleteness of its remains. To counter this problem, palaeontologists can compare the known material with that of smaller related sauropods known from more complete remains. The more complete taxon can then be scaled up to match the dimensions of Argentinosaurus. Mass can be estimated from known relationships between certain bone measurements and body mass, or through determining the volume of models.
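As a rough illustration of the scaling approach described above, the short Python sketch below applies simple isometric scaling, in which a geometrically similar animal's mass grows roughly with the cube of its linear dimensions. The reference mass and lengths used here are hypothetical placeholders chosen only to show the arithmetic, not published measurements of Argentinosaurus or any relative.

```python
def scaled_mass(ref_mass_kg, ref_length_m, target_length_m):
    """Isometric (volumetric) scaling: if the target animal is geometrically
    similar to a better-known relative, body mass scales roughly with the
    cube of the ratio of their linear dimensions."""
    return ref_mass_kg * (target_length_m / ref_length_m) ** 3

# Hypothetical reference values for a smaller, more completely known
# titanosaur, scaled up to a 33 m target length: about 85,000 kg.
print(round(scaled_mass(ref_mass_kg=8000, ref_length_m=15.0, target_length_m=33.0)))
```

In practice, published reconstructions adjust for differences in body proportions rather than assuming strict geometric similarity, which is one reason the estimates summarised below vary so widely.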
A reconstruction of Argentinosaurus created by Gregory Paul in 1994 yielded a length estimate of . Later that year, estimates by Bonaparte and Coria suggesting a hind limb length of , a trunk length (hip to shoulder) of , and an overall body length of were published. In 2006, Kenneth Carpenter reconstructed Argentinosaurus using the more complete Saltasaurus as a guide and estimated a length of . In 2008, Jorge Calvo and colleagues used the proportions of Futalognkosaurus to estimate the length of Argentinosaurus at less than . In 2013, William Sellers and colleagues arrived at a length estimate of and a shoulder height of by measuring the skeletal mount in Museo Carmen Funes. During the same year, Scott Hartman suggested that because Argentinosaurus was then thought to be a basal titanosaur, it would have a shorter tail and narrower chest than Puertasaurus, which he estimated to be about long, indicating Argentinosaurus was slightly smaller. In 2016, Paul estimated the length of Argentinosaurus at , but later estimated a greater length of or longer in 2019, restoring the unknown neck and tail of Argentinosaurus after those of other large South American titanosaurs.
Paul estimated a body mass of for Argentinosaurus in 1994. In 2004, Mazzetta and colleagues provided a range of and considered to be the most likely mass, making it the heaviest sauropod known from good material. In 2013, Sellers and colleagues estimated a mass of by calculating the volume of the aforementioned Museo Carmen Funes skeleton. In 2014 and 2018, Roger Benson and colleagues estimated the mass of Argentinosaurus at , but these estimates were questioned due to a very large error range and lack of precision. In 2016, using equations that estimate body mass based on the circumference of the humerus and femur of quadrupedal animals, Bernardo González Riga and colleagues estimated a mass of based on an isolated femur; it is uncertain whether this femur actually belongs to Argentinosaurus. In the same year, Paul moderated his earlier estimate from 1994 and listed the body mass of Argentinosaurus at more than . In 2019, Paul moderated his 2016 estimate and gave a mass estimate of based on his skeletal reconstructions (diagrams illustrating the bones and shape of an animal) of Argentinosaurus in dorsal and lateral view. In 2020, Campione and Evans also arrived at a body mass estimate of approximately . In 2023, Paul and Larramendi proposed that the holotype would have weighed between at maximum. They further suggested that the enigmatic, fragmentary Bruhathkayosaurus possibly weighed more, between .
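The limb-circumference method mentioned above can be sketched in a few lines. The sketch below uses the scaling relationship for quadrupeds published by Campione and Evans (2012), log10(body mass in grams) = 2.754 × log10(humeral + femoral circumference in millimetres) − 1.097; the coefficients are quoted from memory and should be checked against the original paper, and the input circumferences are hypothetical, not measurements of any Argentinosaurus specimen.

```python
import math

def quadruped_mass_kg(humerus_circ_mm, femur_circ_mm):
    """Estimate body mass (kg) from the minimum shaft circumferences of the
    humerus and femur, using the Campione & Evans (2012) relationship for
    quadrupeds: log10(mass in g) = 2.754 * log10(Ch + Cf) - 1.097."""
    log_mass_g = 2.754 * math.log10(humerus_circ_mm + femur_circ_mm) - 1.097
    return (10 ** log_mass_g) / 1000.0  # convert grams to kilograms

# Hypothetical circumferences (mm), for illustration only; the result is on
# the order of 85 tonnes under these assumed inputs.
print(round(quadruped_mass_kg(humerus_circ_mm=800, femur_circ_mm=1100)))
```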
While Argentinosaurus was definitely a massive animal, there is disagreement over whether it was the largest known titanosaur. Puertasaurus, Futalognkosaurus, Dreadnoughtus, Paralititan, "Antarctosaurus" giganteus, and Alamosaurus have all been considered to be comparable in size with Argentinosaurus by some studies, although others have found them to be notably smaller. In 2017, Carballido and colleagues considered Argentinosaurus to be smaller than Patagotitan, since the latter had a greater area enclosed by the , , and of its anterior dorsal vertebrae. However, Paul found Patagotitan to be smaller than Argentinosaurus in 2019, due to the latter's dorsal column being considerably longer. Even if Argentinosaurus was the largest-known titanosaur, other sauropods including Maraapunisaurus and a giant mamenchisaurid, may have been larger, although these are only known from very scant remains. Some diplodocids, such as Supersaurus and Diplodocus may have exceeded Argentinosaurus in length despite being considerably less massive. The mass of the blue whale, however, which can be greater than , still exceeds that of all known sauropods.
Vertebrae
Argentinosaurus likely possessed 10 dorsal vertebrae, like other titanosaurs. The vertebrae were enormous even for sauropods; one dorsal vertebra has a reconstructed height of and a width of , and the are up to in width. In 2019, Paul estimated the total length of the dorsal vertebral column at and the width of the pelvis at 0.6 times the combined length of the dorsal and sacral vertebral column. The dorsals were (concave at the rear) as in other macronarian sauropods. The (excavations on the sides of the centra) were proportionally small and positioned in the front half of the centrum. The vertebrae were internally lightened by a complex pattern of numerous air-filled chambers. Such camellate bone is, among sauropods, especially pronounced in the largest and longest-necked species. In both the dorsal and sacral vertebrae, very large cavities measuring were present. The dorsal ribs were tubular and cylindrical in shape, in contrast with other titanosaurs. Bonaparte and Coria, in their 1993 description, noted the ribs were hollow, unlike those of many other sauropods, but later authors argued this hollowing could also have been due to erosion after the death of the individual. Argentinosaurus, like many titanosaurs, probably had six sacral vertebrae (those in the hip region), although the last one is not preserved. The centra of the second to fifth sacral vertebrae were much reduced in size and considerably smaller than the centrum of the first sacral. The sacral ribs curved downwards. The second sacral rib was larger than the other preserved sacral ribs, though the size of the first is unknown due to its incompleteness.
Because of their incomplete preservation, the original position of the known dorsal vertebrae within the vertebral column is disputed. Dissenting configurations were suggested by Bonaparte and Coria in 1993; Fernando Novas and Martín Ezcurra in 2006; and Leonardo Salgado and Jaime Powell in 2010. One vertebra was interpreted by these studies as the first, fifth or third; and another vertebra as the second, tenth or eleventh, or ninth, respectively. A reasonably complete vertebra was found to be the third by the 1993 and 2006 studies, but the fourth by the 2010 study. Another vertebra was interpreted by the three studies as being part of the rear section of the dorsal vertebral column, as the fourth, or as the fifth, respectively. In 1993, two articulated (still connected) vertebrae were thought to be of the rear part of the dorsal column but are interpreted as the sixth and seventh vertebrae in the two later studies. The 2010 study mentioned another vertebra that was not mentioned by the 1993 and 2006 studies; it was presumed to belong to the rear part of the dorsal column.
Another contentious issue is the presence of hyposphene-hypantrum articulations, accessory joints between vertebrae that were located below the main articular processes. Difficulties in interpretation arise from the fragmentary preservation of the vertebral column; these joints are hidden from view in the two connected vertebrae. In 1993, Bonaparte and Coria said the hyposphene-hypantrum articulations were enlarged, as in the related Epachthosaurus, and had additional articular surfaces that extended downwards. This was confirmed by some later authors; Novas noted the hypantrum (a bony extension below the articular processes of the front face of a vertebra) extended sidewards and downwards, forming a much-broadened surface that connected with the equally enlarged hyposphene at the back face of the following vertebra. In 1996, Bonaparte stated these features would have made the spine more rigid and were possibly an adaptation to the giant size of the animal. Other authors argued most titanosaur genera lacked hyposphene-hypantrum articulations and that the articular structures seen in Epachthosaurus and Argentinosaurus are thickened vertebral (ridges). Sebastián Apesteguía, in 2005, argued the structures seen in Argentinosaurus, which he termed hyposphenal bars, are indeed thickened laminae that could have been derived from the original hyposphene and had the same function.
Limbs
The complete femur that was assigned to Argentinosaurus is long. The femoral shaft has a circumference of about at its narrowest part. Mazzetta and colleagues used regression equations to estimate its original length at , which is similar to the length of the other femur, and later in 2019 Paul gave a similar estimate of . By comparison, the complete femora preserved in the other giant titanosaurs Antarctosaurus giganteus and Patagotitan mayorum measure and , respectively. While the holotype specimen does not preserve a femur, it preserves a slender fibula (originally interpreted as a tibia) that is in length. When it was identified as a tibia, it was thought to have a comparatively short cnemial crest, a prominent extension at the upper front that anchored muscles for stretching the leg. However, as stated by Mazzetta and colleagues, this bone lacks both the proportions and anatomical details of a tibia, while being similar in shape to other sauropod fibulae.
Classification
Relationships within Titanosauria are amongst the least understood of all groups of dinosaurs. Traditionally, the majority of sauropod fossils from the Cretaceous had been referred to a single family, the Titanosauridae, which has been in use since 1893. In their 1993 first description of Argentinosaurus, Bonaparte and Coria noted it differed from typical titanosaurids in having hyposphene-hypantrum articulations. As these articulations were also present in the titanosaurids Andesaurus and Epachthosaurus, Bonaparte and Coria proposed a separate family for the three genera, the Andesauridae. Both families were united into a new, higher group called Titanosauria.
In 1997, Salgado and colleagues found Argentinosaurus to belong to Titanosauridae in an unnamed clade with Opisthocoelicaudia and an indeterminate titanosaur. In 2002, Davide Pisani and colleagues recovered Argentinosaurus as a member of Titanosauria, and again found it to be in a clade with Opisthocoelicaudia and an unnamed taxon, in addition to Lirainosaurus. A 2003 study by Jeffrey Wilson and Paul Upchurch found both Titanosauridae and Andesauridae to be invalid; the Titanosauridae because it was based on the dubious genus Titanosaurus and the Andesauridae because it was defined on plesiomorphies (primitive features) rather than on synapomorphies (newly evolved features that distinguish the group from related groups). A 2011 study by Philip Mannion and Calvo found Andesauridae to be paraphyletic (excluding some of the group's descendants) and likewise recommended its disuse.
In 2004, Upchurch and colleagues introduced a new group called Lithostrotia that included the more derived (evolved) members of Titanosauria. Argentinosaurus was classified outside this group and thus as a more basal ("primitive") titanosaurian. The basal position within Titanosauria was confirmed by a number of subsequent studies. In 2007, Calvo and colleagues named Futalognkosaurus; they found it to form a clade with Mendozasaurus and named it Lognkosauria. A 2017 study by Carballido and colleagues recovered Argentinosaurus as a member of Lognkosauria and the sister taxon of Patagotitan. In 2018, González Riga and colleagues also found it to belong in Lognkosauria, which in turn was found to belong to Lithostrotia.
Another 2018 study by Hesham Sallam and colleagues found two different phylogenetic positions for Argentinosaurus based on two data sets. They did not recover it as a lognkosaurian but as either a basal titanosaur or a sister taxon of the more derived Epachthosaurus. In 2019, Julian Silva Junior and colleagues found Argentinosaurus to belong to Lognkosauria once again; they recovered Lognkosauria and Rinconsauria (another group generally included in Titanosauria) to be outside Titanosauria. Another 2019 study by González Riga and colleagues also found Argentinosaurus to belong to Lognkosauria; they found this group to form a larger clade with Rinconsauria within Titanosauria, which they named Colossosauria.
Topology according to Carballido and colleagues, 2017.
Topology according to González Riga and colleagues, 2019.
Palaeobiology
The giant size of Argentinosaurus and other sauropods was likely made possible by a combination of factors; these include fast and energy-efficient feeding allowed for by the long neck and lack of mastication, fast growth and fast population recovery due to their many small offspring. Advantages of giant sizes would likely have included the ability to keep food inside the digestive tract for lengthy periods to extract a maximum of energy, and increased protection against predators. Sauropods were oviparous (egg-laying). In 2016, Mark Hallett and Matthew Wedel stated that the eggs of Argentinosaurus were probably only in volume, and that a hatched Argentinosaurus was no longer than and not heavier than . The largest sauropods increased their size by five orders of magnitude after hatching, more than in any other amniote animals. Hallett and Wedel argued size increases in the evolution of sauropods were commonly followed by size increases of their predators, theropod dinosaurs. Argentinosaurus might have been preyed on by Mapusaurus, which is among the largest theropods known. Mapusaurus is known from at least seven individuals found together, raising the possibility that this theropod hunted in packs to bring down large prey including Argentinosaurus.
In 2013, Sellers and colleagues used a computer model of the skeleton and muscles of Argentinosaurus to study its speed and gait. Before computer simulations, the only way of estimating speeds of dinosaurs was through studying anatomy and trackways. The computer model was based on a laser scan of a mounted skeletal reconstruction on display at the Museo Carmen Funes. Muscles and their properties were based on comparisons with living animals; the final model had a mass of . Using computer simulation and machine learning techniques, which found a combination of movements that minimised energy requirements, the digital Argentinosaurus learned to walk. The optimal gait found by the algorithms was close to a pace (forelimb and hind limb on the same side of the body move simultaneously). The model reached a top speed of just over 2 m/s (7.2 km/h, 5 mph). The authors concluded that, at its giant size, Argentinosaurus had reached a functional limit. Much larger terrestrial vertebrates might be possible but would require different body shapes and possibly behavioural change to prevent joint collapse. The authors of the study cautioned that the model is not fully realistic and is too simplistic, and that it could be improved in many areas. For further studies, more data from living animals is needed to improve the soft tissue reconstruction, and the model needs to be confirmed based on more complete sauropod specimens.
Palaeoenvironment
Argentinosaurus was discovered in the Argentine Province of Neuquén. It was originally reported from the Huincul Group of the Río Limay Formation, which have since become known as the Huincul Formation and the Río Limay Subgroup, the latter of which is a subdivision of the Neuquén Group. This unit is located in the Neuquén Basin in Patagonia. The Huincul Formation is composed of yellowish and greenish sandstones of fine-to-medium grain, some of which are tuffaceous. These deposits were laid down during the Upper Cretaceous, either in the middle Cenomanian to early Turonian stages or the early Turonian to late Santonian. The deposits represent the drainage system of a braided river.
Fossilised pollen indicates a wide variety of plants were present in the Huincul Formation. A study of the El Zampal section of the formation found hornworts, liverworts, ferns, Selaginellales, possible Noeggerathiales, gymnosperms (including gnetophytes and conifers), and angiosperms (flowering plants), in addition to several pollen grains of unknown affinities. The Huincul Formation is among the richest Patagonian vertebrate associations, preserving fish including dipnoans and gar, chelid turtles, squamates, sphenodonts, neosuchian crocodilians, and a wide variety of dinosaurs. Vertebrates are most commonly found in the lower, and therefore older, part of the formation.
In addition to Argentinosaurus, the sauropods of the Huincul Formation are represented by another titanosaur, Choconsaurus, and several rebbachisaurids including Cathartesaura, Limaysaurus, and some unnamed species. Theropods including carcharodontosaurids such as Mapusaurus, abelisaurids including Skorpiovenator, Ilokelesia, and Tralkasaurus, noasaurids such as Huinculsaurus, paravians such as Overoraptor, and other theropods such as Aoniraptor and Gualicho have also been discovered there. Several iguanodonts are also present in the Huincul Formation.
| Biology and health sciences | Sauropods | Animals |
585373 | https://en.wikipedia.org/wiki/Ornithischia | Ornithischia | Ornithischia () is an extinct clade of mainly herbivorous dinosaurs characterized by a pelvic structure superficially similar to that of birds. The name Ornithischia, or "bird-hipped", reflects this similarity and is derived from the Greek stem (), meaning "bird", and (), meaning "hip". However, birds are only distantly related to this group, as birds are theropod dinosaurs.
Ornithischians with well-known anatomical adaptations include the ceratopsians or "horn-faced" dinosaurs (e.g. Triceratops), the pachycephalosaurs or "thick-headed" dinosaurs, the armored dinosaurs (Thyreophora) such as stegosaurs and ankylosaurs, and the ornithopods. There is strong evidence that certain groups of ornithischians lived in herds, often segregated by age group, with juveniles forming their own flocks separate from adults. Some were at least partially covered in filamentous (hair- or feather-like) pelts, and there is much debate over whether these filaments found in specimens of Tianyulong, Psittacosaurus, and Kulindadromeus may have been primitive feathers.
Description
Ornithischia is a very large and diverse group of dinosaurs, with members known from all continents, habitats, and a very large range of sizes. They are primarily herbivorous browsers or grazers, but some members may have also been opportunistic omnivores. Ornithischians are united by multiple features of the skull, teeth, and skeleton, including especially the presence of a and , an increased number of , the absence of , and an . Early ornithischians ranged around in length, with them increasing in size over time so that the largest armoured ornithischians were around and , the largest horned ornithischians were around and , and the largest crested ornithischians were around and .
Much of the knowledge of early ornithischian anatomy comes from Lesothosaurus, which is a taxon known from multiple skulls and skeletons from the Early Jurassic of Lesotho. The rear of its skull is box-like, while the snout tapers to a point. The is small, the that opens from the side of the skull into the palate is large, shallow and triangular, the is large and round and has a palpebral creating a brow, and the lower jaw has a large .
The skulls of Emausaurus and Scelidosaurus, two early members of the armoured group Thyreophora, show similarities in the box-like skull that tapers to the front. The antorbital fossa is smaller and forming an elongate oval in both taxa, and the palpebral which is elongate and slender in Lesothosaurus is widened in Emausaurus and completely incorporated into the skull as a flat bone in Scelidosaurus. Skulls in members of the thyreophoran group Stegosauria are much longer and lower, with the width at the back being greater than the height in Stegosaurus. The snout and lower jaw are long and deep, and in some genera the does not have any teeth. As in Scelidosaurus, the palpebral forms the top border of the orbit as a flat brow bone, but the antorbital fossa is reduced to the point of absence in some genera.
Ankylosaurs, the other group of armoured ornithischians, have very robust, immobile skulls, with three significant features that separate them from other groups. The antorbital fossa, and mandibular fenestra are all closed, the sutures separating skull bones are almost completely obliterated by surface texturing, and there is bony armour above the orbits, and at the top and bottom corners of the back of the skull. Teeth are sometimes absent from the premaxilla, and both the upper and lower jaws have deeply inset teeth creating large cheeks. Ankylosaurs also have very extensive and complicated network of sinuses, formed by bone growth in the palate.
The skulls are known from many early ornithopods and some heterodontosaurids, showing similar general features. Skulls are relatively tall with shorter snouts, but the snout is elongated in some later taxa like Thescelosaurus. The orbit and antorbital fossa are large, but the nasal opening is small, and while teeth are present in the premaxilla, there is a toothless front tip that likely formed a keratinous beak. The premaxillary teeth and the first lower tooth in Heterodontosaurus are enlarged into sizeable canines. In later ornithopods, the skulls are more elongate and sometimes fully rectangular, with a very large nasal opening, and a thin, elongate palpebral that can extend the entire way across the orbit. Teeth are almost always absent from the premaxilla, the antorbital fossa is reduced and round to slit-like, and the tip of the snout is sometimes flared to form a broad beak. Members of the ornithopod family Hadrosauridae show further adaptations, including the formation of dental batteries where teeth are continuously replaced, and in many genera the development of prominent cranial crests formed by multiple different bones of the skull.
Pachycephalosauria, at one time thought to be close to ornithopods and now known to be related instead to ceratopsians, show a unique skull anatomy that is unlike any other ornithischian. The bones of the top of the skull are thickened and in many taxa expanded significantly to form round bony domes as the top of the head, as well as possessing small nodes or elongate spikes along the back edge of the skull. Many taxa are only known from these thick skull domes, which are fused from the and bones. As in many other ornithischians, the snout is short and tapering, the nasal opening is small, the antorbital fossa is sometimes absent, and there are premaxillary teeth, though only three. The two palpebrals are also incorporated into the skull roof as in thyreophorans, rather than free.
Ceratopsians, the sister group to pachycephalosaurs, also display many cranial adaptations, most importantly the evolution of a bone called the rostral that forms the top beak opposite the predentary. The bones flare to the sides to create a pentagonal skull seen from above, the nasal opening is closer to the top of the snout than the teeth, and while the snout tapers in some taxa, it is very deep and short in Psittacosaurus. The ceratopsian palpebral is generally triangular, and the back edge of the skull roof forms a flat frill that is enlarged in more derived ceratopsians. The ceratopsian family Ceratopsidae progresses on these features with the addition of horns above each orbit and on the top of the snout, as well as substantial elongation of the frill and in many genera the development of two large parietal fenestrae forming holes in the frill. The skull and frill elongation makes the skulls of Torosaurus and Pentaceratops the largest of any known terrestrial vertebrate, at over long.
Early ornithischians were relatively small dinosaurs, averaging about 1–2 meters in body length, with a triangular skull that had large circular orbits on the sides. This suggests that early ornithischians had relatively huge eyes that faced laterally. The forelimbs of early ornithischians are considerably shorter than their hindlimbs. A small forelimb such as those present in early ornithischians would not have been useful for locomotion, and it is evident that early ornithischians were bipedal dinosaurs. The entire skeleton was lightly built, with a largely fenestrated skull and a very stout neck and trunk. The tail is nearly half of the dinosaurs' overall length. The long tail presumably acted as a counterbalance and as a compensating mechanism for shifts in the creature's center of gravity. The hindlimbs of early ornithischians show that the tibia is considerably longer than the femur, a feature that suggests that early ornithischians were adapted for bipedality, and were fast runners.
"Bird-hip"
The ornithischian pelvis was "opisthopubic", meaning that the pubis pointed down and backwards (posterior), parallel with the ischium (Figure 1a). Additionally, the ilium had a forward-pointing process (the preacetabular process) to support the abdomen. This resulted in a four-pronged pelvic structure. In contrast to this, the saurischian pelvis was "propubic", meaning the pubis pointed toward the head (anterior), as in ancestral reptiles (Figure 1b).
The opisthopubic pelvis independently evolved at least three times in dinosaurs (in ornithischians, birds and therizinosauroids). Some argue that the opisthopubic pelvis evolved a fourth time, in the clade Dromaeosauridae, but this is controversial, as other authors argue that dromaeosaurids are mesopubic. It has also been argued that the opisthopubic condition is basal to maniraptorans (including among others birds, therizinosauroids and dromaeosaurids), with some clades having later experienced a reversal to the propubic condition.
Classification
History
The first recognition of an herbivorous group of dinosaurs was named Orthopoda in 1866 by Edward Drinker Cope, a name that is now recognized as a synonym of Ornithischia. Discussions on the taxonomy of dinosaurs by Othniel Charles Marsh identified two major groups of herbivorous dinosaurs, Ornithopoda and Stegosauria, containing genera from a broad geographic and stratigraphic distribution. While often these groups were placed within Dinosauria, Harry Govier Seeley suggested instead in 1888 that ornithopods and stegosaurs, which shared many features in the skull, limbs, and hip, were unrelated to other dinosaurs, and so he proposed that Dinosauria was an unnatural grouping of two independently-evolved suborders, Saurischia and Ornithischia. It is from the anatomy of the hip that Seeley chose the name Ornithischia, referencing the bird-like anatomy of the ischium bone. Many researchers did not follow the division of Seeley at first, with Marsh naming the group Predentata to unite ornithopods, stegosaurs, and Ceratopsia within Dinosauria, but with additional work and new discoveries the unnatural nature of Dinosauria came to be accepted, and the names Seeley proposed found common use. After further decades, in 1974 Robert T. Bakker and Peter M. Galton provided new evidence in support of the grouping of ornithischians and saurischians together within a natural Dinosauria, which has been supported since.
The first cladistic studies on Ornithischia were published simultaneously in 1984 by David B. Norman, Andrew R. Milner, and Paul C. Sereno. These studies differed somewhat in their results, but found that Iguanodon was closer to hadrosaurs than to other ornithopods, followed by Dryosaurus, Hypsilophodon and then Lesothosaurus and its relatives. While the study of Norman placed ceratopsians between Hypsilophodon and more derived ornithopods, the study of Sereno placed ceratopsians with ankylosaurs and stegosaurs. It has since been recognized that ceratopsians are closer to ornithopods than to the armoured ankylosaurs and stegosaurs, but the relationships of some groups are still unsettled, with some results more consistent than others. An early study that examined the relationships within Ornithischia in greater detail was that of Sereno in 1986, who provided features that supported the evolution of all ornithischian groups and shared similarities with earlier studies. Sereno found that Lesothosaurus was the most primitive ornithischian, with all other ornithischians united within the clade Genasauria, which has two subgroups. The first subgroup, Thyreophora, unites ankylosaurs and stegosaurs along with more primitive taxa like Scelidosaurus, while the second subgroup, Cerapoda, contained ornithopods, ceratopsians, pachycephalosaurs, and small primitive forms. One group of the small primitive forms considered to be cerapodans by Sereno, Heterodontosauridae, has since been found to be a group of very early ornithischians of similar evolutionary status to Lesothosaurus, although this result is not definitive.
The first large-scale numerical analysis of the phylogenetics of Ornithischia was published in 2008 by Richard J. Butler and colleagues, including many primitive ornithischians and members from all of the major subgroups, to test some of the hypotheses given previously about ornithischian evolution and the relationships of the groups. Thyreophora was found to be a supported group, as well as the clade of pachycephalosaurs and ceratopsians that Sereno named Marginocephalia in 1986. Some taxa considered earlier to be ornithopods, like heterodontosaurids, Agilisaurus, Hexinlusaurus and Othnielia, were instead found to be outside of both Ornithopoda and Ceratopsia, but still closer to those two groups than thyreophorans. The early Argentinian taxon Pisanosaurus was found to be the most primitive ornithischian, but while overall results agreed with earlier studies and showed some stability, areas of the evolutionary tree were found to be problematic, and with potential for later change. In 2021, a new phylogenetic study was published authored by Paul-Emile Dieudonné and colleagues that instead found Heterodontosauridae to nest alongside Pachycephalosauria within Marginocephalia, changing the early evolution of ornithopods considerably, and showing that the evolution of ornithischians was far from definitive. Below are the cladograms of Sereno, Butler and colleagues, and Dieudonné and colleagues, restricted to the major clades of Ornithischia, Heterodontosauridae, Lesothosaurus and Pisanosaurus.
Sereno, 1986
Butler et al., 2008
Dieudonné et al., 2021
Subgroups
When Ornithischia was first named, Seeley united the orders Ornithopoda and Stegosauria of Marsh's taxonomy within the new group. Ceratopsia was then recognized as a unique group related to ornithopods and stegosaurs by Marsh by 1894, with each of the three suborders still being recognized as distinct groups today. Ceratopsians are recognized as a group that grew in diversity later in the Cretaceous after evolving in the Late Jurassic, encompassing a diverse array of bodyforms from the small, bipedal Psittacosaurus up to the very large, quadrupedal, horned and frilled ceratopsids like Torosaurus, which has the longest skull of any terrestrial vertebrate. Ornithopods, which range from the Early Jurassic in some studies until the end of the Cretaceous with continuous diversity, are generally bipedal and unarmoured, though some later groups like Hadrosauridae evolved complex dental anatomy in the form of batteries of teeth. Stegosaurs are comparatively limited, restricted to a primarily Jurassic group of moderate to large, quadrupedal herbivores with two rows of vertical plates ornamenting their spine, which possibly did not go extinct until the Late Cretaceous, though at the time of Marsh, Stegosauria was used for all armored and quadrupedal taxa, many of which are now separated into Ankylosauria. Ankylosaurs were only recognized as a distinct group from stegosaurs in the 1920s despite many members being known for decades before, with the group now encompassing a broad array of heavy, quadrupedal ornithischians with extensive armour covering their body and skull. The fifth recognized major subgroup of ornithischians is Pachycephalosauria, which was first named in 1974 after being confused for a long time with the theropod Troodon on account of their similarly omnivorous and unique teeth. Pachycephalosaurians are unique for their tall, thickened skulls and small, bipedal bauplan, suggesting that their domes were for sexual display or combat in the form of head-butting or flank-butting. Some taxa, particularly those at one point grouped together in the ornithopod family Hypsilophodontidae, are now recognized not to fall within any of the major ornithischian groups, and to lie either outside Genasauria or on the basal stem of Neornithischia outside Cerapoda.
Following the publication of the PhyloCode to provide rules and regulations on the use of taxonomic names for groups, the internal classification of Ornithischia was revised by Daniel Madzia and colleagues in 2021 to provide a framework of definitions and taxa for other studies to follow and modify from. They named the new clade Saphornithischia to unite heterodontosaurids with more derived ornithischians, encompassing the well-supported core of unambiguous ornithischians, since the origins of the group are uncertain and primitive taxa like Pisanosaurus and members of Silesauridae may sometimes be found to be ornithischians outside this core grouping. Madzia and colleagues also provided a composite cladogram of Ornithischia to illustrate the consensus of internal divisions, which can be seen below. Ornithischia has been defined as all taxa closer to Iguanodon than to Allosaurus or Camarasaurus. Genasauria has been defined as the smallest clade containing Ankylosaurus, Iguanodon, Stegosaurus, and Triceratops.
Multiple taxa within Ornithischia fall around the origin of the group, or cannot be classified definitively. Lesothosaurus and Laquintasaura have been found as basal thyreophorans or basal ornithischians, Chilesaurus is either a theropod or a basal ornithischian, Pisanosaurus has been found as a basal ornithischian or a non-ornithischian silesaurid, Eocursor has been recovered as a basal ornithischian or a basal member of Neornithischia, Serendipaceratops cannot be classified beyond Ornithischia as it is either an ankylosaur or a ceratopsian, and Alocodon, Fabrosaurus, Ferganocephale, Gongbusaurus, Taveirosaurus, Trimucrodon and Xiaosaurus are dubious ornithischians of uncertain basal classification. Depending on the phylogenetic results, Silesauridae could either be a clade within Ornithischia, its members could form an evolutionary gradient, or some members may form a clade while others are part of a gradient.
Evolution
For a long time, the only understanding of the origins of Ornithischia came from Lesothosaurus and Pisanosaurus, which together represented the best-known Early Jurassic and Triassic ornithischians respectively. Many suggestions of taxa and specimens that could be referred to Ornithischia from the Triassic were based on teeth and jaw bones, as they showed similar adaptations for herbivory. The genera Revueltosaurus, Galtonia, Pekinosaurus, Tecovasaurus, Lucianosaurus, Protecovasaurus, Crosbysaurus, and Azendohsaurus were all at one time considered to be Triassic ornithischians with only their teeth known, but are now recognized to be completely unrelated. The only early ornithischians that were considered to be diagnostic in a 2004 review by Norman and colleagues were Lesothosaurus, Pisanosaurus and Technosaurus, limiting the early ornithischian record to only two Triassic genera from Argentina and the United States and one Early Jurassic genus from South Africa, with all the tooth taxa being considered undiagnostic. Referrals of isolated teeth to Ornithischia based on herbivorous features began to be extensively questioned by William G. Parker and colleagues in 2005 after the discovery of skull and skeleton material clearly from Revueltosaurus showing that the "ornithischian-like" teeth were from an animal more closely related to crocodiles than birds, and there were multiple occurrences of herbivory throughout Triassic reptiles. Removing the list of Triassic tooth taxa from Ornithischia, the early diversity of the group was substantially reduced, especially in comparison to the known Triassic diversity of theropods and sauropodomorphs. If Pisanosaurus represented the earliest ornithischian, there would be at least a 20 million year gap in the evolution of Ornithischia until Lesothosaurus and heterodontosaurids. It is possible that the limited early record of ornithischians is due to them inhabiting environments that were less conducive to fossilization, or that the phylogenetics of the group were incorrect and that early ornithischians were already known but identified as members of other groups.
First noted in the 2003 naming of the early taxon Silesaurus, some taxa generally considered non-dinosaurs show similarities to ornithischians in the teeth and jaw anatomy. These basal taxa, which were then grouped within Silesauridae and commonly as the sister group to Dinosauria, may instead be the earliest ornithischians. They show adaptations for the evolution of herbivory, and can fill in the gap in early evolution of ornithischians that were otherwise only clearly known since the beginning of the Jurassic. This hypothesis has found support in multiple different phylogenetic analyses, but the results are not yet accepted as definitive enough to contradict other possible evolutionary strategies of dinosaurs. Alternatively, and more in line with earlier studies on dinosaur evolution, silesaurids may be the sister taxa to the Saurischia-Ornithischia split, or even other arrangements of the three main dinosaur groups Ornithischia, Sauropodomorpha, and Theropoda. The 2017 phylogenetic study of Matthew G. Baron and colleagues suggested that instead of a Saurischia-Ornithischia split, ornithischians were instead closest to theropods in the clade Ornithoscelida, with sauropodomorphs being outside the grouping. Under this case, the omnivory in the earliest sauropodomorphs and ornithischians would be the ancestral condition for dinosaurs, along with the grasping abilities seen in the earliest ornithischians and theropods. While Ornithoscelida is a possible hypothesis for the evolution of dinosaurs and the close relationships of Ornithischia, follow-up studies have not found it statistically more likely than the traditional dichotomy of Ornithischia and Saurischia, or the third alternative, Phytodinosauria, where ornithischians and sauropodomorphs are closer to each other than theropods.
Along with Pisanosaurus, which was supported as the earliest ornithischian for a time before being considered just as likely to be a silesaur rather than an ornithischian, an additional problematic taxon is Chilesaurus from the Late Jurassic of Chile. While it was originally named as a derived theropod with unique anatomy, it was found in studies based on Baron and colleagues' results to instead be either the basalmost ornithischian, or a sauropodomorph. As the earliest ornithischian, Chilesaurus tied multiple details of ornithischian and theropod anatomy together, supporting their union in Ornithoscelida, though when it is not the basalmost ornithischian, a traditional Saurischia is recovered. The problematic nature of Chilesaurus requires further revisiting of its anatomy, but the details of vertebral air pockets, pelvis shape, and hand support it as a theropod. Daemonosaurus, typically a theropod or close relative of herrerasaurs, has also been found as the basalmost ornithischian at times when Ornithoscelida is recovered, but it does not share any unique features with ornithischians, and a redescription of its anatomy found it fairly confidently to be a basal dinosaur not closely related to Ornithischia.
The phylogenetic analysis of Norman and colleagues in 2022 recovered the members of Silesauridae as forming an ancestral grade within Ornithischia even with the inclusion of Chilesaurus, supporting the earlier results of Müller and Garcia and their evolutionary trends for early ornithischian anatomy. Norman and colleagues used Prionodontia over both Saphornithischia and Genasauria, since all were recovered as encompassing the same node. The earliest ornithischians under this reconstruction were faunivorous, as seen by Lewisuchus, which has typical teeth like theropods. Serrations on teeth become larger for taxa more derived than Asilisaurus, the development of a cingulum in teeth is seen in Technosaurus and later ornithischians, the lower jaw becomes more elongate in taxa above Silesaurus, and core ornithischians are united by the pubic bone angling backwards, and the modification of the ankle joint.
Palaeoecology
Ornithischians shifted from bipedal to quadrupedal posture at least three times in their evolutionary history, and it has been shown that primitive members may have been capable of both forms of movement.
Most ornithischians were herbivorous. In fact, most of the unifying characters of Ornithischia are thought to be related to this herbivory. For example, the shift to an opisthopubic pelvis is thought to be related to the development of a large stomach or stomachs and gut, which would allow ornithischians to more effectively digest plant matter. The smallest known ornithischian is Fruitadens haagarorum. The largest Fruitadens individuals reached just 65–75 cm. Previously, only carnivorous, saurischian theropods were known to reach such small sizes. At the other end of the spectrum, the largest known ornithischians reached about 15 meters (smaller than the largest saurischians).
However, not all ornithischians were strictly herbivorous. Some groups, like the heterodontosaurids, were likely omnivores. At least one species of ankylosaurian, Liaoningosaurus paradoxus, appears to have been at least partially carnivorous, with hooked claws, fork-like teeth, and stomach contents suggesting that it may have fed on fish. The members of Genasauria were primarily herbivores. Genasaurians most often had their head at the level of one meter, which suggests they were feeding primarily on “ground-level plants such as ferns, cycads, and other herbaceous gymnosperms."
There is strong evidence that some ornithischians lived in herds. This evidence consists of multiple bone beds where large numbers of individuals of the same species and of different age groups died simultaneously.
| Biology and health sciences | Ornitischians | Animals |
585383 | https://en.wikipedia.org/wiki/Pulmonology | Pulmonology | Pulmonology (, , from Latin pulmō, -ōnis "lung" and the Greek suffix "study of"), pneumology (, built on Greek πνεύμων "lung") or pneumonology () is a medical specialty that deals with diseases involving the respiratory tract. It is also known as respirology, respiratory medicine, or chest medicine in some countries and areas.
Pulmonology is considered a branch of internal medicine, and is related to intensive care medicine. Pulmonology often involves managing patients who need life support and mechanical ventilation. Pulmonologists are specially trained in diseases and conditions of the chest, particularly pneumonia, asthma, tuberculosis, emphysema, and complicated chest infections.
Pulmonology/respirology departments work especially closely with certain other specialties: cardiothoracic surgery departments and cardiology departments.
Journals of pulmonology
American Association for Respiratory Care
American College of Chest Physicians
American Lung Association
American Thoracic Society
British Thoracic Society
European Respiratory Society
History of pulmonology
One of the first major discoveries relevant to the field of pulmonology was the discovery of pulmonary circulation. Originally, it was thought that blood reaching the right side of the heart passed through small 'pores' in the septum into the left side to be oxygenated, as theorized by Galen; however, the discovery of pulmonary circulation disproved this theory, which had been accepted since the 2nd century. Thirteenth-century anatomist and physiologist Ibn Al-Nafis accurately theorized that there was no 'direct' passage between the two sides (ventricles) of the heart. He believed that the blood must have passed through the pulmonary artery, through the lungs, and back into the heart to be pumped around the body. This is believed by many to be the first scientific description of pulmonary circulation.
Although pulmonary medicine only began to evolve as a medical specialty in the 1950s, William Welch and William Osler founded the 'parent' organization of the American Thoracic Society, the National Association for the Study and Prevention of Tuberculosis. The care, treatment, and study of tuberculosis of the lung is recognised as a discipline in its own right, phthisiology. When the specialty did begin to evolve, several discoveries were being made linking the respiratory system and the measurement of arterial blood gases, attracting more and more physicians and researchers to the developing field.
Pulmonology and its relevance in other medical fields
Surgery of the respiratory tract is generally performed by specialists in cardiothoracic surgery (or thoracic surgery), though minor procedures may be performed by pulmonologists. Pulmonology is closely related to critical care medicine when dealing with patients who require mechanical ventilation. As a result, many pulmonologists are certified to practice critical care medicine in addition to pulmonary medicine. There are fellowship programs that allow physicians to become board certified in pulmonary and critical care medicine simultaneously. Interventional pulmonology is a relatively new field within pulmonary medicine that deals with the use of procedures such as bronchoscopy and pleuroscopy to treat several pulmonary diseases. Interventional pulmonology is increasingly recognized as a specific medical specialty.
Diagnosis
The pulmonologist begins the diagnostic process with a general review focusing on:
hereditary diseases affecting the lungs (cystic fibrosis, alpha 1-antitrypsin deficiency)
exposure to toxicants (tobacco smoke, asbestos, exhaust fumes, coal mining fumes, e-cigarette aerosol)
exposure to infectious agents (certain types of birds, malt processing)
an autoimmune diathesis that might predispose to certain conditions (pulmonary fibrosis, pulmonary hypertension)
Physical diagnostics are as important as in other fields of medicine.
Inspection of the hands for signs of cyanosis or clubbing, chest wall, and respiratory rate.
Palpation of the cervical lymph nodes, trachea and chest wall movement.
Percussion of the lung fields for dullness or hyper-resonance.
Auscultation (with a stethoscope) of the lung fields for diminished or unusual breath sounds.
Rales or rhonchi heard over lung fields with a stethoscope.
As many heart diseases can give pulmonary signs, a thorough cardiac investigation is usually included.
Procedures
Clinical procedures
Pulmonary clinical procedures include the following pulmonary tests and procedures:
Medical laboratory investigation of blood (blood tests). Sometimes arterial blood gas tests are also required.
Spirometry: the determination of maximum airflow at a given lung volume, as measured by breathing into a dedicated machine; this is the key test to diagnose airflow obstruction (a minimal calculation sketch follows this list).
Pulmonary function testing including spirometry, as above, plus response to bronchodilators, lung volumes, and diffusion capacity, the latter being a measure of lung oxygen absorptive area
Bronchoscopy with bronchoalveolar lavage (BAL), endobronchial and transbronchial biopsy and epithelial brushing
Chest X-rays
CT scan
Scintigraphy and other methods of nuclear medicine
Positron emission tomography (especially in lung cancer)
Polysomnography (sleep studies) commonly used for the diagnosis of sleep apnea
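As flagged in the spirometry item above, the sketch below shows how an obstructive pattern is screened for from two spirometry numbers: the ratio of FEV1 (volume exhaled in the first second) to FVC (total forced vital capacity) is compared with a fixed cut-off, for which 0.70 is a commonly cited screening threshold (as in the GOLD criteria). Real interpretation also relies on predicted values for age, sex, and height and on clinical context; the example values are hypothetical.

```python
def interpret_spirometry(fev1_l, fvc_l, ratio_threshold=0.70):
    """Screen for airflow obstruction from FEV1 and FVC (both in litres):
    a ratio below the fixed threshold suggests an obstructive pattern."""
    ratio = fev1_l / fvc_l
    if ratio < ratio_threshold:
        return ratio, "airflow obstruction suggested"
    return ratio, "no obstruction by the fixed-ratio rule"

# Hypothetical example values, not patient data.
ratio, pattern = interpret_spirometry(fev1_l=2.1, fvc_l=3.8)
print(f"FEV1/FVC = {ratio:.2f}: {pattern}")
```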
Surgical procedures
Major surgical procedures on the heart and lungs are performed by a thoracic surgeon. Pulmonologists often perform specialized procedures to get samples from the inside of the chest or inside of the lung. They use radiographic techniques to view vasculature of the lungs and heart to assist with diagnosis.
Treatment and therapeutics
Medication is the most important treatment of most diseases of pulmonology, either by inhalation (bronchodilators and steroids) or in oral form (antibiotics, leukotriene antagonists). A common example being the usage of inhalers in the treatment of inflammatory lung conditions such as asthma or chronic obstructive pulmonary disease. Oxygen therapy is often necessary in severe respiratory disease (emphysema and pulmonary fibrosis). When this is insufficient, the patient might require mechanical ventilation.
Pulmonary rehabilitation has been defined as a multidimensional continuum of services directed to persons with pulmonary disease and their families, usually by an interdisciplinary team of specialists, with the goal of achieving and maintaining the individual's maximum level of independence and functioning in the community. Pulmonary rehabilitation is intended to educate the patient, the family, and improve the overall quality of life and prognosis for the patient. Interventions can include exercise, education, emotional support, oxygen, noninvasive mechanical ventilation, optimization of airway secretion clearance, promoting compliance with medical care to reduce numbers of exacerbations and hospitalizations, and returning to work and/or a more active and emotionally satisfying life. These goals are appropriate for any patients with diminished respiratory reserve whether due to obstructive or intrinsic pulmonary diseases (oxygenation impairment) or neuromuscular weakness (ventilatory impairment). A pulmonary rehabilitation team may include a rehabilitation physician, a pulmonary medicine specialist, physician assistant and allied health professionals including a rehabilitation nurse, a respiratory therapist, a speech-language pathologist, a physical therapist, an occupational therapist, a psychologist, and a social worker among others. Additionally, breathing games are used to motivate children to perform pulmonary rehabilitation.
Education and training
Pulmonologist
In the United States, pulmonologists are physicians who, after receiving a medical degree (MD or DO), complete residency training in internal medicine, followed by at least two additional years of subspeciality fellowship training in pulmonology. After satisfactorily completing a fellowship in pulmonary medicine, the physician is permitted to take the board certification examination in pulmonary medicine. After passing this exam, the physician is then board certified as a pulmonologist. Most pulmonologists complete three years of combined subspecialty fellowship training in pulmonary medicine and critical care medicine.
Pediatric pulmonologist
In the United States, pediatric pulmonologists are physicians who, after receiving a medical degree (MD, DO, MBBS, MBBCh, etc.), complete residency training in pediatrics, followed by at least three additional years of subspeciality fellowship training in pulmonology. Pediatric pulmonologists treat diseases of the airways, lungs, respiratory mechanics and aerodigestive system.
Scientific research
Pulmonologists are involved in both clinical and basic research of the respiratory system, ranging from the anatomy of the respiratory epithelium to the most effective treatment of pulmonary hypertension. Scientific research also takes place to look for causes and possible treatment in diseases such as pulmonary tuberculosis and lung cancer.
| Biology and health sciences | Fields of medicine | null |
585460 | https://en.wikipedia.org/wiki/Cherimoya | Cherimoya | The cherimoya (Annona cherimola), also spelled chirimoya and called chirimuya by the Quechua people, is a species of edible fruit-bearing plant in the genus Annona, from the family Annonaceae, which includes the closely related sweetsop and soursop. The plant has long been believed to be native to Ecuador and Peru, with cultivation practised in the Andes and Central America, although a recent hypothesis postulates Central America as the origin instead, because many of the plant's wild relatives occur in this area.
Cherimoya is grown in tropical and subtropical regions throughout the world including Central America, northern South America, Southern California, South Asia, Australia, the Mediterranean region, and North Africa. American writer Mark Twain called the cherimoya "the most delicious fruit known to men". The creamy texture of the flesh gives the fruit its secondary name, the custard apple.
Etymology
The name is derived from the Quechua word chirimuya, which means "cold seeds"; the plant grows at high altitudes, where the weather is colder, and the seeds germinate at these higher altitudes. In Bolivia, Chile, Colombia, Ecuador, Peru, and Venezuela, the fruit is commonly known as chirimoya (spelled according to the rules of the Spanish language).
Description
Annona cherimola is a fairly dense, fast-growing, woody, briefly deciduous but mostly evergreen, low-branched, spreading tree or shrub, tall.
Mature branches are sappy and woody. Young branches and twigs have a matting of short, fine, rust-colored hairs. The leathery leaves are long wide, and mostly elliptic, pointed at the ends and rounded near the leaf stalk. When young, they are covered with soft, fine, tangled, rust-colored hairs. When mature, the leaves bear hairs only along the veins on the undersurface. The tops are hairless and a dull medium green with paler veins, the backs are velvety, dull grey-green with raised pale green veins. New leaves are whitish below.
Leaves are single and alternate, dark green, and slightly hairy on the top surface. They attach to branches with stout long and densely hairy leaf stalks.
Cherimoya trees bear very pale green, fleshy flowers. They are long with a very strong, fruity odor. Each flower has three outer, greenish, fleshy, oblong, downy petals and three smaller, pinkish inner petals with yellow or brown, finely matted hairs outside, whitish with purple spots and many stamens on the inside. Flowers appear on the branches opposite to the leaves, solitary or in pairs or groups of three, on flower stalks that are covered densely with fine rust-colored hairs, long. Buds are long and wide at the base. The pollen is shed as permanent tetrads.
Fruits
The edible cherimoya fruit is a large, green, conical or heart-shaped compound fruit, long, with diameters of , and skin that gives the appearance of having overlapping scales or knobby warts. They ripen to brown with a fissured surface in late winter and early spring; they weigh on the average , but extra-large specimens may weigh or more.
Cherimoya fruits are commercially classified according to degree of surface irregularity, as follows: 'Lisa', almost smooth, difficult to discern areoles; 'Impresa', with "fingerprint" depressions; 'Umbonata', with rounded protrusions at the apex of each areole; 'Mamilata' with fleshy, nipple-like protrusions; or 'Tuberculata', with conical protrusions having wart-like tips.
The flesh of the cherimoya contains numerous hard, inedible, black, bean-like, glossy seeds, long and about half as wide. Cherimoya seeds are poisonous if crushed open. Like other members of the family Annonaceae, the entire plant contains small amounts of neurotoxic acetogenins, such as annonacin, which appear to be linked to atypical parkinsonism in Guadeloupe. Moreover, an extract of the bark can induce paralysis if injected.
Distribution and habitat
Widely cultivated now, A. cherimola is believed to have originated in the Andes of South America at altitudes of , although an alternative hypothesis postulates Central America as the origin, instead, because many of the plant's wild relatives occur in this area. From there it was taken by Europeans to various parts of the tropics. Unlike other Annona species, A. cherimola has not successfully naturalized in West Africa, and Annona glabra is often misidentified as this species in Australasia.
Native
Neotropic:
Western South America: Ecuador, Peru
Current (naturalized and native)
Neotropic:
Caribbean: Florida, Cuba, Dominican Republic, Haiti, Jamaica, Puerto Rico
Central America: Belize, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, Panama
Northern South America: Guyana, Venezuela
Southern North America: Mexico
Western South America: Bolivia, Colombia, Ecuador, Peru
Southern South America: Chile, Brazil
Palearctic: Algeria, Egypt, Libya, France, Italy, Spain, Madeira, Azores
Afrotropic: Eritrea, Somalia, Tanzania
Indomalaya: India, Singapore, Thailand
Australia
A. cherimola is not native to Chile. When it was introduced is unknown, but it likely happened in pre-Hispanic times. Traditionally, it has been cultivated in the valleys and oases of the north, as far south as the valley of Aconcagua.
Ecology
Pollination
The flowers of A. cherimola are hermaphroditic and have a mechanism to avoid self-pollination. The short-lived flowers open as female, then progress to a later, male stage in a matter of hours. This requires a separate pollinator that not only can collect the pollen from flowers in the male stage, but also deposit it in flowers in the female stage. Studies of which insect(s) serve as the natural pollinator in the cherimoya's native region have been inconclusive; some form of beetle is suspected.
Quite often, the female flower is receptive in the early part of the first day, but pollen is not produced in the male stage until the late afternoon of the second day. Honey bees are not good pollinators of this plant, for example, because their bodies are too large to fit between the fleshy petals of the female flower. Female flowers have the petals only partially separated, and the petals separate widely when they become male flowers. So, the bees pick up pollen from the male flowers, but are unable to transfer this pollen to the female flowers. The small beetles which are suspected to pollinate cherimoya in its land of origin must therefore be much smaller than bees.
For fruit production outside the cherimoya's native region, cultivators must either rely upon the wind to spread pollen in dense orchards or else use hand pollination, which requires only a small paint brush. To increase fruit production, growers use the brush to collect pollen from flowers in the male stage and transfer it to female-stage flowers immediately, or store it in the refrigerator overnight. Cherimoya pollen has a short life, but it can be extended with refrigeration.
Climate requirements
The evaluation of 20 locations in Loja Province, Ecuador, indicated certain growing preferences of wild cherimoya, including altitude between , optimum annual temperature range between , annual precipitation between , and soils with high sand content and slightly acidic properties with pH between 5.0 and 6.5.
In Western horticulture, growers are often advised to grow cherimoya in full sun, whereas in Japan the plant has been considered shade-tolerant. In 2001, a study conducted at Kyoto University showed that shading of 50–70% of sunlight was adequate to obtain an optimal light environment.
Cultivation
Cultivars
The cherimoya of the Granada-Málaga tropical coast in Spain is a fruit of the cultivar 'Fino de Jete' with the EU's protected designation of origin appellation. 'Fino de Jete' fruits have skin type Impressa and are smooth or slightly concave at the edges. The fruit is round, oval, heart-shaped, or kidney-shaped. The seeds are enclosed in the carpels and so do not detach easily. The flavor balances intense sweetness with slight acidity and the soluble sugar content exceeds 17° Bx. This variety is prepared and packed in the geographical area because "it is a very delicate perishable fruit and its skin is very susceptible to browning caused by mechanical damage, such as rubbing, knocks, etc. The fruit must be handled with extreme care, from picking by hand in the field to packing in the warehouse, which must be carried out within 24 hours. Repacking or further handling is strictly forbidden."
Annona cherimola, preferring the cool Andean altitudes, readily hybridizes with other Annona species. A hybrid with A. squamosa called atemoya has received some attention in West Africa, Australia, Brazil, and Florida.
Propagation
The tree thrives throughout the tropics at altitudes of . Though sensitive to frost, it must have periods of cool temperatures or the tree will gradually go dormant. The indigenous inhabitants of the Andes say the cherimoya cannot tolerate snow.
In the Mediterranean region, it is cultivated mainly in southern Spain and Portugal, where it was introduced between 1751 and 1797, after which it was carried to Italy, but now can also be found in several countries of Africa, the Middle East, and Oceania. It is cultivated throughout the Americas, including Hawaii since 1790 and California, where it was introduced in 1871.
Harvest
Large fruits which are uniformly green, without cracks or mostly browned skin, are best. The optimum temperature for storage is , depending on cultivar, ripeness stage, and duration, with an optimum relative humidity of 90–95%. Unripe cherimoyas will ripen at room temperature; when ripe, they yield to gentle pressure. Exposure to ethylene (100 ppm for one to two days) accelerates ripening of mature green cherimoya and other Annona fruits; they can ripen in about five days if kept at . Ethylene removal can also be helpful in slowing the ripening of mature green fruits.
Nutrition and edibility
Raw cherimoya fruit is 79% water, 18% carbohydrate, 2% protein, and 1% fat (table). In a 100-gram reference amount providing 75 calories, cherimoya is a rich source (20% or more of the Daily Value, DV) of vitamin B6 and a moderate source (10–19% DV) of vitamin C, dietary fiber, and riboflavin (table).
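As a minimal illustration of how such label figures are interpreted (not part of the source nutrition data), the following Python sketch applies the thresholds quoted above, treating 20% or more of the Daily Value per 100 g as a "rich source" and 10–19% as a "moderate source"; the example %DV numbers are assumptions for illustration only.

```python
# Minimal sketch: classify nutrients by percent of Daily Value (%DV) per 100 g,
# using the thresholds cited above (>= 20% DV = "rich source",
# 10-19% DV = "moderate source").

def classify_source(percent_dv: float) -> str:
    """Return the descriptive class for a nutrient's %DV per 100 g serving."""
    if percent_dv >= 20:
        return "rich source"
    if percent_dv >= 10:
        return "moderate source"
    return "not a significant source"

# Hypothetical %DV values for 100 g of raw cherimoya (assumed for illustration,
# not taken from the nutrition table).
nutrients = {"vitamin B6": 20, "vitamin C": 15, "dietary fiber": 12, "riboflavin": 10}

for name, dv in nutrients.items():
    print(f"{name}: {dv}% DV -> {classify_source(dv)}")
```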
"The pineapple, the mangosteen, and the cherimoya", wrote the botanist Berthold Carl Seemann, "are considered the finest fruits in the world, and I have tasted them in those localities where they are supposed to attain their highest perfection – the pineapple in Guayaquil, the mangosteen in the Indian Archipelago, and the cherimoya on the slopes of the Andes, and if I were asked which would be the best fruit, I would choose without hesitation, cherimoya. Its taste, indeed, surpasses that of every other fruit, and Haenke was quite right when he called it the masterpiece of Nature."
Fruits require storage at to inhibit softening and maintain edibility. Different varieties have different flavors, textures, and shapes. The flavor of the flesh ranges from mellow sweet to tangy or acidic sweet, with variable suggestions of pineapple, banana, pear, papaya, strawberry or other berry, and apple, depending on the variety. The ripened flesh is creamy white. When ripe, the skin is green and gives slightly to pressure. Some characterize the fruit flavor as a blend of banana, pineapple, papaya, peach, and strawberry. The fruit can be chilled and eaten with a spoon, which has earned it another nickname, the "ice cream fruit". In Chile and Peru, it is commonly used in ice creams and yogurt.
When the fruit is ripe and still has the fresh, fully mature green-yellow skin color, the texture is like that of a soft ripe pear or papaya. When the skin turns brown at room temperature, the fruit is no longer good for human consumption.
Brand
Chirimoya Cumbe is a well-known case involving collective marks in trademark law. The World Intellectual Property Organization has defined collective marks as “signs which distinguish the geographical origin, material, mode of manufacturing or other common characteristics of goods or services of different enterprises using the collective mark.” The owners of a collective mark are the members of an association of such enterprises.
Cumbe is a valley in the Huarochiri province of Peru where the climatic conditions are favourable for growing chirimoya. The fruit produced in the Cumbe valley is considered of superior quality, with a large fruit size, soft skin, low seed index (number of seeds per 100 grams of fruit), and high nutrient value.
In 1997, Matildo Pérez, a peasant from a village community in the heights of Lima, decided to apply personally to the National Institute for the Defense of Competition and Intellectual Property of Peru (INDECOPI) for the registration of the trademark "Chirimoya Cumbe." The application was refused since no exclusive rights in generic names can be granted to a single person. Mr. Pérez appeared at INDECOPI again, this time with a delegation headed by the Deputy Mayor of Cumbe, to register “Chirimoya Cumbe” as a trademark that would give the community in Lima exclusive rights with respect to the name “Cumbe”.
The INDECOPI officials explained that "Chirimoya Cumbe" is in fact an appellation of origin, not a trademark. To be more precise, the word “Cumbe” is an appellation of Peruvian origin, because the valley of Cumbe is a geographical area that gives certain distinctive properties to the Chirimoya grown there.
The people of Cumbe declined the proposition of appellation of origin: "It is said that with appellations of origin the State is the owner, and it is the State that authorizes use, and that is why we are saying no. We do not want the State to be the owner of the ‘Cumbe’ name."
After a lengthy search for solutions, it was suggested that “Chirimoya Cumbe” should be registered as a “collective mark”, the owners of which would be the people of Cumbe and which would be used according to rules that they themselves would lay down.
As of 2022, the name "Chirimoya Cumbe" has its own characteristic logo and is registered as a collective mark in the name of the village of Santo Toribio de Cumbe (in Class 31 of the International Classification).
Culture
The Moche culture of Peru had a fascination with agriculture and represented fruits and vegetables in their art; cherimoyas were often depicted in their ceramics.
| Biology and health sciences | Other culinary fruits | Plants |
585468 | https://en.wikipedia.org/wiki/Thyreophora | Thyreophora | Thyreophora ("shield bearers", often known simply as "armored dinosaurs") is a group of armored ornithischian dinosaurs that lived from the Early Jurassic until the end of the Cretaceous.
Thyreophorans are characterized by the presence of body armor lined up in longitudinal rows along the body. Primitive forms had simple, low, keeled scutes or osteoderms, whereas more derived forms developed more elaborate structures including spikes and plates. Most thyreophorans were herbivorous and had relatively small brains for their body size.
Thyreophora includes two major subgroups, Ankylosauria and Stegosauria. In both clades, the forelimbs were much shorter than the hindlimbs, particularly in stegosaurs. Thyreophora has been defined as the group consisting of all species more closely related to Ankylosaurus and Stegosaurus than to Iguanodon and Triceratops. It is the sister group of Cerapoda within Genasauria.
Characteristics
Members of Thyreophora are characterised by the presence of osteoderms (bony growths within the skin), with these osteoderms bearing lateral keels. Distinctive (synapomorphic) characters of the skull and jaws of thyreophorans include "absence of a deep elliptic fossa along the sutural line of the nasals, presence of a wide jugal, remodeling of skull dermal bone, down-turned dentary tooth row". Among primitive thyreophorans, Scutellosaurus was likely primarily bipedal, while the more quadrupedally adapted Scelidosaurus may have been bipedal for some of the time, particularly as a juvenile. Stegosaurs and ankylosaurs are thought to have been obligately quadrupedal.
Classification
Taxonomy
While ranked taxonomy has largely fallen out of favor among dinosaur paleontologists, a few 21st-century publications have retained the use of ranks, though sources have differed on what rank Thyreophora should hold. Most have listed Thyreophora as an unranked taxon containing the traditional suborders Stegosauria and Ankylosauria, though Thyreophora is also sometimes classified as a suborder, with Ankylosauria and Stegosauria as infraorders.
Phylogeny
Thyreophora was first named by Nopcsa in 1915. Thyreophora was defined as a clade by Paul Sereno in 1998, as "all genasaurs more closely related to Ankylosaurus than to Triceratops". Thyreophoroidea was first named by Nopcsa in 1928 and defined by Sereno in 1986, as "Scelidosaurus, Ankylosaurus, their most recent common ancestor and all of its descendants". Eurypoda was first named by Sereno in 1986 and defined by him in 1998, as "Stegosaurus, Ankylosaurus, their most recent common ancestor and all of their descendants".
In 2021, an international group of researchers led by Daniel Madzia registered almost all of the most commonly used ornithischian clades under the International Code of Phylogenetic Nomenclature, with the intent of standardizing their definitions. According to Madzia et al., Thyreophora is defined as the largest clade containing Ankylosaurus magniventris and Stegosaurus stenops but not Iguanodon bernissartensis and Triceratops horridus. They also defined the less inclusive Eurypoda as "the smallest clade containing Ankylosaurus magniventris and Stegosaurus stenops" to include the ankylosaurs and stegosaurs to the exclusion of basal thyreophorans. A later study conducted by André Fonseca and colleagues in 2024 gave a formal definition for Thyreophoroidea in the PhyloCode as "the smallest clade containing Ankylosaurus magniventris, Scelidosaurus harrisonii, and Stegosaurus stenops".
The following cladogram shows the results of the phylogenetic analysis of Soto-Acuña et al. (2021). In their description of Jakapil the following year, Riguetti et al. modified the same matrix and found Jakapil to occupy a position as the sister taxon to Eurypoda. A similar result was found by Fonseca et al. in 2024.
In 2020, as part of his monograph on Scelidosaurus, David Norman revised the relationships of early thyreophorans, finding that Stegosauria was the most basal branch, with Scutellosaurus, Emausaurus and Scelidosaurus being progressive stem groups to Ankylosauria, rather than to Stegosauria+Ankylosauria. A cladogram is given below:
| Biology and health sciences | Ornitischians | Animals |
326787 | https://en.wikipedia.org/wiki/Teleost | Teleost | Teleostei (; Greek teleios "complete" + osteon "bone"), members of which are known as teleosts (), is, by far, the largest group of ray-finned fishes (class Actinopterygii), and contains 96% of all extant species of fish. The Teleostei, which is variously considered a division or an infraclass in different taxonomic systems, include over 26,000 species that are arranged in about 40 orders and 448 families. Teleosts range from giant oarfish measuring or more, and ocean sunfish weighing over , to the minute male anglerfish Photocorynus spiniceps, just long. As well as torpedo-shaped fish built for speed, teleosts can be flattened vertically or horizontally, be elongated cylinders, or take specialised shapes as in anglerfish and seahorses.
The difference between teleosts and other bony fish lies mainly in their jaw bones; teleosts have a movable premaxilla and corresponding modifications in the jaw musculature which make it possible for them to protrude their jaws outwards from the mouth. This is of great advantage, enabling them to grab prey and draw it into the mouth. In more derived teleosts, the enlarged premaxilla is the main tooth-bearing bone, and the maxilla, which is attached to the lower jaw, acts as a lever, pushing and pulling the premaxilla as the mouth is opened and closed. Other bones further back in the mouth serve to grind and swallow food. Another difference is that the upper and lower lobes of the tail (caudal) fin are about equal in size. The spine ends at the caudal peduncle, distinguishing this group from other fish in which the spine extends into the upper lobe of the tail fin.
Teleosts have adopted a range of reproductive strategies. Most use external fertilisation: the female lays a batch of eggs, the male fertilises them and the larvae develop without any further parental involvement. A fair proportion of teleosts are sequential hermaphrodites, starting life as females and transitioning to males at some stage, with a few species reversing this process. A small percentage of teleosts are viviparous and some provide parental care with typically the male fish guarding a nest and fanning the eggs to keep them well-oxygenated.
Teleosts are economically important to humans, as is shown by their depiction in art over the centuries. The fishing industry harvests them for food, and anglers attempt to capture them for sport. Some species are farmed commercially, and this method of production is likely to be increasingly important in the future. Others are kept in aquariums or used in research, especially in the fields of genetics and developmental biology.
Anatomy
Distinguishing features of the teleosts are mobile premaxilla, elongated neural arches at the end of the caudal fin and unpaired basibranchial toothplates. The premaxilla is unattached to the neurocranium (braincase); it plays a role in protruding the mouth and creating a circular opening. This lowers the pressure inside the mouth, sucking the prey inside. The lower jaw and maxilla are then pulled back to close the mouth, and the fish is able to grasp the prey. By contrast, mere closure of the jaws would risk pushing food out of the mouth. In more advanced teleosts, the premaxilla is enlarged and has teeth, while the maxilla is toothless. The maxilla functions to push both the premaxilla and the lower jaw forward. To open the mouth, an adductor muscle pulls back the top of the maxilla, pushing the lower jaw forward. In addition, the maxilla rotates slightly, which pushes forward a bony process that interlocks with the premaxilla.
The pharyngeal jaws of teleosts, a second set of jaws contained within the throat, are composed of five branchial arches, loops of bone which support the gills. The first three arches include a single basibranchial surrounded by two hypobranchials, ceratobranchials, epibranchials and pharyngobranchials. The median basibranchial is covered by a toothplate. The fourth arch is composed of pairs of ceratobranchials and epibranchials, and sometimes additionally, some pharyngobranchials and a basibranchial. The base of the lower pharyngeal jaws is formed by the fifth ceratobranchials while the second, third and fourth pharyngobranchials create the base of the upper. In the more basal teleosts the pharyngeal jaws consist of well-separated thin parts that attach to the neurocranium, pectoral girdle, and hyoid bar. Their function is limited to merely transporting food, and they rely mostly on lower pharyngeal jaw activity. In more derived teleosts the jaws are more powerful, with left and right ceratobranchials fusing to become one lower jaw; the pharyngobranchials fuse to create a large upper jaw that articulates with the neurocranium. They have also developed a muscle that allows the pharyngeal jaws to have a role in grinding food in addition to transporting it.
The caudal fin is homocercal, meaning the upper and lower lobes are about equal in size. The spine ends at the caudal peduncle, the base of the caudal fin, distinguishing this group from those in which the spine extends into the upper lobe of the caudal fin, such as most fish from the Paleozoic (541 to 252 million years ago). The neural arches are elongated to form uroneurals which provide support for this upper lobe.
Teleosts tend to be quicker and more flexible than more basal bony fishes. Their skeletal structure has evolved towards greater lightness. While teleost bones are well calcified, they are constructed from a scaffolding of struts, rather than the dense cancellous bones of holostean fish. In addition, the lower jaw of the teleost is reduced to just three bones: the dentary, the angular bone and the articular bone. The genital and urinary tracts end behind the anus in the genital papilla; examination of this papilla is used to determine the sex of teleosts.
Evolution and phylogeny
External relationships
The teleosts were first recognised as a distinct group by the German ichthyologist Johannes Peter Müller in 1845. The name is from Greek teleios, "complete" + osteon, "bone". Müller based this classification on certain soft tissue characteristics, which would prove to be problematic, as it did not take into account the distinguishing features of fossil teleosts. In 1966, Greenwood et al. provided a more solid classification. The oldest fossils of teleosteomorphs (the stem group from which teleosts later evolved) date back to the Triassic period (Prohalecites, Pholidophorus). However, it has been suggested that teleosts probably first evolved as early as the Paleozoic era. During the Mesozoic and Cenozoic eras they diversified widely, and as a result, 96% of all living fish species are teleosts.
The cladogram below shows the evolutionary relationships of the teleosts to other extant clades of bony fish, and to the four-limbed vertebrates (tetrapods) that evolved from a related group of bony fish during the Devonian period. Approximate divergence dates (in millions of years, mya) are from Near et al., 2012.
Internal relationships
The phylogeny of the teleosts has been subject to long debate, without consensus on either their phylogeny or the timing of the emergence of the major groups before the application of modern DNA-based cladistic analysis. Near et al. (2012) explored the phylogeny and divergence times of every major lineage, analysing the DNA sequences of 9 unlinked genes in 232 species. They obtained well-resolved phylogenies with strong support for the nodes (so, the pattern of branching shown is likely to be correct). They calibrated (set actual values for) branching times in this tree from 36 reliable measurements of absolute time from the fossil record. The teleosts are divided into the major clades shown on the cladogram, with dates, following Near et al. More recent research divides the teleosts into two major groups: Eloposteoglossocephala (Elopomorpha + Osteoglossomorpha) and Clupeocephala (the rest of the teleosts).
The most diverse group of teleost fish today are the Percomorpha, which include, among others, the tuna, seahorses, gobies, cichlids, flatfish, wrasse, perches, anglerfish, and pufferfish. Teleosts, and percomorphs in particular, thrived during the Cenozoic era. Fossil evidence shows that there was a major increase in size and abundance of teleosts immediately after the mass extinction event at the Cretaceous-Paleogene boundary ca. 66 mya.
Evolutionary trends
The first fossils assignable to this diverse group appear in the Early Triassic, after which teleosts accumulated novel body shapes predominantly gradually for the first 150 million years of their evolution (Early Triassic through early Cretaceous).
The most basal of the living teleosts are the Elopomorpha (eels and allies) and the Osteoglossomorpha (elephantfishes and allies). There are 800 species of elopomorphs. They have thin leaf-shaped larvae known as leptocephali, specialised for a marine environment. Among the elopomorphs, eels have elongated bodies with lost pelvic girdles and ribs and fused elements in the upper jaw. The 200 species of osteoglossomorphs are defined by a bony element in the tongue. This element has a basibranchial behind it, and both structures have large teeth which are paired with the teeth on the parasphenoid in the roof of the mouth. The clade Otocephala includes the Clupeiformes (herrings) and Ostariophysi (carps, catfishes and allies). Clupeiformes consists of 350 living species of herring and herring-like fishes. This group is characterised by an unusual abdominal scute and a different arrangement of the hypurals. In most species, the swim bladder extends to the braincase and plays a role in hearing. Ostariophysi, which includes most freshwater fishes, includes species that have developed some unique adaptations. One is the Weberian apparatus, an arrangement of bones (Weberian ossicles) connecting the swim bladder to the inner ear. This enhances their hearing, as sound waves make the bladder vibrate, and the bones transport the vibrations to the inner ear. They also have a chemical alarm system; when a fish is injured, the warning substance gets in the water, alarming nearby fish.
The majority of teleost species belong to the clade Euteleostei, which consists of 17,419 species classified in 2,935 genera and 346 families. Shared traits of the euteleosts include similarities in the embryonic development of the bony or cartilaginous structures located between the head and dorsal fin (supraneural bones), an outgrowth on the stegural bone (a bone located near the neural arches of the tail), and caudal median cartilages located between hypurals of the caudal base. The majority of euteleosts are in the clade Neoteleostei. A derived trait of neoteleosts is a muscle that controls the pharyngeal jaws, giving them a role in grinding food. Within neoteleosts, members of the Acanthopterygii have a spiny dorsal fin which is in front of the soft-rayed dorsal fin. This fin helps provide thrust in locomotion and may also play a role in defense. Acanthomorphs have developed spiny ctenoid scales (as opposed to the cycloid scales of other groups), tooth-bearing premaxilla and greater adaptations to high speed swimming.
The adipose fin, which is present in over 6,000 teleost species, is often thought to have evolved once in the lineage and to have been lost multiple times due to its limited function. A 2014 study challenges this idea and suggests that the adipose fin is an example of convergent evolution. In Characiformes, the adipose fin develops from an outgrowth after the reduction of the larval fin fold, while in Salmoniformes, the fin appears to be a remnant of the fold.
Diversity
There are over 26,000 species of teleosts, in about 40 orders and 448 families, making up 96% of all extant species of fish. Approximately 12,000 of the total 26,000 species are found in freshwater habitats. Teleosts are found in almost every aquatic environment and have developed specializations to feed in a variety of ways as carnivores, herbivores, filter feeders and parasites. The longest teleost is the giant oarfish, reported at and more, but this is dwarfed by the extinct Leedsichthys, one individual of which has been estimated to have a length of . The heaviest teleost is believed to be the ocean sunfish, with a specimen landed in 2003 having an estimated weight of , while the smallest fully mature adult is the male anglerfish Photocorynus spiniceps, which can measure just , though the female at is much larger. The stout infantfish is the smallest and lightest adult fish and is in fact the smallest vertebrate in the world; the female measures and the male just .
Open water fish are usually streamlined like torpedoes to minimize turbulence as they move through the water. Reef fish live in a complex, relatively confined underwater landscape and for them, manoeuvrability is more important than speed, and many of them have developed bodies which optimize their ability to dart and change direction. Many have laterally compressed bodies (flattened from side to side) allowing them to fit into fissures and swim through narrow gaps; some use their pectoral fins for locomotion and others undulate their dorsal and anal fins. Some fish have grown dermal (skin) appendages for camouflage; the prickly leather-jacket is almost invisible among the seaweed it resembles and the tasselled scorpionfish invisibly lurks on the seabed ready to ambush prey. Some like the foureye butterflyfish have eyespots to startle or deceive, while others such as lionfish have aposematic coloration to warn that they are toxic or have venomous spines.
Flatfish are demersal fish (bottom-feeding fish) that show a greater degree of asymmetry than any other vertebrates. The larvae are at first bilaterally symmetrical but they undergo metamorphosis during the course of their development, with one eye migrating to the other side of the head, and they simultaneously start swimming on their side. This has the advantage that, when they lie on the seabed, both eyes are on top, giving them a broad field of view. The upper side is usually speckled and mottled for camouflage, while the underside is pale.
Some teleosts are parasites. Remoras have their front dorsal fins modified into large suckers with which they cling onto a host animal such as a whale, sea turtle, shark or ray, but this is probably a commensal rather than parasitic arrangement because both remora and host benefit from the removal of ectoparasites and loose flakes of skin. More harmful are the catfish that enter the gill chambers of fish and feed on their blood and tissues. The snubnosed eel, though usually a scavenger, sometimes bores into the flesh of a fish, and has been found inside the heart of a shortfin mako shark.
Some species, such as electric eels, can produce powerful electric currents, strong enough to stun prey. Other fish, such as knifefish, generate and sense weak electric fields to detect their prey; they swim with straight backs to avoid distorting their electric fields. These currents are produced by modified muscle or nerve cells.
Distribution
Teleosts are found worldwide and in most aquatic environments, including warm and cold seas, flowing and still freshwater, and even, in the case of the desert pupfish, isolated and sometimes hot and saline bodies of water in deserts. Teleost diversity becomes low at extremely high latitudes; at Franz Josef Land, up to 82°N, ice cover and water temperatures below for a large part of the year limit the number of species; 75 percent of the species found there are endemic to the Arctic.
Of the major groups of teleosts, the Elopomorpha, Clupeomorpha and Percomorpha (perches, tunas and many others) all have a worldwide distribution and are mainly marine; the Ostariophysi and Osteoglossomorpha are worldwide but mainly freshwater, the latter mainly in the tropics; the Atherinomorpha (guppies, etc.) have a worldwide distribution, both fresh and salt, but are surface-dwellers. In contrast, the Esociformes (pikes) are limited to freshwater in the Northern Hemisphere, while the Salmoniformes (salmon, trout) are found in both Northern and Southern temperate zones in freshwater, some species migrating to and from the sea. The Paracanthopterygii (cods, etc.) are Northern Hemisphere fish, with both salt and freshwater species.
Some teleosts are migratory; certain freshwater species move within river systems on an annual basis; other species are anadromous, spending their lives at sea and moving inland to spawn, salmon and striped bass being examples. Others, exemplified by the eel, are catadromous, doing the reverse. The freshwater European eel migrates across the Atlantic Ocean as an adult to breed in floating seaweed in the Sargasso Sea. The adults spawn here and then die, but the developing young are swept by the Gulf Stream towards Europe. By the time they arrive, they are small fish and enter estuaries and ascend rivers, overcoming obstacles in their path to reach the streams and ponds where they spend their adult lives.
Teleosts including the brown trout and the scaly osman are found in mountain lakes in Kashmir at altitudes as high as . Teleosts are found at extreme depths in the oceans; the hadal snailfish has been seen at a depth of , and a related (unnamed) species has been seen at .
Physiology
Respiration
The major means of respiration in teleosts, as in most other fish, is the transfer of gases over the surface of the gills as water is drawn in through the mouth and pumped out through the gills. Apart from the swim bladder, which contains a small amount of air, the body does not have oxygen reserves, and respiration needs to be continuous over the fish's life. Some teleosts exploit habitats where the oxygen availability is low, such as stagnant water or wet mud; they have developed accessory tissues and organs to support gas exchange in these habitats.
Several genera of teleosts have independently developed air-breathing capabilities, and some have become amphibious. Some combtooth blennies emerge to feed on land, and freshwater eels are able to absorb oxygen through damp skin. Mudskippers can remain out of water for considerable periods, exchanging gases through skin and mucous membranes in the mouth and pharynx. Swamp eels have similar well-vascularised mouth-linings, and can remain out of water for days and go into a resting state (aestivation) in mud. The anabantoids have developed an accessory breathing structure known as the labyrinth organ on the first gill arch and this is used for respiration in air, and airbreathing catfish have a similar suprabranchial organ. Certain other catfish, such as the Loricariidae, are able to respire through air held in their digestive tracts.
Sensory systems
Teleosts possess highly developed sensory organs. Nearly all daylight fish have colour vision at least as good as a normal human's. Many fish also have chemoreceptors responsible for acute senses of taste and smell. Most fish have sensitive receptors that form the lateral line system, which detects gentle currents and vibrations, and senses the motion of nearby fish and prey. Fish sense sounds in a variety of ways, using the lateral line, the swim bladder, and in some species the Weberian apparatus. Fish orient themselves using landmarks, and may use mental maps based on multiple landmarks or symbols. Experiments with mazes show that fish possess the spatial memory needed to make such a mental map.
Osmoregulation
The skin of a teleost is largely impermeable to water, and the main interface between the fish's body and its surroundings is the gills. In freshwater, teleost fish gain water across their gills by osmosis, while in seawater they lose it. Similarly, salts diffuse outwards across the gills in freshwater and inwards in salt water. The European flounder spends most of its life in the sea but often migrates into estuaries and rivers. In the sea in one hour, it can gain Na+ ions equivalent to forty percent of its total free sodium content, with 75 percent of this entering through the gills and the remainder through drinking. By contrast, in rivers there is an exchange of just two percent of the body Na+ content per hour. As well as being able to selectively limit salt and water exchanged by diffusion, there is an active mechanism across the gills for the elimination of salt in sea water and its uptake in fresh water.
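To make the quoted exchange rates concrete, the short sketch below (Python) works through the arithmetic for a hypothetical flounder; the 40%-per-hour (seawater), 2%-per-hour (fresh water) and 75%-via-gills figures come from the text above, while the size of the sodium pool is an arbitrary illustrative assumption.

```python
# Worked arithmetic for the sodium exchange rates quoted above for the
# European flounder.  The size of the free Na+ pool is an arbitrary
# illustrative assumption; the percentages come from the text.

free_na_mmol = 50.0                      # assumed total free Na+ pool (mmol)

# Seawater: ~40% of the pool gained per hour, ~75% of that via the gills.
sea_influx   = 0.40 * free_na_mmol
via_gills    = 0.75 * sea_influx
via_drinking = sea_influx - via_gills

# Fresh water: only ~2% of the pool exchanged per hour.
river_exchange = 0.02 * free_na_mmol

print(f"Seawater:   {sea_influx:.1f} mmol/h gained "
      f"({via_gills:.1f} via gills, {via_drinking:.1f} via drinking)")
print(f"Freshwater: {river_exchange:.1f} mmol/h exchanged")
```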
Thermoregulation
Fish are cold-blooded, and in general their body temperature is the same as that of their surroundings. They gain and lose heat through their skin, and regulate their circulation in response to changes in water temperature by increasing or reducing the blood flow to the gills. Metabolic heat generated in the muscles or gut is quickly dissipated through the gills, with blood being diverted away from the gills during exposure to cold. Because of their relative inability to control their blood temperature, most teleosts can only survive in a small range of water temperatures.
Teleost species that inhabit colder waters have a higher proportion of unsaturated fatty acids in brain cell membranes compared to fish from warmer waters, which allows them to maintain appropriate membrane fluidity in the environments in which they live. When cold acclimated, teleost fish show physiological changes in skeletal muscle that include increased mitochondrial and capillary density. This reduces diffusion distances and aids in the production of aerobic ATP, which helps to compensate for the drop in metabolic rate associated with colder temperatures.
Tuna and other fast-swimming ocean-going fish maintain their muscles at higher temperatures than their environment for efficient locomotion. Tuna achieve muscle temperatures or even higher above the surroundings by having a counterflow system in which the metabolic heat produced by the muscles and present in the venous blood, pre-warms the arterial blood before it reaches the muscles. Other adaptations of tuna for speed include a streamlined, spindle-shaped body, fins designed to reduce drag, and muscles with a raised myoglobin content, which gives these a reddish colour and makes for a more efficient use of oxygen. In polar regions and in the deep ocean, where the temperature is a few degrees above freezing point, some large fish, such as the swordfish, marlin and tuna, have a heating mechanism which raises the temperature of the brain and eye, allowing them significantly better vision than their cold-blooded prey.
Buoyancy
The body of a teleost is denser than water, so fish must compensate for the difference, or they will sink. A defining feature of Actinopteri (Chondrostei, Holostei and teleosts) is the swim bladder. Originally present in the last common ancestor of the teleosts, it has since been lost independently at least 30–32 times; in at least 79 of 425 families of teleosts, the swim bladder is absent in one or more species. This absence is often the case in fast-swimming fishes such as the tuna and mackerel. The swim bladder helps fish adjust their buoyancy through the manipulation of gases, which allows them to stay at the current water depth, or to ascend or descend, without having to waste energy in swimming. In the more primitive groups like some minnows, the swim bladder is open (physostomous) to the esophagus. In fish where the swim bladder is closed (physoclistous), the gas content is controlled through the rete mirabilis, a network of blood vessels serving as a countercurrent gas exchanger between the swim bladder and the blood.
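A rough sense of why a gas-filled bladder can offset the excess density of fish tissue comes from a simple buoyancy balance. The sketch below (Python) solves for the gas volume fraction needed for neutral buoyancy; the densities used are assumed order-of-magnitude values, not figures from the text.

```python
# Simple buoyancy balance: fraction f of body volume that a swim bladder
# must occupy so that the whole fish matches the density of the water.
#   rho_water = (1 - f) * rho_tissue + f * rho_gas
#   =>  f = (rho_tissue - rho_water) / (rho_tissue - rho_gas)
# All densities below (kg/m^3) are assumed order-of-magnitude values.

def bladder_fraction(rho_tissue: float, rho_water: float, rho_gas: float = 1.2) -> float:
    """Gas volume fraction required for neutral buoyancy."""
    return (rho_tissue - rho_water) / (rho_tissue - rho_gas)

seawater   = bladder_fraction(rho_tissue=1075.0, rho_water=1026.0)
freshwater = bladder_fraction(rho_tissue=1075.0, rho_water=1000.0)

print(f"Seawater:   bladder ~{seawater:.1%} of body volume")
print(f"Freshwater: bladder ~{freshwater:.1%} of body volume")
```

Under these assumed densities, the required bladder volume comes out at roughly 5% of body volume in seawater and about 7% in fresh water.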
Locomotion
A typical teleost fish has a streamlined body for rapid swimming, and locomotion is generally provided by a lateral undulation of the hindmost part of the trunk and the tail, propelling the fish through the water. There are many exceptions to this method of locomotion, especially where speed is not the main objective; among rocks and on coral reefs, slow swimming with great manoeuvrability may be a desirable attribute. Eels locomote by wiggling their entire bodies. Living among seagrasses and algae, the seahorse adopts an upright posture and moves by fluttering its pectoral fins, and the closely related pipefish moves by rippling its elongated dorsal fin. Gobies "hop" along the substrate, propping themselves up and propelling themselves with their pectoral fins. Mudskippers move in much the same way on terrestrial ground. In some species, a pelvic sucker allows them to climb, and the Hawaiian freshwater goby climbs waterfalls while migrating. Gurnards have three pairs of free rays on their pectoral fins which have a sensory function but on which they can walk along the substrate. Flying fish launch themselves into the air and can glide on their enlarged pectoral fins for hundreds of metres.
Sound production
The ability to produce sound for communication appears to have evolved independently in several teleost lineages. Sounds are produced either by stridulation or by vibrating the swim bladder. In the Sciaenidae, the muscles that attach to the swim bladder cause it to oscillate rapidly, creating drumming sounds. Marine catfishes, sea horses and grunts stridulate by rubbing together skeletal parts, teeth or spines. In these fish, the swim bladder may act as a resonator. Stridulation sounds are predominantly from 1000–4000 Hz, though sounds modified by the swim bladder have frequencies lower than 1000 Hz.
Reproduction and lifecycle
Most teleost species are oviparous, having external fertilisation with both eggs and sperm being released into the water for fertilisation. Internal fertilisation occurs in 500 to 600 species of teleosts but is more typical for Chondrichthyes and many tetrapods. This involves the male inseminating the female with an intromittent organ. Fewer than one in a million of externally fertilised eggs survives to develop into a mature fish, but there is a much better chance of survival among the offspring of members of about a dozen families which are viviparous. In these, the eggs are fertilised internally and retained in the female during development. Some of these species, like the live-bearing aquarium fish in the family Poeciliidae, are ovoviviparous; each egg has a yolk sac which nourishes the developing embryo, and when this is exhausted, the egg hatches and the larva is expelled into the water column. Other species, like the splitfins in the family Goodeidae, are fully viviparous, with the developing embryo nurtured from the maternal blood supply via a placenta-like structure that develops in the uterus. Oophagy is practised by a few species, such as Nomorhamphus ebrardtii; the mother lays unfertilised eggs on which the developing larvae feed in the uterus, and intrauterine cannibalism has been reported in some halfbeaks.
There are two major reproductive strategies of teleosts: semelparity and iteroparity. In the former, an individual breeds once after reaching maturity and then dies. This is because the physiological changes that come with reproduction eventually lead to death. Salmon of the genus Oncorhynchus are well known for this feature; they hatch in fresh water and then migrate to the sea for up to four years before travelling back to their place of birth where they spawn and die. Semelparity is also known to occur in some eels and smelts. The majority of teleost species have iteroparity, where mature individuals can breed multiple times during their lives.
Sex identity and determination
About 88 percent of teleost species are gonochoristic, having individuals that remain either male or female throughout their adult lives. The sex of an individual can be determined genetically as in birds and mammals, or environmentally as in reptiles. In some teleosts, both genetics and the environment play a role in determining sex. For species whose sex is determined by genetics, determination can come in three forms. In monofactorial sex determination, a single locus determines sex inheritance. Both the XY sex-determination system and ZW sex-determination system exist in teleost species. Some species, such as the southern platyfish, have both systems and a male can be determined by XY or ZZ depending on the population.
Multifactorial sex determination occurs in numerous Neotropical species and involves both XY and ZW systems. Multifactorial systems involve rearrangements of sex chromosomes and autosomes. For example, the darter characine has a ZW multifactorial system where the female is determined by ZW1W2 and the male by ZZ. The wolf fish has an XY multifactorial system where females are determined by X1X1X2X2 and males by X1X2Y. Some teleosts, such as zebrafish, have a polyfactorial system, where there are several genes which play a role in determining sex. Environment-dependent sex determination has been documented in at least 70 species of teleost. Temperature is the main factor, but pH levels, growth rate, density and social environment may also play a role. For the Atlantic silverside, spawning in colder waters creates more females, while warmer waters create more males.
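For readers who find the genotype notation easier to follow in tabular form, the short Python sketch below encodes the systems described above as lookup tables; the representation is an illustrative simplification, not a standard genetics data format.

```python
# Illustrative lookup tables for the genetic sex-determination systems
# described above.  Genotype strings are simplified labels, not karyotypes.

SYSTEMS = {
    "XY monofactorial":                     {"XX": "female", "XY": "male"},
    "ZW monofactorial":                     {"ZW": "female", "ZZ": "male"},
    "ZW multifactorial (darter characine)": {"ZW1W2": "female", "ZZ": "male"},
    "XY multifactorial (wolf fish)":        {"X1X1X2X2": "female", "X1X2Y": "male"},
}

def sex_of(system: str, genotype: str) -> str:
    """Look up the sex implied by a genotype under the given system."""
    return SYSTEMS[system].get(genotype, "unknown")

print(sex_of("ZW multifactorial (darter characine)", "ZW1W2"))  # -> female
print(sex_of("XY multifactorial (wolf fish)", "X1X2Y"))         # -> male
```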
Hermaphroditism
Some teleost species are hermaphroditic, which can come in two forms: simultaneous and sequential. In the former, both spermatozoa and eggs are present in the gonads. Simultaneous hermaphroditism typically occurs in species that live in the ocean depths, where potential mates are sparsely dispersed. Self-fertilisation is rare and has only been recorded in two species, Kryptolebias marmoratus and Kryptolebias hermaphroditus. With sequential hermaphroditism, individuals may function as one sex early in their adult life and switch later in life. Species with this condition include parrotfish, wrasses, sea basses, flatheads, sea breams and lightfishes.
Protandry is when an individual starts out male and becomes female while the reverse condition is known as protogyny, the latter being more common. Changing sex can occur in various contexts. In the bluestreak cleaner wrasse, where males have harems of up to ten females, if the male is removed the largest and most dominant female develops male-like behaviour and eventually testes. If she is removed, the next ranking female takes her place. In the species Anthias squamipinnis, where individuals gather into large groups and females greatly outnumber males, if a certain number of males are removed from a group, the same number of females change sex and replace them. In clownfish, individuals live in groups and only the two largest in a group breed: the largest female and the largest male. If the female dies, the male switches sexes and the next largest male takes his place.
In deep-sea anglerfish (sub-order Ceratioidei), the much smaller male becomes permanently attached to the female and degenerates into a sperm-producing attachment. The female and their attached male become a "semi-hermaphroditic unit".
Mating tactics
There are several different mating systems among teleosts. Some species are promiscuous, where both males and females breed with multiple partners and there are no obvious mate choices. This has been recorded in Baltic herring, guppies, Nassau groupers, humbug damselfish, cichlids and creole wrasses. Polygamy, where one sex has multiple partners, can come in many forms. Polyandry consists of one adult female breeding with multiple males, which only breed with that female. This is rare among teleosts, and fish in general, but is found in the clownfish. In addition, it may also exist to an extent among anglerfish, where some females have more than one male attached to them. Polygyny, where one male breeds with multiple females, is much more common. This is recorded in sculpins, sunfish, darters, damselfish and cichlids, where multiple females may visit a territorial male that guards and takes care of eggs and young. Polygyny may also involve a male guarding a harem of several females. This occurs in coral reef species, such as damselfishes, wrasses, parrotfishes, surgeonfishes, triggerfishes and tilefishes.
Lek breeding, where males congregate to display to females, has been recorded in at least one species, Cyrtocara eucinostomus. Lek-like breeding systems have also been recorded in several other species. In monogamous species, males and females may form pair bonds and breed exclusively with their partners. This occurs in North American freshwater catfishes, many butterflyfishes, sea horses and several other species. Courtship in teleosts plays a role in species recognition, strengthening pair bonds, spawning site position and gamete release synchronisation. This includes colour changes, sound production and visual displays (fin erection, rapid swimming, breaching), which are often performed by the male. Courtship may also be undertaken by a female to overcome a territorial male that would otherwise drive her away.
Sexual dimorphism exists in some species. Individuals of one sex, usually males, develop secondary sexual characteristics that increase their chances of reproductive success. In dolphinfish, males have larger and blunter heads than females. In several minnow species, males develop swollen heads and small bumps known as breeding tubercles during the breeding season. The male green humphead parrotfish has a more well-developed forehead with an "ossified ridge" which plays a role in ritualised headbutting. Dimorphism can also take the form of differences in coloration. Again, it is usually the males that are brightly coloured; in killifishes, rainbowfishes and wrasses the colours are permanent, while in species like minnows, sticklebacks, darters and sunfishes, the colour changes with the seasons. Such coloration can be very conspicuous to predators, showing that the drive to reproduce can be stronger than that to avoid predation.
Males that have been unable to court a female successfully may try to achieve reproductive success in other ways. In sunfish species, like the bluegill, larger, older males known as parental males, which have successfully courted a female, construct nests for the eggs they fertilise. Smaller satellite males mimic female behaviour and coloration to access a nest and fertilise the eggs. Other males, known as sneaker males, lurk nearby and then quickly dash to the nest, fertilising on the run. These males are smaller than satellite males. Sneaker males also exist in Oncorhynchus salmon, where small males that were unable to establish a position near a female dash in while the large dominant male is spawning with the female.
Spawning sites and parental care
Teleosts may spawn in the water column or, more commonly, on the substrate. Water column spawners are mostly limited to coral reefs; the fish will rush towards the surface and release their gametes. This appears to protect the eggs from some predators and allow them to disperse widely via currents. They receive no parental care. Water column spawners are more likely than substrate spawners to spawn in groups. Substrate spawning commonly occurs in nests, rock crevices or even burrows. Some eggs can stick to various surfaces like rocks, plants, wood or shells.
Of the oviparous teleosts, most (79 percent) do not provide parental care. Male care is far more common than female care. Male territoriality "preadapts" a species to evolve male parental care. One unusual example of female parental care is in discuses, which provide nutrients for their developing young in the form of mucus. Some teleost species have their eggs or young attached to or carried in their bodies. For sea catfishes, cardinalfishes, jawfishes and some others, the egg may be incubated or carried in the mouth, a practice known as mouthbrooding. In some African cichlids, the eggs may be fertilised there. In species like the banded acara, young are brooded after they hatch and this may be done by both parents. The timing of the release of young varies between species; some mouthbrooders release newly hatched young while others may keep them until they are juveniles. In addition to mouthbrooding, some teleosts have also developed structures to carry young. Male nurseryfish have a bony hook on their foreheads to carry fertilised eggs; they remain on the hook until they hatch. For seahorses, the male has a brooding pouch where the female deposits the fertilised eggs and they remain there until they become free-swimming juveniles. Female banjo catfishes have structures on their belly to which the eggs attach.
In some parenting species, young from a previous spawning batch may stay with their parents and help care for the new young. This is known to occur in around 19 species of cichlids in Lake Tanganyika. These helpers take part in cleaning and fanning eggs and larvae, cleaning the breeding hole and protecting the territory. They have a reduced growth rate but gain protection from predators. Brood parasitism also exists among teleosts; minnows may spawn in sunfish nests as well as in the nests of other minnow species. The cuckoo catfish is known for laying its eggs on the substrate as mouthbrooding cichlids are collecting their own, and the young catfish then eat the cichlid larvae. Filial cannibalism occurs in some teleost families and may have evolved to combat starvation.
Growth and development
Teleosts have four major life stages: the egg, the larva, the juvenile and the adult. Species may begin life in a pelagic environment or a demersal environment (near the seabed). Most marine teleosts have pelagic eggs, which are light, transparent and buoyant with thin envelopes. Pelagic eggs rely on the ocean currents to disperse and receive no parental care. When they hatch, the larvae are planktonic and unable to swim. They have a yolk sac attached to them which provides nutrients. Most freshwater species produce demersal eggs which are thick, pigmented, relatively heavy and able to stick to substrates. Parental care is much more common among freshwater fish. Unlike their pelagic counterparts, demersal larvae are able to swim and feed as soon as they hatch. Larval teleosts often look very different from adults, particularly in marine species. Some larvae were even considered different species from the adults. Larvae have high mortality rates, most die from starvation or predation within their first week. As they grow, survival rates increase and there is greater physiological tolerance and sensitivity, ecological and behavioural competence.
At the juvenile stage, a teleost looks more like its adult form. At this stage, its axial skeleton, internal organs, scales, pigmentation and fins are fully developed. The transition from larvae to juvenile can be short and fairly simple, lasting minutes or hours as in some damselfish, while in other species, like salmon, squirrelfish, gobies and flatfishes, the transition is more complex and takes several weeks to complete. At the adult stage, a teleost is able to produce viable gametes for reproduction. Like many fish, teleosts continue to grow throughout their lives. Longevity depends on the species with some gamefish like European perch and largemouth bass living up to 25 years. Rockfish appear to be the longest living teleosts with some species living over 100 years.
Shoaling and schooling
Many teleosts form shoals, which serve multiple purposes in different species. Schooling is sometimes an antipredator adaptation, offering improved vigilance against predators. It is often more efficient to gather food by working as a group, and individual fish optimise their strategies by choosing to join or leave a shoal. When a predator has been noticed, prey fish respond defensively, resulting in collective shoal behaviours such as synchronised movements. Responses do not consist only of attempting to hide or flee; antipredator tactics include for example scattering and reassembling. Fish also aggregate in shoals to spawn.
Relationship with humans
Economic importance
Teleosts are economically important in different ways. They are captured for food around the world. A small number of species such as herring, cod, pollock, anchovy, tuna and mackerel provide people with millions of tons of food per year, while many other species are fished in smaller amounts. They provide a large proportion of the fish caught for sport. Commercial and recreational fishing together provide millions of people with employment.
A small number of productive species including carp, salmon, tilapia and catfish are farmed commercially, producing millions of tons of protein-rich food per year. The UN's Food and Agriculture Organization expects production to increase sharply so that by 2030, perhaps sixty-two percent of food fish will be farmed.
Fish are consumed fresh, or may be preserved by traditional methods, which include combinations of drying, smoking, and salting, or fermentation. Modern methods of preservation include freezing, freeze-drying, and heat processing (as in canning). Frozen fish products include breaded or battered fillets, fish fingers and fishcakes. Fish meal is used as a food supplement for farmed fish and for livestock. Fish oils are made either from fish liver, especially rich in vitamins A and D, or from the bodies of oily fish such as sardine and herring, and used as food supplements and to treat vitamin deficiencies.
Some smaller and more colourful species serve as aquarium specimens and pets. Sea wolves are used in the leather industry. Isinglass is made from thread fish and drum fish.
Impact on stocks
Human activities have affected stocks of many species of teleost, through overfishing, pollution and global warming. Among many recorded instances, overfishing caused the complete collapse of the Atlantic cod population off Newfoundland in 1992, leading to Canada's indefinite closure of the fishery. Pollution, especially in rivers and along coasts, has harmed teleosts as sewage, pesticides and herbicides have entered the water. Many pollutants, such as heavy metals, organochlorines, and carbamates interfere with teleost reproduction, often by disrupting their endocrine systems. In the roach, river pollution has caused the intersex condition, in which an individual's gonads contain both cells that can make male gametes (such as spermatogonia) and cells that can make female gametes (such as oogonia). Since endocrine disruption also affects humans, teleosts are used to indicate the presence of such chemicals in water. Water pollution caused local extinction of teleost populations in many northern European lakes in the second half of the twentieth century.
The effects of climate change on teleosts could be powerful but are complex. For example, increased winter precipitation (rain and snow) could harm populations of freshwater fish in Norway, whereas warmer summers could increase growth of adult fish. In the oceans, teleosts may be able to cope with warming, as it is simply an extension of natural variation in climate. It is uncertain how ocean acidification, caused by rising carbon dioxide levels, might affect teleosts.
Other interactions
A few teleosts are dangerous. Some, like eeltail catfish (Plotosidae), scorpionfish (Scorpaenidae) or stonefish (Synanceiidae) have venomous spines that can seriously injure or kill humans. Some, like the electric eel and the electric catfish, can give a severe electric shock. Others, such as the piranha and barracuda, have a powerful bite and have sometimes attacked human bathers. Reports indicate that some of the catfish family can be large enough to prey on human bathers.
Medaka and zebrafish are used as research models for studies in genetics and developmental biology. The zebrafish is the most commonly used laboratory vertebrate, offering the advantages of genetic similarity to mammals, small size, simple environmental needs, transparent larvae permitting non-invasive imaging, plentiful offspring, rapid growth, and the ability to absorb mutagens added to their water.
In art
Teleost fishes have been frequent subjects in art, reflecting their economic importance, for at least 14,000 years. They were commonly worked into patterns in Ancient Egypt, acquiring mythological significance in Ancient Greece and Rome, and from there into Christianity as a religious symbol; artists in China and Japan similarly use fish images symbolically. Teleosts became common in Renaissance art, with still life paintings reaching a peak of popularity in the Netherlands in the 17th century. In the 20th century, different artists such as Klee, Magritte, Matisse and Picasso used representations of teleosts to express radically different themes, from attractive to violent. The zoologist and artist Ernst Haeckel painted teleosts and other animals in his 1904 Kunstformen der Natur. Haeckel had become convinced by Goethe and Alexander von Humboldt that by making accurate depictions of unfamiliar natural forms, such as from the deep oceans, he could not only discover "the laws of their origin and evolution but also to press into the secret parts of their beauty by sketching and painting".
| Biology and health sciences | Actinopterygii | Animals |
326837 | https://en.wikipedia.org/wiki/Toothed%20whale | Toothed whale | The toothed whales (also called odontocetes, systematic name Odontoceti) are a clade of cetaceans that includes dolphins, porpoises, and all other whales with teeth, such as beaked whales and the sperm whales. 73 species of toothed whales are described. They are one of two living groups of cetaceans, the other being the baleen whales (Mysticeti), which have baleen instead of teeth. The two groups are thought to have diverged around 34 million years ago (mya).
Toothed whales range in size from the small vaquita to the much larger sperm whale. Several species of odontocetes exhibit sexual dimorphism, in that there are size or other morphological differences between females and males. They have streamlined bodies and two limbs that are modified into flippers. Some can travel at up to 30 knots. Odontocetes have conical teeth designed for catching fish or squid. They have well-developed hearing that is well adapted for both air and water, so much so that some can survive even if they are blind. Some species are well adapted for diving to great depths. Almost all have a layer of fat, or blubber, under the skin to keep warm in the cold water, with the exception of river dolphins.
Toothed whales include some of the most widespread mammals, but some, such as the vaquita, are restricted to certain areas. Odontocetes feed largely on fish and squid, but a few, like the orca, feed on mammals, such as pinnipeds. Males typically mate with multiple females every year, making them polygynous. Females mate every two to three years. Calves are typically born in the spring and summer, and females bear the responsibility for raising them, but more sociable species rely on the family group to care for calves. Many species, mainly dolphins, are highly sociable, with some pods reaching over a thousand individuals.
Once hunted for their products, cetaceans are now protected by international law. Some species are very intelligent. At the 2012 meeting of the American Association for the Advancement of Science, support was reiterated for a cetacean bill of rights, listing cetaceans as nonhuman persons. Besides whaling and drive hunting, they also face threats from bycatch and marine pollution. The baiji, for example, is considered functionally extinct by the IUCN, with the last sighting in 2004, due to heavy pollution of the Yangtze River. Whales sometimes feature in literature and film, as in the great white sperm whale of Herman Melville's Moby-Dick. Small odontocetes, mainly dolphins, are kept in captivity and trained to perform tricks. Whale watching has become a form of tourism around the world.
Taxonomy
Research history
In Aristotle's time, the fourth century BC, whales were regarded as fish due to their superficial similarity. Aristotle, however, could already see many physiological and anatomical similarities with the terrestrial vertebrates, such as blood (circulation), lungs, uterus, and fin anatomy. His detailed descriptions were assimilated by the Romans, but mixed with a more accurate knowledge of the dolphins, as mentioned by Pliny the Elder in his Natural history. In the art of this and subsequent periods, dolphins are portrayed with a high-arched head (typical of porpoises) and a long snout. The harbor porpoise was one of the most accessible species for early cetologists, because it could be seen very close to land, inhabiting shallow coastal areas of Europe. Many of the findings that apply to all cetaceans were therefore first discovered in porpoises. One of the first anatomical descriptions of the airways of whales, based on a harbor porpoise, was made by John Ray in 1671; it nevertheless referred to the porpoise as a fish.
Evolution
Toothed whales, as well as baleen whales, are descendants of land-dwelling mammals of the artiodactyl order (even-toed ungulates). They are closely related to the hippopotamus, sharing a common ancestor that lived around 54 million years ago (mya).
The primitive cetaceans, or archaeocetes, first took to the sea approximately 49 mya and became fully aquatic by 5–10 million years later. The ancestors of toothed whales and baleen whales diverged in the early Oligocene. This was due to a change in the climate of the southern oceans that affected the environment of the plankton that these whales ate.
The adaptation of echolocation and enhanced fat synthesis in blubber occurred when toothed whales split apart from baleen whales, and distinguishes modern toothed whales from fully aquatic archaeocetes. This happened around 34 mya. Unlike toothed whales, baleen whales do not have wax ester deposits nor branched fatty chain acids in their blubber. Thus, more recent evolution of these complex blubber traits occurred after baleen whales and toothed whales split, and only in the toothed whale lineage.
Modern toothed whales do not rely on their sense of sight, but rather on their sonar to hunt prey. Echolocation also allowed toothed whales to dive deeper in search of food, with light no longer necessary for navigation, which opened up new food sources. Toothed whales (Odontocetes) echolocate by creating a series of clicks emitted at various frequencies. Sound pulses are emitted through the melon-shaped forehead, reflected off objects, and retrieved through the lower jaw. Skulls of Squalodon show evidence for the first hypothesized appearance of echolocation. Squalodon lived from the early to middle Oligocene to the middle Miocene, around 33-14 mya. Squalodon featured several commonalities with modern Odontocetes. The cranium was well compressed, the rostrum telescoped outward (a characteristic of the modern parvorder Odontoceti), giving Squalodon an appearance similar to that of modern toothed whales. However, it is thought unlikely that squalodontids are direct ancestors of living dolphins.
Biology
Anatomy
Toothed whales have torpedo-shaped bodies with usually inflexible necks, limbs modified into flippers, no outer ears, a large tail fin, and bulbous heads (with the exception of the sperm whale family). Their skulls have small eye orbits, long beaks (with the exception of sperm whales), and eyes placed on the sides of their heads. Toothed whales range in size from the small vaquita to the much larger sperm whale. Overall, they tend to be dwarfed by their relatives, the baleen whales (Mysticeti). Several species have sexual dimorphism, with the females being larger than the males. One exception is the sperm whale, in which males are larger than the females.
Odontocetes possess teeth with cementum cells overlying dentine cells. Unlike human teeth, which are composed mostly of enamel on the portion of the tooth outside of the gum, whale teeth have cementum outside the gum. Only in larger whales, where the cementum is worn away on the tip of the tooth, does enamel show. There is only a single set of functional teeth (monophyodont dentition). Except for the sperm whale, most toothed whales are smaller than the baleen whales. The teeth differ considerably among the species. They may be numerous, with some dolphins bearing over 100 teeth in their jaws. At the other extreme are the narwhals with their single long tusks and the almost toothless beaked whales with tusk-like teeth only in males. In most beaked whales, the teeth erupt in the lower jaw, and this primarily occurs at male sexual maturity. Not all species are believed to use their teeth for feeding. For instance, the sperm whale likely uses its teeth for aggression and showmanship.
Breathing involves expelling stale air from their one blowhole, forming an upward, steamy spout, followed by inhaling fresh air into the lungs. Spout shapes differ among species, which facilitates identification. The spout only forms when warm air from the lungs meets cold air, so it does not form in warmer climates, as with river dolphins.
Almost all cetaceans have a thick layer of blubber, except for river dolphins. In species that live near the poles, the blubber can be as thick as . This blubber can help with buoyancy, protection to some extent as predators would have a hard time getting through a thick layer of fat, energy for fasting during leaner times, and insulation from the harsh climate. Calves are born with only a thin layer of blubber, but some species compensate for this with thick lanugos.
Toothed whales have also evolved the ability to store large amounts of wax esters in their adipose tissue as an addition to or in complete replacement of other fats in their blubber. They can produce isovaleric acid from branched chain fatty acids (BCFA). These adaptations are unique, are only in more recent, derived lineages and were likely part of the transition for species to become deeper divers as the families of toothed whales (Physeteridae, Kogiidae, and Ziphiidae) that have the highest quantities of wax esters and BCFAs in their blubber are also the species that dive the deepest and for the longest amount of time.
Toothed whales have a two-chambered stomach similar in structure to terrestrial carnivores. They have fundic and pyloric chambers.
Locomotion
Cetaceans have two flippers on the front, and a tail fin. These flippers contain four digits. Although toothed whales do not possess fully developed hind limbs, some, such as the sperm whale, possess discrete rudimentary appendages, which may contain feet and digits. Toothed whales are fast swimmers in comparison to seals, which typically cruise at 5–15 knots; the sperm whale, in comparison, can reach higher speeds. The fusing of the neck vertebrae, while increasing stability when swimming at high speeds, decreases flexibility, rendering them incapable of turning their heads; river dolphins, however, have unfused neck vertebrae and can turn their heads. When swimming, toothed whales rely on their tail fins to propel them through the water. Flipper movement is continuous. They swim by moving their tail fin and lower body up and down, propelling themselves through vertical movement, while their flippers are mainly used for steering. Some species porpoise (leap) out of the water, which may allow them to travel faster. Their skeletal anatomy allows them to be fast swimmers. Most species have a dorsal fin.
Most toothed whales are adapted for diving to great depths; porpoises are one exception. In addition to their streamlined bodies, they can slow their heart rate to conserve oxygen; blood is rerouted from tissue tolerant of water pressure to the heart and brain among other organs; haemoglobin and myoglobin store oxygen in body tissue; and they have twice the concentration of myoglobin as haemoglobin. Before going on long dives, many toothed whales exhibit a behaviour known as sounding; they stay close to the surface for a series of short, shallow dives while building their oxygen reserves, and then make a sounding dive.
Senses
Toothed whale eyes are relatively small for their body size, yet they do retain a good degree of eyesight. Also, the eyes are on the sides of the head, so their vision consists of two fields, rather than a binocular view as humans have. When a beluga surfaces, its lenses and corneas correct the nearsightedness that results from the refraction of light; they contain both rod and cone cells, meaning they can see in both dim and bright light. They do, however, lack short wavelength-sensitive visual pigments in their cone cells, indicating a more limited capacity for colour vision than most mammals. Most toothed whales have slightly flattened eyeballs, enlarged pupils (which shrink as they surface to prevent damage), slightly flattened corneas, and a tapetum lucidum; these adaptations allow large amounts of light to pass through the eye and, therefore, give a very clear image of the surrounding area. In water, a whale can see some distance ahead of itself, but its range above water is smaller. They also have glands on the eyelids and outer corneal layer that act as protection for the cornea.
The olfactory lobes are absent in toothed whales, and unlike baleen whales, they lack the vomeronasal organ, suggesting they have no sense of smell.
Toothed whales are not thought to have a good sense of taste, as their taste buds are atrophied or missing altogether. However, some dolphins have preferences between different kinds of fish, indicating some sort of attachment to taste.
Echolocation
Toothed whales are capable of making a broad range of sounds using nasal airsacs located just below the blowhole. Clicks are directional and are used for echolocation, often occurring in a short series called a click train. The click rate increases when approaching an object of interest. Toothed whale biosonar clicks are amongst the loudest sounds made by marine animals.
The cetacean ear has specific adaptations to the marine environment. In humans, the middle ear works as an impedance equalizer between the outside air's low impedance and the cochlear fluid's high impedance. In whales, and other marine mammals, no great difference exists between the outer and inner environments. Instead of sound passing through the outer ear to the middle ear, whales receive sound through the throat, from which it passes through a low-impedance, fat-filled cavity to the inner ear. The ear is acoustically isolated from the skull by air-filled sinus pockets, which allow for greater directional hearing underwater.
Odontocetes generate sounds independently of respiration using recycled air that passes through air sacs and phonic (alternatively monkey) lips. Integral to the lips are oil-filled organs called dorsal bursae that have been suggested to be homologous in the dolphin to the sperm whale's spermaceti organ. These send out high-frequency clicks through the sound-modifying organs of the extramandibular fat body, intramandibular fat body and the melon.
The melon consists of fat, and the skull of any such creature containing a melon will have a large depression. The size of the melon varies between species; the bigger it is, the more dependent the species is on it. A beaked whale, for example, has a small bulge sitting on top of its skull, whereas a sperm whale's head is filled mainly with the melon. Directional asymmetry in the skull, seen across many generations, is used for echolocation; it helps focus the biosonar effectively when deep diving for prey. Odontocetes are well adapted to hear sounds at ultrasonic frequencies, as opposed to mysticetes, which generally hear sounds within the range of infrasonic frequencies.
Communication calls
Bottlenose dolphins have been found to have signature whistles unique to each individual. Dolphins use these whistles to communicate with one another by identifying an individual. It can be seen as the dolphin equivalent of a name for humans.
Because dolphins generally live in groups, communication is necessary. Signal masking is when other similar sounds (conspecific sounds) interfere with the original sound. In larger groups, individual whistle sounds are less prominent. Dolphins tend to travel in pods, sometimes of up to 600 members.
Life history and behaviour
Intelligence
Cetaceans are known to communicate and therefore are able to teach, learn, cooperate, scheme, and grieve. The neocortex of many species of dolphins is home to elongated spindle neurons that, prior to 2007, were known only in hominids. In humans, these cells are involved in social conduct, emotions, judgement, and theory of mind. Dolphin spindle neurons are found in areas of the brain homologous to where they are found in humans, suggesting they perform a similar function.
Brain size was previously considered a major indicator of the intelligence of an animal. Since most of the brain is used for maintaining bodily functions, greater ratios of brain to body mass may increase the amount of brain mass available for more complex cognitive tasks. Allometric analysis indicates that mammalian brain size scales at around the two-thirds or three-quarters exponent of the body mass. Comparison of a particular animal's brain size with the expected brain size based on such allometric analysis provides an encephalization quotient that can be used as another indication of animal intelligence. Sperm whales have the largest brain mass of any animal on earth; in mature males it far exceeds that of the average human brain. The brain to body mass ratio in some odontocetes, such as belugas and narwhals, is second only to humans.
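As a rough illustration of how such a quotient is computed (a generic sketch of the standard approach rather than a formula taken from this article; the symbols and the constant C are illustrative assumptions), the observed brain mass is compared with the brain mass expected from the allometric scaling law:
\[ \mathrm{EQ} = \frac{E_{\mathrm{obs}}}{E_{\mathrm{exp}}}, \qquad E_{\mathrm{exp}} = C\,M^{k}, \quad k \approx \tfrac{2}{3} \text{ or } \tfrac{3}{4}, \]
where E denotes brain mass, M body mass and C a constant fitted across mammals; an EQ greater than 1 indicates a brain larger than expected for the animal's body size.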
Dolphins are known to engage in complex play behaviour, which includes such things as producing stable underwater toroidal air-core vortex rings or "bubble rings". Two main methods of bubble ring production are: rapid puffing of a burst of air into the water and allowing it to rise to the surface, forming a ring, or swimming repeatedly in a circle and then stopping to inject air into the helical vortex currents thus formed. They also appear to enjoy biting the vortex rings, so that they burst into many separate bubbles and then rise quickly to the surface. Dolphins are known to use this method during hunting. Dolphins are also known to use tools. In Shark Bay, a population of Indo-Pacific bottlenose dolphins put sponges on their beak to protect them from abrasions and sting ray barbs while foraging in the seafloor. This behaviour is passed on from mother to daughter, and it is only observed in 54 female individuals.
Self-awareness is seen, by some, to be a sign of highly developed, abstract thinking. Self-awareness, though not well-defined scientifically, is believed to be the precursor to more advanced processes like metacognitive reasoning (thinking about thinking) that are typical of humans. Research in this field has suggested that cetaceans, among others, possess self-awareness. The most widely used test for self-awareness in animals is the mirror test, in which a temporary dye is placed on an animal's body, and the animal is then presented with a mirror; then whether the animal shows signs of self-recognition is determined. In 1995, Marten and Psarakos used television to test dolphin self-awareness. They showed dolphins real-time footage of themselves, recorded footage, and another dolphin. They concluded that their evidence suggested self-awareness rather than social behavior. While this particular study has not been repeated since then, dolphins have since "passed" the mirror test.
Vocalisations
Dolphins make a broad range of sounds using nasal airsacs located just below the blowhole. Roughly three categories of sounds can be identified: frequency modulated whistles, burst-pulsed sounds and clicks. Dolphins communicate with whistle-like sounds produced by vibrating connective tissue, similar to the way human vocal cords function, and through burst-pulsed sounds, though the nature and extent of that ability is not known. The clicks are directional and are for echolocation, often occurring in a short series called a click train. The click rate increases when approaching an object of interest. Dolphin echolocation clicks are amongst the loudest sounds made by marine animals.
Bottlenose dolphins have been found to have signature whistles, a whistle that is unique to a specific individual. Dolphins use these whistles to communicate with one another by identifying an individual. It can be seen as the dolphin equivalent of a name for humans. These signature whistles are developed during a dolphin's first year, and the dolphin maintains the same sound throughout its lifetime. Auditory experience influences the whistle development of each dolphin. Dolphins are able to address one another by mimicking the whistle of another individual. The signature whistle of a male bottlenose dolphin tends to be similar to that of his mother, while the signature whistle of a female bottlenose dolphin tends to be more distinctive. Bottlenose dolphins have a strong memory for these signature whistles, as they are able to recognise the signature whistle of an individual they have not encountered for over twenty years. Research on signature whistle usage by other dolphin species is relatively limited, and the work done so far has yielded varied and inconclusive results.
Sperm whales can produce three specific vocalisations: creaks, codas, and slow clicks. A creak is a rapid series of high-frequency clicks that sounds somewhat like a creaky door hinge. It is typically used when homing in on prey. A coda is a short pattern of 3 to 20 clicks that is used in social situations to identify one another (like a signature whistle), but it is still unknown whether sperm whales possess individually specific coda repertoires or whether individuals make codas at different rates. Slow clicks are heard only in the presence of males (it is not certain whether females occasionally make them). Males make a lot of slow clicks in breeding grounds (74% of the time), both near the surface and at depth, which suggests they are primarily mating signals. Outside breeding grounds, slow clicks are rarely heard, and usually near the surface.
Foraging and predation
All whales are carnivorous and predatory. Odontocetes, as a whole, mostly feed on fish and cephalopods, followed by crustaceans and bivalves. All species are generalist and opportunistic feeders. Some may forage with other kinds of animals, such as other species of whales or certain species of pinnipeds. One common feeding method is herding, where a pod squeezes a school of fish into a small volume, known as a bait ball. Individual members then take turns plowing through the ball, feeding on the stunned fish. Corralling is a method where dolphins chase fish into shallow water to catch them more easily. Orcas and bottlenose dolphins have also been known to drive their prey onto a beach to feed on it, a behaviour known as beach or strand feeding. The shape of the snout may correlate with tooth number and thus feeding mechanisms. The narwhal, with its blunt snout and reduced dentition, relies on suction feeding.
Sperm whales usually dive to great depths, and sometimes deeper still, in search of food. Such dives can last more than an hour. They feed on several species, notably the giant squid, but also the colossal squid, octopuses, and fish like demersal rays, though their diet is mainly medium-sized squid. Some prey may be taken accidentally while eating other items. A study in the Galápagos found that squid from the genera Histioteuthis (62%), Ancistrocheirus (16%), and Octopoteuthis (7%) were the most commonly taken. Battles between sperm whales and giant squid or colossal squid have never been observed by humans; however, white scars are believed to be caused by the large squid. A 2010 study suggests that female sperm whales may collaborate when hunting Humboldt squid.
The orca is known to prey on numerous other toothed whale species. One example is the false killer whale. To subdue and kill whales, orcas continually ram them with their heads; this can sometimes kill bowhead whales, or severely injure them. Other times, they corral their prey before striking. They are typically hunted by groups of 10 or fewer orca, but they are seldom attacked by an individual. Calves are more commonly taken by orca, but adults can be targeted, as well. Groups even attack larger cetaceans such as minke whales, gray whales, and rarely sperm whales or blue whales. Other marine mammal prey species include nearly 20 species of seal, sea lion and fur seal.
These cetaceans are targeted by terrestrial and pagophilic predators. The polar bear is well-adapted for hunting Arctic whales and calves. Bears are known to use sit-and-wait tactics, as well as active stalking and pursuit of prey on ice or water. Whales lessen the chance of predation by gathering in groups. This, however, means less room around the breathing hole as the ice slowly closes the gap. When out at sea, whales dive out of the reach of surface-hunting orca. Polar bear attacks on belugas and narwhals are usually successful in winter, but rarely inflict any damage in summer.
For most of the smaller species of dolphins, only a few of the larger sharks, such as the bull shark, dusky shark, tiger shark, and great white shark, are a potential risk, especially for calves. Dolphins can tolerate and recover from extreme injuries (including shark bites), although the exact methods used to achieve this are not known. The healing process is rapid and even very deep wounds do not cause dolphins to hemorrhage to death. Even gaping wounds heal in such a way that the animal's body shape is restored, and infections of such large wounds are rare.
Life cycle
Toothed whales are fully aquatic creatures, which means their birth and courtship behaviours are very different from terrestrial and semiaquatic creatures. Since they are unable to go onto land to calve, they deliver their young with the fetus positioned for tail-first delivery. This prevents the calf from drowning either upon or during delivery. To feed the newborn, toothed whales, being aquatic, must squirt the milk into the mouth of the calf. Being mammals, they have mammary glands used for nursing calves; they are weaned around 11 months of age. This milk contains high amounts of fat which is meant to hasten the development of blubber; it contains so much fat, it has the consistency of toothpaste. Females deliver a single calf, with gestation lasting about a year, dependency until one to two years, and maturity around seven to 10 years, all varying between the species. This mode of reproduction produces few offspring, but increases the survival probability of each one. Females, referred to as "cows", carry the responsibility of childcare, as males, referred to as "bulls", play no part in raising calves.
In orcas, false killer whales, short-finned pilot whales, narwhals, and belugas, there is an unusually long post-reproductive lifespan (menopause) in females. Older females, though unable to have their own children, play a key role in the rearing of other calves in the pod, and in this sense, given the costs of pregnancy especially at an advanced age, extended menopause is advantageous.
Interaction with humans
Threats
Sperm whaling
The head of the sperm whale is filled with a waxy liquid called spermaceti. This liquid can be refined into spermaceti wax and sperm oil. These were much sought after by 18th-, 19th-, and 20th-century whalers. These substances found a variety of commercial applications, such as candles, soap, cosmetics, machine oil, other specialized lubricants, lamp oil, pencils, crayons, leather waterproofing, rustproofing materials, and many pharmaceutical compounds.
Ambergris, a solid, waxy, flammable substance produced in the digestive system of sperm whales, was also sought as a fixative in perfumery.
Sperm whaling in the 18th century began with small sloops carrying only a pair of whaleboats (sometimes only one). As the scope and size of the fleet increased, so did the rig of the vessels change, as brigs, schooners, and finally ships and barks were introduced. In the 19th century, stubby, square-rigged ships (and later barks) dominated the fleet, being sent to the Pacific (the first being the British whaleship Emilia, in 1788), the Indian Ocean (1780s), and as far away as the Japan grounds (1820) and the coast of Arabia (1820s), as well as Australia (1790s) and New Zealand (1790s).
Hunting for sperm whales during this period was notoriously dangerous for the crews of the 19th-century whaleboats. Though a properly harpooned sperm whale generally exhibited a fairly consistent pattern of trying to flee underwater to the point of exhaustion (at which point it would surface and offer no further resistance), it was not uncommon for bull whales to become enraged and turn to attack pursuing whaleboats on the surface, particularly if it had already been wounded by repeated harpooning attempts. A commonly reported tactic was for the whale to invert itself and violently thrash the surface of the water with its fluke, flipping and crushing nearby boats.
The estimated historic worldwide sperm whale population numbered 1,100,000 before commercial sperm whaling began in the early 18th century. By 1880, it had declined an estimated 29%. From that date until 1946, the population appears to have recovered somewhat as whaling pressure lessened, but after the Second World War, with the industry's focus again on sperm whales, the population declined even further, to only about 33% of its pre-whaling level. In the 19th century, between 184,000 and 236,000 sperm whales were estimated to have been killed by the various whaling nations, while in the modern era, at least 770,000 were taken, most between 1946 and 1980. Remaining sperm whale populations are large enough so that the species' conservation status is vulnerable, rather than endangered. However, the recovery from the whaling years is a slow process, particularly in the South Pacific, where the toll on males of breeding age was severe.
Drive hunting
Dolphins and porpoises are hunted in an activity known as dolphin drive hunting. This is done by driving a pod together with boats and usually into a bay or onto a beach. Their escape is prevented by closing off the route to the ocean with other boats or nets. Dolphins are hunted this way in several places around the world, including the Solomon Islands, the Faroe Islands, Peru, and Japan, the most well-known practitioner of this method. By numbers, dolphins are mostly hunted for their meat, though some end up in dolphinariums. Despite the controversial nature of the hunt resulting in international criticism, and the possible health risk that the often polluted meat causes, thousands of dolphins are caught in drive hunts each year.
In Japan, the hunting is done by a select group of fishermen. When a pod of dolphins has been spotted, they are driven into a bay by the fishermen, who bang on metal rods in the water to scare and confuse the dolphins. When the dolphins are in the bay, it is quickly closed off with nets so the dolphins cannot escape. The dolphins are usually not caught and killed immediately, but instead left to calm down overnight. The following day, the dolphins are caught one by one and killed. The killing of the animals used to be done by slitting their throats, but the Japanese government banned this method, and now dolphins may officially only be killed by driving a metal pin into the neck of the dolphin, which causes them to die within seconds, according to a memo from Senzo Uchida, the executive secretary of the Japan Cetacean Conference on Zoological Gardens and Aquariums. A veterinary team's analysis of 2011 video footage of Japanese hunters killing striped dolphins using this method suggested that, in one case, death took over four minutes.
Since much of the criticism is the result of photos and videos taken during the hunt and slaughter, it is now common for the final capture and slaughter to take place on site inside a tent or under a plastic cover, out of sight from the public. The most circulated footage is probably that of the drive and subsequent capture and slaughter process taken in Futo, Japan, in October 1999, shot by the Japanese animal welfare organization Elsa Nature Conservancy. Part of this footage was, amongst others, shown on CNN. In recent years, the video has also become widespread on the internet and was featured in the animal welfare documentary Earthlings, though the method of killing dolphins as shown in this video is now officially banned. In 2009, a critical documentary on the hunts in Japan titled The Cove was released and shown amongst others at the Sundance Film Festival.
Other threats
Toothed whales can also be threatened by humans more indirectly. They are unintentionally caught in fishing nets by commercial fisheries as bycatch and accidentally swallow fishhooks. Gillnetting and seine netting are significant causes of mortality in cetaceans and other marine mammals. Porpoises are commonly entangled in fishing nets. Whales are also affected by marine pollution. High levels of organic chemicals accumulate in these animals since they are high in the food chain. Toothed whales accumulate more of these pollutants than baleen whales, as they are higher up the food chain and have large reserves of blubber in which the chemicals are stored. Lactating mothers can pass the toxins on to their young. These pollutants can cause gastrointestinal cancer and greater vulnerability to infectious diseases. They may also swallow litter, such as plastic bags. Pollution of the Yangtze River has led to the extinction of the baiji. Environmentalists speculate that advanced naval sonar endangers some whales. Some scientists suggest that sonar may trigger whale beachings, and they point to signs that such whales have experienced decompression sickness.
Conservation
Currently, no international convention gives universal coverage to all small whales, although the International Whaling Commission has attempted to extend its jurisdiction over them. ASCOBANS was negotiated to protect all small whales in the North and Baltic Seas and in the northeast Atlantic. ACCOBAMS protects all whales in the Mediterranean and Black Seas. The global UNEP Convention on Migratory Species currently covers seven toothed whale species or populations on its Appendix I, and 37 species or populations on Appendix II. All oceanic cetaceans are listed in CITES appendices, meaning international trade in them and products derived from them is very limited.
Many organizations are dedicated to protecting certain species that do not fall under any international treaty, such as CIRVA (Committee for the Recovery of the Vaquita), and the Wuhan Institute of Hydrobiology (for the Yangtze finless porpoise).
In captivity
Species
Various species of toothed whales, mainly dolphins, are kept in captivity, as well as several species of porpoise such as harbour porpoises and finless porpoises. These small cetaceans are more often than not kept in theme parks, such as SeaWorld, commonly known as dolphinariums. Bottlenose dolphins are the most common species kept in dolphinariums, as they are relatively easy to train, have a long lifespan in captivity, and have a friendly appearance. Hundreds if not thousands of bottlenose dolphins live in captivity across the world, though exact numbers are hard to determine. Orcas are well known for their performances in shows, but the number kept in captivity is very small, especially when compared to the number of bottlenose dolphins, with only 44 captives held in aquaria as of 2012. Other species kept in captivity include spotted dolphins, false killer whales, common dolphins, Commerson's dolphins, and rough-toothed dolphins, but all in much lower numbers than the bottlenose dolphin. Also, fewer than ten pilot whales, Amazon river dolphins, Risso's dolphins, spinner dolphins, or tucuxi are in captivity. Two unusual and very rare hybrid dolphins, known as wolphins (crosses between a bottlenose dolphin and a false killer whale), are kept at the Sea Life Park in Hawaii. Also, two common/bottlenose hybrids reside in captivity: one at Discovery Cove and the other at SeaWorld San Diego.
Controversy
Organizations such as the Animal Welfare Institute and Whale and Dolphin Conservation campaign against the captivity of dolphins and orcas. SeaWorld faced a lot of criticism after the documentary Blackfish was released in 2013.
Aggression among captive orca is common. In August 1989, a dominant female orca, Kandu V, tried to rake a newcomer whale, Corky II, with her mouth during a live show, and smashed her head into a wall. The collision broke Kandu V's jaw and severed an artery, and she bled to death. In November 2006, a dominant female killer whale, Kasatka, repeatedly dragged experienced trainer Ken Peters to the bottom of the stadium pool during a show after hearing her calf crying for her in the back pools. In February 2010, an experienced female trainer at SeaWorld Orlando, Dawn Brancheau, was killed by the orca Tilikum shortly after a show in Shamu Stadium. Tilikum had previously been associated with the deaths of two people. In May 2012, Occupational Safety and Health Administration administrative law judge Ken Welsch cited SeaWorld for two violations in the death of Dawn Brancheau and fined the company a total of US$12,000. Trainers were banned from making close contact with the orca. In April 2014, the US Court of Appeals for the District of Columbia denied an appeal by SeaWorld.
In 2013, SeaWorld's treatment of orca in captivity was the basis of the movie Blackfish, which documents the history of Tilikum, an orca captured by SeaLand of the Pacific, later transported to SeaWorld Orlando, which has been involved in the deaths of three people. In the aftermath of the release of the film, Martina McBride, 38 Special, REO Speedwagon, Cheap Trick, Heart, Trisha Yearwood, and Willie Nelson cancelled scheduled concerts at SeaWorld parks. SeaWorld disputes the accuracy of the film, and in December 2013 released an ad countering the allegations and emphasizing its contributions to the study of cetaceans and their conservation.
| Biology and health sciences | Toothed whale | Animals |
326872 | https://en.wikipedia.org/wiki/Draco%20%28lizard%29 | Draco (lizard) | Draco is a genus of agamid lizards that are also known as flying lizards, flying dragons or gliding lizards. These lizards are capable of gliding flight via membranes that may be extended to create wings (patagia), formed by an enlarged set of ribs. They are arboreal insectivores.
While not capable of powered flight, they often obtain lift in the course of their gliding flights. Glides over which the animal loses relatively little height have been recorded, making for a glide ratio of 6:1, and this is achieved by a lizard that is quite small in total length, tail included. They are found across Southeast Asia and Southern India and are fairly common in forests, areca gardens, teak plantations and shrub jungle.
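For context (a definitional note rather than a figure from the article), the glide ratio quoted above is simply the horizontal distance travelled divided by the height lost during the glide:
\[ \text{glide ratio} = \frac{d_{\text{horizontal}}}{h_{\text{lost}}}, \]
so a ratio of 6:1 means the lizard moves roughly six metres forward for every metre of descent.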
History of discovery
Carl Linnaeus described the genus in 1758, with the type species being Draco volans. The name of the genus is from the Latin term for dragons of mythology. In the early and mid 20th century, there was controversy about their gliding capabilities, with some authors suggesting that the patagia were solely for display, but research in the late 1950s firmly established the gliding function of the patagia.
Distribution
Species of Draco are widely distributed in the forests of Southeast Asia, with one species, Draco dussumieri, inhabiting Southern India.
Habitat and ecology
Members of Draco are primarily arboreal, inhabiting tropical rainforests, and are almost never found on the forest floor. They are insectivorous, primarily feeding on eusocial insects such as ants and termites. The colour of the patagium is strongly correlated to the colour of falling leaves in their range, which complements their cryptic camouflage resembling tree bark; both are likely to be camouflage against predatory birds.
Gliding
The lizards are well known for their "display structures" and ability to glide long distances using their wing-like, patagial membranes supported by elongated thoracic ribs to generate lift forces. The hindlimbs in cross section form a streamlined and contoured airfoil, and are also probably involved in generating lift. Gliding is both used to escape predators, and as the primary means of moving through their forest habitat. The folding and unfolding of the membrane is controlled by the iliocostalis and intercostal muscles, which in other lizards are used to control breathing. At takeoff, the lizard jumps and descends headfirst, orientating itself so that the underside of the body is parallel to the ground. During flight, the back arches, forming the patagium into a cambered surface, and the forelimbs grab the front of the patagium, forming a straight front edge to the aerofoil. The forelimbs are used to manipulate the patagium in order to adjust the trajectory during flight. Maximum gliding speeds have been found to be between 5.2 and 7.6 metres per second, depending on the species. During the landing process, the glide is mostly horizontal. Immediately before landing, the forelimbs release the patagium. The landing is forefeet-first, followed by hindfeet. The shape of the gliding membrane does not correlate with body size, meaning the larger species have proportionately less lift-generating surface area and consequently higher wing loading.
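The remark about wing loading follows from a simple isometric scaling argument (sketched here as a general principle, under the assumption that body proportions remain roughly constant across species): patagium area grows with the square of linear body size L while body weight grows with its cube, so
\[ \text{wing loading} = \frac{W}{S} \propto \frac{L^{3}}{L^{2}} = L, \]
meaning that larger species support more weight per unit of lift-generating membrane area.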
Life history
Draco lizards are highly territorial, with the home range consisting of one or a few trees. The trees are actively guarded by males, with territory-less males searching the forest landscape in search of vacant areas. Experimental studies have determined that suitable unoccupied territories were claimed within a few hours of the removal of a dominant male. Females move freely through the territories. The patagium is used as a display structure during courtship and territorial disputes between rival males, alongside the opening of a brightly-colored dewlap that contrasts with their camouflaged body scalation. The dewlap is translucent, and deliberately orientated perpendicular to the orientation of the sun during display in order to enhance visibility. Draco is sexually dimorphic, with females being larger than males. The only time a female flying lizard ventures to the ground is when she is ready to lay her eggs. She descends the tree she is on and makes a nest hole by forcing her head into the soil. She then lays a clutch of 2–5 eggs before filling the hole and guards the eggs for approximately 24 hours, but then leaves and has nothing more to do with her offspring.
Phylogenetics
Within Agamidae, Draco is a member of the subfamily Draconinae. Within Draconinae, Draco is most closely related to the genera Japalura and Ptyctolaemus.
Species
The following 41 species are recognized:
Draco abbreviatus – Singapore flying dragon
Draco beccarii
Draco biaro – Lazell's flying dragon
Draco bimaculatus – two-spotted flying lizard
Draco blanfordii – Blanford's flying dragon, Blanford's flying lizard, Blanford's gliding lizard
Draco boschmai
Draco caerulhians
Draco cornutus
Draco cristatellus – crested flying dragon
Draco cyanopterus
Draco dussumieri – Indian flying lizard, Western Ghats flying lizard, southern flying lizard
Draco fimbriatus – fringed flying dragon, crested gliding lizard
Draco formosus – dusky gliding lizard
Draco guentheri – Günther's flying lizard, Guenther's flying lizard
Draco haematopogon – red-bearded flying dragon, yellow-bearded gliding lizard
Draco indochinensis – Indochinese flying lizard, Indochinese gliding lizard
Draco iskandari
Draco jareckii
Draco lineatus – lined flying dragon
Draco maculatus – spotted flying dragon
Draco maximus – great flying dragon, giant gliding lizard
Draco melanopogon – black-bearded gliding lizard, black-barbed flying dragon
Draco mindanensis – Mindanao flying dragon, Mindanao flying lizard
Draco modiglianii – lined flying dragon
Draco norvillii – Norvill's flying lizard
Draco obscurus – dusky gliding lizard
Draco ornatus – white-spotted flying lizard
Draco palawanensis
Draco punctatus – punctate flying dragon
Draco quadrasi – Quadras's flying lizard
Draco quinquefasciatus – five-lined flying dragon, five-banded gliding lizard
Draco reticulatus
Draco rhytisma
Draco spilonotus – Sulawesi lined gliding lizard
Draco spilopterus – Philippine flying dragon
Draco sumatranus – common gliding lizard
Draco supriatnai
Draco taeniopterus – Thai flying dragon, barred flying dragon, barred gliding lizard
Draco timoriensis – Timor flying dragon
Draco volans – common flying dragon
Draco walkeri
Nota bene: a binomial authority in parentheses indicates that the species was originally described in a genus other than Draco.
Similar prehistoric reptiles
Several other lineages of reptile known from the fossil record have convergently evolved similar gliding mechanisms consisting of a patagium or plate flanking the torso; the weigeltisaurids are the oldest of these, living in the Late Permian from around 258 to 252 million years ago. Other lineages include the Triassic kuehneosaurids and Mecistotrachelos, and the Cretaceous lizard Xianglong.
| Biology and health sciences | Iguania | Animals |
326877 | https://en.wikipedia.org/wiki/Hydrosaurus | Hydrosaurus | Hydrosaurus, commonly known as the sailfin dragons or sailfin lizards, is a genus in the family Agamidae. These relatively large lizards are named after the sail-like structure on their tails. They are native to Indonesia (4 species) and the Philippines (1 species) where they are generally found near water, such as rivers and mangrove. Sailfin lizards are semiaquatic and able to run short distances across water using both their feet and tail for support, similar to the basilisks. They are threatened by both habitat loss and overcollection for the wild animal trade.
In the 19th century, the genus was called Lophura; however, in 1903 Poche pointed out that the name was preoccupied by a genus of pheasants. Since Günther in 1873, the Sulawesi populations had been considered to belong to H. amboinensis; Denzer et al. in 2020 resurrected H. celebensis and H. microlophus, increasing the number of species from three to five.
They are the only members of the subfamily Hydrosaurinae.
Species
There are currently five valid species according to the Reptile Database.
| Biology and health sciences | Iguania | Animals |
326896 | https://en.wikipedia.org/wiki/Pogona | Pogona | Pogona is a genus of reptiles containing eight lizard species, which are often known by the common name bearded dragons or informally (especially in Australia) beardies. The name "bearded dragon" refers to the underside of the throat (or "beard") of the lizard, which can turn black and become inflated for a number of reasons, most often as a result of stress, if they feel threatened, or are trying to entice a mate. They are a semiarboreal species, spending significant amounts of time on branches, in bushes, and near human habitation. Pogona species bask on rocks and exposed branches in the mornings and afternoons and sleep at night, making them a diurnal species. Their diet consists primarily of vegetation and some insects. They are found throughout much of Australia and inhabit environments such as deserts and shrublands.
The genus Pogona is in the subfamily Amphibolurinae of the lizard group Agamidae. Bearded dragons are characterized by their broad, triangular heads, flattened bodies, and rows and clusters of spiny scales covering their entire bodies. When threatened, bearded dragons puff up their bodies and beards to ward off predators and make their somewhat dull spikes seem more dangerous. Bearded dragons display a hand-waving gesture to show submission (most often when acknowledging another bearded dragon's territory), and a head-bobbing display to show dominance between dragons. Some have the ability to slightly change color in response to certain stimuli, including rivalry challenges between males and ambient temperature changes (e.g., turning black to absorb heat). Bearded dragons occur in a variety of colors and morphs and can range from being all dark to completely white under controlled breeding conditions. Males grow larger than females.
Bearded dragons live in the woodlands, heaths, deserts and coastal dunes, with their range extending throughout the interior of the eastern states to the eastern half of South Australia and southeastern Northern Territory. They are considered to be semiarboreal and quite readily climb and bask at height. This is also linked to dominance behavior and competition for territory/basking areas. They can be found on fallen/broken trees, rocky outcrops, and bushes when basking. Many of the Australian locals have spotted bearded dragons on fence posts and elevated rocky areas. At night, they prefer to dig holes to sleep in, climb in trees, or submerge themselves in rocks and like to climb into the cracks and crevices of stones and caves.
Bearded dragons go through a type of dormancy called brumation in which, like hibernation, reptiles go months without eating but sporadically drink water. Reptiles may also go dormant in the hottest temperatures, but this differs from brumation, which occurs during cooler temperatures. In extreme heat, only a very small range separates the temperatures at which the reptiles' bodies can stay active from those their bodies cannot tolerate, at which they die. Bearded dragons go through brumation when the temperature falls below 15.5–21.0°C (60–70°F) during the night and 24.0–26.5°C (75–80°F) during the day for 8–10 hours. When the climate is too hot, they will often burrow underground. They will also form more permanent burrows or covered hiding places to use as protection from the climate changes at night and predation.
Behavior
Adult bearded dragons are very territorial. As they grow, they establish territories in which displays of aggression and appeasement form a normal part of their social interactions. A dominant male adopts a dominant stance and sometimes readies himself for a fight to attack a male aggressor to defend territory or food sources, or in competition for a female. Any male approaching without displaying submissive behavior is seen as a challenge for territory. Aggressive males have even been known to attack females that do not display submissive gestures in return.
Correspondingly, adult male bearded dragons can bite more forcefully than adult females, which is associated with greater head dimensions.
The bearded dragon occurs in many different colors. The beard itself is used for mating and aggression displays, as well as heat management. It forms part of a range of gestures and signals through which the dragons have basic levels of communication. Both sexes have a beard, but males display more frequently, especially in courtship rituals. Females also display their beards as a sign of aggression. The beard darkens, sometimes turning jet black, and inflates during the display. The bearded dragon may also open its mouth and gape in addition to inflating its beard to appear more intimidating. When threatened by a predator, extreme behavior such as hissing can be observed, with the lizard inflating its body and tilting towards the threat in defense. Bearded dragons have relatively strong jaws, but often only attack as a last resort when threatened outside of competition with their own species.
Head bobbing is another behavior seen in both females and males; they quickly move their heads up and down, often darkening and flaring their beards. Changes in the pace of head bobbing are thought to be a form of communication. Males head bob to impress females, and a male often has to demonstrate his dominance when attempting to mate before the female will concede. Smaller males often respond to a larger male's head bobbing by arm waving, which is a submissive sign. Females also arm wave to avoid aggression, often in response to a male's head bobbing. Female bearded dragons have been seen lowering themselves towards the ground and intermittently arm waving whilst moving away from a dominant male in an attempt to either appease or escape.
The bearded dragon has also been shown to perceive illusion, specifically the Delboeuf illusion. In an experiment at the University of Padova, bearded dragons were presented with two different-sized plates with the same amount of food. The bearded dragons chose the smaller plate more often than they chose the larger one, showing that they were able to perceive the illusion and interpret that a larger plate does not always mean more food. This is the first evidence of this behavior being shown in a reptile species.
Reproduction
When brumation comes to an end, the male bearded dragon goes out to find a mate. A courtship ritual occurs where the male starts bobbing his head, waving his arms, and stomping his feet in front of the female. The male chases the female and bites the back of her neck and holds on while he gets in position to copulate.
During the breeding period, female bearded dragons can store sperm in their oviductal crypts. This allows a female to lay two clutches of 11–30 eggs from a single mating.
Bearded dragons exhibit temperature-dependent sex determination: while the embryo is developing, higher temperatures cause dragons with a male genotype to undergo sex reversal and express a female phenotype. This produces a bearded dragon that is female but still has a male genotype. Sufficiently high incubation temperatures can cause sex reversal, and the likelihood of sex reversal correlates positively with temperature up to 36°C; incubation temperatures below 31°C do not trigger sex reversal. Female bearded dragons with a male genotype differ little from genotypic females. In one study of bite force, male bearded dragons bit more forcefully than both genotypic and sex-reversed females, but no difference was found between genotypic and sex-reversed females.
Like many other reptile species (and what is most often observed in birds), females are capable of laying eggs even without fertilization. These eggs appear slightly smaller and softer, and contain a yellow yolk when broken open.
Congenital defects
During the development of an embryo, abnormalities may result in birth defects. These abnormalities might be caused by chromosomal disorders, chemicals, or other genetic or environmental factors.
Bicephalism is when a bearded dragon is born with two heads and one body.
Anasarca is when a bearded dragon is swollen within the egg. Observing eggs in the incubator, an anasarca egg appears to be sweating. The cause of this is not known.
Schistosomus reflexus is a condition in which the organs of a bearded dragon develop outside of the body.
Spinal and limb defects are abnormalities in the spine, tail, limbs, or toes. This occurs with nutritional deficiencies, trauma, or temperature issues during the development of the affected area.
Microphthalmia/anophthalmia is when a bearded dragon is born with small or no eye(s). The cause of this defect is a traumatic event or an environmental event that occurred during the development of the eyes.
Hermaphroditism is when the reproductive organs of both male and female are present. Bearded dragons born with both reproductive organs are infertile.
In captivity
The central bearded dragon is the most common species in captivity, as well as one of the most popular pet reptiles, with some smaller species such as Pogona henrylawsoni used as substitutes where less housing space is available. Introduced into the U.S. as pets during the 1990s, bearded dragons have gained much popularity as an exotic pet. This popularity has been sustained even though Australia banned the sale of its wildlife as pets in the 1960s.
Generally, the bearded dragon is a solitary animal. Males are usually housed alone, as they fight with other males and breed with females. With good care, captive adults typically live for about 10 to 15 years; the current longevity record is 18 years.
Through selective breeding, many different versions of the central bearded dragon have been developed, referred to as "morphs". A few main genetic traits, including "hypomelanism" and "translucent", describe characteristics physically displayed by the dragon. Bearded dragons with hypomelanism tend to have lighter and more vibrant coloration. Translucents have less opaque skin, making their colors appear stronger, and have black eyes. In addition, "leatherbacks" have reduced scale texture giving smoother skin, "silkbacks" have softer outer skin, and "German giants" are larger than average. Silkbacks in particular require special care, as their far more delicate skin gives them different UV and humidity requirements, and they tend to live shorter lives.
Common health issues
Although bearded dragons are fairly resilient to illness, improper care can potentially kill a bearded dragon. Some health issues that bearded dragons may have include metabolic bone disease, adenovirus, impaction, polarisation, dystocia, Yellow Fungus Disease and parasites. The majority of health issues bearded dragons face in captivity are due to poor diet and inadequate heat and lighting.
Metabolic bone disease
Metabolic bone disease (MBD) is a collective term for several common diseases and illnesses that can be fatal, and it is probably the most common health problem of bearded dragons. Its main feature is a weakening and possible deformation of the skeletal structure. It occurs in bearded dragons as a result of malnutrition or improper lighting, which leave the animal without enough calcium in its diet or unable to properly assimilate the calcium it does receive. Most bearded dragons in captivity are given calcium supplementation, and all need a UVB light to enable them to use the calcium in their diet; without it, their bodies draw calcium from their bones, weakening them. Typical foods such as kale, mustard greens, and collard greens are high in calcium and should be fed daily along with other leafy greens and vegetables for a well-balanced diet. Symptoms of MBD include bumps on the legs, twitches or tremors, bumps along the spine or tail, a swollen lower jaw, and jerky movements.
Hypocalcemia
Hypocalcemia occurs when there are low levels of calcium in the bearded dragon's blood. Hypocalcemia is most often tied to metabolic bone disease. Low levels of calcium can result in twitching muscles, or seizures. Hypocalcemia is most often seen in young bearded dragons, as they are slightly more fragile than adults. Maintaining a diet that consists of enough calcium is crucial to avoiding hypocalcemia as well as metabolic bone disease.
Impaction
Impaction often occurs when bearded dragons are fed food that is too large for them; they will try to eat worms or crickets that are oversized, which can be extremely harmful. For a young dragon, food should be no bigger than the space between its eyes. Older dragons can generally cope with larger insects, but not oversized prey. If a dragon eats food that is too big for it, pressure is put on its spinal cord during digestion; this can lead to impaction, which can be fatal. Another cause of impaction in captivity is ingestion of the substrate, commonly sand or other loose substrates.
Upper respiratory infection (URI)
In bearded dragons, a respiratory infection (RI) is caused by bacteria infecting the lungs. Bearded dragons develop respiratory infections for a number of reasons, such as incorrect lighting and temperature, high humidity, prolonged psychological stress, and poor captive conditions.
Atadenovirus
Atadenovirus (ADV), also referred to as adenovirus, can be deadly. ADV can be spread between reptiles through contact alone. Most juvenile ADV-positive bearded dragons do not live past 90 days. While ADV-positive adults will live longer, they eventually contract liver diseases. Common symptoms of ADV-positive bearded dragons include stunted growth and slow weight gain. Because of their compromised immune systems, ADV-positive bearded dragons may be infected with intestinal parasites.
Lighting
Bearded dragons require UVB to enable vitamin D3 synthesis and to prevent illnesses like metabolic bone disease. Vitamin D3 is essential to calcium absorption, with calcium playing a major role in various critical biological functions. Bearded dragons also require UVA, which stimulates feeding, breeding, basking and overall health. They also require a basking heat source, most commonly a light-emitting source, to provide a basking area. Heat and UV are both vital to the bearded dragons' biological function.
Species
The following six species are recognised as being valid.
Pogona barbata – Eastern bearded dragon
Pogona henrylawsoni – Rankin's dragon, Lawson's dragon, black-soil bearded dragon, dumpy dragon, dwarf bearded dragon
Pogona microlepidota – Kimberley bearded dragon, Drysdale River bearded dragon
Pogona minor – Western bearded dragon, dwarf bearded dragon
Pogona nullarbor – Nullarbor bearded dragon
Pogona vitticeps – Central bearded dragon or inland bearded dragon
Nota bene: A binomial authority in parentheses indicates that the species was originally described under a different binomial.
Gallery
| Biology and health sciences | Iguania | Animals |
326914 | https://en.wikipedia.org/wiki/Uromastyx | Uromastyx | Uromastyx is a genus of lizards in the family Agamidae. The genus is native to Africa and the Middle East (West Asia). Member species are commonly called spiny-tailed lizards, uromastyces, mastigures, or dabb lizards.
Lizards in the genus Uromastyx are primarily herbivorous, but occasionally eat insects and other small animals, especially young lizards. They spend most of their waking hours basking in the sun, hiding in underground chambers at night time or when danger appears. They tend to establish themselves in hilly, rocky areas with good shelter and accessible vegetation.
Taxonomy
The generic name Uromastyx is derived from the Ancient Greek words ourá (οὐρά) meaning "tail" and -mastix (μάστιξ) meaning "whip" or "scourge", after the thick-spiked tail characteristic of all Uromastyx species.
Species
The following species are in the genus Uromastyx. Three additional species were formerly placed in this genus, but have been moved to their own genus, Saara.
Nota bene: A binomial authority in parentheses indicates that the species was originally described in a genus other than Uromastyx.
Description
Uromastyx species vary considerably in size, from the relatively small U. macfadyeni to U. aegyptia, the largest in the genus; hatchlings or neonates are far smaller still. Like many reptiles, these lizards' colors change according to the temperature and season. During cool weather they appear dull and dark, but the colors become lighter in warm weather, especially when basking; the darker pigmentation allows their skin to absorb sunlight more effectively.
Their spiked tail is muscular and heavy and can be swung at an attacker with great velocity, usually accompanied by hissing and an open-mouthed display of (small) teeth. Uromastyx generally sleep in their burrows with their tails closest to the opening, in order to thwart intruders.
Distribution
Uromastyx inhabit a range stretching through most of North and Northeast Africa and the Middle East, ranging as far east as Iran. Species found further east are now placed in the genus Saara. Uromastyx occur at elevations from sea level up to considerable altitudes. They are regularly eaten, and sold in produce markets, by local peoples.
Diet
Uromastyx lizards acquire most of the water they need from the vegetation they ingest. In the wild they generally eat any surrounding vegetation. When hatching, baby Uromastyx eat their own mother's feces as their first meal before heading off to find a more sustainable food source. They do this to establish a proper gut flora, essential for digesting the plants that they eat.
In the wild, adult U. dispar maliensis have been reported to eat insects at certain times of the year, when it is hot and their only food source available would be insects.
Reproduction
A female Uromastyx can lay anywhere from 5 to 40 eggs, depending on age and species. Eggs are laid approximately 30 days following copulation, with an incubation time of 70–80 days. The neonates are small at hatching and rapidly gain weight during the first few weeks following hatching.
A field study in Algeria concluded that Moroccan spiny-tailed lizards add a roughly constant amount of total growth each year until around the age of 8–9 years.
Wild female Uromastyx are smaller and less colorful than males. For example, U. dispar maliensis females are often light tan with black dorsal spots, while males are mostly bright yellow with mottled black markings. Females also tend to have shorter claws. In captivity, female U. dispar maliensis tend to mimic males in color. U. dispar maliensis are, therefore, reputedly difficult to breed in captivity.
Relationship with humans
Captivity
Uromastyx are removed from the wild in an unregulated manner for the pet and medicinal trade in Morocco, despite their protected status in the country; conditions of the animals while being sold is often extremely poor and overcrowding is common. Historically, captive Uromastyx had a poor survival rate, due to a lack of understanding of their dietary and environmental needs. In recent years, knowledge has significantly increased, and appropriate diet and care has led to survival rates and longevity approaching and perhaps surpassing those in the wild. With good care, they are capable of living for over 25 years, and possibly as old as 60.
Consumption by humans
U. dispar maliensis, known as "ḍabb" by peninsular Arabs, has historically been consumed as food by some of the Bedouin population of the Arabian Peninsula, mainly those residing in the interior and eastern regions of Arabia. This lizard used to be considered an "Arabian delicacy". It is recorded that when an Uromastyx was brought to the Islamic prophet Muhammad by Bedouins, Muhammad did not eat the lizard but did not prohibit Muslims from consuming it; thus his companion Khalid bin Walid ate the lizard.
In Judaism, this lizard is traditionally identified as the biblical tzav, one of the eight "creeping" animals forbidden for consumption that impart ritual impurity. The Torah states: “The following shall be impure for you among the creeping animals that swarm upon the earth: The weasel, and the mouse, and the dab lizard (tzav) of every variety; and the gecko, and the land-crocodile, and the lizard, and the skink, and the chameleon” (Leviticus 11:29-30).
| Biology and health sciences | Iguania | Animals |
326971 | https://en.wikipedia.org/wiki/Fluid%20ounce | Fluid ounce | A fluid ounce (abbreviated fl oz, fl. oz. or oz. fl., old forms ℥, fl ℥, f℥, ƒ ℥) is a unit of volume (also called capacity) typically used for measuring liquids. The British Imperial, the United States customary, and the United States food labeling fluid ounce are the three that are still in common use, although various definitions have been used throughout history.
An imperial fluid ounce is 1/20 of an imperial pint, 1/160 of an imperial gallon, or exactly 28.4130625 mL.
A US customary fluid ounce is 1/16 of a US liquid pint and 1/128 of a US liquid gallon, or exactly 29.5735295625 mL, making it about 4.08% larger than the imperial fluid ounce.
A US food labeling fluid ounce is exactly 30 mL.
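To make the relative sizes concrete, the following short Python sketch (illustrative only; the constants are simply the exact millilitre values quoted above) compares the three definitions:

```python
# Exact millilitre values of the three fluid-ounce definitions quoted above.
IMPERIAL_FL_OZ_ML = 28.4130625       # imperial fluid ounce
US_FL_OZ_ML = 29.5735295625          # US customary fluid ounce
US_LABEL_FL_OZ_ML = 30.0             # US food labeling fluid ounce

# The US customary unit is about 4.08% larger than the imperial unit.
pct_larger = (US_FL_OZ_ML / IMPERIAL_FL_OZ_ML - 1) * 100
print(f"US customary fl oz is {pct_larger:.2f}% larger than the imperial fl oz")

def ml_to_fl_oz(volume_ml: float, unit_ml: float) -> float:
    """Express a volume in millilitres as fluid ounces of the chosen definition."""
    return volume_ml / unit_ml

# Example: a 330 mL volume expressed in each definition.
for name, unit in [("imperial", IMPERIAL_FL_OZ_ML),
                   ("US customary", US_FL_OZ_ML),
                   ("US labeling", US_LABEL_FL_OZ_ML)]:
    print(f"330 mL = {ml_to_fl_oz(330, unit):.2f} {name} fl oz")
```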
Comparison to the ounce
The fluid ounce is distinct from the (international avoirdupois) ounce as a unit of weight or mass, although it is sometimes referred to simply as an "ounce" where context makes the meaning clear (e.g., "ounces in a bottle"). A volume of pure water measuring one imperial fluid ounce has a mass of almost exactly one ounce.
Definitions and equivalences
Imperial fluid ounce
1 imperial fluid ounce
= 1/160 imperial gallon
= 1/40 imperial quart
= 1/20 imperial pint
= 1/10 imperial cup
= 1/5 imperial gill
= 8 imperial fluid drams
= 28.4130625 millilitres
≈ 1.7339 cubic inches
≈ 0.9608 US fluid ounces
≈ the volume of 1 avoirdupois ounce of water
US customary fluid ounce
1 US fluid ounce
= 1/128 US gallon
= 1/32 US quart
= 1/16 US pint
= 1/8 US cup
= 1/4 US gill
= 2 US tablespoons
= 6 US teaspoons
= 8 US fluid drams
= 1.8046875 cubic inches
= 29.5735295625 millilitres
≈ 1.0408 imperial fluid ounces
US food labeling fluid ounce
For serving sizes on nutrition labels in the US, regulation 21 CFR §101.9(b) requires the use of "common household measures", and 21 CFR §101.9(b)(5)(viii) defines a "common household" fluid ounce as exactly 30 milliliters. This applies to the serving size but not the package size; package sizes use the US customary fluid ounce.
30 millilitres
≈ 1.0559 imperial fluid ounces
≈ 1.0144 US customary fluid ounces
≈ 1.8307 cubic inches
History
The fluid ounce was originally the volume occupied by one ounce of some substance, for example wine (in England) or water (in Scotland). The ounce in question also varied depending on the system of fluid measure, such as that used for wine versus ale.
Various ounces were used over the centuries, including the Tower ounce, troy ounce, avoirdupois ounce, and ounces used in international trade, such as Paris troy, a situation further complicated by the medieval practice of "allowances", whereby a unit of measure was not necessarily equal to the sum of its parts. For example, an allowance might be made for the weight of the sack and other packaging materials.
In 1824, the British Parliament defined the imperial gallon as the volume of ten pounds of water at standard temperature. The gallon was divided into four quarts, the quart into two pints, the pint into four gills, and the gill into five ounces; thus, there were 160 imperial fluid ounces to the gallon.
This made the mass of a fluid ounce of water one avoirdupois ounce (28.35 g), a relationship which remains approximately valid today despite the imperial gallon's definition being slightly revised to be 4.54609 litres (thus making the imperial fluid ounce exactly 28.4130625 mL).
The US fluid ounce is based on the US gallon, which in turn is based on the wine gallon of 231 cubic inches that was used in the United Kingdom prior to 1824. With the adoption of the international inch, the US fluid ounce became 1/128 gal × 231 in³/gal × (2.54 cm/in)³ = 29.5735295625 mL exactly, or about 4% larger than the imperial unit.
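Both derivations are easy to verify numerically. A minimal Python check, using only the gallon definitions quoted above (the ~1 g/mL water density in the last step is an assumed round figure for the avoirdupois comparison, and floating-point output may show slight rounding):

```python
# Imperial: the gallon is defined as exactly 4.54609 litres and contains
# 160 imperial fluid ounces.
imperial_fl_oz_ml = 4.54609 * 1000 / 160
print(imperial_fl_oz_ml)            # 28.4130625 mL (exact by definition)

# US: the wine gallon of 231 cubic inches, with the international inch of
# exactly 2.54 cm, divided into 128 US fluid ounces.
us_fl_oz_ml = 231 * 2.54 ** 3 / 128
print(us_fl_oz_ml)                  # 29.5735295625 mL (exact by definition)

# An imperial fluid ounce of water at roughly 1 g/mL weighs about 28.4 g,
# close to the avoirdupois ounce of 28.349523125 g.
print(imperial_fl_oz_ml * 1.0)
```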
In the U.K., the use of the fluid ounce as a measurement in trade, public health, and public administration was circumscribed to a few specific uses (the labelling of beer, cider, water, lemonade and fruit juice in returnable containers) in 1995, and abolished entirely in 2000, by The Units of Measurement Regulations 1994.
| Physical sciences | Volume | Basics and measurement |
327061 | https://en.wikipedia.org/wiki/Gene%20flow | Gene flow | In population genetics, gene flow (also known as migration and allele flow) is the transfer of genetic material from one population to another. If the rate of gene flow is high enough, then two populations will have equivalent allele frequencies and therefore can be considered a single effective population. It has been shown that it takes only "one migrant per generation" to prevent populations from diverging due to drift. Populations can diverge due to selection even when they are exchanging alleles, if the selection pressure is strong enough. Gene flow is an important mechanism for transferring genetic diversity among populations. Migrants change the distribution of genetic diversity among populations, by modifying allele frequencies (the proportion of members carrying a particular variant of a gene). High rates of gene flow can reduce the genetic differentiation between the two groups, increasing homogeneity. For this reason, gene flow has been thought to constrain speciation and prevent range expansion by combining the gene pools of the groups, thus preventing the development of differences in genetic variation that would have led to differentiation and adaptation. In some cases dispersal resulting in gene flow may also result in the addition of novel genetic variants under positive selection to the gene pool of a species or population (adaptive introgression.)
There are a number of factors that affect the rate of gene flow between different populations. Gene flow is expected to be lower in species that have low dispersal or mobility, that occur in fragmented habitats, where there are long distances between populations, and when population sizes are small. Mobility plays an important role in dispersal rate, as highly mobile individuals tend to have greater movement prospects. Although animals are thought to be more mobile than plants, pollen and seeds may be carried great distances by animals, water or wind. When gene flow is impeded, there can be an increase in inbreeding, measured by the inbreeding coefficient (F) within a population. For example, many island populations have low rates of gene flow due to geographic isolation and small population sizes. The black-footed rock-wallaby has several inbred populations that live on various islands off the coast of Australia. These populations are so strongly isolated that the lack of gene flow has led to high rates of inbreeding.
Measuring gene flow
The level of gene flow among populations can be estimated by observing the dispersal of individuals and recording their reproductive success. This direct method is only suitable for some types of organisms; more often, indirect methods are used that infer gene flow by comparing allele frequencies among population samples. The more genetically differentiated two populations are, the lower the estimate of gene flow, because gene flow has a homogenizing effect. Isolation of populations leads to divergence due to drift, while migration reduces divergence. Gene flow can be measured by using the effective population size (Ne) and the net migration rate per generation (m). Under the approximation based on the island model, the effect of migration can be expressed for a population in terms of the degree of genetic differentiation (FST): FST ≈ 1/(4Nem + 1). This formula accounts for the proportion of total molecular marker variation among populations, averaged over loci. When there is one migrant per generation, the inbreeding coefficient (FST) equals 0.2. However, when there is less than one migrant per generation (essentially no migration), the inbreeding coefficient rises rapidly, resulting in fixation and complete divergence (FST = 1). Most commonly observed values are FST < 0.25, which indicates that some migration is occurring. Measures of population structure range from 0 to 1. When gene flow occurs via migration, the deleterious effects of inbreeding can be ameliorated.
The formula can be rearranged to estimate the migration rate when FST is known: Nem ≈ (1 − FST)/(4FST), where Nem is the effective number of migrants per generation.
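As a rough illustration of the island-model approximation described above (a simplification that ignores selection, mutation and finer population structure), a small Python sketch:

```python
def fst_from_migrants(nm: float) -> float:
    """Island-model approximation: F_ST ~ 1 / (4*Nm + 1)."""
    return 1.0 / (4.0 * nm + 1.0)

def migrants_from_fst(fst: float) -> float:
    """Invert the approximation to estimate Nm from an observed F_ST."""
    return (1.0 / fst - 1.0) / 4.0

print(fst_from_migrants(1))      # 0.2   -> one migrant per generation
print(fst_from_migrants(0.1))    # ~0.71 -> divergence rises quickly below one migrant
print(migrants_from_fst(0.25))   # 0.75  -> estimated migrants per generation
```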
Barriers to gene flow
Allopatric speciation
When gene flow is blocked by physical barriers, the result is allopatric speciation, a geographical isolation that does not allow populations of the same species to exchange genetic material. Physical barriers to gene flow are usually, but not always, natural. They may include impassable mountain ranges, oceans, or vast deserts. In some cases, they can be artificial, human-made barriers, such as the Great Wall of China, which has hindered the gene flow of native plant populations. One of these native plants, Ulmus pumila, demonstrated a lower prevalence of genetic differentiation than the plants Vitex negundo, Ziziphus jujuba, Heteropappus hispidus, and Prunus armeniaca, whose habitat is located on the opposite side of the Great Wall of China from where Ulmus pumila grows. This is because Ulmus pumila relies on wind pollination as its primary means of propagation, while the latter plants are pollinated by insects. Samples of the same species which grow on either side have been shown to have developed genetic differences, because there is little to no gene flow to provide recombination of the gene pools.
Sympatric speciation
Barriers to gene flow need not always be physical. Sympatric speciation happens when new species from the same ancestral species arise along the same range. This is often a result of a reproductive barrier. For example, two palm species of Howea found on Lord Howe Island were found to have substantially different flowering times correlated with soil preference, resulting in a reproductive barrier inhibiting gene flow. Species can live in the same environment, yet show very limited gene flow due to reproductive barriers, fragmentation, specialist pollinators, or limited hybridization or hybridization yielding unfit hybrids. A cryptic species is a species that humans cannot tell is different without the use of genetics. Moreover, gene flow between hybrid and wild populations can result in loss of genetic diversity via genetic pollution, assortative mating and outbreeding. In human populations, genetic differentiation can also result from endogamy, due to differences in caste, ethnicity, customs and religion.
Human assisted gene-flow
Genetic rescue
Gene flow can also be used to assist species which are threatened with extinction. When a species exists in small populations, there is an increased risk of inbreeding and greater susceptibility to loss of diversity due to drift. These populations can benefit greatly from the introduction of unrelated individuals who can increase diversity and reduce the amount of inbreeding, and potentially increase population size. This was demonstrated in the lab with two bottlenecked strains of Drosophila melanogaster, in which crosses between the two populations reversed the effects of inbreeding and led to greater chances of survival in not only one generation but two.
Genetic pollution
Human activities such as movement of species and modification of landscape can result in genetic pollution, hybridization, introgression and genetic swamping. These processes can lead to homogenization or replacement of local genotypes as a result of either a numerical and/or fitness advantage of introduced plant or animal. Nonnative species can threaten native plants and animals with extinction by hybridization and introgression either through purposeful introduction by humans or through habitat modification, bringing previously isolated species into contact. These phenomena can be especially detrimental for rare species coming into contact with more abundant ones which can occur between island and mainland species. Interbreeding between the species can cause a 'swamping' of the rarer species' gene pool, creating hybrids that supplant the native stock. This is a direct result of evolutionary forces such as natural selection, as well as genetic drift, which lead to the increasing prevalence of advantageous traits and homogenization. The extent of this phenomenon is not always apparent from outward appearance alone. While some degree of gene flow occurs in the course of normal evolution, hybridization with or without introgression may threaten a rare species' existence. For example, the Mallard is an abundant species of duck that interbreeds readily with a wide range of other ducks and poses a threat to the integrity of some species.
Urbanization
There are two main models for how urbanization affects gene flow of urban populations. The first is through habitat fragmentation, also called urban fragmentation, in which alterations to the landscape that disrupt or fragment the habitat decrease genetic diversity. The second is called the urban facilitation model, and suggests that in some populations, gene flow is enabled by anthropogenic changes to the landscape. Urban facilitation of gene flow connects populations, reduces isolation, and increases gene flow into an area which would otherwise not have this specific genome composition.
Urban facilitation can occur in many different ways, but most of the mechanisms include bringing previously separated species into contact, either directly or indirectly. Altering a habitat through urbanization will cause habitat fragmentation, but could also potentially disrupt barriers and create a pathway, or corridor, that can connect two formerly separated species. The effectiveness of this depends on individual species’ dispersal abilities and adaptiveness to different environments to use anthropogenic structures to travel. Human-driven climate change is another mechanism by which southern-dwelling animals might be forced northward towards cooler temperatures, where they could come into contact with other populations not previously in their range. More directly, humans are known to introduce non-native species into new environments, which could lead to hybridization of similar species.
This urban facilitation model was tested on a human health pest, the Western black widow spider (Latrodectus hesperus). A study by Miles et al. collected genome-wide single nucleotide polymorphism variation data in urban and rural spider populations and found evidence for increased gene flow in urban Western black widow spiders compared to rural populations. In addition, the genome of these spiders was more similar across rural populations than it was for urban populations, suggesting increased diversity, and therefore adaptation, in the urban populations of the Western black widow spider. Phenotypically, urban spiders are larger, darker, and more aggressive, which could lead to increased survival in urban environments. These findings demonstrate support for urban facilitation, as these spiders are actually able to spread and diversify faster across urban environments than they would in a rural one. However, it is also an example of how urban facilitation, despite increasing gene flow, is not necessarily beneficial to an environment, as Western black widow spiders have highly toxic venom and therefore pose risks for human health.
Another example of urban facilitation is that of migrating bobcats (Lynx rufus) in the northern US and southern Canada. A study by Marrote et al. sequenced fourteen different microsatellite loci in bobcats across the Great Lakes region, and found that longitude affected the interaction between anthropogenic landscape alterations and bobcat population gene flow. While rising global temperatures push bobcat populations into northern territory, increased human activity also enables bobcat migration northward. The increased human activity brings increased roads and traffic, but also increases road maintenance, plowing, and snow compaction, inadvertently clearing a path for bobcats to travel by. The anthropogenic influence on bobcat migration pathways is an example of urban facilitation via opening up a corridor for gene flow. However, in the bobcat's southern range, an increase in roads and traffic is correlated with a decrease in forest cover, which hinders bobcat population gene flow through these areas. Somewhat ironically, the movement of bobcats northward is caused by human-driven global warming, but is also enabled by increased anthropogenic activity in northern ranges that make these habitats more suitable to bobcats.
Consequences of urban facilitation vary from species to species. Positive effects of urban facilitation can occur when increased gene flow enables better adaptation and introduces beneficial alleles, and would ideally increase biodiversity. This has implications for conservation: for example, urban facilitation benefits an endangered species of tarantula and could help increase the population size. Negative effects would occur when increased gene flow is maladaptive and causes the loss of beneficial alleles. In the worst-case scenario, this would lead to genomic extinction through a hybrid swarm. It is also important to note that in the scheme of overall ecosystem health and biodiversity, urban facilitation is not necessarily beneficial, and generally applies to urban adapter pests. Examples of this include the previously mentioned Western black widow spider, and also the cane toad, which was able to use roads by which to travel and overpopulate Australia.
Gene flow between species
Horizontal gene transfer
Horizontal gene transfer (HGT) refers to the transfer of genes between organisms in a manner other than traditional reproduction, either through transformation (direct uptake of genetic material by a cell from its surroundings), conjugation (transfer of genetic material between two bacterial cells in direct contact), transduction (injection of foreign DNA by a bacteriophage virus into the host cell) or GTA-mediated transduction (transfer by a virus-like element produced by a bacterium) .
Viruses can transfer genes between species. Bacteria can incorporate genes from dead bacteria, exchange genes with living bacteria, and can exchange plasmids across species boundaries.
"Sequence comparisons suggest recent horizontal transfer of many genes among diverse species including across the boundaries of phylogenetic 'domains'. Thus determining the phylogenetic history of a species can not be done conclusively by determining evolutionary trees for single genes."
Biologist Gogarten suggests "the original metaphor of a tree no longer fits the data from recent genome research". Biologists [should] instead use the metaphor of a mosaic to describe the different histories combined in individual genomes and use the metaphor of an intertwined net to visualize the rich exchange and cooperative effects of horizontal gene transfer.
"Using single genes as phylogenetic markers, it is difficult to trace organismal phylogeny in the presence of HGT. Combining the simple coalescence model of cladogenesis with rare HGT events suggest there was no single last common ancestor that contained all of the genes ancestral to those shared among the three domains of life. Each contemporary molecule has its own history and traces back to an individual molecule cenancestor. However, these molecular ancestors were likely to be present in different organisms at different times."
Hybridization
In some instances, when a species has a sister species and breeding becomes possible due to the removal of previous barriers or through introduction by human intervention, the species can hybridize and exchange genes and corresponding traits. This exchange is not always clear-cut, for sometimes the hybrids may look phenotypically identical to the original species, but testing of the mtDNA shows that hybridization has occurred. Differential hybridization also occurs because some traits and DNA are more readily exchanged than others; this is a result of selective pressure, or the absence thereof, that allows for easier transfer. In instances in which the introduced species begins to replace the native species, the native species becomes threatened and biodiversity is reduced, making this phenomenon a negative rather than a positive case of gene flow that augments genetic diversity. Introgression is the replacement of the alleles of one species with those of the invader species. It is important to note that hybrids are sometimes less "fit" than their parental generation, and as a result this is a closely monitored genetic issue, as the ultimate goal in conservation genetics is to maintain the genetic integrity of a species and preserve biodiversity.
Examples
While gene flow can greatly enhance the fitness of a population, it can also have negative consequences depending on the population and the environment in which they reside. The effects of gene flow are context-dependent.
Fragmented Population: fragmented landscapes such as the Galapagos Islands are an ideal place for adaptive radiation to occur as a result of differing geography. Darwin's finches likely experienced allopatric speciation in some part due to differing geography, but that does not explain why we see so many different kinds of finches on the same island. This is due to adaptive radiation, or the evolution of varying traits in light of competition for resources. Gene flow moves in the direction of what resources are abundant at a given time.
Island Population: The marine iguana is an endemic species of the Galapagos Islands, but it evolved from a mainland ancestor of land iguana. Due to geographic isolation gene flow between the two species was limited and differing environments caused the marine iguana to evolve in order to adapt to the island environment. For instance, they are the only iguana that has evolved the ability to swim.
Human Populations: In Europe, Homo sapiens interbred with Neanderthals, resulting in gene flow between these populations. This gene flow has resulted in Neanderthal alleles in modern European populations. Two theories exist for human evolution throughout the world. The first is known as the multiregional model, in which modern human variation is seen as a product of the radiation of Homo erectus out of Africa, after which local differentiation led to the establishment of regional populations as we see them now. Gene flow plays an important role in maintaining a grade of similarities and preventing speciation. In contrast, the single-origin theory assumes that there was a common ancestral population of Homo sapiens originating in Africa which already displayed the anatomical characteristics we see today. This theory minimizes the amount of parallel evolution that is needed.
Butterflies: Comparisons between sympatric and allopatric populations of Heliconius melpomene, H. cydno, and H. timareta revealed a genome-wide trend of increased shared variation in sympatry, indicative of pervasive interspecific gene flow.
Human-mediated gene flow: The captive genetic management of threatened species is the only way in which humans attempt to induce gene flow in ex situ situation. One example is the giant panda which is part of an international breeding program in which genetic materials are shared between zoological organizations in order to increase genetic diversity in the small populations. As a result of low reproductive success, artificial insemination with fresh/frozen-thawed sperm was developed which increased cub survival rate. A 2014 study found that high levels of genetic diversity and low levels of inbreeding were estimated in the breeding centers.
Plants: Two populations of monkeyflowers were found to use different pollinators (bees and hummingbirds), which limited gene flow, resulting in genetic isolation and eventually producing two different species, Mimulus lewisii and Mimulus cardinalis.
Sika deer: Sika deer were introduced into Western Europe, and they reproduce easily with the native red deer. This translocation of Sika deer has led to introgression and there are no longer "pure" red deer in the region, and all can be classified as hybrids.
Bobwhite quail: Bobwhite quail were translocated from the southern part of the United States to Ontario in order to increase population numbers and game for hunting. The hybrids that resulted from this translocation were less fit than the native population and were not adapted to survive the northern winters.
| Biology and health sciences | Basics_4 | Biology |
327068 | https://en.wikipedia.org/wiki/Topiary | Topiary | Topiary is the horticultural practice of training perennial plants by clipping the foliage and twigs of trees, shrubs and subshrubs to develop and maintain clearly defined shapes, whether geometric or fanciful. The term also refers to plants which have been shaped in this way. As an art form it is a type of living sculpture. The word derives from the Latin word for an ornamental landscape gardener, topiarius, a creator of topia or "places", a Greek word that Romans also applied to fictive indoor landscapes executed in fresco.
The plants used in topiary are evergreen, mostly woody, have small leaves or needles, produce dense foliage, and have compact and/or columnar (e.g., fastigiate) growth habits. Common species chosen for topiary include cultivars of European box (Buxus sempervirens), arborvitae (Thuja species), bay laurel (Laurus nobilis), holly (Ilex species), myrtle (Eugenia or Myrtus species), yew (Taxus species), and privet (Ligustrum species). Shaped wire cages are sometimes employed in modern topiary to guide untutored shears, but traditional topiary depends on patience and a steady hand; small-leaved ivy can be used to cover a cage and give the look of topiary in a few months. The hedge is a simple form of topiary used to create boundaries, walls or screens.
History
Origin
European topiary dates from Roman times. Pliny's Natural History and the epigram writer Martial both credit Gaius Matius Calvinus, in the circle of Julius Caesar, with introducing the first topiary to Roman gardens, and Pliny the Younger describes in a letter the elaborate figures of animals, inscriptions, cyphers and obelisks in clipped greens at his Tuscan villa (Epistle v.6, to Apollinaris). Within the atrium of a Roman house or villa, a place that had formerly been quite plain, the art of the topiarius produced a miniature landscape (topos) which might employ the art of stunting trees, also mentioned, disapprovingly, by Pliny (Historia Naturalis xii.6).
Far Eastern topiary
The clipping and shaping of shrubs and trees in China and Japan have been practised with equal rigor, but for different reasons. The goal is to achieve an artful expression of the "natural" form of venerably aged pines, given character by the forces of wind and weather. Their most concentrated expressions are in the related arts of Chinese penjing and Japanese bonsai.
Japanese cloud-pruning is closest to the European art: the cloud-like forms of clipped growth are designed to be best appreciated after a fall of snow. Japanese Zen gardens (karesansui, dry rock gardens) make extensive use of Karikomi (a topiary technique of clipping shrubs and trees into large curved shapes or sculptures) and Hako-zukuri (shrubs clipped into boxes and straight lines).
Renaissance topiary
Since its European revival in the 16th century, topiary has been seen on the parterres and terraces of gardens of the European elite, as well as in simple cottage gardens; Barnabe Googe, about 1578, found that "women" (a signifier of a less than gentle class) were clipping rosemary "as in the fashion of a cart, a peacock, or such things as they fancy." In 1618 William Lawson suggested
Your gardener can frame your lesser wood to the shape of men armed in the field, ready to give battell: or swift-running Grey Houndes to chase the Deere, or hunt the Hare. This kind of hunting shall not wate your corne, nor much your coyne.
Traditional topiary forms use foliage pruned and/or trained into geometric shapes such as balls or cubes, obelisks, pyramids, cones, or tiered plates and tapering spirals. Representational forms depicting people, animals, and man-made objects have also been popular. The royal botanist John Parkinson found privet "so apt that no other can be like unto it, to be cut, lead, and drawn into what forme one will, either of beasts, birds, or men armed or otherwise." Evergreens have usually been the first choice for Early Modern topiary, however, with yew and boxwood leading other plants.
Topiary at Versailles and its imitators was never complicated: low hedges punctuated by potted trees trimmed as balls on standards, interrupted by obelisks at corners, provided the vertical features of flat-patterned parterre gardens. Sculptural forms were provided by stone and lead sculptures. In Holland, however, the fashion was established for more complicated topiary designs; this Franco-Dutch garden style spread to England after 1660, but by 1708-09 one searches in vain for fanciful topiary among the clipped hedges and edgings, and the standing cones and obelisks of the aristocratic and gentry English parterre gardens in Kip and Knyff's Britannia Illustrata.
Decline in the 18th century
In England topiary was all but killed as a fashion by the famous satiric essay on "Verdant Sculpture" that Alexander Pope published in the short-lived newspaper The Guardian, 29 September 1713, with its mock catalogue descriptions of
Adam and Eve in yew; Adam a little shattered by the fall of the tree of knowledge in the great storm; Eve and the serpent very flourishing.
The tower of Babel, not yet finished.
St George in box; his arm scarce long enough, but will be in condition to stick the dragon by next April.
A quickset hog, shot up into a porcupine, by its being forgot a week in rainy weather.
In the 1720s and 1730s, the generation of Charles Bridgeman and William Kent swept the English garden clean of its hedges, mazes, and topiary. Although topiary fell from grace in aristocratic gardens, it continued to be featured in cottagers' gardens, where a single example of traditional forms, a ball, or a tree trimmed to a cone in several cleanly separated tiers, meticulously clipped and perhaps topped with a topiary peacock, might be passed on as an heirloom. Such an heirloom, but on a heroic scale, was the ancient churchyard yew of Harlington, west of London, immortalized in an engraved broadsheet of 1729 bearing an illustration with an enthusiastic verse encomium by its dedicated parish clerk and topiarist. Formerly shaped as an obelisk on a square plinth topped with a ten-foot ball surmounted by a cockerel, the Harlington Yew survives today, untonsured for the last two centuries.
Revival
The revival of topiary in English gardening parallels the revived "Jacobethan" taste in architecture; John Loudon in the 1840s was the first garden writer to express a sense of loss due to the topiary that had been removed from English gardens. The art of topiary, with enclosed garden "rooms", burst upon the English gardening public with the mature examples at Elvaston Castle, Derbyshire, which opened to public viewing in the 1850s and created a sensation: "within a few years architectural topiary was springing up all over the country (it took another 25 years before sculptural topiary began to become popular as well)". The following generation, represented by James Shirley Hibberd, rediscovered the charm of topiary specimens as part of the mystique of the "English cottage garden", which was as much invented as revived from the 1870s:
The classic statement of the British Arts and Crafts revival of topiary among roses and mixed herbaceous borders, characterised generally as "the old-fashioned garden" or the "Dutch garden" was to be found in Topiary: Garden Craftsmanship in Yew and Box by Nathaniel Lloyd (1867–1933), who had retired in middle age and taken up architectural design with the encouragement of Sir Edwin Lutyens. Lloyd's own timber-framed manor house, Great Dixter, Sussex, remains an epitome of this stylised mix of topiary with "cottagey" plantings that was practised by Gertrude Jekyll and Edwin Lutyens in a fruitful partnership. The new gardening vocabulary incorporating topiary required little expensive restructuring: "At Lyme Park, Cheshire, the garden went from being an Italian garden to being a Dutch garden without any change actually taking place on the ground," Brent Elliot noted in 2000.
Americans in England were sensitive to the renewed charms of topiary. When William Waldorf Astor bought Hever Castle, Kent, around 1906, the moat surrounding the house precluded the addition of wings for servants, guests and the servants of guests that the Astor manner required. He accordingly built an authentically styled Tudor village to accommodate the overflow, with an "Old English Garden" including buttressed hedges and free-standing topiary. In the preceding decade, expatriate Americans led by Edwin Austin Abbey created an Anglo-American society at Broadway, Worcestershire, where topiary was one of the elements of a "Cotswold" house-and-garden style soon naturalised among upper-class Americans at home. Topiary, which had featured in very few 18th-century American gardens, came into favour with the Colonial Revival gardens and the grand manner of the American Renaissance, 1880–1920. Interest in the revival and maintenance of historic gardens in the 20th century led to the replanting of the topiary maze at the Governor's Palace, Colonial Williamsburg, in the 1930s.
20th century
American portable style topiary was introduced to Disneyland around 1962. Walt Disney helped bring this new medium into being - wishing to recreate his cartoon characters throughout his theme park in the form of landscape shrubbery. This style of topiary is based on a suitably shaped steel wire frame through which the plants eventually extend as they grow. The frame, which remains as a permanent trimming guide, may be either stuffed with sphagnum moss and then planted, or placed around shrubbery. The sculpture slowly transforms into a permanent topiary as the plants fill in the frame.
This style has led to imaginative displays and festivals throughout the Disney resorts and parks, and mosaiculture (multiple types and styles of plants creating a mosaic, living sculpture) worldwide includes the impressive display at the 2008 Summer Olympics in China. Living corporate logos along roadsides, green roof softscapes and living walls that biofilter air are offshoots of this technology.
Artificial topiary is another offshoot similar to the concept of artificial Christmas trees. This topiary mimics the style of living versions and is often used to supply indoor greenery for home or office decoration. Patents are issued for the style, design, and construction methodology of different types of topiary trees.
Notable topiary displays
Australia
Railton, Tasmania known as Railton Town of Topiary
Asia
Mosaiculture 2006 (Shanghai, China)
The Samban-Lei Sekpil in Manipur, India, begun in 1983, is the world's tallest topiary according to the Guinness Book of World Records. It is clipped from Duranta erecta, a shrub widely used in Manipuri gardens, into a tiered shape called a sekpil or satra that honours the forest god Umang Lai.
Royal Palace at Bang Pa-In in Thailand
The Terrace Garden in Chandigarh, India, has topiaries in the form of animals by Narinder Kumar Sharma as an attraction for children.
Central America
Parque Francisco Alvarado, Zarcero, Costa Rica
South America
Tulcan Topiary Garden Cemetery, Tulcan, Ecuador
Europe
Cliveden (Buckinghamshire, England)
Levens Hall (Cumbria, England)
A premier topiary garden started in the late 17th century by M. Beaumont, a French gardener who laid out the gardens of Hampton Court (which were recreated in the 1980s). Levens Hall is recognised by the Guinness Book of Records as having the oldest topiary garden in the world.
Topsham railway station (Devon, England) An example of topiary lettering.
Canons Ashby (Northamptonshire, England) A 16th-century garden revised in 1708
Stiffkey, (Norfolk, England)
Several informal designs including a line of elephants at Nellie's cottage and a guitar.
Hidcote Manor Garden (Gloucestershire, England)
Knightshayes Court (Devon, England)
Owlpen Manor (Gloucestershire, England) A late 16th-early 17th century terraced garden on a hillside, reordered in the 1720s, with "dark, secret rooms of yew" (Vita Sackville-West).
Great Dixter Gardens (East Sussex, England): Laid out by Nathaniel Lloyd, the author of a book on topiary, and preserved and extended by his son, the garden-writer Christopher Lloyd.
Much Wenlock Priory, Shropshire
Drummond Castle Gardens (Perthshire, Scotland)
Portmeirion (Snowdonia, Wales)
Parc des Topiares (Durbuy, Belgium)
A large topiary garden (10 000 m2) with over 250 figures.
Château de Villandry, France
Villa Lante (Bagnaia, Italy)
Giardino Giusti (Verona, Italy)
Castello Balduino (Montalto Pavese, Italy)
Guggenheim Museum, (Bilbao, Spain): A huge sculpture of a West Highland White Terrier designed by the artist Jeff Koons, which is thought by experts and scientists to be the world's biggest topiary dog.
The Tsubo-en Zen garden in Lelystad, Netherlands is a private modern Japanese Zen (karesansui, dry rock) garden that makes extensive use of so-called O-karikomi combined with Hako-zukuri (see above).
Gardens of the Palace of Versailles outside Paris, France
North America
Hunnewell Arboretum (Wellesley, Massachusetts)
140-year-old topiary garden of native white pine and arborvitae.
Ladew Topiary Gardens (Monkton, Maryland)
A topiary garden in Maryland established by award-winning topiary artist Harvey Ladew in the late 1930s, located approximately halfway between the north Baltimore suburbs and the southern Pennsylvania border. Ladew's most famous topiary is a hunt scene of horses, riders, dogs and the fox clearing a well-clipped hedge, the most famous single piece of classical topiary in North America.
Topiary Garden at Longwood Gardens (Kennett Square, Pennsylvania)
Columbus Topiary Park at Old Deaf School (Topiary Park, Columbus, Ohio)
A public garden in downtown Columbus that features a topiary tableau of Georges Seurat's famous painting A Sunday Afternoon on the Island of La Grande Jatte
Pearl Fryar's Topiary Garden, (Bishopville, South Carolina)
Green Animals, (Portsmouth, Rhode Island)
One of the subjects of the documentary Fast, Cheap and Out of Control (1997) was George Mendonça, the topiarist at Green Animals for more than seventy years: "it's just cut and wait, cut and wait" Mendonça says in a filmed sequence.
Busch Gardens Tampa, established 1959. 365 acre property featuring large, colorful and detailed sphagnum topiary.
In popular culture
In the Tim Burton/Johnny Depp film Edward Scissorhands, Edward proves to have a natural gift for topiary art. Numerous creative works are shown throughout the movie.
In the Stephen King novel The Shining, topiary animals that move when people are not looking frighten the Torrance family.
In the children's novel The Children of Green Knowe by Lucy M. Boston, an overgrown topiary figure of Noah plays a sinister role.
A real-life topiary artist is one of the subjects of Errol Morris's Fast, Cheap and Out of Control.
| Technology | Horticulture | null |
327393 | https://en.wikipedia.org/wiki/Decagon | Decagon | In geometry, a decagon (from the Greek δέκα déka and γωνία gonía, "ten angles") is a ten-sided polygon or 10-gon. The total sum of the interior angles of a simple decagon is 1440°.
Regular decagon
A regular decagon has all sides of equal length and each internal angle equal to 144°. Its Schläfli symbol is {10}, and it can also be constructed as a truncated pentagon, t{5}, a quasiregular decagon alternating two types of edges.
Side length
The picture shows a regular decagon with side length a and radius R of the circumscribed circle.
The triangle formed by the centre and two adjacent vertices has two equally long legs with length R and a base with length a.
The circle around with radius intersects in a point (not designated in the picture).
Now the triangle is an isosceles triangle with vertex and with base angles .
Therefore . So and hence is also an isosceles triangle with vertex . The length of its legs is , so the length of is .
The two isosceles triangles have equal angles of 36° at the apex, and so they are similar, hence R : a = a : (R − a).
Multiplication with the denominators leads to the quadratic equation a² + aR − R² = 0.
This equation for the side length a has one positive solution: a = (R/2)(√5 − 1) ≈ 0.618 R.
So the regular decagon can be constructed with ruler and compass.
Further conclusions
and the base height of (i.e. the length of ) is and the triangle has the area: .
Area
The area of a regular decagon of side length a is given by: A = (5/2)·a²·cot(π/10) = (5/2)·a²·√(5 + 2√5) ≈ 7.694·a².
In terms of the apothem r (see also inscribed figure), the area is: A = 10·tan(π/10)·r² ≈ 3.249·r².
In terms of the circumradius R, the area is: A = 5·sin(π/5)·R² ≈ 2.939·R².
An alternative formula is A = 2.5·d·a, where d is the distance between parallel sides, or the height when the decagon stands on one side as base, or the diameter of the decagon's inscribed circle.
By simple trigonometry, d = 2a·(cos 54° + cos 18°), and it can be written algebraically as d = a·√(5 + 2√5).
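These formulas are straightforward to cross-check numerically; the short Python sketch below (with the circumradius set to 1, an arbitrary choice) confirms that all three expressions give the same area:

```python
import math

R = 1.0                                    # circumradius
a = (R / 2) * (math.sqrt(5) - 1)           # side length of the regular decagon
r = (a / 2) / math.tan(math.pi / 10)       # apothem (inradius)

area_from_side    = 2.5 * a ** 2 * math.sqrt(5 + 2 * math.sqrt(5))
area_from_apothem = 10 * math.tan(math.pi / 10) * r ** 2
area_from_R       = 5 * math.sin(math.pi / 5) * R ** 2

# All three agree: ~2.9389 for R = 1.
print(area_from_side, area_from_apothem, area_from_R)
```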
Construction
As 10 = 2 × 5, a power of two times a Fermat prime, it follows that a regular decagon is constructible using compass and straightedge, or by an edge-bisection of a regular pentagon.
An alternative (but similar) method is as follows:
Construct a pentagon in a circle by one of the methods shown in constructing a pentagon.
Extend a line from each vertex of the pentagon through the center of the circle to the opposite side of that same circle. Where each line cuts the circle is a vertex of the decagon. In other words, the image of a regular pentagon under a point reflection with respect of its center is a concentric congruent pentagon, and the two pentagons have in total the vertices of a concentric regular decagon.
The five corners of the pentagon constitute alternate corners of the decagon. Join these points to the adjacent new points to form the decagon.
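The constructibility statement at the start of this section can also be checked against the Gauss–Wantzel criterion: a regular $n$-gon admits a compass-and-straightedge construction exactly when $n$ is a power of two times a product of distinct Fermat primes. For the decagon,
$$n = 10 = 2 \cdot 5, \qquad 5 = 2^{2^1} + 1 = F_1 ,$$
so the condition is satisfied with the single Fermat prime 5 (here $F_1$ denotes that Fermat prime in the usual indexing).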
The golden ratio in decagon
In both the construction with a given circumcircle and the construction with a given side length, the golden ratio, dividing a line segment by exterior division, is the determining construction element.
In the construction with given circumcircle the circular arc around G with radius produces the segment , whose division corresponds to the golden ratio.
In the construction with given side length the circular arc around D with radius produces the segment , whose division corresponds to the golden ratio.
Symmetry
The regular decagon has Dih10 symmetry, order 20. There are 3 subgroup dihedral symmetries: Dih5, Dih2, and Dih1, and 4 cyclic group symmetries: Z10, Z5, Z2, and Z1.
These 8 symmetries can be seen in 10 distinct symmetries on the decagon, a larger number because the lines of reflection can either pass through vertices or edges. John Conway labels these by a letter and group order. Full symmetry of the regular form is r20 and no symmetry is labeled a1. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars), and i when reflection lines pass through both edges and vertices. Cyclic symmetries in the middle column are labeled as g for their central gyration orders.
Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g10 subgroup has no degrees of freedom but can be seen as directed edges.
The highest symmetry irregular decagons are d10, an isogonal decagon constructed by five mirrors which can alternate long and short edges, and p10, an isotoxal decagon, constructed with equal edge lengths, but vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular decagon.
Dissection
Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m-1)/2 parallelograms.
In particular this is true for regular polygons with an even number of sides, in which case the parallelograms are all rhombi. For the regular decagon, m=5, and it can be divided into 10 rhombs, with examples shown below. This decomposition can be seen as 10 of 80 faces in a Petrie polygon projection plane of the 5-cube. A dissection is based on 10 of 30 faces of the rhombic triacontahedron. The list defines the number of solutions as 62, with 2 orientations for the first symmetric form, and 10 orientations for the other 6.
Skew decagon
A skew decagon is a skew polygon with 10 vertices and edges that do not all lie in the same plane. The interior of such a decagon is not generally defined. A skew zig-zag decagon has vertices alternating between two parallel planes.
A regular skew decagon is vertex-transitive with equal edge lengths. In 3-dimensions it will be a zig-zag skew decagon and can be seen in the vertices and side edges of a pentagonal antiprism, pentagrammic antiprism, and pentagrammic crossed-antiprism with the same D5d, [2+,10] symmetry, order 20.
These can also be seen in these four convex polyhedra with icosahedral symmetry. The polygons on the perimeter of these projections are regular skew decagons.
Petrie polygons
The regular skew decagon is the Petrie polygon for many higher-dimensional polytopes, shown in these orthogonal projections in various Coxeter planes: The number of sides in the Petrie polygon is equal to the Coxeter number, h, for each symmetry family.
| Mathematics | Two-dimensional space | null |
327893 | https://en.wikipedia.org/wiki/Isoflurane | Isoflurane | Isoflurane, sold under the brand name Forane among others, is a general anesthetic. It can be used to start or maintain anesthesia; however, other medications are often used to start anesthesia, due to airway irritation with isoflurane. Isoflurane is given via inhalation.
Side effects of isoflurane include a decreased ability to breathe (respiratory depression), low blood pressure, and an irregular heartbeat. Serious side effects can include malignant hyperthermia or high blood potassium. It should not be used in patients with a history of malignant hyperthermia in either themselves or their family members. It is unknown if its use during pregnancy is safe for the fetus, but use during a cesarean section appears to be safe. Isoflurane is a halogenated ether.
Isoflurane was approved for medical use in the United States in 1979. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Isoflurane is always administered in conjunction with air or pure oxygen. Often, nitrous oxide is also used. Although its physical properties imply that anaesthesia can be induced more rapidly than with halothane, its pungency can irritate the respiratory system, negating any possible advantage conferred by its physical properties. Thus, it is mostly used in general anesthesia as a maintenance agent after induction of general anesthesia with an intravenous agent such as thiopentone or propofol.
Mechanism of action
Similar to many general anesthetics, the exact mechanism of action has not been clearly delineated. Isoflurane reduces pain sensitivity (analgesia) and relaxes muscles. Isoflurane likely binds to GABA, glutamate and glycine receptors, but has different effects on each receptor. Isoflurane acts as a positive allosteric modulator of the GABAA receptor in electrophysiology studies of neurons and recombinant receptors. It potentiates glycine receptor activity, which decreases motor function. It inhibits receptor activity in the NMDA glutamate receptor subtypes. Isoflurane inhibits conduction in activated potassium channels. Isoflurane also affects intracellular molecules. It inhibits plasma membrane calcium ATPases (PMCAs), which affects membrane fluidity by hindering the flow of Ca2+ (calcium ions) out across the membrane; this in turn affects neuron depolarization. It binds to the D subunit of ATP synthase and NADH dehydrogenase.
General anaesthesia with isoflurane reduces plasma endocannabinoid AEA concentrations, and this could be a consequence of stress reduction after loss of consciousness.
Adverse effects
Isoflurane can cause a sudden decrease in blood pressure due to dose-dependent peripheral vasodilation. This may be especially marked in hypovolemic patients.
Animal studies have raised safety concerns about certain general anesthetics, in particular ketamine and isoflurane, in young children. The risk of neurodegeneration was increased when these agents were combined with nitrous oxide and benzodiazepines such as midazolam. Whether these concerns apply to humans is unclear.
Elderly
Biophysical studies using NMR spectroscopy have provided molecular details of how inhaled anesthetics interact with three amino acid residues (G29, A30 and I31) of amyloid beta peptide and induce aggregation. This area is important as "some of the commonly used inhaled anesthetics may cause brain damage that accelerates the onset of Alzheimer's disease".
Physical properties
It is administered as a racemic mixture of (R)- and (S)-optical isomers. Isoflurane has a boiling point of . It is non-combustible but can give off irritating and toxic fumes when exposed to flame.
History
Together with enflurane and halothane, isoflurane began to replace the flammable ethers used in the pioneer days of surgery; this shift began in the 1940s and 1950s. Its name comes from its being a structural isomer of enflurane, hence the two share the same empirical formula.
Environment
The average lifetime of isoflurane in the atmosphere is 3.2 years, its global warming potential is 510 and the yearly emissions add up to 880 tons.
Veterinary use
Isoflurane is frequently used for veterinary anaesthesia.
| Biology and health sciences | Anesthetics | Health |
328501 | https://en.wikipedia.org/wiki/Cherry%20blossom | Cherry blossom | The cherry blossom, or sakura, is the flower of trees in Prunus subgenus Cerasus. Sakura usually refers to flowers of ornamental cherry trees, such as cultivars of Prunus serrulata, not trees grown for their fruit (although these also have blossoms). Cherry blossoms have been described as having a vanilla-like smell, which is mainly attributed to coumarin.
Wild species of cherry tree are widely distributed, mainly in the Northern Hemisphere. They are common in East Asia, especially in Japan, where they have been cultivated, producing many varieties.
Most of the ornamental cherry trees planted in parks and other places for viewing are cultivars developed for ornamental purposes from various wild species. In order to create a cultivar suitable for viewing, a wild species with characteristics suitable for viewing is needed. Prunus speciosa (Oshima cherry), which is endemic to Japan, produces many large flowers, is fragrant, easily mutates into double flowers and grows rapidly. As a result, various cultivars, known as the Cerasus Sato-zakura Group, have been produced since the 14th century and continue to contribute greatly to the development of hanami (flower viewing) culture. From the modern period, cultivars are mainly propagated by grafting, which quickly produces cherry trees with the same genetic characteristics as the original individuals, and which are excellent to look at.
The Japanese word sakura can mean either the tree or its flowers. The cherry blossom is considered the national flower of Japan, and is central to the custom of hanami.
Sakura trees are often called Japanese cherry in English. (This is also a common name for Prunus serrulata.) The cultivation of ornamental cherry trees began to spread in Europe and the United States in the early 20th century, particularly after Japan presented trees to the United States as a token of friendship in 1912. British plant collector Collingwood Ingram conducted important studies of Japanese cherry trees after the First World War.
Classification
Classifying cherry trees is often confusing, since they are relatively prone to mutation and have diverse flowers and characteristics, and many varieties (a sub-classification of species), hybrids between species, and cultivars exist. Researchers have assigned different scientific names to the same type of cherry tree throughout different periods.
In Europe and North America, ornamental cherry trees are classified under the subgenus Cerasus ("true cherries"), within the genus Prunus. Cerasus consists of about 100 species of cherry tree, but does not include bush cherries, bird cherries, or cherry laurels (other non-Cerasus species in Prunus are plums, peaches, apricots, and almonds). Cerasus was originally named as a genus in 1700 by de Tournefort. In 1753, Linnaeus combined it with several other groupings to form a larger Prunus genus. Cerasus was later converted into a section and then a subgenus, this system becoming widely accepted, but some botanists resurrected it as a genus instead. In China and Russia, where there are many more wild cherry species than in Europe, Cerasus continues to be used as a genus.
In Japan, ornamental cherry trees were traditionally classified in the genus Prunus, as in Europe and North America, but after a 1992 paper by Hideaki Ohba of the University of Tokyo, classification in the genus Cerasus became more common. This means that (for example) the scientific name Cerasus incisa is now used in Japan instead of Prunus incisa.
A culture of plum blossom viewing has existed in mainland China since ancient times, and although cherry trees have many wild species, most of them had small flowers, and the distribution of wild cherry trees with large flowers suitable for cherry blossom viewing was limited. In Europe and North America, there were few cherry species with characteristics suitable for cherry blossom viewing. In Japan, on the other hand, the Prunus speciosa (Oshima cherry) and , which have large flowers suitable for cherry blossom viewing and tend to grow into large trees, were distributed over a fairly large area of the country and were close to people's living areas. The development of cherry blossom viewing, and the production of cultivars, is therefore considered to have taken place primarily in Japan.
Because cherry trees have mutable traits, many cultivars have been created for cherry blossom viewing, especially in Japan. Since the Heian period, the Japanese have produced cultivars by selecting superior or mutant trees from among the natural crossings of wild cherry trees. They were also produced by crossing trees artificially and then breeding them by grafting and cutting. Oshima, Yamazakura, Prunus pendula f. ascendens (syn, Prunus itosakura, Edo higan), and other varieties which grow naturally in Japan, mutate easily. The Oshima cherry, which is an endemic species in Japan, tends to mutate into a double-flowered tree, grows quickly, has many large flowers, and has a strong fragrance. Due to these favorable characteristics, the Oshima cherry has been used as a base for many Sakura cultivars (called the Sato-zakura Group). Two such cultivars are the Yoshino cherry and Kanzan; Yoshino cherries are actively planted in Asian countries, and Kanzan is actively planted in Western countries.
Hanami: Flower viewing in Japan
Hanami is the many centuries-old practice of holding feasts or parties under blooming cherry (sakura) or plum (ume) trees. During the Nara period (710–794), when the custom is said to have begun, it was plum blossoms that people admired. By the Heian period (794–1185), however, cherry blossoms were attracting more attention, and hanami became synonymous with sakura. From then on, in both waka and haiku, "flowers" meant "cherry blossoms," as implied by one of Izumi Shikibu's poems. The custom was originally limited to the elite of the Imperial Court but soon spread to samurai society and, by the Edo period, to the common people as well. Tokugawa Yoshimune planted areas of cherry blossom trees to encourage this. Under the trees, people held cheerful feasts where they ate and drank sake.
Since a book written in the Heian period mentions a weeping cherry, one of the cultivars with pendulous branches, Prunus itosakura 'Pendula' (Shidare-zakura) is considered the oldest cultivar in Japan. In the Kamakura period, when the population increased in the southern Kantō region, the Oshima cherry, which originated in Izu Oshima Island, was brought to Honshu and cultivated there; it then made its way to the capital, Kyoto. The Sato-zakura Group first appeared during the Muromachi period.
Prunus itosakura (syn. Prunus subhirtella, Edo higan) is a wild species that grows slowly. However, it has the longest life span among cherry trees and is easy to grow into large trees. For this reason, there are many large, old specimens of this species in Japan. They are often regarded as sacred and have become landmarks that symbolize Shinto shrines, Buddhist temples, and local areas. For example, individual trees estimated to be around 2,000 years old, 1,500 years old, and 1,000 years old are famous for their age.
In the Edo period, various double-flowered cultivars were produced and planted on the banks of rivers, in Buddhist temples, in Shinto shrines, and in daimyo gardens in urban areas such as Edo; the common people living in urban areas could enjoy them. Books from the period record more than 200 varieties of cherry blossoms and mention many varieties that are currently known, such as 'Kanzan'. However, this situation was limited to urban areas, and the main objects of hanami across the country were still wild species such as and Oshima cherry.
Since Japan was modernized in the Meiji period, the Yoshino cherry has spread throughout Japan, and it has become the main object of hanami. Various other cultivars were cut down one after another during changes related to the rapid modernization of cities, such as the reclamation of waterways and the demolition of daimyo gardens. The gardener Takagi Magoemon and the village mayor of Kohoku Village, Shimizu Kengo, were concerned about this situation and preserved a few by planting a row of cherry trees, of various cultivars, along the Arakawa River bank. In Kyoto, Sano Toemon XIV, a gardener, collected various cultivars and propagated them. After World War II, these cultivars were inherited by the National Institute of Genetics, Tama Forest Science Garden and the Flower Association of Japan, and from the 1960s onwards were again used for hanami.
Every year, the Japan Meteorological Agency (JMA) and the public track the sakura zensen (cherry blossom front) as it moves northward up the archipelago with the approach of warmer weather, via nightly forecasts following the weather segment of news programs. Since 2009, tracking of the sakura zensen has been largely taken over by private forecasting companies, with the JMA switching to focus only on data collection rather than forecasting. The blossoming begins in Okinawa in January and typically reaches Kyoto and Tokyo at the beginning of April, though recent years have trended towards earlier flowerings near the end of March. It proceeds northward and into areas of higher altitude, arriving in Hokkaido a few weeks later. Japanese locals, in addition to overseas tourists, pay close attention to these forecasts.
Most Japanese schools and public buildings have cherry blossom trees planted outside of them. Since the fiscal and school years both begin in April, in many parts of Honshu the first day of work or school coincides with the cherry blossom season. However, while most cherry blossom trees bloom in the spring, there are also lesser-known winter cherry blossoms (fuyuzakura in Japanese) that bloom between October and December.
The Japan Cherry Blossom Association has published a list of Japan's Top 100 Cherry Blossom Spots (), with at least one location in every prefecture.
Blooming season
Many cherry species and cultivars bloom between March and April in the Northern Hemisphere. Wild cherry trees, even if they are the same species, differ genetically from one individual to another. Even if they are planted in the same area, there is some variation in the time when they reach full bloom. In contrast, cultivars are clones propagated by grafting or cutting, so each tree of the same cultivar planted in the same area will come into full bloom all at once due to their genetic similarity.
Some wild species, such as Edo higan and the cultivars developed from them, are in full bloom before the leaves open. Yoshino cherry became popular for cherry-blossom viewing because of these characteristics of simultaneous flowering and blooming before the leaves open; it also bears many flowers and grows into a large tree. Many cultivars of the Sato-zakura group, which were developed from complex interspecific hybrids based on Oshima cherry, are often used for ornamental purposes. They generally reach full bloom a few days to two weeks after Yoshino cherry does.
Impacts of climate change
The flowering time of cherry trees is thought to be affected by global warming and the heat island effect of urbanization. According to the record of full bloom dates of cherry trees in Kyoto, Japan, which has been recorded for about 1200 years, the time of full bloom was relatively stable from 812 to the 1800s. After that, the time of full bloom rapidly became earlier, and in 2021, the earliest full bloom date in 1200 years was recorded. The average peak bloom day in the 1850s was around April 17, but by the 2020s, it was April 5; the average temperature rose by about during this time. According to the record of full bloom dates of the Yoshino cherry in the Tidal Basin in Washington, D.C., the bloom date was April 5 in 1921, but it was March 31 in 2021. These records are consistent with the history of rapid increases in global mean temperature since the mid-1800s.
Japanese cherry trees grown in the Southern Hemisphere will bloom at a different time of the year. For example, in Australia, while the trees in the Cowra Japanese Garden bloom in late September to mid-October, the Sydney cherry blossom festival is in late August.
Climate change poses an escalating threat to sakura cultivars, which are highly susceptible to shifts in temperature and weather fluctuations. Warmer temperatures and earlier starts to springtime may disrupt the timing of their blooms, potentially leading to reduced flowering and diminished cultural significance.
In 2023, it was observed in China that cherry blossoms reached their peak bloom weeks earlier than they had a few decades ago. Similarly, data from Kyoto, Japan, and Washington, D.C., United States, also indicated that blooming periods are occurring earlier in those locations.
Although precise forecasting is generally challenging, AI predictions from the Japan Meteorological Agency have suggested that, without substantial efforts to rein in climate change, the Somei-Yoshino cherry tree variety could face significant challenges and even the risk of disappearing entirely from certain parts of Japan, including Miyazaki, Nagasaki, and Kagoshima prefectures in the Kyushu region, by 2100.
Symbolism in Japan
Cherry blossoms are a frequent topic in waka composition, where they commonly symbolize impermanence. Because they bloom briefly and then fall, cherry blossoms are considered an enduring metaphor for the ephemeral nature of life. Cherry blossoms frequently appear in Japanese art, manga, anime, and film, as well as stage set designs for musical performances. There is at least one popular folk song, originally meant for the shakuhachi (bamboo flute), titled "Sakura", in addition to several later pop songs bearing the name. The flower is also used on all manner of historical and contemporary consumer goods, including kimonos, stationery, and dishware.
Mono no aware
The traditional symbolism of cherry blossoms as a metaphor for the ephemeral nature of life is associated with the influence of Shinto, embodied in the concept of (the pathos of things). The connection between cherry blossoms and mono no aware dates back to 18th-century scholar Motoori Norinaga. The transience of the blossoms, their beauty, and their volatility have often been associated with mortality and the graceful and ready acceptance of destiny and karma.
Nationalism and militarism
The Sakurakai, or Cherry Blossom Society, was the name chosen by young officers within the Imperial Japanese Army in September 1930 for their secret society established to reorganize the state along totalitarian militaristic lines, via a military coup d'état if necessary.
During World War II, cherry blossoms were used as a symbol to motivate the Japanese people and stoke nationalism and militarism. The Japanese proverb hana wa sakuragi, hito wa bushi ("the best blossom is the cherry blossom, the best man is a warrior") was evoked in the Imperial Japanese Army as a motivation during the war. Even before the war, cherry blossoms were used in propaganda to inspire the "Japanese spirit", as in the "Song of Young Japan", exulting in "warriors" who were "ready like the myriad cherry blossoms to scatter". In 1894, Sasaki Nobutsuna composed a poem, Shina seibatsu no uta (The Song of the Conquest of the Chinese) to coincide with the First Sino-Japanese War. The poem compares falling cherry blossoms to the sacrifice of Japanese soldiers who fall in battles for their country and emperor. In 1932, Akiko Yosano's poetry urged Japanese soldiers to endure suffering in China and compared the dead soldiers to cherry blossoms. Arguments that the plans for the Battle of Leyte Gulf, involving all Japanese ships, would expose Japan to danger if they failed were countered with the plea that the Navy be permitted to "bloom as flowers of death". The last message of the forces on Peleliu was "Sakura, Sakura". Japanese pilots would paint sakura flowers on the sides of their planes before embarking on a suicide mission, or even take branches of the trees with them on their missions. A cherry blossom painted on the side of a bomber symbolized the intensity and ephemerality of life; in this way, falling cherry petals came to represent the sacrifice of youth in suicide missions to honor the emperor. The first kamikaze unit had a subunit called Yamazakura, or wild cherry blossom. The Japanese government encouraged the people to believe that the souls of downed warriors were reincarnated in the blossoms.
Artistic and popular uses
Cherry blossoms have been used symbolically in Japanese sports; the Japan national rugby union team has used the flower as an emblem on its uniforms since the team's first international matches in the 1930s, depicted as a "bud, half-open and full-bloomed". The team is known as the "Brave Blossoms" (), and has had their current logo since 1952. The cherry blossom is also seen in the logo of the Japan Cricket Association and the Japan national American football team.
Cherry blossoms are a prevalent symbol in irezumi, the traditional art of Japanese tattoos. In this art form, cherry blossoms are often combined with other classic Japanese symbols like koi fish, dragons, or tigers.
The cherry blossom remains symbolic today. It was used for the Tokyo 2020 Paralympics mascot, Someity. It is also a common way to indicate the start of spring, such as in the Animal Crossing series of video games, where many of the game's trees are flowering cherries.
Cultivars
Japan has a wide diversity of cherry trees, including hundreds of cultivars. By one classification method, there are more than 600 cultivars in Japan, while the Tokyo Shimbun claims that there are 800. According to the results of DNA analysis of 215 cultivars carried out by Japan's Forestry and Forest Products Research Institute in 2014, many of the cultivars that have spread around the world are hybrids produced by crossing Oshima cherry and with various wild species. Among these cultivars, the Sato-zakura Group and many other cultivars have a large number of petals, and the representative cultivar is Prunus serrulata 'Kanzan'.
The following species, hybrids, and varieties are used for Sakura cultivars:
Prunus apetala (Clove Cherry)
Prunus campanulata
Prunus × furuseana (P. incisa × P. jamasakura)
Prunus × incam (P. incisa × P. campanulata)
Prunus incisa var. incisa
Prunus incisa var. kinkiensis
Prunus × introrsa
Prunus itosakura (Prunus subhirtella, Prunus pendula)
Prunus × kanzakura (P. campanulata × P. jamasakura and P. campanulata × P. speciosa)
Prunus leveilleana (Prunus verecunda)
Prunus × miyoshii
Prunus nipponica
Prunus padus
Prunus × parvifolia (P. incisa × P. speciosa)
Prunus pseudocerasus
Prunus × sacra (P. itosakura × P. jamasakura)
Prunus sargentii
Prunus serrulata var. lannesiana, Prunus lannesiana (Prunus Sato-zakura group. Complex interspecific hybrids based on Prunus speciosa.)
Prunus × sieboldii
Prunus speciosa
Prunus × subhirtella (P. incisa × P. itosakura)
Prunus × syodoi
Prunus × tajimensis
Prunus × takenakae
Prunus × yedoensis (P. itosakura × P. speciosa)
The most popular cherry blossom cultivar in Japan is 'Somei-yoshino' (Yoshino cherry). Its flowers are nearly pure white, tinged with the palest pink, especially near the stem. They bloom and usually fall within a week before the leaves come out. Therefore, the trees look nearly white from top to bottom. The cultivar takes its name from the village of Somei, which is now part of Toshima in Tokyo. It was developed in the mid- to late-19th century, at the end of the Edo period and the beginning of the Meiji period. The 'Somei-yoshino' is so widely associated with cherry blossoms that jidaigeki and other works of fiction often show the trees being cultivated in the Edo period or earlier, although such depictions are anachronisms.
'Kawazu-zakura' is a representative cultivar that blooms before the arrival of spring. It is a natural hybrid between the Oshima cherry and Prunus campanulata and is characterized by deep pink petals. Wild cherry trees usually do not bloom in cold seasons because they cannot produce offspring if they bloom before spring, when pollinating insects become active. However, it is thought that 'Kawazu-zakura' blooms earlier because Prunus campanulata from Okinawa, which did not originally grow naturally in Honshu, was crossed with the Oshima cherry. In wild species, flowering before spring is a disadvantageous feature of selection; in cultivars such as 'Kawazu-zakura', early flowering and flower characteristics are preferred, and they are propagated by grafting.
Cherry trees are generally classified by species and cultivar, but in Japan they are also classified using names based on the characteristics of the flowers and trees. Cherry trees with more petals than the ordinary five are classified as yae-zakura (double-flowered sakura), and those with drooping branches are classified as shidare-zakura, or weeping cherry. Most yae-zakura and shidare-zakura are cultivars. Famous shidare-zakura cultivars include 'Shidare-zakura', 'Beni-shidare', and 'Yae-beni-shidare', all derived from the wild species Prunus itosakura (syn, Prunus subhirtella or Edo higan).
The color of cherry blossoms is generally a gradation between white and red, but there are cultivars with unusual colors such as yellow and green. The representative cultivars of these colors were developed in the Edo period of Japan.
In 2007, Riken produced a new cultivar named 'Nishina zao' by irradiating cherry trees with a heavy-ion beam. This cultivar is a mutation of a green-petaled cultivar; it is characterized by its pale yellow-green-white flowers when it blooms and pale yellow-pink flowers when they fall. Riken produced the cultivars 'Nishina otome' (blooms in both spring and autumn, or year-round in a greenhouse), 'Nishina haruka' (larger flowers), and 'Nishina komachi' ('lantern-like' flowers that remain partially closed) in the same way.
All wild cherry trees produce small, unpalatable fruit or edible cherries; however, some cultivars have structural modifications that render the plant unable to reproduce naturally. For example, certain cultivars that originated from the Oshima cherry have a modified pistil that develops into a leaf-like structure, and can only be propagated by artificial methods such as grafting and cutting. Cherry trees grown for their fruit are generally cultivars of the related species Prunus avium, Prunus cerasus, and Prunus fruticosa.
Cultivation by country
In the present day, ornamental cherry blossom trees are distributed and cultivated worldwide. While flowering cherry trees were historically present in Europe, North America, the Philippines, and China, the practice of cultivating ornamental cherry trees was centered in Japan, and many of the cultivars planted worldwide, such as those of Prunus × yedoensis, have been developed from Japanese hybrids.
The global distribution of ornamental cherry trees, along with flower viewing festivals or hanami, largely started in the early 20th century, often as gifts from Japan. However, some regions have historically cultivated their own native species of flowering cherry trees, a notable variety of which is the Himalayan wild cherry tree Prunus cerasoides.
The origin of wild cherry species
The wild Himalayan cherry, Prunus cerasoides, is native to the Himalayan region of Asia, and is common in countries such as Nepal, India, Bhutan, and Myanmar, where it is also cultivated.
In 1975, three Japanese researchers proposed a theory that cherry trees originated in the Himalayan region and spread eastwards to reach Japan at a time before human civilisation, when the Japanese archipelago was connected to the Eurasian continent, and that cherry species differentiation was actively promoted in Japan.
According to Masataka Somego, a professor at Tokyo University of Agriculture, cherry trees originated 10 million years ago in what is now Nepal and later differentiated in the Japanese archipelago, giving rise to species unique to Japan.
According to the Kazusa DNA Research Institute, detailed DNA research has shown that the Prunus itosakura and the Prunus speciosa, which is endemic to Japan, differentiated into independent species 5.52 million years ago.
On the other hand, according to Ko Shimamoto, a professor at Nara Institute of Science and Technology, modern theories based on detailed DNA research reject the theory that the Himalayan cherry tree is the root of the Japanese cherry tree, and the ancestor of the cherry tree is estimated to be a plant belonging to the Prunus grayana.
According to HuffPost, there is a widely held consensus that the first cherry blossoms originated somewhere in the Himalayas in Eurasia, but scholars posit that the blossoms may have reached Japan several thousand years ago. In Japan, centuries of hybridization have brought about more than 300 varieties of the cherry blossom.
Culinary use
Cherry blossoms and leaves are edible, and both are used as food ingredients in Japan:
The blossoms are pickled in salt and umezu (ume vinegar), and used for coaxing out flavor in wagashi, a traditional Japanese confectionery, or anpan, a Japanese sweet bun most-commonly filled with red bean paste. The pickling method, known as , is said to date back to the end of the Edo period, though the general method of pickling vegetables in salt to produce tsukemono has been known as early as the Jōmon period.
Salt-pickled blossoms in hot water are called sakurayu and drunk at festive events like weddings in place of green tea.
The leaves are pickled in salted water and used for sakuramochi.
Cherry blossoms are used as a flavoring botanical in Japanese Roku gin.
Toxicity
Cherry leaves and blossoms contain coumarin, which is potentially hepatotoxic and is banned in high doses by the Food and Drug Administration. However, coumarin has a desirable vanilla-like scent, and the salt curing process used prior to most culinary applications, which involves washing, drying, and salting the blossoms or leaves for a full day, reduces the concentration of coumarin to acceptable levels while preserving its scent. Coumarin may also be isolated from the plant for use in perfumes, pipe tobacco, or as an adulterant in vanilla flavorings, though the tonka bean is a more common natural source of this chemical.
Cherry seeds and bark contain amygdalin and should not be eaten.
| Biology and health sciences | Rosales | Plants |
328579 | https://en.wikipedia.org/wiki/Steroid%20hormone | Steroid hormone | A steroid hormone is a steroid that acts as a hormone. Steroid hormones can be grouped into two classes: corticosteroids (typically made in the adrenal cortex, hence cortico-) and sex steroids (typically made in the gonads or placenta). Within those two classes are five types according to the receptors to which they bind: glucocorticoids and mineralocorticoids (both corticosteroids) and androgens, estrogens, and progestogens (sex steroids). Vitamin D derivatives are a sixth closely related hormone system with homologous receptors. They have some of the characteristics of true steroids as receptor ligands.
Steroid hormones help control metabolism, inflammation, immune functions, salt and water balance, development of sexual characteristics, and the ability to withstand injury and illness. The term steroid describes both hormones produced by the body and artificially produced medications that duplicate the action of the naturally occurring steroids.
Synthesis
The natural steroid hormones are generally synthesized from cholesterol in the gonads and adrenal glands. These forms of hormones are lipids. They can pass through the cell membrane as they are fat-soluble, and then bind to steroid hormone receptors (which may be nuclear or cytosolic depending on the steroid hormone) to bring about changes within the cell. Steroid hormones are generally carried in the blood, bound to specific carrier proteins such as sex hormone-binding globulin or corticosteroid-binding globulin. Further conversions and catabolism occur in the liver, in other "peripheral" tissues, and in the target tissues.
Synthetic steroids and sterols
A variety of synthetic steroids and sterols have also been contrived. Most are steroids, but some nonsteroidal molecules can interact with the steroid receptors because of a similarity of shape. Some synthetic steroids are weaker or stronger than the natural steroids whose receptors they activate.
Some examples of synthetic steroid hormones:
Glucocorticoids: alclometasone, prednisone, dexamethasone, triamcinolone, cortisone
Mineralocorticoid: fludrocortisone
Vitamin D: dihydrotachysterol
Androgens: oxandrolone, oxabolone, nandrolone (also known as anabolic-androgenic steroids or simply anabolic steroids)
Oestrogens: diethylstilbestrol (DES) and ethinyl estradiol (EE)
Progestins: norethisterone, medroxyprogesterone acetate, hydroxyprogesterone caproate.
Some steroid antagonists:
Androgen: cyproterone acetate
Progestins: mifepristone, gestrinone
Transport
Steroid hormones are transported through the blood by being bound to carrier proteins—serum proteins that bind them and increase the hormones' solubility in water. Some examples are sex hormone-binding globulin (SHBG), corticosteroid-binding globulin, and albumin. Most studies say that hormones can only affect cells when they are not bound by serum proteins. In order to be active, steroid hormones must free themselves from their blood-solubilizing proteins and either bind to extracellular receptors, or passively cross the cell membrane and bind to nuclear receptors. This idea is known as the free hormone hypothesis. This idea is shown in Figure 1 to the right.
One study has found that these steroid-carrier complexes are bound by megalin, a membrane receptor, and are then taken into cells via endocytosis. One possible pathway is that once inside the cell these complexes are taken to the lysosome, where the carrier protein is degraded and the steroid hormone is released into the cytoplasm of the target cell. The hormone then follows a genomic pathway of action. This process is shown in Figure 2 to the right. The role of endocytosis in steroid hormone transport is not well understood and is under further investigation.
In order for steroid hormones to cross the lipid bilayer of cells, they must overcome energetic barriers that would prevent their entering or exiting the membrane. Gibbs free energy is an important concept here. These hormones, which are all derived from cholesterol, have hydrophilic functional groups at either end and hydrophobic carbon backbones. When steroid hormones enter a membrane, free energy barriers exist as the functional groups pass into the hydrophobic interior of the membrane, but it is energetically favorable for the hydrophobic core of these hormones to enter lipid bilayers. These energy barriers and wells are reversed for hormones exiting membranes. Steroid hormones easily enter and exit the membrane at physiologic conditions. They have been shown experimentally to cross membranes at a rate of about 20 μm/s, depending on the hormone.
Though it is energetically more favorable for hormones to be in the membrane than in the ECF or ICF, they do in fact leave the membrane once they have entered it. This is an important consideration because cholesterol—the precursor to all steroid hormones—does not leave the membrane once it has embedded itself inside. The difference between cholesterol and these hormones is that cholesterol sits in a much deeper negative Gibbs free energy well once inside the membrane, as compared to these hormones. This is because the aliphatic tail on cholesterol has a very favorable interaction with the interior of lipid bilayers.
Mechanisms of action and effects
There are many different mechanisms through which steroid hormones affect their target cells. All of these different pathways can be classified as having either a genomic effect or a non-genomic effect. Genomic pathways are slow and result in altering transcription levels of certain proteins in the cell; non-genomic pathways are much faster.
Genomic pathways
The first identified mechanisms of steroid hormone action were the genomic effects. In this pathway, the free hormones first pass through the cell membrane because they are fat soluble. In the cytoplasm, the steroid may or may not undergo an enzyme-mediated alteration such as reduction, hydroxylation, or aromatization. Then the steroid binds to a specific steroid hormone receptor, also known as a nuclear receptor, which is a large metalloprotein. Upon steroid binding, many kinds of steroid receptors dimerize: two receptor subunits join together to form one functional DNA-binding unit that can enter the cell nucleus. Once in the nucleus, the steroid-receptor ligand complex binds to specific DNA sequences and induces transcription of its target genes.
Non-genomic pathways
Because non-genomic pathways include any mechanism that is not a genomic effect, there are various non-genomic pathways. However, all of these pathways are mediated by some type of steroid hormone receptor found at the plasma membrane. Ion channels, transporters, G-protein coupled receptors (GPCR), and membrane fluidity have all been shown to be affected by steroid hormones. Of these, GPCR linked proteins are the most common. For more information on these proteins and pathways, visit the steroid hormone receptor page.
| Biology and health sciences | Steroids | Biology |
328581 | https://en.wikipedia.org/wiki/River%20shark | River shark | Glyphis is a genus in the family Carcharhinidae, commonly known as the river sharks. They live in rivers or coastal regions in and around south-east Asia, Africa and parts of Australia.
Taxonomy
This genus contains only three extant species; further species could easily remain undiscovered due to their secretive habits. This genus was thought to contain five different species, but recent studies on molecular data revealed that the species Glyphis gangeticus has an irregular distribution in the Indo-West Pacific region. The genus Glyphis is closest to the genus Lamiopsis.
Species
The recognized species in this genus are:
Glyphis fowlerae Compagno, White & Cavanagh, 2010 (Borneo river shark) synonym of G. gangeticus
Glyphis gangeticus (J. P. Müller & Henle, 1839) (Ganges shark)
Glyphis garricki L. J. V. Compagno, W. T. White & Last, 2008 (northern river shark)
Glyphis glyphis (J. P. Müller & Henle, 1839) (speartooth shark)
Glyphis hastalis Agassiz, 1843
Glyphis pagoda (Noetling, 1901)
Glyphis siamensis (Steindachner, 1896) (Irrawaddy river shark) synonym of G. gangeticus
Distribution and habitat
Their precise geographic range is uncertain, but the known species are documented in parts of South Asia, Southeast Asia, New Guinea and Australia. Of the three currently described species, the Ganges shark is restricted to freshwater, while the northern river shark and the speartooth shark are found in coastal marine waters as well. While the bull shark (Carcharhinus leucas) is sometimes called both the river shark and the Ganges shark, it should not be confused with the river sharks of the genus Glyphis. River sharks evolved to bear their offspring in freshwater, making these waters safe for the young to roam, while other sharks survive in saltwater.
Conservation
River sharks remain very poorly known to researchers. They were thought to be extinct until the end of the 20th century, when small populations were discovered in Borneo and northern Australia. They are now considered critically endangered, and because they are so poorly studied, very little is known about their populations and life history.
Glyphis gangeticus uses the Ganges River as a nursery ground and the birthplace of many of its offspring; however, the population has been severely diminished owing to a long history of fishing and other pollution-related issues in the northern Arabian Sea. Additionally, India, through which the Ganges flows, is reported to be one of the top three shark and ray capturers in the world, accounting for up to nine percent of reported global landings (Jabado et al., 2018). The species is also reported from the Zambezi River in Africa. Individuals have been found in nine different tidal areas, which consist of muddy waters with low salinity, and their occurrence in relation to coastal marine waters indicates that they are usually born around October.
Images
| Biology and health sciences | Sharks | Animals |
328709 | https://en.wikipedia.org/wiki/Darwin%27s%20finches | Darwin's finches | Darwin's finches (also known as the Galápagos finches) are a group of about 18 species of passerine birds. They are well known for being a classic example of adaptive radiation and for their remarkable diversity in beak form and function. They are often classified as the subfamily Geospizinae or tribe Geospizini. They belong to the tanager family and are not closely related to the true finches. The closest known relative of the Galápagos finches is the South American dull-coloured grassquit (Asemospiza obscura). They were first collected when the second voyage of the Beagle visited the Galápagos Islands, with Charles Darwin on board as a gentleman naturalist. Apart from the Cocos finch, which is from Cocos Island, the others are found only on the Galápagos Islands.
The term "Darwin's finches" was first applied by Percy Lowe in 1936, and popularised in 1947 by David Lack in his book Darwin's Finches. Lack based his analysis on the large collection of museum specimens collected by the 1905–06 Galápagos expedition of the California Academy of Sciences, to whom Lack dedicated his 1947 book. The birds vary in size from and weigh between . The smallest are the warbler-finches and the largest is the vegetarian finch. The most important differences between species are in the size and shape of their beaks, which are highly adapted to different food sources. Food availability was different among the islands of the Galapagos and could also change dramatically due to natural events such as droughts. The birds are all dull-coloured. They are thought to have evolved from a single finch species that came to the islands more than a million years ago.
Darwin's theory
During the survey voyage of HMS Beagle, Darwin was unaware of the significance of the birds of the Galápagos. He had learned how to preserve bird specimens from John Edmonstone while at the University of Edinburgh and had been keen on shooting, but he had no expertise in ornithology and by this stage of the voyage concentrated mainly on geology. In Galápagos he mostly left bird shooting to his servant Syms Covington. Nonetheless, these birds were to play an important part in the inception of Darwin's theory of evolution by natural selection.
On the Galápagos Islands and afterward, Darwin thought in terms of "centres of creation" and rejected ideas concerning the transmutation of species. From Henslow's teaching, he was interested in the geographical distribution of species, particularly links between species on oceanic islands and on nearby continents. On Chatham Island, he recorded that a mockingbird was similar to those he had seen in Chile, and after finding a different one on Charles Island he carefully noted where mockingbirds had been caught. In contrast, he paid little attention to the finches. When examining his specimens on the way to Tahiti, Darwin noted that all of the mockingbirds on Charles Island were of one species, those from Albemarle of another, and those from James and Chatham Islands of a third. As they sailed home about nine months later, this, together with other facts, including what he had heard about Galápagos tortoises, made him wonder about the stability of species.
Following his return from the voyage Darwin presented the finches to the Zoological Society of London on 4 January 1837, along with other mammal and bird specimens that he had collected. The bird specimens, including the finches, were given to John Gould, the famous English ornithologist, for identification. Gould set aside his paying work and at the next meeting, on 10 January, reported that the birds from the Galápagos Islands that Darwin had thought were blackbirds, "gross-beaks" and finches were actually "a series of ground Finches which are so peculiar [as to form] an entirely new group, containing 12 species." This story made the newspapers.
Darwin had been in Cambridge at that time. In early March, he met Gould again and for the first time received a full report on the findings, including the point that his Galápagos "wren" was another closely allied species of finch. The mockingbirds that Darwin had labelled by island were separate species rather than just varieties. Gould found more species than Darwin had expected, and concluded that 25 of the 26 land birds were new and distinct forms, found nowhere else in the world but closely allied to those found on the South American continent. Darwin now saw that, if the finch species were confined to individual islands, like the mockingbirds, this would help to account for the number of species on the islands, and he sought information from others on the expedition. Specimens had also been collected by Captain Robert FitzRoy, FitzRoy's steward Harry Fuller, and Darwin's servant Covington, who had labelled them by island. From these, Darwin tried to reconstruct the locations from where he had collected his own specimens. The conclusions supported his idea of the transmutation of species.
Text from The Voyage of the Beagle
At the time that he rewrote his diary for publication as Journal and Remarks (later The Voyage of the Beagle), he described Gould's findings on the number of birds, noting that "Although the species are thus peculiar to the archipelago, yet nearly all in their general structure, habits, colour of feathers, and even tone of voice, are strictly American". In the first edition of The Voyage of the Beagle, Darwin said that "It is very remarkable that a nearly perfect gradation of structure in this one group can be traced in the form of the beak, from one exceeding in dimensions that of the largest gros-beak, to another differing but little from that of a warbler".
By the time the first edition was published, the development of Darwin's theory of natural selection was in progress. For the 1845 second edition of The Voyage (now titled Journal of Researches), Darwin added more detail about the beaks of the birds, and two closing sentences which reflected his changed ideas: "Seeing this gradation and diversity of structure in one small, intimately related group of birds, one might really fancy that from an original paucity of birds in this archipelago, one species had been taken and modified for different ends."
Text from On the Origin of Species
Darwin discussed the divergence of various species of birds in the Galápagos more explicitly in his chapter on geographical distribution in On the Origin of Species; however, he does not single out the finches:
Polymorphism in Darwin's finches
Whereas Darwin spent just five weeks in the Galápagos, and David Lack spent three months, Peter and Rosemary Grant and their colleagues have made research trips to the Galápagos for about 30 years, particularly studying Darwin's finches.
Males are dimorphic in song type: songs A and B are quite distinct. Also, males with song A have shorter bills than B males, another clear difference. With these beaks, males are able to feed differently on their favourite cactus, the prickly pear Opuntia. Those with long beaks are able to punch holes in the cactus fruit and eat the fleshy aril pulp, which surrounds the seeds, whereas those with shorter beaks tear apart the cactus base and eat the pulp and any insect larvae and pupae (both groups eat flowers and buds). This dimorphism clearly maximises their feeding opportunities during the non-breeding season when food is scarce.
If the population is panmictic, then Geospiza conirostris exhibits a balanced genetic polymorphism and not, as originally supposed, a case of nascent sympatric speciation. The selection maintaining the polymorphism maximises the species' niche by expanding its feeding opportunity. The genetics of this situation cannot be clarified in the absence of a detailed breeding program, but two loci with linkage disequilibrium is a possibility.
Another interesting dimorphism is for the bills of young finches, which are either 'pink' or 'yellow'. All species of Darwin's finches exhibit this morphism, which lasts for two months. No interpretation of this phenomenon is known.
Taxonomy
Family
For some decades, taxonomists have placed these birds in the family Emberizidae along with the New World sparrows and Old World buntings. However, the Sibley–Ahlquist taxonomy puts Darwin's finches with the tanagers (Monroe and Sibley 1993), and at least one recent work follows that example (Burns and Skutch 2003). The American Ornithologists' Union, in its North American checklist, places the Cocos finch in the Emberizidae, but with an asterisk indicating that the placement is probably wrong (AOU 1998–2006); in its tentative South American check-list, the Galápagos species are incertae sedis, of uncertain place (Remsen et al. 2007).
Species
Genus Geospiza
Genovesa ground finch (Geospiza acutirostris)
Española cactus finch (Geospiza conirostris)
Sharp-beaked ground finch (Geospiza difficilis)
Vampire finch (Geospiza septentrionalis)
Medium ground finch (Geospiza fortis)
Genovesa cactus finch (Geospiza propinqua)
Small ground finch (Geospiza fuliginosa)
Large ground finch (Geospiza magnirostris)
Common cactus finch (Geospiza scandens)
Big Bird (not yet formally named): In 1981, a hybrid male arrived at Daphne Major island. Its mating with local Galápagos finches (specifically G. fortis) has produced a new "big bird" population that can exploit previously unexploited food due to its larger size. They do not breed with the other species on the island, as the females do not recognize the songs of the new males. Genetic evidence shows that currently, after several generations (a time scale that suggests shorter speciation events could have occurred previously), it lives in complete reproductive isolation from the native species. According to professor Leif Andersson of Uppsala University, a taxonomist not aware of its history would consider it a distinct species.
Genus Camarhynchus
Large tree finch (Camarhynchus psittacula)
Medium tree finch (Camarhynchus pauper)
Small tree finch (Camarhynchus parvulus)
Woodpecker finch (Camarhynchus pallidus) – sometimes separated in Cactospiza
Mangrove finch (Camarhynchus heliobates)
Genus Certhidea
Green warbler-finch (Certhidea olivacea)
Grey warbler-finch (Certhidea fusca)
Genus Pinaroloxias
Cocos finch (Pinaroloxias inornata)
Genus Platyspiza
Vegetarian finch (Platyspiza crassirostris)
Modern research
A long-term study carried out for more than 40 years by the Princeton University researchers Peter and Rosemary Grant has documented evolutionary changes in beak size affected by El Niño/La Niña cycles in the Pacific.
Molecular basis of beak evolution
Developmental research in 2004 found that bone morphogenetic protein 4 (BMP4), and its differential expression during development, resulted in variation of beak size and shape among finches. BMP4 acts in the developing embryo to lay down skeletal features, including making the beak stronger. The same group showed that the development of the different beak shapes in Darwin's finches is also influenced by slightly different timing and spatial expressions of a gene called calmodulin (CaM). Calmodulin acts in a similar way to BMP4, affecting some of the features of beak growth like making them long and pointy. The authors suggest that changes in the temporal and spatial expression of these two factors are possible developmental controls of beak morphology. In a recent study, genome sequencing revealed that a 240-kilobase haplotype encompassing the ALX1 gene, which encodes a transcription factor affecting craniofacial development, is strongly associated with beak shape diversity. Moreover, these changes in beak size have also altered vocalizations in Darwin's finches.
Further research from 2016, in which genomes from each of the Darwin's finch species were sequenced, established that a single nucleotide polymorphism in the high mobility AT-hook 2 gene (HMGA2) locus is significantly associated with variation in beak size (Lamichhaney et al. 2016). HMGA2 codes for a transcription factor which in humans has been associated with variation in height, craniofacial distances, and primary tooth eruption.
In an analysis of the genomes of individuals from three Geospiza ground finch species found in sympatry (G. fortis, G. fuliginosa, G. magnirostris), 11 out of 32,569 SNPs were identified as representing four independent groups of statistically linked SNPs that together explained 83.6% of the variance in beak size (Chaves 2016). What this means is that only a small fraction of the genome in Darwin's finches is responsible for variation in beak morphology which is consistent with the rapid changes in beak form in response to the varying environments on the Galapagos Islands.
| Biology and health sciences | Passerida | Animals |
328850 | https://en.wikipedia.org/wiki/Vitreous%20body | Vitreous body | The vitreous body (vitreous meaning "glass-like") is the clear gel that fills the space between the lens and the retina of the eyeball (the vitreous chamber) in humans and other vertebrates. It is often referred to as the vitreous humor (also spelled humour), from the Latin for "liquid", or simply "the vitreous". Vitreous fluid or "liquid vitreous" is the liquid component of the vitreous gel, found after a vitreous detachment. It is not to be confused with the aqueous humor, the other fluid in the eye that is found between the cornea and lens.
Structure
The vitreous humor is a transparent, colorless, gelatinous mass that fills the space in the eye between the lens and the retina. It is surrounded by a layer of collagen called the vitreous membrane (or hyaloid membrane or vitreous cortex) separating it from the rest of the eye. It makes up four-fifths of the volume of the eyeball. The vitreous humour is fluid-like near the centre, and gel-like near the edges.
The vitreous humour is in contact with the vitreous membrane overlying the retina. Collagen fibrils attach the vitreous at the optic nerve disc, at the ora serrata (where the retina ends anteriorly), and at the Wieger band on the dorsal side of the lens. The vitreous also firmly attaches to the lens capsule, retinal vessels, and the macula, the area of the retina which provides fine detail and central vision.
Aquaporin 4 in Müller cells in rats transports water to the vitreous body.
Anatomical features
The vitreous has many anatomical landmarks, including the hyaloid membrane, Berger's space, space of Erggelet, Wieger's ligament, Cloquet's canal and the space of Martegiani.
Surface features:
Patella fossa: Shallow saucer-like concavity anteriorly, in which the lens rests, separated by Berger's space
Hyaloideocapsular ligament (Wieger's ligament): Circular thickening of vitreous 8–9 mm in diameter, delineates the patella fossa
Anterior hyaloid: Vitreous surface anterior to ora serrata. Continuous with and invests in the zonular fibres, and extends forward between the ciliary processes
Vitreous base: Denser cortical area of vitreous. Firmly attached to the posterior 2mm of the pars plana, and the anterior 2–4mm of retina
Posterior hyaloid surface: Closely applied to retinal internal limiting membrane. Firm attachment sites: Along blood vessels and at sites of retinal degeneration
Space of Martegiani: A funnel-shaped space overlying the optic disc with a condensed edge
Cloquet's canal (hyaloid canal): A 1–2 mm wide canal within the vitreous, running from the space of Martegiani to Berger's space along an S-shaped course mainly below the horizontal.
Mittendorf's dot: A small circular opacity on the posterior lens capsule, which represents the site of attachment of the hyaloid artery before it subsequently regressed.
Bergmeister's papilla: A tuft of fibrous tissue at the optic disc, which represents a remnant of the sheath associated with the hyaloid artery before it subsequently regressed.
Internal structures of the vitreous
The vitreous body at birth is homogenous with a finely striated pattern.
With early aging the vitreous develops narrow transvitreal "channels".
The cortex is denser than the centre with development.
From adolescence, vitreous tracts form from anterior to posterior.
These vitreous tracts are fine sheet-like condensations of vitreous.
Named tracts
Retrolental tract: Extends posteriorly from the hyaloideocapsular ligament into central vitreous
Coronary tract: External to the retrolental tract, extending posteriorly from a circular zone overlying the posterior third of the ciliary processes
Median tract: Extends back from a circular zone external to the coronary tract, at the anterior margin of the vitreous base
Preretinal tract: Extends back from the ora serrata and vitreous base
Biochemical properties
Its composition is similar to that of the cornea, but the vitreous contains very few cells. These are mostly phagocytes, which remove unwanted cellular debris from the visual field, and hyalocytes, which turn over the hyaluronan.
The vitreous humour contains no blood vessels, and 98–99% of its volume is water. In addition to water, the vitreous consists of salts, sugars, vitrosin (a type of collagen), a network of collagen type II fibrils with glycosaminoglycan, hyaluronan, opticin, and a wide array of proteins. Despite having little solid matter, the fluid is substantial enough to fill the eye and give it its spherical shape. This contrasts with the aqueous humour, which is more fluid, and with the lens, which is elastic in nature and tightly packed with cells. The vitreous humour has a viscosity two to four times that of water, giving it a gelatinous consistency. It has a refractive index of 1.336.
Development
The vitreous fluid is not present at birth (the eye being filled with only the gel-like vitreous body); it appears after about age 4–5 and increases in volume thereafter.
Produced by cells in the non-pigmented portion of the ciliary body, the vitreous humour is derived from embryonic mesenchyme cells, which degenerate after birth.
The nature and composition of the vitreous humour changes over the course of life. In adolescence, the vitreous cortex becomes more dense and vitreous tracts develop; and in adulthood, the tracts become better defined and sinuous. Central vitreous liquefies, fibrillar degeneration occurs, and the tracts break up (syneresis).
Coarse strands develop with ageing. The gel volume decreases with age, and the liquid volume increases. The cortex may disappear at sites, allowing liquid vitreous to extrude adjacently into the potential space between vitreous cortex and retina (vitreous detachment).
Clinical significance
Injury
If the vitreous pulls away from the retina, it is known as a vitreous detachment. As the human body ages, the vitreous often liquefies and may collapse. This is more likely to occur, and occurs much earlier, in eyes that are nearsighted (myopia). It can also occur after injuries to the eye or inflammation in the eye (uveitis).
The collagen fibres of the vitreous are held apart by electrical charges. With aging, these charges tend to reduce, and the fibres may clump together. Similarly, the gel may liquefy, a condition known as synaeresis, allowing cells and other organic clusters to float freely within the vitreous humour. These allow floaters which are perceived in the visual field as spots or fibrous strands. Floaters are generally harmless, but the sudden onset of recurring floaters may signify a posterior vitreous detachment or other diseases of the eye.
Posterior vitreous detachment: Once liquid vitreous enters the sub-hyaloid space between the vitreous cortex and the retina, it may strip the vitreous cortex off the retina with each eye movement (see Saccade).
Postmortem and forensic
After death, the vitreous resists putrefaction longer than other body fluids. In the hours, days and weeks after death, the vitreous potassium concentration rises at a sufficiently predictable rate that vitreous potassium levels are frequently used to estimate the time since death (post-mortem interval) of a corpse.
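Several published regression formulas relate vitreous potassium to the post-mortem interval. As a minimal sketch of how such a relationship is applied, the snippet below uses Sturner's commonly cited linear regression; the exact coefficients vary between studies and are an assumption here rather than something taken from this article.

```python
def pmi_hours_from_vitreous_potassium(k_mmol_per_l: float) -> float:
    """Estimate the post-mortem interval (PMI, in hours) from vitreous potassium.

    Uses Sturner's linear regression, PMI = 7.14 * [K+] - 39.1, one of several
    published formulas; coefficients differ between studies, so this is an
    illustrative sketch rather than a validated forensic tool.
    """
    return 7.14 * k_mmol_per_l - 39.1


# Example: a vitreous potassium concentration of 8 mmol/L corresponds to a
# PMI of roughly 18 hours under this particular regression.
print(f"{pmi_hours_from_vitreous_potassium(8.0):.1f} hours")
```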
The metabolic exchange and equilibration between systemic circulation and vitreous humour is so slow that vitreous humour is sometimes the fluid of choice for postmortem analysis of glucose levels or substances which would be more rapidly diffused, degraded, excreted or metabolized from the general circulation.
According to Jewish religion, extracting the vitreous fluid for forensic chemical analysis is preferred to blood analysis (in case a forensic or post-mortem toxicology test is deemed necessary). This avoids the loss of even a few droplets of blood from the body prior to burial.
Additional images
| Biology and health sciences | Visual system | Biology |
988372 | https://en.wikipedia.org/wiki/Far%20side%20of%20the%20Moon | Far side of the Moon | The far side of the Moon is the lunar hemisphere that always faces away from Earth, opposite to the near side, because of synchronous rotation in the Moon's orbit. Compared to the near side, the far side's terrain is rugged, with a multitude of impact craters and relatively few flat and dark lunar maria ("seas"), giving it an appearance closer to other barren places in the Solar System such as Mercury and Callisto. It has one of the largest craters in the Solar System, the South Pole–Aitken basin. The hemisphere has sometimes been called the "Dark side of the Moon", where "dark" means "unknown" rather than "lacking sunlight": each location on the Moon experiences two weeks of sunlight while the opposite location experiences two weeks of night.
About 18 percent of the far side is occasionally visible from Earth due to libration, an apparent oscillation of the Moon as seen from Earth. The remaining 82 percent remained unobserved until 1959, when it was photographed by the Soviet Luna 3 space probe. The Soviet Academy of Sciences published the first atlas of the far side in 1960. The Apollo 8 astronauts were the first humans to see the far side in person when they orbited the Moon in 1968. All crewed and uncrewed soft landings had taken place on the near side of the Moon, until January 3, 2019, when the Chang'e 4 spacecraft made the first landing on the far side. The Chang'e 6 sample-return mission was launched on May 3, 2024, landed in the Apollo basin in the southern hemisphere of the lunar far side, and returned to Earth on June 25, 2024, with humanity's first lunar samples retrieved from the far side.
Astronomers have suggested installing a large radio telescope on the far side, where the Moon would shield it from possible radio interference from Earth.
Definition
Tidal forces from Earth have slowed the Moon's rotation to the point where the same side is always facing the Earth—a phenomenon called tidal locking. The other face, most of which is never visible from the Earth, is therefore called the "far side of the Moon". Over time, some crescent-shaped edges of the far side can be seen due to libration. In total, 59 percent of the Moon's surface is visible from Earth at one time or another. Useful observation of the parts of the far side of the Moon occasionally visible from Earth is difficult because of the low viewing angle from Earth (they cannot be observed "full on").
A common misconception is that the Moon does not rotate on its axis. If that were so, the whole of the Moon would be visible to Earth over the course of its orbit. Instead, its rotation period matches its orbital period, meaning it turns around once for every orbit it makes: in Earth terms, it could be said that its day and its year have the same length (i.e., ~29.5 Earth days).
The phrase "dark side of the Moon" does not refer to "dark" as in the absence of light, but rather "dark" as in unknown: until humans were able to send spacecraft around the Moon, this area had never been seen. In reality, both the near and far sides receive (on average) almost equal amounts of light directly from the Sun. This symmetry is complicated by sunlight reflected from the Earth onto the near side (earthshine), and by lunar eclipses, which occur only when the far side is already dark. Lunar eclipses mean that the side facing earth receives fractionally less sunlight than the far side when considered over a long period of time.
At night under a "full Earth" the near side of the Moon receives on the order of 10 lux of illumination (about what a city sidewalk under streetlights gets; this is 34 times more light than is received on Earth under a full Moon) whereas the far side of the Moon during the lunar night receives only about 0.001 lux of starlight. Only during a full Moon (as viewed from Earth) is the whole far side of the Moon dark.
The word dark has expanded to refer also to the fact that communication with spacecraft can be blocked while the spacecraft is on the far side of the Moon, during Apollo space missions for example.
Differences
The two hemispheres of the Moon have dramatically different appearances, with the near side covered in multiple, large maria (Latin for 'seas', since the earliest astronomers incorrectly thought that these plains were seas of lunar water).
The far side has a battered, densely cratered appearance with few maria. Only 1% of the surface of the far side is covered by maria, compared to 31.2% on the near side. One commonly accepted explanation for this difference is related to a higher concentration of heat-producing elements on the near-side hemisphere, as has been demonstrated by geochemical maps obtained from the Lunar Prospector gamma-ray spectrometer. While other factors, such as surface elevation and crustal thickness, could also affect where basalts erupt, these do not explain why the far side South Pole–Aitken basin (which contains the lowest elevations of the Moon and possesses a thin crust) was not as volcanically active as Oceanus Procellarum on the near side.
It has also been proposed that the differences between the two hemispheres may have been caused by a collision with a smaller companion moon that also originated from the Theia collision. In this model, the impact led to an accretionary pile rather than a crater, contributing a hemispheric layer of extent and thickness that may be consistent with the dimensions of the far side highlands. The chemical composition of the far side is inconsistent with this model.
The far side has more visible craters. This is thought to be a result of the effects of lunar lava flows, which cover and obscure craters, rather than a shielding effect from the Earth. NASA calculates that the Earth obscures only about 4 square degrees out of 41,000 square degrees of the sky as seen from the Moon. "This makes the Earth negligible as a shield for the Moon [and] it is likely that each side of the Moon has received equal numbers of impacts, but the resurfacing by lava results in fewer craters visible on the near side than the far side, even though both sides have received the same number of impacts."
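The few-square-degree figure can be reproduced with simple geometry. The sketch below assumes round values for the Earth's radius and the Earth-Moon distance, so it only needs to agree with the NASA estimate in order of magnitude.

```python
import math

EARTH_RADIUS_KM = 6371.0        # mean Earth radius (assumed round value)
EARTH_MOON_DIST_KM = 384_400.0  # mean Earth-Moon distance (assumed round value)

# Angular radius of the Earth's disc as seen from the Moon, in degrees.
angular_radius_deg = math.degrees(math.asin(EARTH_RADIUS_KM / EARTH_MOON_DIST_KM))

# Area of that disc in square degrees, versus the whole sky
# (4*pi steradians is about 41,253 square degrees).
disc_area_sq_deg = math.pi * angular_radius_deg ** 2
whole_sky_sq_deg = 4 * math.pi * math.degrees(1) ** 2

print(f"Earth covers ~{disc_area_sq_deg:.1f} of ~{whole_sky_sq_deg:,.0f} square degrees")
# Prints roughly 2.8 of 41,253 square degrees, the same order of magnitude as
# the ~4 square degrees quoted above, i.e. negligible shielding.
```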
Newer research suggests that heat from Earth at the time when the Moon was formed is the reason the near side has fewer impact craters. The lunar crust consists primarily of plagioclases formed when aluminium and calcium condensed and combined with silicates in the mantle. The cooler far side experienced condensation of these elements sooner and so formed a thicker crust; meteoroid impacts on the near side would sometimes penetrate the thinner crust here and release basaltic lava that created the maria, but would rarely do so on the far side.
The far side exhibits more extreme variations in terrain elevation than the near side. The Moon's highest and lowest points, along with its tallest mountains measured from base to peak, are all located on the far side.
Exploration
Early exploration
Until the late 1950s, little was known about the far side of the Moon. Librations periodically allowed limited glimpses of features near the lunar limb on the far side, but only up to 59% of the total surface of the Moon. These features were seen from a low angle, hindering useful observation (it proved difficult to distinguish a crater from a mountain range). The remaining 82% of the surface on the far side remained unknown, and its properties were subject to much speculation.
An example of a far side feature that can be seen through libration is the Mare Orientale, a prominent impact basin, yet this was not even named as a feature until 1906, by Julius Franz in Der Mond. The true nature of the basin was discovered in the 1960s when rectified images were projected onto a globe. The basin was photographed in fine detail by Lunar Orbiter 4 in 1967. Before space exploration began, astronomers expected that the far side would be similar to the side visible to Earth.
On 7 October 1959, the Soviet probe Luna 3 took the first photographs of the lunar far side, eighteen of them resolvable, covering one-third of the surface invisible from the Earth. The images were analysed, and the first atlas of the far side of the Moon was published by the USSR Academy of Sciences on 6 November 1960. It included a catalog of 500 distinguished features of the landscape. In 1961, the first globe containing lunar features invisible from the Earth was released in the USSR, based on images from Luna 3.
On 20 July 1965, another Soviet probe, Zond 3, transmitted 25 pictures of very good quality of the lunar far side, with much better resolution than those from Luna 3. In particular, they revealed chains of craters, hundreds of kilometers in length, but, unexpectedly, no mare plains like those visible from Earth with the naked eye. In 1967, the second part of the Atlas of the Far Side of the Moon was published in Moscow, based on data from Zond 3, with the catalog now including 4,000 newly discovered features of the lunar far side landscape. In the same year, the first Complete Map of the Moon and an updated complete globe, featuring 95 percent of the lunar surface, were released in the Soviet Union.
As many prominent landscape features of the far side were discovered by Soviet space probes, Soviet scientists selected names for them. This caused some controversy, though the Soviet Academy of Sciences selected many non-Soviet names, including Jules Verne, Marie Curie and Thomas Edison. The International Astronomical Union later accepted many of the names.
Further survey mission
On 26 April 1962, NASA's Ranger 4 space probe became the first spacecraft to impact the far side of the Moon, although it failed to return any scientific data before impact.
The first truly comprehensive and detailed mapping survey of the far side was undertaken by the American uncrewed Lunar Orbiter program launched by NASA from 1966 to 1967. Most of the coverage of the far side was provided by the final probe in the series, Lunar Orbiter 5.
The far side was first seen directly by human eyes during the Apollo 8 mission in December 1968 and was described by astronaut William Anders.
It has been seen by all 24 men who flew on Apollo 8 and Apollo 10 through Apollo 17, and photographed by multiple lunar probes. Spacecraft passing behind the Moon were out of direct radio communication with the Earth, and had to wait until the orbit allowed transmission. During the Apollo missions, the main engine of the Service Module was fired when the vessel was behind the Moon, producing some tense moments in Mission Control before the craft reappeared.
Geologist-astronaut Harrison Schmitt, who became the last to step onto the Moon, had aggressively lobbied for Apollo 17's landing site to be on the far side of the Moon, targeting the lava-filled crater Tsiolkovskiy. Schmitt's ambitious proposal included a special communications satellite based on the existing TIROS satellites to be launched into a Farquhar–Lissajous halo orbit around the L2 point so as to maintain line-of-sight contact with the astronauts during their powered descent and lunar surface operations. NASA administrators rejected these plans on the grounds of added risk and lack of funding.
The idea of utilizing the Earth–Moon L2 point for a communications satellite covering the Moon's far side has been realized: the China National Space Administration launched the Queqiao relay satellite in 2018. It has since been used for communications between ground stations on Earth and the Chang'e 4 lander and Yutu-2 rover, which successfully landed on the lunar far side in early 2019. L2 has also been proposed as "an ideal location" for a propellant depot as part of a proposed depot-based space transportation architecture.
Soft landing
The China National Space Administration (CNSA)'s Chang'e 4 achieved humanity's first ever soft landing on the lunar far side on 3 January 2019 and deployed the Yutu-2 lunar rover onto the lunar surface.
The craft included a lander equipped with a low-frequency radio spectrograph and geological research tools. The far side of the Moon provides a good environment for radio astronomy because interference from the Earth is blocked by the Moon.
In February 2020, Chinese astronomers reported, for the first time, a high-resolution image of a lunar ejecta sequence, as well as direct analysis of its internal architecture. These were based on observations made by the Lunar Penetrating Radar (LPR) on board the Yutu-2 rover.
CNSA launched Chang'e 6 on 3 May 2024, which conducted the first lunar sample return from the Apollo basin on the far side of the Moon. It was CNSA's second lunar sample return mission, the first having been achieved by Chang'e 5 from the lunar near side four years earlier. It also carried a mini "Jinchan" rover to conduct infrared spectroscopy of the lunar surface and to image the Chang'e 6 lander on the surface. The lander-ascender-rover combination separated from the orbiter and returner before landing on the Moon's surface on 1 June 2024 at 22:23 UTC. The ascender was launched back to lunar orbit on 3 June 2024 at 23:38 UTC, carrying samples collected by the lander, and later completed another robotic rendezvous and docking in lunar orbit. The sample container was then transferred to the returner, which landed in Inner Mongolia on 25 June 2024, completing China's far side sample return mission.
The Lunar Surface Electromagnetics Experiment (LuSEE-Night), a robotic observatory designed to measure electromagnetic waves from the early history of the universe, is being developed by NASA and the United States Department of Energy for a soft landing on the far side as early as 2026.
Potential uses and missions
Because the far side of the Moon is shielded from radio transmissions from the Earth, it is considered a good location for placing radio telescopes for use by astronomers. Small, bowl-shaped craters provide a natural formation for a stationary telescope similar to Arecibo in Puerto Rico. For much larger-scale telescopes, the crater Daedalus is situated near the center of the far side, and the rim would help to block stray communications from orbiting satellites. Another potential candidate for a radio telescope is the Saha crater.
Before deploying radio telescopes to the far side, several problems must be overcome. The fine lunar dust can contaminate equipment, vehicles, and space suits. The conducting materials used for the radio dishes must also be carefully shielded against the effects of solar flares. Finally, the area around the telescopes must be protected against contamination by other radio sources.
The L2 Lagrangian point of the Earth–Moon system, located beyond the far side, has also been proposed as a location for a future radio telescope, which would perform a Lissajous orbit about the point.
One of the NASA missions to the Moon under study would send a sample-return lander to the South Pole–Aitken basin, the location of a major impact event that created one of the largest impact formations in the Solar System. The force of this impact penetrated deep into the lunar surface, and a sample returned from this site could be analyzed for information concerning the interior of the Moon.
Because the near side is partly shielded from the solar wind by the Earth, the far side maria are expected to have the highest concentration of helium-3 on the surface of the Moon. This isotope is relatively rare on the Earth, but has good potential for use as a fuel in fusion reactors. Proponents of lunar settlement have cited the presence of this material as a reason for developing a Moon base.
Named features
Aitken (crater)
Amici (crater)
Anuchin (crater)
Apollo (crater)
Avogadro (crater)
Bel'kovich (crater)
Belopol'skiy (crater)
Bergstrand (crater)
Berkner (crater)
Birkhoff (crater)
Bjerknes (lunar crater)
Bok (lunar crater)
Campbell (lunar crater)
Cantor (crater)
Carnot (crater)
Cassegrain (crater)
Chandler (crater)
Chappell (crater)
Chernyshev (crater)
Comrie (crater)
Coulomb-Sarton Basin
Crookes (crater)
d'Alembert (crater)
Daedalus (crater)
Davisson (crater)
Debus (crater)
Delporte (crater)
Dyson (crater)
Ellerman (crater)
Emden (crater)
Esnault-Pelterie (crater)
Finsen (crater)
Fleming (crater)
Fowler (crater)
Fridman (crater)
Ganskiy (crater)
Gerasimovich (crater)
Gullstrand (crater)
Hayn (crater)
Hegu (crater)
Hertzsprung (crater)
H. G. Wells (crater)
Hippocrates (lunar crater)
Houzeau (crater)
Icarus (crater)
Ioffe (crater)
Izsak (crater)
Jenner (crater)
Kamerlingh Onnes (crater)
Kirkwood (crater)
Klute (crater)
Kolhörster (crater)
Komarov (crater)
Korolev (lunar crater)
Kovalevskaya (crater)
Kugler (crater)
Kulik (crater)
Lamb (crater)
Lacus Luxuriae
Lacus Oblivionis
Lander (crater)
Langevin (crater)
Lebedev (crater)
Leibnitz (crater)
Lucretius (crater)
Lunar south pole
Maksutov (crater)
McKellar (crater)
Mare Australe
Mare Frigoris
Mare Humboldtianum
Mare Ingenii
Mare Moscoviense
Mare Orientale
Mendeleev (crater)
Michelson (crater)
Montes Cordillera
Montes Rook
Mons Tai
Nicholson (lunar crater)
Nishina (crater)
Ohm (crater)
Oppenheimer (crater)
Oresme (crater)
Pannekoek (crater)
Paraskevopoulos (crater)
Parenago (crater)
Patsaev (crater)
Perrine (crater)
Pettit (lunar crater)
Pirquet (crater)
Pogson (crater)
Priestley (lunar crater)
Quetelet (crater)
Rowland (crater)
Sarton (crater)
Schlesinger (crater)
Shaler (crater)
Shternberg (crater)
Shuleykin (crater)
Sniadecki (crater)
Sommerfeld (crater)
South Pole–Aitken basin
Statio Tianhe (Chang'e 4 landing site)
Stebbins (crater)
Stoletov (crater)
Sverdrup (crater)
Tianjin (crater)
Tikhov (lunar crater)
Titov (crater)
Tsinger (crater)
Tsiolkovskiy (crater)
Tyndall (lunar crater)
Vallis Bouvard
Vallis Inghirami
van't Hoff (crater)
Van de Graaff (crater)
Van der Waals (crater)
Vavilov (crater)
Vertregt (crater)
Virtanen (crater)
Volkov (crater)
Von Kármán (lunar crater)
Von Neumann (crater)
Von Zeipel (crater)
Wan-Hoo (crater)
Wiener (crater)
Wright (lunar crater)
Yamamoto (crater)
Zhinyu (crater)
| Physical sciences | Solar System | Astronomy |
988523 | https://en.wikipedia.org/wiki/Demoiselle%20crane | Demoiselle crane | The demoiselle crane (Grus virgo) is a species of crane found in central Eurosiberia, ranging from the Black Sea to Mongolia and Northeast China. There is also a small breeding population in Turkey. These cranes are migratory birds. Birds from western Eurasia will spend the winter in Africa while the birds from Asia, Mongolia and China will spend the winter in the Indian subcontinent. The bird is symbolically significant in the culture of India, where it is known as koonj or kurjaa.
Taxonomy
The demoiselle crane was formally described in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. He placed it with the herons and cranes in the genus Ardea and coined the binomial name Ardea virgo. He specified the type locality as the orient but this has been restricted to India. Linnaeus cited the accounts by earlier authors. The English naturalist Eleazar Albin had described and illustrated the "Numidian crane" in 1738. Albin explained that: "This Bird is called Demoiselles by reason of certain ways of acting that it has, wherein it seems to imitate the Gestures of a Woman who affects a Grace in her Walking, Obeisances, and Dancing". Linnaeus also cited the English naturalist George Edwards who had described and illustrated the "Demoiselle of Numidia" in 1750. The name "la demoiselle de Numidie" had been used in 1676 by the French naturalist Claude Perrault. The demoiselle crane is now placed in the genus Grus that was introduced in 1760 by the French zoologist Mathurin Jacques Brisson. The species is treated as monospecific: no subspecies are recognised. The genus name Grus is the Latin word for a "crane". The specific epithet virgo is Latin meaning "maiden". Some authorities place this species together with the closely related blue crane (Grus paradisea) in the genus Anthropoides.
Description
The demoiselle is the smallest species of crane. It is slightly smaller than the common crane but has similar plumage. It has a long white neck stripe, and the black on the foreneck extends down over the chest in a plume.
It has a loud trumpeting call, higher-pitched than the common crane. Like other cranes it has a dancing display, more balletic than the common crane, with less leaping.
Distribution and habitat
The demoiselle crane breeds in central Eurasia from the Black Sea east to Mongolia and northeast China. It breeds in open habitats with sparse vegetation, usually near water. In winter it migrates either to the Sahel region of Africa, from Lake Chad eastwards to southern Ethiopia, or to western regions of the Indian subcontinent. There was previously a small population in Turkey and an isolated resident population in the Atlas Mountains of northwest Africa. These are both now extinct. On its Indian wintering grounds it forms large flocks which gather on agricultural land. It roosts at night in shallow open water.
Behaviour and ecology
Breeding
Eggs are laid between April and May. The minimal nest is placed on an open patch of grass or bare ground. The clutch is normally two eggs. These are laid at daily intervals and incubation begins after the first egg. Incubation is by both sexes but mainly by the female. The eggs hatch asynchronously after 27 to 29 days. The chicks are pale brown above and greyish white below. They are fed and cared for by both parents. The fledgeling period is between 55 and 65 days. They first breed when they are two years old.
In culture
The demoiselle crane is known as the koonj or kurjan in the languages of North India, and figures prominently in the literature, poetry and idiom of the region. Beautiful women are often compared to the koonj because its long and thin shape is considered graceful. Metaphorical references are also often made to the koonj for people who have ventured far from home or undertaken hazardous journeys.
The name koonj is derived from the Sanskrit word kraunch, a cognate Indo-European term for the crane itself. In the ancient story of Valmiki, the composer of the Hindu epic Ramayana, it is claimed that his first verse was inspired by the sight of a hunter killing the male of a pair of demoiselle cranes that were courting. Observing the lovelorn female circling and crying in grief, he cursed the hunter in verse. Since tradition held that all poetry prior to this moment had been revealed rather than created by man, this verse concerning the demoiselle cranes is regarded as the first human-composed meter.
The flying formation of the koonj during migrations also inspired infantry formations in ancient India. The Mahabharata epic describes both warring sides adopting the koonj formation on the second day of the Kurukshetra War.
| Biology and health sciences | Gruiformes | Animals |
988912 | https://en.wikipedia.org/wiki/Grey%20crowned%20crane | Grey crowned crane | The grey crowned crane (Balearica regulorum), also known as the African crowned crane, golden crested crane, golden crowned crane, East African crane, East African crowned crane, African crane, Eastern crowned crane, Kavirondo crane, South African crane, and crested crane, is a bird in the crane family, Gruidae. It is found in nearly all of Africa, especially in eastern and southern Africa, and it is the national bird of Uganda.
Taxonomy
The grey crowned crane is closely related to the black crowned crane, and the two species have sometimes been treated as the same species. The two are separable on the basis of genetic evidence, calls, plumage, and bare parts, and all authorities treat them as different species today.
There are two subspecies. The East African B. r. gibbericeps (crested crane) occurs from the east of the Democratic Republic of the Congo and Uganda, of which it is the national bird represented on its national flag, through Kenya to eastern South Africa. It has a larger area of bare red facial skin above the white patch than the smaller nominate subspecies, B. r. regulorum (South African crowned crane), which breeds from Angola south to South Africa.
Description
The grey crowned crane's body plumage is mainly grey. The wings are predominantly white but contain feathers with a range of colours, with a distinctive black patch at the very top. The head has a crown of stiff golden feathers. The sides of the face are white, and there is a bright red inflatable throat pouch. The bill is relatively short and grey, and the legs are black. They have long legs for wading through the grasses. The feet are large, yet slender, adapted for balance rather than defence or grasping. The sexes are similar, although males tend to be slightly larger. Younger cranes are greyer than adults, with a feathered buff face.
This species and the black crowned crane are the only cranes that can roost in trees, because of a long hind toe that can grasp branches. This trait is assumed to be an ancestral trait among the cranes, which has been lost in the other subfamily. Crowned cranes also lack a coiled trachea and have loose plumage compared to the other cranes.
Distribution and habitat
The grey crowned crane occurs in dry savannah in Sub-Saharan Africa, although it nests in somewhat wetter habitats. They can also be found in marshes, cultivated lands and grassy flatlands near rivers and lakes in Uganda and Kenya and as far south as South Africa. This animal does not have set migration patterns, and birds nearer the tropics are typically sedentary. Birds in more arid areas, particularly Namibia, make localised seasonal movements during drier periods.
Behaviour
The grey crowned crane has a breeding display involving dancing, bowing, and jumping. It has a booming call which involves inflation of the red gular sac. It also makes a honking sound quite different from the trumpeting of other crane species. Both sexes dance, and immature birds join the adults. Dancing is an integral part of courtship, but also may be done at any time of the year.
Flocks of 30–150 birds are not uncommon.
Diet and feeding
These cranes are omnivores, eating plants, seeds, grain, insects, frogs, worms, snakes, small fish and the eggs of aquatic animals. Stamping their feet as they walk, they flush out insects which are quickly caught and eaten. The birds also associate with grazing herbivores, benefiting from the ability to grab prey items disturbed by antelopes and gazelles. They spend their entire day looking for food. At night, the crowned crane spends its time in the trees sleeping and resting.
Breeding
Grey crowned cranes time their breeding season around the rains, although the effect varies geographically. In East Africa the species breeds year-round, but most frequently during the drier periods, whereas in Southern Africa the breeding season is timed to coincide with the rains. During the breeding season, pairs of cranes construct a large nest; a platform of grass and other plants in tall wetland vegetation.
The grey crowned crane lays a clutch of 2–5 glossy, dirty-white eggs, which are incubated by both sexes for 28–31 days. Chicks are precocial, can run as soon as they hatch, and fledge in 56–100 days. Once they are fully grown and independent, young birds separate from their parents to form pairs of their own. Grey crowned cranes have been reported to congregate in large numbers when young birds pair off, with the new pair dancing together before flying off to start their own family.
Relationship with humans
Status and conservation
Although the grey crowned crane remains common over some of its range, it faces threats to its habitat due to drainage, overgrazing, and pesticide pollution. Their global population is estimated to be between 58,000 and 77,000 individuals. In 2012 it was uplisted from vulnerable to endangered by the IUCN.
Symbolism
The grey crowned crane is the national bird of Uganda and features in the country's flag and coat of arms.
The crane is seen as the titular bird in The Bird with the Crystal Plumage but is wrongly stated to be Siberian.
| Biology and health sciences | Gruiformes | Animals |
989858 | https://en.wikipedia.org/wiki/Streaming%20television | Streaming television | Streaming television is the digital distribution of television content, such as films and television series, streamed over the Internet. Standing in contrast to dedicated terrestrial television delivered by over-the-air aerial systems, cable television, and/or satellite television systems, streaming television is provided as over-the-top media (OTT) or as Internet Protocol television (IPTV). In the United States, streaming television has become "the dominant form of TV viewing."
History
Up until the 1990s, it was not thought possible that a television show could be squeezed into the limited telecommunication bandwidth of a copper telephone cable to provide a streaming service of acceptable quality, as the required bandwidth of a digital television signal was (as perceived in the mid-1990s) around 200 Mbit/s, which was 2,000 times greater than the bandwidth of a speech signal over a copper telephone wire. By the year 2000, a television broadcast could be compressed to 2 Mbit/s, but most consumers still had little opportunity to obtain connection speeds greater than 1 Mbit/s.
Streaming services started as a result of two major technological developments: MPEG (motion-compensated DCT) video compression and asymmetric digital subscriber line (ADSL) data communication.
The first worldwide live-streaming event was a live radio broadcast of a baseball game between the Seattle Mariners and the New York Yankees streamed by ESPN SportsZone on September 5, 1995. During the mid-2000s, streaming media was largely based on UDP, whereas the majority of the Internet was based on HTTP and content delivery networks (CDNs). In 2007, HTTP-based adaptive streaming was introduced by Move Networks; this new technology would be a significant change for the industry. Within a year of its introduction, many companies such as Microsoft and Netflix developed their own streaming technology. In 2009, Apple launched HTTP Live Streaming (HLS), and in 2010 Adobe launched HTTP Dynamic Streaming (HDS). In addition, HTTP-based adaptive streaming was chosen for important streaming events such as Roland Garros, Wimbledon, and the Vancouver and London Olympic Games, as well as for premium on-demand services (Netflix, Amazon Instant Video, etc.). The growth of streaming services required new standardization, so in 2012, with contributions from Apple, Netflix, Microsoft, and other companies, Dynamic Adaptive Streaming over HTTP, known as MPEG-DASH, was published as the new HTTP-based adaptive streaming standard.
The mid-2000s were the beginning of television programs becoming available via the Internet. In 2003, TVonline Station was founded in Greece, making it the world's first television station to produce and broadcast content exclusively over the internet. The online video platform site YouTube was launched in early 2005, allowing users to share illegally posted television programs. YouTube co-founder Jawed Karim said the inspiration for YouTube first came from Janet Jackson's role in the 2004 Super Bowl incident, when her breast was exposed during her performance, and later from the 2004 Indian Ocean tsunami. Karim could not easily find video clips of either event online, which led to the idea of a video sharing site.
Apple's iTunes service also began offering select television programs and series in 2005, available for download after direct payment. A few years later, television networks and other independent services began creating sites where shows and programs could be streamed online. Amazon Prime Video began in the United States as Amazon Unbox in 2006, but did not launch worldwide until 2016. Netflix, a website originally created for DVD rentals and sales, began providing streaming content in 2007. In 2008 Hulu, owned by NBC and Fox, was launched, followed by tv.com in 2009, owned by CBS. The first generation Apple TV was released in 2007 and in 2008 the first generation Roku streaming device was announced. Digital media players also began to become available to the public during this time. These digital media players have continued to be updated and new generations released.
Smart TVs took over the television market after 2010 and continue to partner with new providers to bring streaming video to even more users. As of 2015, smart TVs are the only type of middle to high-end television being produced. Amazon's version of a digital media player, Amazon Fire TV, was not offered to the public until 2014.
Access to television programming has evolved from computer and television access to include mobile devices such as smartphones and tablet computers. Corresponding apps for mobile devices started to become available via app stores in 2008, but they grew in popularity in the 2010s with the rapid deployment of LTE cellular networks. These apps enable users to stream television content on mobile devices that support them.
In 2008, the International Academy of Web Television, headquartered in Los Angeles, formed in order to organize and support television actors, authors, executives, and producers in web series and streaming television. The organization also administers the selection of winners for the Streamy Awards. In 2009, the Los Angeles Web Series Festival was founded. Several other festivals and award shows have been dedicated solely to web content, including the Indie Series Awards and the Vancouver Web Series Festival. In 2013, in response to the shifting of the soap opera All My Children from broadcast to streaming television, a new Daytime Emmy Awards category for web-only series was created. Later that year, Netflix made history by earning the first Primetime Emmy Award nominations for a streaming television series, for Arrested Development, Hemlock Grove, and House of Cards, at the 65th Primetime Emmy Awards. Hulu later earned the first Emmy win for Outstanding Drama Series by a streaming service, with The Handmaid's Tale at the 69th Primetime Emmy Awards.
Traditional cable and satellite television providers began to offer services such as Sling TV, owned by Dish Network, which was unveiled in January 2015. DirecTV, another satellite television provider launched their own streaming service, DirecTV Stream, in 2016. Sky launched a similar streaming service in the UK called Now.
In 2013, the video on demand service Netflix earned the first Primetime Emmy Award nominations for original streaming television at the 65th Primetime Emmy Awards; three of its series, House of Cards, Arrested Development, and Hemlock Grove, earned nominations that year. On July 13, 2015, cable company Comcast announced an HBO plus broadcast TV package at a price discounted from basic broadband plus basic cable.
In 2017, YouTube launched YouTube TV, a streaming service that allows users to watch live television programs from popular cable or network channels, and record shows to stream anywhere, anytime. Some 28% of US adults cite streaming services as their main means of watching television, and 61% of those ages 18 to 29 cite it as their main method. Netflix is the world's largest streaming TV network, and also the world's largest Internet media and entertainment company by revenue and market cap, with 269 million paid subscribers. In 2020, the COVID-19 pandemic had a strong impact on the television streaming business amid lifestyle changes such as staying at home and lockdowns.
Technology
The Hybrid Broadcast Broadband TV (HbbTV) consortium of industry companies (such as SES, Humax, Philips, and ANT Software) is currently promoting and establishing an open European standard for hybrid set-top boxes for the reception of broadcast and broadband digital television and multimedia applications with a single-user interface.
BBC iPlayer, which originally incorporated peer-to-peer streaming, moved towards centralized distribution for its video streaming services. BBC executive Anthony Rose cited network performance as an important factor in the decision, as well as consumers being unhappy with their own network bandwidth being used to transmit content to other viewers. Samsung TV has also announced plans to provide streaming options including 3D video on demand through its Explore 3D service.
Access control
Some streaming services incorporate digital rights management. The W3C made the controversial decision to adopt Encrypted Media Extensions due in large part to motivations to provide copy protection for streaming content. Sky Go has software that is provided by Microsoft to prevent content being copied.
Additionally, BBC iPlayer makes use of a parental control system giving users the option to "lock" content, requiring a password to access it. The goal of these systems is to enable parents to keep children from viewing sexually themed, violent, or otherwise age-inappropriate material. Flagging systems can be used to warn a user that content may be certified or that it is intended for viewing post-watershed. Honour systems are also used where users are asked for their dates of birth or age to verify if they are able to view certain content.
IPTV
IPTV delivers television content using signals based on the Internet Protocol (IP), through managed private network infrastructure entirely owned by a single telecom or Internet service provider (ISP). This stands in contrast to delivering content over unmanaged public networks - a practice known as over-the-top content delivery. Both IPTV and OTT use the Internet protocol over a packet-switched network to transmit data, but IPTV operates in a closed system—a dedicated, managed network controlled by the local cable, satellite, telephone, or fiber-optic company. In its simplest form, IPTV simply replaces traditional circuit switched analog or digital television channels with digital channels which happen to use packet-switched transmission. In both the old and new systems, subscribers have set-top boxes or other customer-premises equipment that communicates directly over company-owned or dedicated leased lines with central-office servers. Packets never travel over the public Internet, so the television provider can guarantee enough local bandwidth for each customer's needs.
The Internet protocol is a cheap, standardized way to enable two-way communication and simultaneously provide different data (e.g., TV-show files, email, Web browsing) to different customers. This supports DVR-like features for time shifting television: for example, to catch up on a TV show that was broadcast hours or days ago, or to replay the current TV show from its beginning. It also supports video on demand—browsing a catalog of videos (such as movies or television shows) which might be unrelated to the company's scheduled broadcasts.
IPTV has an ongoing standardization process (for example, at the European Telecommunications Standards Institute).
Streaming quality
Streaming quality is the quality of image and audio transmission from the servers of the distributor to the user's screen. Streaming resolution refers to the pixel dimensions of the video being delivered. High-definition video (720p+) and later standards require higher bandwidth and faster connection speeds than earlier standards, because they carry higher spatial resolution image content. In addition, transmission packet loss and latency caused by network impairments and insufficient bandwidth degrade replay quality. Decoding errors may manifest themselves as video breakup and macroblocking. The generally accepted download rate for streaming high-definition (1080p) video encoded in AVC is 6,000 kbit/s, whereas UHD requires upwards of 16,000 kbit/s.
For users who do not have the bandwidth to stream HD/4K video or even SD video, most streaming platforms use an adaptive bitrate stream so that if the user's bandwidth suddenly drops, the platform lowers its streaming bitrate to compensate. Most modern television streaming platforms offer a wide range of both manual and automatic bitrate settings, which are based on initial connection tests during the first few seconds of a video loading and can be changed on the fly. This applies to both live and catch-up content. Additionally, platforms can also offer content in standards such as HDR or Dolby Vision, or at higher framerates, which can require additional costs or subscription tiers to access.
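As a concrete illustration of the adaptive-bitrate behaviour described above, the sketch below selects the highest rung of a hypothetical bitrate ladder that fits within the currently measured throughput. The ladder values and the safety margin are assumptions chosen to echo the figures quoted in this section, not the settings of any particular platform.

```python
# Hypothetical bitrate ladder in kbit/s, loosely modelled on the figures quoted
# above (low-bitrate SD up to 1080p AVC and UHD); real platforms define their own.
BITRATE_LADDER_KBPS = [500, 1500, 3000, 6000, 16000]


def pick_bitrate(measured_bandwidth_kbps: float, safety_margin: float = 0.8) -> int:
    """Return the highest ladder rung that fits within the measured bandwidth.

    A safety margin below 1.0 leaves headroom so that a small drop in throughput
    does not immediately stall playback; if even the lowest rung does not fit,
    the lowest rung is returned and the player risks rebuffering.
    """
    budget = measured_bandwidth_kbps * safety_margin
    candidates = [rate for rate in BITRATE_LADDER_KBPS if rate <= budget]
    return max(candidates) if candidates else BITRATE_LADDER_KBPS[0]


# Example: a connection measured at 8,000 kbit/s selects the 6,000 kbit/s (1080p)
# rung; if throughput later drops to 2,000 kbit/s the player steps down to 1,500.
print(pick_bitrate(8000))   # -> 6000
print(pick_bitrate(2000))   # -> 1500
```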
Usage
Internet television is common in most US households as of the mid-2010s. In a 2013 study by eMarketer, about one in four new televisions being sold was a smart TV. Within the same decade, the rapid deployment of LTE cellular networks and the general availability of smartphones increased the popularity of streaming services and the corresponding apps on mobile devices. On August 18, 2022, Nielsen reported that for the first time, streaming viewership had surpassed cable.
Considering the popularity of smart TVs, smartphones, and devices such as the Roku and Chromecast, much of the US public can watch television via the Internet. Internet-only channels are now established enough to feature some Emmy-nominated shows, such as Netflix's House of Cards. Many networks also distribute their shows the next day to streaming providers such as Hulu. Some networks use a proprietary system, such as the BBC's iPlayer. This has resulted in bandwidth demands increasing to the point of causing issues for some networks; it was reported in February 2014 that Verizon Fios was having issues coping with the demand placed on its network infrastructure. Until long-term bandwidth issues are worked out and regulation such as net neutrality is settled, Internet television's push to HDTV may start to hinder growth.
Aereo was launched in March 2012 in New York City (and subsequently stopped from broadcasting in June 2014). It streamed network TV only to New York customers over the Internet. Broadcasters filed lawsuits against Aereo, because Aereo captured broadcast signals and streamed the content to Aereo's customers without paying broadcasters. In mid-July 2012, a federal judge sided with the Aereo start-up. Aereo planned to expand to every major metropolitan area by the end of 2013. The Supreme Court ruled against Aereo June 24, 2014.
Some have noted that, as opposed to broadcast television, whose demographics are mostly "unspokenly straight" white viewers, subscription dollars on cable and streaming services can "level the playing field," giving viewers from marginalized communities, and representation of their communities, "equal power."
Market competitors
Many providers of Internet television services exist—including conventional television stations that have taken advantage of the Internet as a way to continue showing television shows after they have been broadcast, often advertised as "on-demand" and "catch-up" services. Today, almost every major broadcaster around the world is operating an Internet television platform. Examples include the BBC, which introduced the BBC iPlayer on 25 June 2008 as an extension to its "RadioPlayer" and already existing streamed video-clip content, and Channel 4, which launched 4oD ("4 on Demand", now All 4) in November 2006, allowing users to watch recently shown content. Most Internet television services allow users to view content free of charge; however, some content requires a fee. In the UK, the term "catch-up TV" was most commonly used to refer to this sort of service at the time.
Since 2012, around 200 over-the-top (OTT) platforms providing streamed and downloadable content have emerged. Investment by Netflix in new original content for its OTT platform reached $13bn in 2018.
Streaming platforms
Amazon Prime Video
Amazon Prime Video originally launched in 2006. Upon its initial release, the streaming service was known as Amazon Unbox. Amazon Prime Video grew out of Amazon Prime, a paid membership service that includes free shipping on many types of goods. Amazon Prime Video is available in approximately 200 countries around the world. Each year, Amazon invests in the production of films and TV series that are streamed as Amazon Originals.
Apple TV+
Apple TV+ is a streaming subscription service owned by Apple Inc. that launched on November 1, 2019. The service offers original content made exclusively by Apple, branded as Apple Originals. Unlike several other streaming services, the platform carries no third-party content. The Apple TV+ name derives from the Apple TV media player that was released in 2007.
Disney+
Disney+ is an American subscription streaming service owned and operated by the Disney Entertainment division of The Walt Disney Company. Released on November 12, 2019, the service primarily distributes films and television series produced by The Walt Disney Studios and Walt Disney Television, with dedicated content hubs for the Disney, Pixar, Marvel, Star Wars, and National Geographic brands, as well as Star in some regions. Original films and television series are also distributed on Disney+.
Hulu
Launched in 2007, Hulu is only available to viewers in the United States because of licensing restrictions. Hulu is one of the only streaming services that provides streaming for current on-air television shows a few days after their original broadcast on cable television, but with limited availability. Hulu originally had both a free and paid plan. The free plan was accessible only via computer and there was a limited amount of content for users, whereas the paid plan could be accessed via computers, mobile devices, and connected televisions. In 2019, The Walt Disney Company became the major owner of Hulu. The platform has bundle deals where customers can subscribe to both Hulu and Disney+.
Max
Max is a streaming service operated by Warner Bros. Discovery. The platform launched on May 27, 2020, in the United States and, within the first five months, had amassed 8 million subscribers across the country. It offers classic Warner Bros. films and self-produced programs, and has won the right to exclusively stream Studio Ghibli films in the United States. Since 2022, theatrical releases have arrived on the platform no earlier than 45 days after their theatrical debut. The service reached 70 million subscribers in December 2021. In September 2022, 92 million subscribing households were reported, but because this figure includes subscribers to the HBO channel, the number for Max alone is expected to be considerably smaller.
Netflix
Netflix, a media streaming and video rental company, was founded by Reed Hastings and Marc Randolph in 1997. Two years later, Netflix began offering an online subscription service. Subscribers could select movies and TV shows on Netflix's website and receive the chosen titles via DVDs in prepaid return envelopes. In 2007, Netflix's subscribers could watch some movies and TV shows online, directly from their homes. In 2010, Netflix launched a streaming-only plan offering unlimited streaming without DVDs. Starting from the United States, the streaming-only plan reached several countries; by 2016 more than 190 countries could use this service. In 2011, Netflix began to negotiate the production of original programming, starting with the series House of Cards.
Paramount+
Paramount+ is a streaming service owned by Paramount Global. The service launched on October 28, 2014, originally under the name CBS All Access. At the time of release, the platform focused primarily on streaming programs from local CBS stations as well as complete access to all CBS network content. In 2016 the streaming service began creating original content that could only be found on the platform. As it continued to expand its content, the service rebranded as Paramount+, taking its name from the Paramount Pictures film studio. The service has since expanded to Latin America, Europe and Australia.
Peacock
Peacock is a streaming service owned and operated by Peacock TV, a subsidiary of NBCUniversal Television and Streaming. The streaming service gets its name from the colors of the NBC peacock logo. The platform launched on July 15, 2020. The streaming service primarily features content from NBC network channels as well as other third-party sources. Additionally, Peacock now offers original content that cannot be found on any other streaming platform. In December 2022, Peacock reached 20 million paid subscribers; by March 2023, the platform had 22 million paid subscribers.
YouTube
The domain name of YouTube was bought and activated by Chad Hurley, Steve Chen, and Jawed Karim at the beginning of 2005. YouTube launched later that year as an online video sharing and social media platform. The platform became popular thanks to a short video called Lazy Sunday, uploaded by Saturday Night Live in December 2005. Because the SNL video was not rebroadcast on TV, people looked for it on Google by typing "SNL rap video," "Lazy Sunday SNL," or "Chronicles of Narnia SNL." The first search result was a link to the video on YouTube, which marked the beginning of wide-scale video sharing on the site. Because of its popularity, YouTube struggled with bandwidth expenses. In 2006, Google bought YouTube, and within a few months the video platform had become the second-largest search engine in the world.
Binge-watching
In the 1990s, the practice of watching entire seasons in a short amount of time emerged with the introduction of the DVD box set. Media-marathoning consists of watching at least one season of a TV show in a week or less, watching three or more films from the same series in a week or less, or reading three or more books from the same series in a month or less. The term "binge-watching" arrived with streaming TV in 2013, when Netflix launched its first original production, House of Cards, and began marketing this practice of watching a TV series episode after episode. The COVID-19 pandemic gave binge-watching another connotation, and it came to be considered a negative activity.
Broadcasting rights
Broadcasting rights (also called Streaming rights in this case) vary from country to country and even within provinces of countries. These rights govern the distribution of copyrighted content and media and allow the sole distribution of that content at any one time. An example of content only being aired in certain countries is BBC iPlayer. The BBC checks a user's IP address to make sure that only users located in the UK can stream content from the BBC. The BBC only allows free use of their product for users within the UK as those users have paid for a television license that funds part of the BBC. This IP address check is not foolproof as the user may be accessing the BBC website through a VPN or proxy server. Broadcasting rights can also be restricted to allowing a broadcaster rights to distribute that content for a limited time. Channel 4's online service All 4 can only stream shows created in the US by companies such as HBO for thirty days after they are aired on one of the Channel 4 group channels. This is to boost DVD sales for the companies who produce that media.
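As a purely illustrative sketch of how such an IP-based region check might work (the lookup table and function names below are hypothetical placeholders, not the BBC's actual implementation), consider:

```python
# Minimal sketch of IP-based geoblocking, assuming a hypothetical
# lookup_country(ip) helper backed by a GeoIP-style database.
# Illustrative only; not any broadcaster's actual implementation.

GEO_TABLE = {
    "81.2.69.142": "GB",   # example address that resolves to the UK
    "8.8.8.8": "US",       # example address that resolves to the US
}

def lookup_country(ip: str) -> str:
    """Return an ISO country code for an IP (placeholder lookup)."""
    return GEO_TABLE.get(ip, "UNKNOWN")

def may_stream(ip: str, allowed_countries=("GB",)) -> bool:
    """Allow streaming only when the caller's IP resolves to an allowed country."""
    return lookup_country(ip) in allowed_countries

print(may_stream("81.2.69.142"))  # True  (UK viewer)
print(may_stream("8.8.8.8"))      # False (outside the UK)
```

As the text notes, such a check is not foolproof, since a VPN or proxy changes the IP address the service sees.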
Some companies pay very large amounts for broadcasting rights with sports and US sitcoms usually fetching the highest price from UK-based broadcasters. A trend among major content producers in North America is the use of the "TV Everywhere" system. Especially for live content, the TV Everywhere system restricts viewership of a video feed to select Internet service providers, usually cable television companies that pay a retransmission consent or subscription fee to the content producer. This often has the negative effect of making the availability of content dependent upon the provider, with the consumer having little or no choice on whether they receive the product.
Profits and costs
With the advent of broadband Internet connections, multiple streaming providers have come onto the market in the last several years. The main providers are Netflix, Hulu and Amazon. Some of these providers, such as Hulu, show advertisements and charge a monthly fee. Others, such as Netflix and Amazon, charge users a monthly fee and have no commercials. Netflix is the largest provider, with more than 217 million subscribers. The rise of internet TV has resulted in cable companies losing customers to a new kind of customer called "cord cutters". Cord cutters are consumers who are cancelling their cable TV or satellite TV subscriptions and choosing instead to stream TV series, films and other content via the Internet, and they are forming communities. With the increasing availability of online video platforms (e.g., YouTube) and streaming services, there is an alternative to cable and satellite television subscriptions. Cord cutters tend to be younger people.
Overview of platforms and availability
| Technology | Broadcasting | null |
989954 | https://en.wikipedia.org/wiki/Fish%20hook | Fish hook | A fish hook or fishhook, formerly also called an angle (from Old English angol and Proto-Germanic *angulaz), is a hook used to catch fish either by piercing and embedding onto the inside of the fish mouth (angling) or, more rarely, by impaling and snagging the external fish body. Fish hooks are normally attached to a line, which tethers the target fish to the angler for retrieval, and are typically dressed with some form of bait or lure that entices the fish to swallow the hook out of its own natural instinct to forage or hunt.
Fish hooks have been employed for millennia by fishermen to catch freshwater and saltwater fish. There is an enormous variety of fish hooks in the world of fishing. Sizes, designs, shapes, and materials are all variable depending on the intended purpose of the hook. Fish hooks are manufactured for a range of purposes from general fishing to extremely limited and specialized applications. Fish hooks are designed to hold various types of artificial, processed, dead or live baits (bait fishing); to act as the foundation for artificial representations of invertebrate prey (e.g. fly fishing); or to be attached to or integrated into other devices that mimic prey (lure fishing). In 2005, the fish hook was chosen by Forbes as one of the Top 20 tools in human history.
History
The fish hook or similar angling device has been made by humans for many thousands of years. The earliest prehistoric tackle is known as a gorge, which consisted of a double-pointed stick with a thin rope tied to the middle. When angling, the gorge is laid parallel to the line and buried inside a bait ball, which can be swallowed easily by the fish. Once inside the fish's mouth, the bait ball often softens and gets fragmented by the pharyngeal teeth, and any pulling along the line will cause the freed-up gorge to rotate transversely and get stuck across the fish's gullet, similar to how a fish bone or chicken bone may pierce and obstruct a person's esophagus. Gorges performed similar anchoring functions to hooks, but needed both ends to claw firmly into the fish's gullet to work properly.
The world's oldest fish hooks (made from sea snail shells) were discovered in Sakitari Cave in Okinawa Island dated between 22,380 and 22,770 years old. They are older than the fish hooks from the Jerimalai cave in East Timor dated between 23,000 and 16,000 years old, and New Ireland in Papua New Guinea dated 20,000 to 18,000 years old.
The earliest fish hooks in the Americas, dating from about 11,000 B.P., have been reported from Cedros Island on the west coast of Mexico. These fish hooks were made from sea shells. Shells provided a common material for fish hooks found in several parts of the world, with the shapes of prehistoric shell fish hook specimens occasionally being compared to determine if they provide information about the migration of people into the Americas.
An early written reference to a fish hook is found with reference to the Leviathan in the Book of Job 41:1; Canst thou draw out leviathan with a hook? Fish hooks have been crafted from all sorts of materials including wood, animal and human bone, horn, shells, stone, bronze, iron, and up to present day materials. In many cases, hooks were created from multiple materials to leverage the strength and positive characteristics of each material. Norwegians as late as the 1950s still used juniper wood to craft Burbot hooks. Quality steel hooks began to make their appearance in Europe in the 17th century and hook making became a task for specialists.
Sections
The hook can be divided into different portions from the back end to the front:
The eye is the circular ring/loop at the back end to which fishing lines can be attached via knots, and (typically) receives the pulling force from the line.
The shank is the (usually) straight shaft section of the hook, which relays pulling force from the line to the hook bend.
The bend is the section where the hook curves back from the shank.
The barb is a small reverse-pointing (relative to the main hook point) spike that grabs the surrounding fish tissue and stops the hook from sliding back out of its anchorage. Hooks that lack barbs are thus barbless.
The point is the distalmost portion where the hook tapers into a sharp end, which pierces and embeds into the fish's tissue.
The perpendicular distance between the hook point and the frontmost inner arc of the bend is known as the bite of the hook, which indicates the maximum depth the hook can be embedded or set. The width of the opening between the point and the shank is called the gap or mouth of the hook, which dictates the thickness of the tissue that the hook can catch.
Hook types
There are a large number of different types of fish hooks. At the macro level, there are bait hooks, fly hooks and lure hooks. Within these broad categories there are wide varieties of hook types designed for different applications. Hook types differ in shape, materials, points and barbs, and eye type, and ultimately in their intended application. When individual hook types are designed, the specific characteristics of each of these hook components are optimized relative to the hook's intended purpose. For example, a delicate dry fly hook is made of thin wire with a tapered eye because weight is the overriding factor, whereas Carlisle or Aberdeen light wire bait hooks use thin wire to reduce injury to live bait but have untapered eyes because weight is not an issue. Many factors contribute to hook design, including corrosion resistance, weight, strength, hooking efficiency, and whether the hook is being used for specific types of bait, on different types of lures or for different styles of flies. For each hook type, there are ranges of acceptable sizes. For all types of hooks, sizes range from 32 (the smallest) to 20/0 (the largest).
Shapes and names
Hook shapes and names are as varied as fish themselves. In some cases, hooks are identified by a traditional or historic name, e.g. Aberdeen, Limerick or O'Shaughnessy. In other cases, hooks are merely identified by their general purpose or have included in their name, one or more of their physical characteristics. Some manufacturers just give their hooks model numbers and describe their general purpose and characteristics. For example:
Eagle Claw: 139 is a Snelled Baitholder, Offset, Down Eye, Two Slices, Medium Wire
Lazer Sharp: L2004EL is a Circle Sea, Wide Gap, Non-Offset, Ringed Eye, Light Wire
Mustad Model: 92155 is a Beak Baitholder hook
Mustad Model: 91715D is an O'Shaughnessy Jig Hook, 90-degree angle
TMC Model 300: Streamer D/E, 6XL, Heavy wire, Forged, Bronze
TMC Model 200R: Nymph & Dry Fly Straight eye, 3XL, Standard wire, Semi-dropped point, Forged, Bronze
The shape of the hook shank can vary widely from merely straight to all sorts of curves, kinks, bends and offsets. These different shapes contribute in some cases to better hook penetration, fly imitation or bait holding ability. Many hooks intended to hold dead or artificial baits have sliced shanks, which create barbs for better bait holding ability. Jig hooks are designed to have lead weight molded onto the hook shank. Hook descriptions may also include shank length as standard, extra-long, 2XL, short, etc. and wire size such as fine wire, extra heavy, 2X heavy, etc.
Single, double and triple hooks
Hooks are designed as either single hooks—a single eye, shank and point; double hooks—a single eye merged with two shanks and points; or triple—a single eye merged with three shanks and three evenly spaced points. Double hooks are formed from a single piece of wire and may or may not have their shanks brazed together for strength. Treble hooks are formed by adding a single eyeless hook to a double hook and brazing all three shanks together. Double hooks are used on some artificial lures and are a traditional fly hook for Atlantic Salmon flies, but are otherwise fairly uncommon. Treble hooks are used on all sorts of artificial lures as well as for a wide variety of bait applications.
Bait hook shapes and names
Bait hook shapes and names include the Salmon Egg, Beak, O'Shaughnessy, Baitholder, Shark Hook, Aberdeen, Carlisle, Carp Hook, Tuna Circle, Offset Worm, Circle Hook, suicide hook, Long Shank, Short Shank, J Hook, Octopus Hook and Big Game Jobu hooks.
Fly hook shapes and names
Fly hook shapes include Sproat, Sneck, Limerick, Kendal, Viking, Captain Hamilton, Barleet, Swimming Nymph, Bend Back, Model Perfect, Keel, and Kink-shank.
Points and barbs
The hook point is probably the most important part of the hook, because it is the point that must penetrate into the fish's flesh first if the hook is to have any anchorage whatsoever. Both the profile of the hook point and its angulation influence how well the point will pierce the tissue. Hook points are mechanically (ground) or chemically sharpened.
Most modern hooks are barbed, with a backward-protruding spike (i.e. barb) that helps secure the hook anchorage by catching surrounding flesh to stop the point from sliding back out of the penetration. Because the barb increases the practical cross-sectional area of the hook point, it also negatively affects how far the point penetrates under the same force (especially when piercing harder tissue), although the tissue-grabbing ability of the barb alone is usually sufficient for maintaining the hook anchorage without needing a deep penetration.
Some hooks are barbless, with a simply tapered point and lacking any barb. Historically, ancient fish hooks were all barbless, but today barbless hooks are still used mainly to facilitate quicker hook removal and make catch-and-release less hurtful for the fish. The downside of barbless hooks is that because there is no barb to help secure the point anchorage, the hook is theoretically more susceptible to dislodging unless the penetration is maintained with a constantly taut line tension. There are however also some arguments that a barbless hook point will penetrate more smoothly into the fish tissue and thus allow a deeper hookset, compensating for the absence of barbs. Having a deeper hookset also means the stress tends to be concentrated nearer towards the hook's bend rather than the point, allowing it to better withstand a heavier pulling load.
Hook point types
Hook points are also described relative to their offset from the hook shank. A kerbed hook point is offset to the left, a straight point has no offset and a reversed point is offset to the right.
Hook points are commonly referred to by these names: needle point, rolled-in, hollow, spear, beak, mini-barb, semi-dropped and knife edge. Some other hook point names are used for branding by manufacturers.
Eyes
The eye of the hook is the widened ring/loop at its proximal end, with a hole where the fishing line (typically the leader line) is passed through (threaded) for fastening via knot-tying. Hook eye design is usually optimized for either strength, weight and/or presentation. Typical eye types include:
Ringed eye or ball eye — a circular loop often with a closely opposed gap between the loop end and the loop base;
Brazed eye — like a ringed eye, but the loop end is welded shut fully without any gap;
Tapered eye — like a ringed eye, but with a pointed loop end;
Looped eye — the loop end is elongated with the extended portion laid parallel to the hook shank;
Needle eye — the eye hole is elliptical, or just a narrow slit.
Most hook eyes are directly knotted to the fishing line and are responsible for relaying the pulling force from the line onto the hook body, but sometimes the line is passed cleanly through the eye and tied directly onto the shank instead of onto the eye loop — this is known as a snell knot or "snelling", and the eye does not take part in transferring any force, merely serving to restrict line wobbling and knot sliding. In fishing lures, it is also not uncommon to see the hook being linked to the lure via a split ring through the eye, which allows the hook more range of motion.
Hook eyes can also be categorized into three types according to the angulation of the loop plane against the shank, where hooks with bent/"turned" eyes being more optimized for snelling:
Straight — the eye is in-line with the shank;
Up-turned — the eye is angled away from the hook point;
Down-turned — the eye is angled towards the hook point.
Some hooks, such as the traditional Japanese Tenkara hooks, lack any opening for the line to be threaded, and are thus eyeless. Eyeless hooks instead have a widened "spade end" to help snelling the line onto the shank without slipping.
Size
There are no internationally recognized standards for hooks and thus size is somewhat inconsistent between manufacturers. However, within a manufacturer's range of hooks, hook sizes are consistent.
Hook sizes generally are referred to by a numbering system that places the size 1 hook in the middle of the size range. Smaller hooks are referenced by larger whole numbers (e.g. 1, 2, 3...). Larger hooks are referenced for size increases by increasing whole numbers followed by a "/" and a "0" (i.e. sizes over zero), for example, 1/0 (read as "one nought"), 2/0, 3/0.... The numbers represent relative sizes, normally associated with the gap (the distance from the point tip to the shank). The smallest size available is 32 and largest is 20/0.
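As a rough, hypothetical illustration of this numbering convention (the mapping below is invented purely for comparison and does not reflect any manufacturer's actual dimensions), hook size labels can be converted to a single comparable scale:

```python
# Illustrative conversion of fish hook size labels to a comparable ordinal.
# Plain sizes ("32" ... "1") shrink as the number grows; aught sizes
# ("1/0" ... "20/0") grow with the number. Actual dimensions vary by maker.

def hook_size_rank(label: str) -> int:
    """Return an integer that increases with physical hook size."""
    if label.endswith("/0"):
        return int(label[:-2])      # "1/0" -> 1, "20/0" -> 20 (larger hooks)
    return 1 - int(label)           # "32" -> -31 (smallest), "1" -> 0

sizes = ["32", "6", "1", "1/0", "5/0", "20/0"]
print(sorted(sizes, key=hook_size_rank))  # smallest to largest
```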
| Technology | Hunting and fishing | null |
990505 | https://en.wikipedia.org/wiki/Mental%20health | Mental health | Mental health encompasses emotional, psychological, and social well-being, influencing cognition, perception, and behavior. According to the World Health Organization (WHO), it is a "state of well-being in which the individual realizes his or her abilities, can cope with the normal stresses of life, can work productively and fruitfully, and can contribute to his or her community". It likewise determines how an individual handles stress, interpersonal relationships, and decision-making. Mental health includes subjective well-being, perceived self-efficacy, autonomy, competence, intergenerational dependence, and self-actualization of one's intellectual and emotional potential, among others.
From the perspectives of positive psychology or holism, mental health may include an individual's ability to enjoy life and to create a balance between life activities and efforts to achieve psychological resilience. Cultural differences, personal philosophy, subjective assessments, and competing professional theories all affect how one defines "mental health". Some early signs related to mental health difficulties are sleep irritation, lack of energy, lack of appetite, thinking of harming oneself or others, self-isolating (though introversion and isolation aren't necessarily unhealthy), and frequently zoning out.
Mental disorders
Mental health, as defined by the Public Health Agency of Canada, is an individual's capacity to feel, think, and act in ways to achieve a better quality of life while respecting personal, social, and cultural boundaries. Impairment of any of these is a risk factor for mental disorders, or mental illnesses, which are a component of mental health. In 2019, about 970 million people worldwide suffered from a mental disorder, with anxiety and depression being the most common. The number of people suffering from mental disorders has risen significantly over the years. Mental disorders are defined as health conditions that affect and alter cognitive functioning, emotional responses, and behavior associated with distress and/or impaired functioning. The ICD-11 is the global standard used to diagnose, treat, research, and report various mental disorders. In the United States, the DSM-5 is used as the classification system of mental disorders.
Mental health is associated with a number of lifestyle factors such as diet, exercise, stress, drug abuse, social connections and interactions. Psychiatrists, psychologists, licensed professional clinical counselors, social workers, nurse practitioners, and family physicians can help manage mental illness with treatments such as therapy, counseling, and medication.
History
Early history
In the mid-19th century, William Sweetser was the first to coin the term mental hygiene, which can be seen as the precursor to contemporary approaches to work on promoting positive mental health. Isaac Ray, the fourth president of the American Psychiatric Association and one of its founders, further defined mental hygiene as "the art of preserving the mind against all incidents and influences calculated to deteriorate its qualities, impair its energies, or derange its movements".
In American history, mentally ill patients were thought to be religiously punished. This response persisted through the 1700s, along with the inhumane confinement and stigmatization of such individuals. Dorothea Dix (1802–1887) was an important figure in the development of the "mental hygiene" movement. Dix was a school teacher who endeavored to help people with mental disorders and to expose the sub-standard conditions into which they were put. This became known as the "mental hygiene movement". Before this movement, it was not uncommon that people affected by mental illness would be considerably neglected, often left alone in deplorable conditions without sufficient clothing. From 1840 to 1880, she won the support of the federal government to set up over 30 state psychiatric hospitals; however, they were understaffed, under-resourced, and were accused of violating human rights.
Emil Kraepelin in 1896 developed the taxonomy of mental disorders which has dominated the field for nearly 80 years. Later, the proposed disease model of abnormality was subjected to analysis and considered normality to be relative to the physical, geographical and cultural aspects of the defining group.
At the beginning of the 20th century, Clifford Beers founded "Mental Health America – National Committee for Mental Hygiene", after publication of his accounts as a patient in several lunatic asylums, A Mind That Found Itself, in 1908 and opened the first outpatient mental health clinic in the United States.
The mental hygiene movement, similar to the social hygiene movement, had at times been associated with advocating eugenics and sterilization of those considered too mentally deficient to be assisted into productive work and contented family life. In the post-WWII years, references to mental hygiene were gradually replaced by the term 'mental health' due to its positive aspect that evolves from the treatment of illness to preventive and promotive areas of healthcare.
Deinstitutionalization and transinstitutionalization
When US government-run hospitals were accused of violating human rights, advocates pushed for deinstitutionalization: the replacement of federal mental hospitals for community mental health services. The closure of state-provisioned psychiatric hospitals was enforced by the Community Mental Health Centers Act in 1963 that laid out terms in which only patients who posed an imminent danger to others or themselves could be admitted into state facilities. This was seen as an improvement from previous conditions. However, there remains a debate on the conditions of these community resources.
Studies have found that this transition was beneficial for many patients: there was an increase in overall satisfaction, a better quality of life, and more friendships between patients, all at an affordable cost. However, this held true only where treatment facilities had enough funding for staff and equipment as well as proper management, and the idea remains a polarizing issue. Critics of deinstitutionalization argue that poor living conditions prevailed, patients were lonely, and they did not receive proper medical care in these treatment homes. Additionally, patients that were moved from state psychiatric care to nursing and residential homes had deficits in crucial aspects of their treatment. Some cases result in the shift of care from health workers to patients' families, who do not have the proper funding or medical expertise to give proper care. Meanwhile, patients that are treated in community mental health centers lack sufficient cancer testing, vaccinations, or other regular medical check-ups.
Other critics of state deinstitutionalization argue that this was simply a transition to "transinstitutionalization", or the idea that prisons and state-provisioned hospitals are interdependent. In other words, patients become inmates. This draws on the Penrose Hypothesis of 1939, which theorized that there was an inverse relationship between prison population size and the number of psychiatric hospital beds. This means that populations that require psychiatric mental care will transition between institutions, which in this case includes state psychiatric hospitals and criminal justice systems. Thus, a decrease in available psychiatric hospital beds occurred at the same time as an increase in inmates. Although some attribute this to other external factors, others attribute it to a lack of empathy for the mentally ill. The social stigmatization of those with mental illnesses is not in dispute; they have been widely marginalized and discriminated against in society. In this source, researchers analyze how most compensation prisoners (detainees who are unable or unwilling to pay a fine for petty crimes) are unemployed, homeless, and have an extraordinarily high degree of mental illness and substance use disorders. Compensation prisoners then lose prospective job opportunities, face social marginalization, and lack access to resocialization programs, which ultimately facilitates reoffending. The research sheds light on how the mentally ill, and in this case the poor, are further punished for circumstances that are beyond their control, and that this is a vicious cycle that repeats itself. Thus, prisons embody another state-provisioned mental hospital.
Families of patients, advocates, and mental health professionals still call for increase in more well-structured community facilities and treatment programs with a higher quality of long-term inpatient resources and care. With this more structured environment, the United States will continue with more access to mental health care and an increase in the overall treatment of the mentally ill.
However, there is still a lack of studies for mental health conditions (MHCs) to raise awareness, knowledge development, and attitudes toward seeking medical treatment for MHCs in Bangladesh. People in rural areas often seek treatment from the traditional healers and MHCs are sometimes considered a spiritual matter.
Epidemiology
Mental illnesses are more common than cancer, diabetes, or heart disease. As of 2021, over 22 percent of all Americans over the age of 18 meet the criteria for having a mental illness. Evidence suggests that 970 million people worldwide have a mental disorder. Major depression ranks third among the top 10 leading causes of disease worldwide. By 2030, it is predicted to become the leading cause of disease worldwide. Over 700 thousand people commit suicide every year and around 14 million attempt it. A World Health Organization (WHO) report estimates the global cost of mental illness at nearly $2.5 trillion (two-thirds in indirect costs) in 2010, with a projected increase to over $6 trillion by 2030.
Evidence from the WHO suggests that nearly half of the world's population is affected by mental illness with an impact on their self-esteem, relationships and ability to function in everyday life. An individual's emotional health can impact their physical health. Poor mental health can lead to problems such as the inability to make adequate decisions and substance use disorders.
Good mental health can improve life quality whereas poor mental health can worsen it. According to Richards, Campania, & Muse-Burke, "There is growing evidence that is showing emotional abilities are associated with pro-social behaviors such as stress management and physical health." Their research also concluded that people who lack emotional expression are inclined to anti-social behaviors (e.g., substance use disorder and alcohol use disorder, physical fights, vandalism), which reflects one's mental health and suppressed emotions. Adults and children who face mental illness may experience social stigma, which can exacerbate the issues.
Global prevalence
Mental health can be seen as a continuum, where an individual's mental health may have many different possible values. Mental wellness is viewed as a positive attribute; this definition of mental health highlights emotional well-being, the capacity to live a full and creative life, and the flexibility to deal with life's inevitable challenges. Some discussions are formulated in terms of contentment or happiness. Many therapeutic systems and self-help books offer methods and philosophies espousing strategies and techniques vaunted as effective for further improving the mental wellness. Positive psychology is increasingly prominent in mental health.
A holistic model of mental health generally includes concepts based upon anthropological, educational, psychological, religious, and sociological perspectives. There are also models as theoretical perspectives from personality, social, clinical, health and developmental psychology.
The tripartite model of mental well-being views mental well-being as encompassing three components of emotional well-being, social well-being, and psychological well-being. Emotional well-being is defined as having high levels of positive emotions, whereas social and psychological well-being are defined as the presence of psychological and social skills and abilities that contribute to optimal functioning in daily life. The model has received empirical support across cultures. The Mental Health Continuum-Short Form (MHC-SF) is the most widely used scale to measure the tripartite model of mental well-being.
Demographics
Children and young adults
As of 2019, about one in seven of the world's 10–19 year olds experienced a mental health disorder; about 165 million young people in total. A person's teenage years are a unique period where much crucial psychological development occurs, and is also a time of increased vulnerability to the development of adverse mental health conditions. More than half of mental health conditions start before a child reaches 20 years of age, with onset occurring in adolescence much more frequently than it does in early childhood or adulthood. Many such cases go undetected and untreated.
In the United States alone, in 2021, roughly 17.5% of the population (ages 18 and older) were recorded as having a mental illness. A comparison between reports of mental health issues in newer generations (18–25 and 26–49 year olds) and the older generation (50 years or older) shows an increase in mental health issues: only 15% of the older generation reported a mental health issue, whereas the newer generations reported 33.7% (18–25) and 28.1% (26–49). The role of caregivers for youth with mental health needs is valuable, and caregivers benefit most when they have sufficient psychoeducation and peer support. Depression is one of the leading causes of illness and disability among adolescents. Suicide is the fourth leading cause of death in 15–19-year-olds. Exposure to childhood trauma can cause mental health disorders and poor academic achievement. Ignoring mental health conditions in adolescents can impact adulthood. 50% of preschool children show a natural reduction in behavioral problems; the remainder experience long-term consequences that impair physical and mental health and limit opportunities to live fulfilling lives. A result of depression during adolescence and adulthood may be substance abuse. The average age of onset is between 11 and 14 years for depressive disorders. Only approximately 25% of children with behavioral problems are referred to medical services; the majority go untreated.
Homeless population
Mental illness is thought to be highly prevalent among homeless populations, though access to proper diagnoses is limited. An article written by Lisa Goodman and her colleagues summarized Smith's research into PTSD in homeless single women and mothers in St. Louis, Missouri, which found that 53% of the respondents met diagnostic criteria, and which describes homelessness as a risk factor for mental illness. At least two commonly reported symptoms of psychological trauma, social disaffiliation and learned helplessness are highly prevalent among homeless individuals and families.
While mental illness is prevalent, people infrequently receive appropriate care. Case management linked to other services is an effective care approach for improving symptoms in people experiencing homelessness. Case management reduced admission to hospitals, and it reduced substance use by those with substance abuse problems more than typical care.
Immigrants and refugees
States that produce refugees are sites of social upheaval, civil war, even genocide. Most refugees experience trauma. It can be in the form of torture, sexual assault, family fragmentation, and death of loved ones.
Refugees and immigrants experience psychosocial stressors after resettlement. These include discrimination, lack of economic stability, and social isolation, causing emotional distress. For example, not far into the 1900s, campaigns targeting Japanese immigrants were formed that inhibited their ability to participate in U.S. life, painting them as a threat to the American working class. They were subject to prejudice and slandered by American media, and anti-Japanese legislation was implemented. For refugees, family reunification can be one of the primary needs to improve quality of life. Post-migration trauma is a cause of depressive disorders and psychological distress for immigrants.
Cultural and religious considerations
Mental health is a socially constructed concept; different societies, groups, cultures (both ethnic and national/regional), institutions, and professions have very different ways of conceptualizing its nature and causes, determining what is mentally healthy, and deciding what interventions, if any, are appropriate. Thus, different professionals will have different cultural, class, political and religious backgrounds, which will impact the methodology applied during treatment. In the context of deaf mental health care, it is necessary for professionals to have cultural competency of deaf and hard of hearing people and to understand how to properly rely on trained, qualified, and certified interpreters when working with culturally Deaf clients.
Research has shown that there is stigma attached to mental illness. Due to such stigma, individuals may resist labeling and may be driven to respond to mental health diagnoses with denialism. Family caregivers of individuals with mental disorders may also suffer discrimination or face stigma.
Addressing and eliminating the social stigma and perceived stigma attached to mental illness has been recognized as crucial to education and awareness surrounding mental health issues. In the United Kingdom, the Royal College of Psychiatrists organized the campaign Changing Minds (1998–2003) to help reduce stigma, while in the United States, efforts by entities such as the Born This Way Foundation and The Manic Monologues specifically focus on removing the stigma surrounding mental illness. The National Alliance on Mental Illness (NAMI) is a U.S. institution founded in 1979 to represent and advocate for those struggling with mental health issues. NAMI helps to educate about mental illnesses and health issues, while also working to eliminate stigma attached to these disorders.
Many mental health professionals are beginning to understand, or already understand, the importance of competency in religious diversity and spirituality. They are also partaking in cultural training to better understand which interventions work best for these different groups of people. The American Psychological Association explicitly states that religion must be respected. Education in spiritual and religious matters is also required by the American Psychiatric Association; however, far less attention is paid to the damage that more rigid, fundamentalist faiths commonly practiced in the United States can cause. This theme was widely politicized in 2018, such as with the creation of the Religious Liberty Task Force in July of that year. Also, many providers and practitioners in the United States are only beginning to realize that the institution of mental healthcare lacks knowledge and competence of many non-Western cultures, leaving providers in the United States ill-equipped to treat patients from different cultures.
Occupations
Occupational therapy
Occupational therapy practitioners aim to improve and enable a client or group's participation in meaningful, everyday occupations. In this sense, occupation is defined as any activity that "occupies one's time". Examples of those activities include daily tasks (dressing, bathing, eating, house chores, driving, etc.), sleep and rest, education, work, play, leisure (hobbies), and social interactions. The OT profession offers a vast range of services for all stages of life in a myriad of practice settings, though the foundations of OT come from mental health. Community support for mental health through expert-moderated support groups can aid those who want to recover from mental illness or otherwise improve their emotional well-being.
OT services focused on mental health can be provided to persons, groups, and populations across the lifespan and experiencing varying levels of mental health performance. For example, occupational therapy practitioners provide mental health services in school systems, military environments, hospitals, outpatient clinics, and inpatient mental health rehabilitation settings. Interventions or support can be provided directly through specific treatment interventions or indirectly by providing consultation to businesses, schools, or other larger groups to incorporate mental health strategies on a programmatic level. Even people who are mentally healthy can benefit from the health promotion and additional prevention strategies to reduce the impact of difficult situations.
The interventions focus on positive functioning, sensory strategies, managing emotions, interpersonal relationships, sleep, community engagement, and other cognitive skills (i.e. visual-perceptual skills, attention, memory, arousal/energy management, etc.).
Mental health in social work
Social work in mental health, also called psychiatric social work, is a process where an individual in a setting is helped to attain freedom from overlapping internal and external problems (social and economic situations, family and other relationships, the physical and organizational environment, psychiatric symptoms, etc.). It aims for harmony, quality of life, self-actualization and personal adaptation across all systems. Psychiatric social workers are mental health professionals that can assist patients and their family members in coping with both mental health issues and various economic or social problems caused by mental illness or psychiatric dysfunctions and to attain improved mental health and well-being. They are vital members of the treatment teams in Departments of Psychiatry and Behavioral Sciences in hospitals. They are employed in both outpatient and inpatient settings of a hospital, nursing homes, state and local governments, substance use clinics, correctional facilities, health care services, private practice, etc.
In the United States, social workers provide most of the mental health services. According to government sources, 60 percent of mental health professionals are clinically trained social workers, 10 percent are psychiatrists, 23 percent are psychologists, and 5 percent are psychiatric nurses.
Mental health social workers in Japan have professional knowledge of health and welfare and the skills essential for a person's well-being. Their social work training enables them, as professionals, to provide consultation and assistance for mental disabilities and social reintegration; consultation regarding the rehabilitation of victims; and advice and guidance on post-discharge residence and re-employment after hospitalized care, on major life events, and on money, self-management and other matters that equip clients to adapt to daily life. Social workers conduct individual home visits for the mentally ill and make welfare services available; with specialized training, a range of procedural services are coordinated for home, workplace and school. In an administrative role, psychiatric social workers provide consultation, leadership, conflict management and work direction. Psychiatric social workers who provide assessment and psychosocial interventions function as clinicians, counselors and municipal staff of the health centers.
Risk factors and causes of mental health problems
There are many things that can contribute to mental health problems, including biological factors, genetic factors, life experiences (such as psychological trauma or abuse), and a family history of mental health problems.
Biological factors
According to the National Institutes of Health Curriculum Supplement Series book, most scientists believe that changes in neurotransmitters can cause mental illnesses. In the section "The Biology of Mental Illnesses", the issue is explained in detail: "...there may be disruptions in the neurotransmitters dopamine, glutamate, and norepinephrine in individuals who have schizophrenia".
Demographic factors
Gender, age, ethnicity, life expectancy, longevity, population density, and community diversity are all demographic characteristics that can increase the risk and severity of mental disorders. Existing evidence demonstrates that the female gender is connected with an elevated risk of depression at different phases of life, commencing in adolescence, in different contexts. Females, for example, have a higher risk of anxiety and eating disorders, whereas males have a higher chance of substance abuse and behavioral and developmental issues. This does not imply that women are less likely to suffer from developmental disorders such as autism spectrum disorder, attention deficit hyperactivity disorder, Tourette syndrome, or early-onset schizophrenia. Ethnicity and ethnic heterogeneity have also been identified as risk factors for the prevalence of mental disorders, with minority groups being at a higher risk due to discrimination and exclusion. Approximately 8 in 10 people with autism suffer from a mental health problem in their lifetime, in comparison to 1 in 4 of the general population.
Unemployment has been shown to hurt an individual's emotional well-being, self-esteem, and more broadly their mental health. Increasing unemployment has been shown to have a significant impact on mental health, predominantly depressive disorders. This is an important consideration when reviewing the triggers for mental health disorders in any population survey. According to a 2009 meta-analysis by Paul and Moser, countries with high income inequality and poor unemployment protections experience worse mental health outcomes among the unemployed.
Emotional mental disorders are a leading cause of disabilities worldwide. Investigating the degree and severity of untreated emotional mental disorders throughout the world is a top priority of the World Mental Health (WMH) survey initiative, which was created in 1998 by the World Health Organization (WHO). "Neuropsychiatric disorders are the leading causes of disability worldwide, accounting for 37% of all healthy life years lost through disease." These disorders are most destructive to low and middle-income countries due to their inability to provide their citizens with proper aid. Despite modern treatment and rehabilitation for emotional mental health disorders, "even economically advantaged societies have competing priorities and budgetary constraints".
Unhappily married couples suffer 3–25 times the risk of developing clinical depression.
The World Mental Health survey initiative has suggested a plan for countries to redesign their mental health care systems to best allocate resources.
"A first step is documentation of services being used and the extent and nature of unmet treatment needs. A second step could be to do a cross-national comparison of service use and unmet needs in countries with different mental health care systems. Such comparisons can help to uncover optimum financing, national policies, and delivery systems for mental health care."
Knowledge of how to provide effective emotional mental health care has become imperative worldwide. Unfortunately, most countries have insufficient data to guide decisions, absent or competing visions for resources, and near-constant pressures to cut insurance and entitlements. WMH surveys were done in Africa (Nigeria, South Africa), the Americas (Colombia, Mexico, United States), Asia and the Pacific (Japan, New Zealand, Beijing and Shanghai in the People's Republic of China), Europe (Belgium, France, Germany, Italy, Netherlands, Spain, Ukraine), and the Middle East (Israel, Lebanon). Countries were classified with World Bank criteria as low-income (Nigeria), lower-middle-income (China, Colombia, South Africa, Ukraine), higher middle-income (Lebanon, Mexico), and high-income.
The coordinated surveys on emotional mental health disorders, their severity, and treatments were implemented in the aforementioned countries. These surveys assessed the frequency, types, and adequacy of mental health service use in 17 countries in which WMH surveys are complete. The WMH also examined unmet needs for treatment in strata defined by the seriousness of mental disorders. Their research showed that "the number of respondents using any 12-month mental health service was generally lower in developing than in developed countries, and the proportion receiving services tended to correspond to countries' percentages of gross domestic product spent on health care".
"High levels of unmet need worldwide are not surprising, since WHO Project ATLAS' findings of much lower mental health expenditures than was suggested by the magnitude of burdens from mental illnesses. Generally, unmet needs in low-income and middle-income countries might be attributable to these nations spending reduced amounts (usually <1%) of already diminished health budgets on mental health care, and they rely heavily on out-of-pocket spending by citizens who are ill-equipped for it".
Stress
The Centre for Addiction and Mental Health notes that a certain amount of stress is a normal part of daily life. Small doses of stress help people meet deadlines, be prepared for presentations, be productive and arrive on time for important events. However, long-term stress can become harmful: when stress becomes overwhelming and prolonged, the risks for mental health problems and medical problems increase. Some studies have also linked harmful uses of language to worse mental health.
The impact of a stressful environment has also been highlighted by different models. Mental health has often been understood from the lens of the vulnerability-stress model. In that context, stressful situations may contribute to a preexisting vulnerability to negative mental health outcomes being realized. On the other hand, the differential susceptibility hypothesis suggests that mental health outcomes are better explained by an increased sensitivity to the environment than by vulnerability. For example, it was found that children scoring higher on observer-rated environmental sensitivity often derive more harm from low-quality parenting, but also more benefits from high-quality parenting than those children scoring lower on that measure.
Poverty
Environmental factors
Prevention and promotion
"The terms mental health promotion and prevention have often been confused. Promotion is defined as intervening to optimize positive mental health by addressing determinants of positive mental health (i.e. protective factors) before a specific mental health problem has been identified, with the ultimate goal of improving the positive mental health of the population. Mental health prevention is defined as intervening to minimize mental health problems (i.e. risk factors) by addressing determinants of mental health problems before a specific mental health problem has been identified in the individual, group, or population of focus with the ultimate goal of reducing the number of future mental health problems in the population."
In order to improve mental health, the root of the issue has to be resolved. "Prevention emphasizes the avoidance of risk factors; promotion aims to enhance an individual's ability to achieve a positive sense of self-esteem, mastery, well-being, and social inclusion." Mental health promotion attempts to increase protective factors and healthy behaviors that can help prevent the onset of a diagnosable mental disorder and reduce risk factors that can lead to the development of a mental disorder. Yoga is an example of an activity that calms one's entire body and nerves. According to a study on well-being by Richards, Campania, and Muse-Burke, "mindfulness is considered to be a purposeful state, it may be that those who practice it believe in its importance and value being mindful, so that valuing of self-care activities may influence the intentional component of mindfulness." Akin to surgery, sometimes the body must be further damaged before it can properly heal.
Mental health is conventionally defined as a hybrid of the absence of a mental disorder and the presence of well-being. Focus is increasing on preventing mental disorders.
Prevention is beginning to appear in mental health strategies, including the 2004 WHO report "Prevention of Mental Disorders", the 2008 EU "Pact for Mental Health" and the 2011 US National Prevention Strategy. Some commentators have argued that a pragmatic and practical approach to mental disorder prevention at work would be to treat it the same way as physical injury prevention.
Prevention of a disorder at a young age may significantly decrease the chances that a child will have a disorder later in life, and may be the most efficient and effective measure from a public health perspective. Prevention may require the regular consultation of a physician at least twice a year to detect any signs of mental health concerns.
Additionally, social media is becoming a resource for prevention. In 2004, the Mental Health Services Act began to fund marketing initiatives to educate the public on mental health. This California-based project is working to combat the negative perception of mental health and reduce the stigma associated with it. While social media can benefit mental health, it can also lead to deterioration if not managed properly. Limiting social media intake is beneficial.
Studies report that patients in mental health care who can access and read their Electronic Health Records (EHR) or Open | Biology and health sciences | Health, fitness, and medicine | null |
990534 | https://en.wikipedia.org/wiki/Norm%20%28mathematics%29 | Norm (mathematics) | In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude or length of the vector. This norm can be defined as the square root of the inner product of a vector with itself.
A seminorm satisfies the first two properties of a norm, but may be zero for vectors other than the origin. A vector space with a specified norm is called a normed vector space. In a similar manner, a vector space with a seminorm is called a seminormed vector space.
The term pseudonorm has been used for several related meanings. It may be a synonym of "seminorm". It can also refer to a norm that can take infinite values, or to certain functions parametrised by a directed set.
Definition
Given a vector space $X$ over a subfield $F$ of the complex numbers $\mathbb{C},$ a norm on $X$ is a real-valued function $p : X \to \mathbb{R}$ with the following properties, where $|s|$ denotes the usual absolute value of a scalar $s$:
Subadditivity/Triangle inequality: $p(x + y) \leq p(x) + p(y)$ for all $x, y \in X.$
Absolute homogeneity: $p(s x) = |s|\, p(x)$ for all $x \in X$ and all scalars $s.$
Positive definiteness/positiveness/point-separating: for all $x \in X,$ if $p(x) = 0$ then $x = 0.$
Because property (2.) implies $p(0) = 0,$ some authors replace property (3.) with the equivalent condition: for every $x \in X,$ $p(x) = 0$ if and only if $x = 0.$
A seminorm on $X$ is a function $p : X \to \mathbb{R}$ that has properties (1.) and (2.), so that in particular, every norm is also a seminorm (and thus also a sublinear functional). However, there exist seminorms that are not norms. Properties (1.) and (2.) imply that if $p$ is a norm (or more generally, a seminorm) then $p(0) = 0$ and that $p$ also has the following property:
Non-negativity: $p(x) \geq 0$ for all $x \in X.$
Some authors include non-negativity as part of the definition of "norm", although this is not necessary.
Although this article defines "positive" to be a synonym of "positive definite", some authors instead define "positive" to be a synonym of "non-negative"; these definitions are not equivalent.
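As an informal complement to the definition above (an addition, not part of the original article), the following sketch numerically spot-checks subadditivity and absolute homogeneity for a candidate function on random samples; such a test can only falsify, never prove, that a function is a norm or seminorm:

```python
# Numerical spot check of seminorm axioms (triangle inequality and
# absolute homogeneity) on random samples. Names are illustrative;
# a passing check suggests, but does not prove, that p is a seminorm.
import numpy as np

def looks_like_seminorm(p, dim=3, trials=1000, tol=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        s = rng.normal()
        if p(x + y) > p(x) + p(y) + tol:         # subadditivity
            return False
        if abs(p(s * x) - abs(s) * p(x)) > tol:  # absolute homogeneity
            return False
    return True

print(looks_like_seminorm(lambda v: np.sqrt(np.sum(v**2))))  # True: Euclidean norm
print(looks_like_seminorm(lambda v: np.sum(v)))              # False: fails homogeneity
```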
Equivalent norms
Suppose that $p$ and $q$ are two norms (or seminorms) on a vector space $X.$ Then $p$ and $q$ are called equivalent, if there exist two positive real constants $c$ and $C$ such that for every vector $x \in X,$ $c\, q(x) \leq p(x) \leq C\, q(x).$
The relation "$p$ is equivalent to $q$" is reflexive, symmetric ($c q \leq p \leq C q$ implies $\tfrac{1}{C} p \leq q \leq \tfrac{1}{c} p$), and transitive and thus defines an equivalence relation on the set of all norms on $X.$
The norms $p$ and $q$ are equivalent if and only if they induce the same topology on $X.$ Any two norms on a finite-dimensional space are equivalent but this does not extend to infinite-dimensional spaces.
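As a standard worked example (added here for illustration), on $\mathbb{R}^n$ the taxicab norm $\|\cdot\|_1$ and the Euclidean norm $\|\cdot\|_2$ are equivalent with constants $c = 1$ and $C = \sqrt{n}$:
$$\|x\|_2 \leq \|x\|_1 \leq \sqrt{n}\, \|x\|_2 \quad \text{for all } x \in \mathbb{R}^n,$$
where the first inequality follows by squaring both sides, and the second from the Cauchy–Schwarz inequality applied to the vectors $(|x_1|, \ldots, |x_n|)$ and $(1, \ldots, 1).$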
Notation
If a norm $p : X \to \mathbb{R}$ is given on a vector space $X,$ then the norm of a vector $z \in X$ is usually denoted by enclosing it within double vertical lines: $\|z\| = p(z).$ Such notation is also sometimes used if $p$ is only a seminorm. For the length of a vector in Euclidean space (which is an example of a norm, as explained below), the notation $|x|$ with single vertical lines is also widespread.
Examples
Every (real or complex) vector space admits a norm: If $x_\bullet = (x_i)_{i \in I}$ is a Hamel basis for a vector space $X,$ then the real-valued map that sends $x = \sum_{i \in I} s_i x_i \in X$ (where all but finitely many of the scalars $s_i$ are $0$) to $\sum_{i \in I} |s_i|$ is a norm on $X.$ There are also a large number of norms that exhibit additional properties that make them useful for specific problems.
Absolute-value norm
The absolute value $\|x\| = |x|$ is a norm on the vector space formed by the real or complex numbers. The complex numbers form a one-dimensional vector space over themselves and a two-dimensional vector space over the reals; the absolute value is a norm for these two structures.
Any norm $p$ on a one-dimensional vector space $X$ is equivalent (up to scaling) to the absolute value norm, meaning that there is a norm-preserving isomorphism of vector spaces $f : \mathbb{F} \to X,$ where $\mathbb{F}$ is either $\mathbb{R}$ or $\mathbb{C},$ and norm-preserving means that $|x| = p(f(x)).$
This isomorphism is given by sending $1 \in \mathbb{F}$ to a vector of norm $1,$ which exists since such a vector is obtained by multiplying any non-zero vector by the inverse of its norm.
Euclidean norm
On the $n$-dimensional Euclidean space $\mathbb{R}^n,$ the intuitive notion of length of the vector $\boldsymbol{x} = (x_1, x_2, \ldots, x_n)$ is captured by the formula $\|\boldsymbol{x}\|_2 := \sqrt{x_1^2 + \cdots + x_n^2}.$
This is the Euclidean norm, which gives the ordinary distance from the origin to the point X—a consequence of the Pythagorean theorem.
This operation may also be referred to as "SRSS", which is an acronym for the square root of the sum of squares.
The Euclidean norm is by far the most commonly used norm on $\mathbb{R}^n,$ but there are other norms on this vector space as will be shown below.
However, all these norms are equivalent in the sense that they all define the same topology on finite-dimensional spaces.
The inner product of two vectors of a Euclidean vector space is the dot product of their coordinate vectors over an orthonormal basis.
Hence, the Euclidean norm can be written in a coordinate-free way as $\|x\| := \sqrt{x \cdot x}.$
The Euclidean norm is also called the quadratic norm, $L^2$ norm, $\ell^2$ norm, 2-norm, or square norm; see $L^p$ space.
It defines a distance function called the Euclidean length, $L^2$ distance, or $\ell^2$ distance.
The set of vectors in $\mathbb{R}^{n+1}$ whose Euclidean norm is a given positive constant forms an $n$-sphere.
Euclidean norm of complex numbers
The Euclidean norm of a complex number is the absolute value (also called the modulus) of it, if the complex plane is identified with the Euclidean plane $\mathbb{R}^2.$ This identification of the complex number $x + iy$ as a vector in the Euclidean plane makes the quantity $\sqrt{x^2 + y^2}$ (as first suggested by Euler) the Euclidean norm associated with the complex number. For $z = x + iy,$ the norm can also be written as $\sqrt{\bar{z} z},$ where $\bar{z}$ is the complex conjugate of $z.$
Quaternions and octonions
There are exactly four Euclidean Hurwitz algebras over the real numbers. These are the real numbers $\mathbb{R},$ the complex numbers $\mathbb{C},$ the quaternions $\mathbb{H},$ and lastly the octonions $\mathbb{O},$ where the dimensions of these spaces over the real numbers are $1, 2, 4,$ and $8,$ respectively.
The canonical norms on $\mathbb{R}$ and $\mathbb{C}$ are their absolute value functions, as discussed previously.
The canonical norm on $\mathbb{H}$ of quaternions is defined by
$\|q\| = \sqrt{q q^*} = \sqrt{q^* q} = \sqrt{a^2 + b^2 + c^2 + d^2}$
for every quaternion $q = a + b\,\mathbf{i} + c\,\mathbf{j} + d\,\mathbf{k}$ in $\mathbb{H}.$ This is the same as the Euclidean norm on $\mathbb{H}$ considered as the vector space $\mathbb{R}^4.$ Similarly, the canonical norm on the octonions is just the Euclidean norm on $\mathbb{R}^8.$
Finite-dimensional complex normed spaces
On an $n$-dimensional complex space $\mathbb{C}^n,$ the most common norm is
$\|z\| := \sqrt{|z_1|^2 + \cdots + |z_n|^2} = \sqrt{z_1 \bar{z}_1 + \cdots + z_n \bar{z}_n}.$
In this case, the norm can be expressed as the square root of the inner product of the vector and itself:
$\|x\| := \sqrt{x^H x},$
where $x$ is represented as a column vector and $x^H$ denotes its conjugate transpose.
This formula is valid for any inner product space, including Euclidean and complex spaces. For complex spaces, the inner product is equivalent to the complex dot product. Hence the formula in this case can also be written using the following notation:
$\|x\| := \sqrt{x \cdot x}.$
Taxicab norm or Manhattan norm
The taxicab norm is defined by $\|x\|_1 := \sum_{i=1}^n |x_i|.$ The name relates to the distance a taxi has to drive in a rectangular street grid (like that of the New York borough of Manhattan) to get from the origin to the point $x.$
The set of vectors whose 1-norm is a given constant forms the surface of a cross polytope, which has dimension equal to the dimension of the vector space minus 1.
The Taxicab norm is also called the norm. The distance derived from this norm is called the Manhattan distance or distance.
The 1-norm is simply the sum of the absolute values of the components.
In contrast,
$\sum_{i=1}^n x_i$
is not a norm because it may yield negative results.
p-norm
Let $p \geq 1$ be a real number.
The $p$-norm (also called $\ell^p$-norm) of vector $x = (x_1, \ldots, x_n)$ is
$\|x\|_p := \left( \sum_{i=1}^n |x_i|^p \right)^{1/p}.$
For $p = 1$ we get the taxicab norm, for $p = 2$ we get the Euclidean norm, and as $p$ approaches $\infty$ the $p$-norm approaches the infinity norm or maximum norm:
$\|x\|_\infty := \max_i |x_i|.$
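A minimal Python sketch (not part of the original article; the function names are illustrative) that computes these norms directly from the definitions:

```python
def p_norm(x, p):
    """p-norm of a sequence of numbers for a real p >= 1."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def max_norm(x):
    """Infinity (maximum) norm: the limit of the p-norm as p grows."""
    return max(abs(xi) for xi in x)

x = [3.0, -4.0, 12.0]
print(p_norm(x, 1))    # taxicab norm: 19.0
print(p_norm(x, 2))    # Euclidean norm: 13.0
print(p_norm(x, 50))   # already close to the maximum norm
print(max_norm(x))     # 12.0
```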
The -norm is related to the generalized mean or power mean.
For $p = 2,$ the $2$-norm is even induced by a canonical inner product $\langle \cdot, \cdot \rangle,$ meaning that $\|x\|_2 = \sqrt{\langle x, x \rangle}$ for all vectors $x.$ This inner product can be expressed in terms of the norm by using the polarization identity.
On $\ell^2,$ this inner product is the Euclidean inner product defined by
$\langle (x_n)_n, (y_n)_n \rangle_{\ell^2} := \sum_n \bar{x}_n y_n,$
while for the space $L^2(X, \mu)$ associated with a measure space $(X, \Sigma, \mu),$ which consists of all square-integrable functions, this inner product is
$\langle f, g \rangle_{L^2} := \int_X \overline{f(x)} g(x)\, \mathrm{d}x.$
This definition is still of some interest for $0 < p < 1,$ but the resulting function does not define a norm, because it violates the triangle inequality.
What is true for this case of $0 < p < 1,$ even in the measurable analog, is that the corresponding $L^p$ class is a vector space, and it is also true that the function
$\int_X |f(x) - g(x)|^p \, \mathrm{d}\mu$
(without $p$th root) defines a distance that makes $L^p(X)$ into a complete metric topological vector space. These spaces are of great interest in functional analysis, probability theory and harmonic analysis.
However, aside from trivial cases, this topological vector space is not locally convex, and has no continuous non-zero linear forms. Thus the topological dual space contains only the zero functional.
The partial derivative of the $p$-norm is given by
$\frac{\partial}{\partial x_k} \|x\|_p = \frac{x_k |x_k|^{p-2}}{\|x\|_p^{p-1}}.$
The derivative with respect to $x,$ therefore, is
$\frac{\partial \|x\|_p}{\partial x} = \frac{x \circ |x|^{p-2}}{\|x\|_p^{p-1}},$
where $\circ$ denotes the Hadamard product and $|\cdot|$ is used for the absolute value of each component of the vector.
For the special case of $p = 2,$ this becomes
$\frac{\partial}{\partial x_k} \|x\|_2 = \frac{x_k}{\|x\|_2},$
or
$\frac{\partial \|x\|_2}{\partial x} = \frac{x}{\|x\|_2}.$
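The $p = 2$ case can be checked directly with the chain rule; this short verification is added here for clarity and is not part of the original text:

```latex
% Differentiating the Euclidean norm written as the square root of a sum of squares.
\[
  \frac{\partial}{\partial x_k} \|x\|_2
  = \frac{\partial}{\partial x_k} \Bigl( \sum_{i=1}^n x_i^2 \Bigr)^{1/2}
  = \frac{1}{2} \Bigl( \sum_{i=1}^n x_i^2 \Bigr)^{-1/2} \cdot 2 x_k
  = \frac{x_k}{\|x\|_2},
  \qquad x \neq 0 .
\]
```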
Maximum norm (special case of: infinity norm, uniform norm, or supremum norm)
If $x$ is some vector such that $x = (x_1, x_2, \ldots, x_n),$ then:
$\|x\|_\infty := \max\left(|x_1|, \ldots, |x_n|\right).$
The set of vectors whose infinity norm is a given constant, $c,$ forms the surface of a hypercube with edge length $2c.$
Energy norm
The energy norm of a vector $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$ is defined in terms of a symmetric positive definite matrix $A \in \mathbb{R}^{n \times n}$ as
$\|x\|_A := \sqrt{x^{\mathsf T} A x}.$
It is clear that if $A$ is the identity matrix, this norm corresponds to the Euclidean norm. If $A$ is diagonal, this norm is also called a weighted norm. The energy norm is induced by the inner product given by $\langle x, y \rangle_A := x^{\mathsf T} A y$ for $x, y \in \mathbb{R}^n.$
In general, the value of the norm is dependent on the spectrum of $A$: for a vector $x$ with a Euclidean norm of one, the value of $\|x\|_A$ is bounded from below and above by the square roots of the smallest and largest eigenvalues of $A$ respectively, where the bounds are achieved if $x$ coincides with the corresponding (normalized) eigenvectors. Based on the symmetric matrix square root $A^{1/2},$ the energy norm of a vector can be written in terms of the standard Euclidean norm as
$\|x\|_A = \|A^{1/2} x\|_2.$
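A small numerical sketch, assuming NumPy is available (added for illustration; variable names are arbitrary), that checks the identity $\|x\|_A = \|A^{1/2} x\|_2$:

```python
import numpy as np

# A symmetric positive definite matrix and a test vector.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, 2.0])

# Energy norm from the definition: sqrt(x^T A x).
energy_norm = np.sqrt(x @ A @ x)

# Symmetric square root of A via its eigendecomposition.
eigvals, eigvecs = np.linalg.eigh(A)
A_sqrt = eigvecs @ np.diag(np.sqrt(eigvals)) @ eigvecs.T

# The two expressions agree (up to floating-point rounding).
print(energy_norm, np.linalg.norm(A_sqrt @ x))
```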
Zero norm
In probability and functional analysis, the zero norm induces a complete metric topology for the space of measurable functions and for the F-space of sequences with F–norm $(x_n) \mapsto \sum_n 2^{-n} \frac{x_n}{1 + x_n}.$
Here we mean by F-norm some real-valued function $\lVert \cdot \rVert$ on an F-space with distance $d,$ such that $\lVert x \rVert = d(x, 0).$ The F-norm described above is not a norm in the usual sense because it lacks the required homogeneity property.
Hamming distance of a vector from zero
In metric geometry, the discrete metric takes the value one for distinct points and zero otherwise. When applied coordinate-wise to the elements of a vector space, the discrete distance defines the Hamming distance, which is important in coding and information theory.
In the field of real or complex numbers, the distance of the discrete metric from zero is not homogeneous in the non-zero point; indeed, the distance from zero remains one as its non-zero argument approaches zero.
However, the discrete distance of a number from zero does satisfy the other properties of a norm, namely the triangle inequality and positive definiteness.
When applied component-wise to vectors, the discrete distance from zero behaves like a non-homogeneous "norm", which counts the number of non-zero components in its vector argument; again, this non-homogeneous "norm" is discontinuous.
In signal processing and statistics, David Donoho referred to the zero "norm" with quotation marks.
Following Donoho's notation, the zero "norm" of is simply the number of non-zero coordinates of or the Hamming distance of the vector from zero.
When this "norm" is localized to a bounded set, it is the limit of $p$-norms as $p$ approaches 0.
Of course, the zero "norm" is not truly a norm, because it is not positive homogeneous.
Indeed, it is not even an F-norm in the sense described above, since it is discontinuous, jointly and severally, with respect to the scalar argument in scalar–vector multiplication and with respect to its vector argument.
Abusing terminology, some engineers omit Donoho's quotation marks and inappropriately call the number-of-non-zeros function the norm, echoing the notation for the Lebesgue space of measurable functions.
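A tiny Python illustration (added here, not from the original text) of why the count of non-zero entries fails absolute homogeneity:

```python
def zero_norm(x):
    """Donoho's zero "norm": the number of non-zero components."""
    return sum(1 for xi in x if xi != 0)

x = [0.0, 3.0, -1.5, 0.0]
print(zero_norm(x))                      # 2
print(zero_norm([2 * xi for xi in x]))   # still 2, not 2 * 2
# Scaling the vector does not scale the "norm", so homogeneity fails.
```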
Infinite dimensions
The generalization of the above norms to an infinite number of components leads to the $\ell^p$ and $L^p$ spaces for $p \geq 1,$ with norms
$\|x\|_p = \left( \sum_{i \in \mathbb{N}} |x_i|^p \right)^{1/p}$ and $\|f\|_{p,X} = \left( \int_X |f(x)|^p \, \mathrm{d}x \right)^{1/p}$
for complex-valued sequences and functions on $X \subseteq \mathbb{R}^n$ respectively, which can be further generalized (see Haar measure). These norms are also valid in the limit as $p \to +\infty,$ giving a supremum norm, and are called $\ell^\infty$ and $L^\infty.$
Any inner product induces in a natural way the norm $\|x\| := \sqrt{\langle x, x \rangle}.$
Other examples of infinite-dimensional normed vector spaces can be found in the Banach space article.
Generally, these norms do not give the same topologies. For example, an infinite-dimensional $\ell^p$ space gives a strictly finer topology than an infinite-dimensional $\ell^q$ space when $p < q.$
Composite norms
Other norms on can be constructed by combining the above; for example
is a norm on
For any norm $p$ and any injective linear transformation $A$ we can define a new norm of $x,$ equal to $p(Ax).$
In 2D, with $A$ a rotation by 45° and a suitable scaling, this changes the taxicab norm into the maximum norm. Each $A$ applied to the taxicab norm, up to inversion and interchanging of axes, gives a different unit ball: a parallelogram of a particular shape, size, and orientation.
In 3D, this is similar but different for the 1-norm (octahedrons) and the maximum norm (prisms with parallelogram base).
There are examples of norms that are not defined by "entrywise" formulas. For instance, the Minkowski functional of a centrally-symmetric convex body in (centered at zero) defines a norm on (see below).
All the above formulas also yield norms on $\mathbb{C}^n$ without modification.
There are also norms on spaces of matrices (with real or complex entries), the so-called matrix norms.
In abstract algebra
Let $E$ be a finite extension of a field $k$ of inseparable degree $p^\mu,$ and let $k$ have algebraic closure $K.$ If the distinct embeddings of $E$ are $\{\sigma_j\}_j,$ then the Galois-theoretic norm of an element $\alpha \in E$ is the value $\left( \prod_j \sigma_j(\alpha) \right)^{p^\mu}.$ As that function is homogeneous of degree $[E : k],$ the Galois-theoretic norm is not a norm in the sense of this article. However, the $[E : k]$-th root of the norm (assuming that concept makes sense) is a norm.
Composition algebras
The concept of norm in composition algebras does not share the usual properties of a norm since null vectors are allowed. A composition algebra $(A, {}^*, N)$ consists of an algebra over a field $A,$ an involution ${}^*,$ and a quadratic form $N(x) = x x^*$ called the "norm".
The characteristic feature of composition algebras is the homomorphism property of $N$: for the product $w z$ of two elements $w$ and $z$ of the composition algebra, its norm satisfies $N(wz) = N(w) N(z).$ In the case of division algebras $\mathbb{R},$ $\mathbb{C},$ $\mathbb{H},$ and $\mathbb{O},$ the composition algebra norm is the square of the norm discussed above. In those cases the norm is a definite quadratic form. In the split algebras the norm is an isotropic quadratic form.
Properties
For any norm $p$ on a vector space $X,$ the reverse triangle inequality holds: $p(x \pm y) \geq |p(x) - p(y)|$ for all $x, y \in X.$
If $u : X \to Y$ is a continuous linear map between normed spaces, then the norm of $u$ and the norm of the transpose of $u$ are equal.
For the $L^p$ norms, we have Hölder's inequality
$|\langle x, y \rangle| \leq \|x\|_p \|y\|_q, \qquad \tfrac{1}{p} + \tfrac{1}{q} = 1.$
A special case of this is the Cauchy–Schwarz inequality:
$|\langle x, y \rangle| \leq \|x\|_2 \|y\|_2.$
Every norm is a seminorm and thus satisfies all properties of the latter. In turn, every seminorm is a sublinear function and thus satisfies all properties of the latter. In particular, every norm is a convex function.
Equivalence
The concept of unit circle (the set of all vectors of norm 1) is different in different norms: for the 1-norm, the unit circle is a square oriented as a diamond; for the 2-norm (Euclidean norm), it is the well-known unit circle; while for the infinity norm, it is an axis-aligned square. For any $p$-norm, it is a superellipse with congruent axes (see the accompanying illustration). Due to the definition of the norm, the unit circle must be convex and centrally symmetric (therefore, for example, the unit ball may be a rectangle but cannot be a triangle, and $p \geq 1$ is required for a $p$-norm).
In terms of the vector space, the seminorm defines a topology on the space, and this is a Hausdorff topology precisely when the seminorm can distinguish between distinct vectors, which is again equivalent to the seminorm being a norm. The topology thus defined (by either a norm or a seminorm) can be understood either in terms of sequences or open sets. A sequence of vectors $\{v_n\}$ is said to converge in norm to $v$ if $\|v_n - v\| \to 0$ as $n \to \infty.$ Equivalently, the topology consists of all sets that can be represented as a union of open balls. If $(X, \|\cdot\|)$ is a normed space then $\|x - y\| = \|x - z\| + \|z - y\|$ for all $x, y \in X$ and $z \in [x, y].$
Two norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$ on a vector space $X$ are called equivalent if they induce the same topology, which happens if and only if there exist positive real numbers $C$ and $D$ such that for all $x \in X,$ $C \|x\|_\alpha \leq \|x\|_\beta \leq D \|x\|_\alpha.$
For instance, if $p > r \geq 1$ on $\mathbb{C}^n,$ then
$\|x\|_p \leq \|x\|_r \leq n^{(1/r - 1/p)} \|x\|_p.$
In particular,
$\|x\|_2 \leq \|x\|_1 \leq \sqrt{n} \|x\|_2,$
$\|x\|_\infty \leq \|x\|_2 \leq \sqrt{n} \|x\|_\infty,$
$\|x\|_\infty \leq \|x\|_1 \leq n \|x\|_\infty.$
That is, the norms $\|\cdot\|_1,$ $\|\cdot\|_2,$ and $\|\cdot\|_\infty$ are all equivalent on $\mathbb{C}^n.$
If the vector space is a finite-dimensional real or complex one, all norms are equivalent. On the other hand, in the case of infinite-dimensional vector spaces, not all norms are equivalent.
Equivalent norms define the same notions of continuity and convergence and for many purposes do not need to be distinguished. To be more precise, the uniform structures defined by equivalent norms on the vector space are uniformly isomorphic.
Classification of seminorms: absolutely convex absorbing sets
All seminorms on a vector space $X$ can be classified in terms of absolutely convex absorbing subsets $A$ of $X.$ To each such subset corresponds a seminorm $p_A$ called the gauge of $A,$ defined as
$p_A(x) := \inf \{ r > 0 : x \in r A \},$
where $\inf$ is the infimum, with the property that
$\{ x : p_A(x) < 1 \} \subseteq A \subseteq \{ x : p_A(x) \leq 1 \}.$
Conversely:
Any locally convex topological vector space has a local basis consisting of absolutely convex sets. A common method to construct such a basis is to use a family $(p)$ of seminorms $p$ that separates points: the collection of all finite intersections of sets $\{p < 1/n\}$ turns the space into a locally convex topological vector space so that every $p$ is continuous.
Such a method is used to design weak and weak* topologies.
norm case:
Suppose now that $(p)$ contains a single $p$: since $(p)$ is separating, $p$ is a norm, and $A = \{p < 1\}$ is its open unit ball. Then $A$ is an absolutely convex bounded neighbourhood of 0, and $p = p_A$ is continuous.
The converse is due to Andrey Kolmogorov: any locally convex and locally bounded topological vector space is normable. Precisely:
If $B$ is an absolutely convex bounded neighbourhood of 0, the gauge $g_B$ (so that $B = \{ g_B < 1 \}$) is a norm.
| Mathematics | Linear algebra | null |
990632 | https://en.wikipedia.org/wiki/Dynamical%20systems%20theory | Dynamical systems theory | Dynamical systems theory is an area of mathematics used to describe the behavior of complex dynamical systems, usually by employing differential equations or difference equations. When differential equations are employed, the theory is called continuous dynamical systems. From a physical point of view, continuous dynamical systems is a generalization of classical mechanics, a generalization where the equations of motion are postulated directly and are not constrained to be Euler–Lagrange equations of a least action principle. When difference equations are employed, the theory is called discrete dynamical systems. When the time variable runs over a set that is discrete over some intervals and continuous over other intervals or is any arbitrary time-set such as a Cantor set, one gets dynamic equations on time scales. Some situations may also be modeled by mixed operators, such as differential-difference equations.
This theory deals with the long-term qualitative behavior of dynamical systems, and studies the nature of, and when possible the solutions of, the equations of motion of systems that are often primarily mechanical or otherwise physical in nature, such as planetary orbits and the behaviour of electronic circuits, as well as systems that arise in biology, economics, and elsewhere. Much of modern research is focused on the study of chaotic systems and bizarre systems.
This field of study is also called just dynamical systems, mathematical dynamical systems theory or the mathematical theory of dynamical systems.
Overview
Dynamical systems theory and chaos theory deal with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible steady states?", or "Does the long-term behavior of the system depend on its initial condition?"
An important goal is to describe the fixed points, or steady states of a given dynamical system; these are values of the variable that do not change over time. Some of these fixed points are attractive, meaning that if the system starts out in a nearby state, it converges towards the fixed point.
Similarly, one is interested in periodic points, states of the system that repeat after several timesteps. Periodic points can also be attractive. Sharkovskii's theorem is an interesting statement about the number of periodic points of a one-dimensional discrete dynamical system.
Even simple nonlinear dynamical systems often exhibit seemingly random behavior that has been called chaos. The branch of dynamical systems that deals with the clean definition and investigation of chaos is called chaos theory.
History
The concept of dynamical systems theory has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is given implicitly by a relation that gives the state of the system only a short time into the future.
Before the advent of fast computing machines, solving a dynamical system required sophisticated mathematical techniques and could only be accomplished for a small class of dynamical systems.
Several textbooks provide excellent presentations of mathematical dynamical systems theory.
Concepts
Dynamical systems
The dynamical system concept is a mathematical formalization for any fixed "rule" that describes the time dependence of a point's position in its ambient space. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, and the number of fish each spring in a lake.
A dynamical system has a state determined by a collection of real numbers, or more generally by a set of points in an appropriate state space. Small changes in the state of the system correspond to small changes in the numbers. The numbers are also the coordinates of a geometrical space—a manifold. The evolution rule of the dynamical system is a fixed rule that describes what future states follow from the current state. The rule may be deterministic (for a given time interval one future state can be precisely predicted given the current state) or stochastic (the evolution of the state can only be predicted with a certain probability).
Dynamicism
Dynamicism, also termed the dynamic hypothesis, the dynamical hypothesis in cognitive science, or dynamic cognition, is a new approach in cognitive science exemplified by the work of philosopher Tim van Gelder. It argues that differential equations are more suited to modelling cognition than more traditional computer models.
Nonlinear system
In mathematics, a nonlinear system is a system that is not linear—i.e., a system that does not satisfy the superposition principle. Less technically, a nonlinear system is any problem where the variable(s) to solve for cannot be written as a linear sum of independent components. A nonhomogeneous system, which is linear apart from the presence of a function of the independent variables, is nonlinear according to a strict definition, but such systems are usually studied alongside linear systems, because they can be transformed to a linear system as long as a particular solution is known.
Related fields
Arithmetic dynamics
Arithmetic dynamics is a field that emerged in the 1990s that amalgamates two areas of mathematics, dynamical systems and number theory. Classically, discrete dynamics refers to the study of the iteration of self-maps of the complex plane or real line. Arithmetic dynamics is the study of the number-theoretic properties of integer, rational, -adic, and/or algebraic points under repeated application of a polynomial or rational function.
Chaos theory
Chaos theory describes the behavior of certain dynamical systems – that is, systems whose state evolves with time – that may exhibit dynamics that are highly sensitive to initial conditions (popularly referred to as the butterfly effect). As a result of this sensitivity, which manifests itself as an exponential growth of perturbations in the initial conditions, the behavior of chaotic systems appears random. This happens even though these systems are deterministic, meaning that their future dynamics are fully defined by their initial conditions, with no random elements involved. This behavior is known as deterministic chaos, or simply chaos.
Complex systems
Complex systems is a scientific field that studies the common properties of systems considered complex in nature, society, and science. It is also called complex systems theory, complexity science, study of complex systems and/or sciences of complexity. The key problems of such systems are difficulties with their formal modeling and simulation. From such perspective, in different research contexts complex systems are defined on the base of their different attributes.
The study of complex systems is bringing new vitality to many areas of science where a more typical reductionist strategy has fallen short. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines including neurosciences, social sciences, meteorology, chemistry, physics, computer science, psychology, artificial life, evolutionary computation, economics, earthquake prediction, molecular biology and inquiries into the nature of living cells themselves.
Control theory
Control theory is an interdisciplinary branch of engineering and mathematics, in part it deals with influencing the behavior of dynamical systems.
Ergodic theory
Ergodic theory is a branch of mathematics that studies dynamical systems with an invariant measure and related problems. Its initial development was motivated by problems of statistical physics.
Functional analysis
Functional analysis is the branch of mathematics, and specifically of analysis, concerned with the study of vector spaces and operators acting upon them. It has its historical roots in the study of functional spaces, in particular transformations of functions, such as the Fourier transform, as well as in the study of differential and integral equations. This usage of the word functional goes back to the calculus of variations, implying a function whose argument is a function. Its use in general has been attributed to mathematician and physicist Vito Volterra and its founding is largely attributed to mathematician Stefan Banach.
Graph dynamical systems
The concept of graph dynamical systems (GDS) can be used to capture a wide range of processes taking place on graphs or networks. A major theme in the mathematical and computational analysis of graph dynamical systems is to relate their structural properties (e.g. the network connectivity) and the global dynamics that result.
Projected dynamical systems
Projected dynamical systems is a mathematical theory investigating the behaviour of dynamical systems where solutions are restricted to a constraint set. The discipline shares connections to and applications with both the static world of optimization and equilibrium problems and the dynamical world of ordinary differential equations. A projected dynamical system is given by the flow to the projected differential equation.
Symbolic dynamics
Symbolic dynamics is the practice of modelling a topological or smooth dynamical system by a discrete space consisting of infinite sequences of abstract symbols, each of which corresponds to a state of the system, with the dynamics (evolution) given by the shift operator.
System dynamics
System dynamics is an approach to understanding the behaviour of systems over time. It deals with internal feedback loops and time delays that affect the behaviour and state of the entire system. What makes using system dynamics different from other approaches to studying systems is the language used to describe feedback loops with stocks and flows. These elements help describe how even seemingly simple systems display baffling nonlinearity.
Topological dynamics
Topological dynamics is a branch of the theory of dynamical systems in which qualitative, asymptotic properties of dynamical systems are studied from the viewpoint of general topology.
Applications
In biomechanics
In sports biomechanics, dynamical systems theory has emerged in the movement sciences as a viable framework for modeling athletic performance and efficiency. It comes as no surprise, since dynamical systems theory has its roots in Analytical mechanics. From psychophysiological perspective, the human movement system is a highly intricate network of co-dependent sub-systems (e.g. respiratory, circulatory, nervous, skeletomuscular, perceptual) that are composed of a large number of interacting components (e.g. blood cells, oxygen molecules, muscle tissue, metabolic enzymes, connective tissue and bone). In dynamical systems theory, movement patterns emerge through generic processes of self-organization found in physical and biological systems. There is no research validation of any of the claims associated to the conceptual application of this framework.
In cognitive science
Dynamical system theory has been applied in the field of neuroscience and cognitive development, especially in the neo-Piagetian theories of cognitive development. It holds that cognitive development is best represented by physical theories rather than theories based on syntax and AI. It is also believed that differential equations are the most appropriate tool for modeling human behavior. These equations are interpreted to represent an agent's cognitive trajectory through state space. In other words, dynamicists argue that psychology should be (or is) the description (via differential equations) of the cognitions and behaviors of an agent under certain environmental and internal pressures. The language of chaos theory is also frequently adopted.
In it, the learner's mind reaches a state of disequilibrium where old patterns have broken down. This is the phase transition of cognitive development. Self-organization (the spontaneous creation of coherent forms) sets in as activity levels link to each other. Newly formed macroscopic and microscopic structures support each other, speeding up the process. These links form the structure of a new state of order in the mind through a process called scalloping (the repeated building up and collapsing of complex performance.) This new, novel state is progressive, discrete, idiosyncratic and unpredictable.
Dynamic systems theory has recently been used to explain a long-unanswered problem in child development referred to as the A-not-B error.
Further, since the middle of the 1990s cognitive science, oriented towards a system theoretical connectionism, has increasingly adopted the methods from (nonlinear) “Dynamic Systems Theory (DST)“. A variety of neurosymbolic cognitive neuroarchitectures in modern connectionism, considering their mathematical structural core, can be categorized as (nonlinear) dynamical systems. These attempts in neurocognition to merge connectionist cognitive neuroarchitectures with DST come from not only neuroinformatics and connectionism, but also recently from developmental psychology (“Dynamic Field Theory (DFT)”) and from “evolutionary robotics” and “developmental robotics” in connection with the mathematical method of “evolutionary computation (EC)”. For an overview see Maurer.
In second language development
The application of Dynamic Systems Theory to study second language acquisition is attributed to Diane Larsen-Freeman who published an article in 1997 in which she claimed that second language acquisition should be viewed as a developmental process which includes language attrition as well as language acquisition. In her article she claimed that language should be viewed as a dynamic system which is dynamic, complex, nonlinear, chaotic, unpredictable, sensitive to initial conditions, open, self-organizing, feedback sensitive, and adaptive.
| Mathematics | Other | null |
991054 | https://en.wikipedia.org/wiki/Sol%20%28colloid%29 | Sol (colloid) | A sol is a colloidal suspension made out of tiny solid particles in a continuous liquid medium. Sols are stable, so that they do not settle down when left undisturbed, and exhibit the Tyndall effect, which is the scattering of light by the particles in the colloid. The size of the particles can vary from 1 nm to 100 nm. Examples include, amongst others, blood, pigmented ink, cell fluids, paint, antacids and mud.
Artificial sols can be prepared by two main methods: dispersion and condensation. In the dispersion method, solid particles are reduced to colloidal dimensions through techniques such as ball milling and Bredig's arc method. In the condensation method, small particles are formed from larger molecules through a chemical reaction.
The stability of sols can be maintained through the use of dispersing agents, which prevent the particles from clumping together or settling out of the suspension. Sols are often used in the sol-gel process, in which a sol is converted into a gel through the addition of a crosslinking agent.
In a sol, solid particles are dispersed in a liquid continuous phase, while in an emulsion, liquid droplets are dispersed in a liquid or semi-solid continuous phase.
| Physical sciences | Mixture | Chemistry |
991169 | https://en.wikipedia.org/wiki/Rodenticide | Rodenticide | Rodenticides are chemicals made and sold for the purpose of killing rodents. While commonly referred to as "rat poison", rodenticides are also used to kill mice, woodchucks, chipmunks, porcupines, nutria, beavers, and voles.
Some rodenticides are lethal after one exposure while others require more than one. Rodents are disinclined to gorge on an unknown food (perhaps reflecting an adaptation to their inability to vomit), preferring to sample, wait and observe whether it makes them or other rats sick. This phenomenon of poison shyness is the rationale for poisons that kill only after multiple doses.
Besides being directly toxic to the mammals that ingest them, including dogs, cats, and humans, many rodenticides present a secondary poisoning risk to animals that hunt or scavenge the dead corpses of rats.
Classes of rodenticides
Anticoagulants
Anticoagulants are defined as chronic (death occurs one to two weeks after ingestion of the lethal dose, rarely sooner), single-dose (second generation) or multiple-dose (first generation) rodenticides, acting by effective blocking of the vitamin-K cycle, resulting in inability to produce essential blood-clotting factors—mainly coagulation factors II (prothrombin) and VII (proconvertin).
In addition to this specific metabolic disruption, massive toxic doses of 4-hydroxycoumarin, 4-thiochromenone and 1,3-indandione anticoagulants cause damage to tiny blood vessels (capillaries), increasing their permeability, causing internal bleeding. These effects are gradual, developing over several days. In the final phase of the intoxication, the exhausted rodent collapses due to hemorrhagic shock or severe anemia and dies. The question of whether the use of these rodenticides can be considered humane has been raised.
The main benefit of anticoagulants over other poisons is that the time taken for the poison to induce death means that the rats do not associate the damage with their feeding habits.
First-generation rodenticidal anticoagulants generally have shorter elimination half-lives, require higher concentrations (usually between 0.005% and 0.1%) and consecutive intake over days in order to accumulate the lethal dose, and are less toxic than second-generation agents.
Second-generation anticoagulant rodenticides (or SGARs) are far more toxic than those of the first generation. They are generally applied in lower concentrations in baits—usually on the order of 0.001% to 0.005%—are lethal after a single ingestion of bait and are also effective against strains of rodents that became resistant to first-generation anticoagulants; thus, the second-generation anticoagulants are sometimes referred to as "superwarfarins".
Phylloquinone has been suggested, and successfully used, as an antidote for pets or humans accidentally or intentionally exposed to anticoagulant poisons. Some of these poisons act by inhibiting liver functions and in advanced stages of poisoning, several blood-clotting factors are absent, and the volume of circulating blood is diminished, so that a blood transfusion (optionally with the clotting factors present) can save a person who has been poisoned, an advantage over some older poisons. A unique enzyme produced by the liver enables the body to recycle vitamin K. To produce the blood clotting factors that prevent excessive bleeding, the body needs vitamin K. Anticoagulants hinder this enzyme's ability to function. Internal bleeding can start once the body's reserve of vitamin K is depleted by exposure to enough anticoagulant. Because they bind more tightly to the vitamin K-recycling enzyme, single-dose (second-generation) anticoagulants are more hazardous. They may also obstruct several stages of the recycling of vitamin K. Single-dose or second-generation anticoagulants can be stored in the liver because they are not quickly eliminated from the body.
Metal phosphides
Metal phosphides have been used as a means of killing rodents and are considered single-dose fast acting rodenticides (death occurs commonly within 1–3 days after single bait ingestion). A bait consisting of food and a phosphide (usually zinc phosphide) is left where the rodents can eat it. The acid in the digestive system of the rodent reacts with the phosphide to generate toxic phosphine gas. This method of vermin control has possible use in places where rodents are resistant to some of the anticoagulants, particularly for control of house and field mice; zinc phosphide baits are also cheaper than most second-generation anticoagulants, so that sometimes, in the case of large infestation by rodents, their population is initially reduced by copious amounts of zinc phosphide bait applied, and the rest of population that survived the initial fast-acting poison is then eradicated by prolonged feeding on anticoagulant bait. Inversely, the individual rodents that survived anticoagulant bait poisoning (rest population) can be eradicated by pre-baiting them with nontoxic bait for a week or two (this is important to overcome bait shyness, and to get rodents used to feeding in specific areas by specific food, especially in eradicating rats) and subsequently applying poisoned bait of the same sort as used for pre-baiting until all consumption of the bait ceases (usually within 2–4 days). These methods of alternating rodenticides with different modes of action gives actual or almost 100% eradications of the rodent population in the area, if the acceptance/palatability of baits are good (i.e., rodents feed on it readily).
Zinc phosphide is typically added to rodent baits in a concentration of 0.75% to 2.0%. The baits have strong, pungent garlic-like odor due to the phosphine liberated by hydrolysis. The odor attracts (or, at least, does not repel) rodents, but has a repulsive effect on other mammals. Birds, notably wild turkeys, are not sensitive to the smell, and might feed on the bait, and thus fall victim to the poison.
The tablets or pellets (usually aluminium, calcium or magnesium phosphide for fumigation/gassing) may also contain other chemicals which evolve ammonia, which helps reduce the potential for spontaneous combustion or explosion of the phosphine gas.
Metal phosphides do not accumulate in the tissues of poisoned animals, so the risk of secondary poisoning is low.
Before the advent of anticoagulants, phosphides were the favored kind of rat poison. During World War II, they came into use in the United States because of a shortage of strychnine due to the Japanese occupation of the territories where the strychnine tree is grown. Phosphides are rather fast-acting rat poisons, resulting in the rats dying usually in open areas, instead of in the affected buildings.
Phosphides used as rodenticides include:
aluminium phosphide (fumigant and bait)
calcium phosphide (fumigant only)
magnesium phosphide (fumigant only)
zinc phosphide (bait only)
Hypercalcemia (vitamin D overdose)
Cholecalciferol (vitamin D3) and ergocalciferol (vitamin D2) are used as rodenticides. They are toxic to rodents for the same reason they are important to humans: they affect calcium and phosphate homeostasis in the body. Vitamins D are essential in minute quantities (few IUs per kilogram body weight daily, only a fraction of a milligram), and like most fat soluble vitamins, they are toxic in larger doses, causing hypervitaminosis D. If the poisoning is severe enough (that is, if the dose of the toxin is high enough), it leads to death. In rodents that consume the rodenticidal bait, it causes hypercalcemia, raising the calcium level, mainly by increasing calcium absorption from food, mobilising bone-matrix-fixed calcium into ionised form (mainly monohydrogencarbonate calcium cation, partially bound to plasma proteins, [CaHCO3]+), which circulates dissolved in the blood plasma. After ingestion of a lethal dose, the free calcium levels are raised sufficiently that blood vessels, kidneys, the stomach wall and lungs are mineralised/calcificated (formation of calcificates, crystals of calcium salts/complexes in the tissues, damaging them), leading further to heart problems (myocardial tissue is sensitive to variations of free calcium levels, affecting both myocardial contractibility and action potential propagation between the atria and ventricles), bleeding (due to capillary damage) and possibly kidney failure. It is considered to be single-dose, cumulative (depending on concentration used; the common 0.075% bait concentration is lethal to most rodents after a single intake of larger portions of the bait) or sub-chronic (death occurring usually within days to one week after ingestion of the bait). Applied concentrations are 0.075% cholecalciferol (30,000 IU/g)<ref name=usda2006>{{cite conference |last1=Rizor |first1=Suzanne E. |last2=Arjo |first2=Wendy M. |last3=Bulkin |first3=Stephan |last4=Nolte |first4=Dale L. |title=Efficacy of Cholecalciferol Baits for Pocket Gopher Control and Possible Effects on Non-Target Rodents in Pacific Northwest Forests |url=https://naldc.nal.usda.gov/download/39036/PDF |conference=Vertebrate Pest Conference (2006) |publisher=USDA |quote= 0.15% cholecalciferol bait appears to have application for pocket gopher control.' Cholecalciferol can be a single high-dose toxicant or a cumulative multiple low-dose toxicant.' |access-date=27 August 2019 |archive-date=14 September 2012 |archive-url=https://web.archive.org/web/20120914083512/http://naldc.nal.usda.gov/download/39036/PDF |url-status=dead }}</ref> and 0.1% ergocalciferol (40,000 IU/g) when used alone, which can kill a rodent or a rat.
There is an important feature of calciferols toxicology, that they are synergistic with anticoagulant toxicant. In other words, mixtures of anticoagulants and calciferols in same bait are more toxic than a sum of toxicities of the anticoagulant and the calciferol in the bait, so that a massive hypercalcemic effect can be achieved by a substantially lower calciferol content in the bait, and vice versa, a more pronounced anticoagulant/hemorrhagic effects are observed if the calciferol is present. This synergism is mostly used in calciferol low concentration baits, because effective concentrations of calciferols are more expensive than effective concentrations of most anticoagulants.
The first application of a calciferol in rodenticidal bait was in the Sorex product Sorexa D (with a different formula than today's Sorexa D), back in the early 1970s, which contained 0.025% warfarin and 0.1% ergocalciferol. Today, Sorexa CD contains a 0.0025% difenacoum and 0.075% cholecalciferol combination. Numerous other brand products containing either 0.075-0.1% calciferols (e.g. Quintox) alone or alongside an anticoagulant are marketed.
The Merck Veterinary Manual states the following:
Although this rodenticide [cholecalciferol] was introduced with claims that it was less toxic to nontarget species than to rodents, clinical experience has shown that rodenticides containing cholecalciferol are a significant health threat to dogs and cats. Cholecalciferol produces hypercalcemia, which results in systemic calcification of soft tissue, leading to kidney failure, cardiac abnormalities, hypertension, CNS depression and GI upset. Signs generally develop within 18-36 hours of ingestion and can include depression, anorexia, polyuria and polydipsia. As serum calcium concentrations increase, clinical signs become more severe. ... GI smooth muscle excitability decreases and is manifest by anorexia, vomiting and constipation. ... Loss of renal concentrating ability is a direct result of hypercalcemia. As hypercalcemia persists, mineralization of the kidneys results in progressive renal insufficiency."
Additional anticoagulant renders the bait more toxic to pets as well as humans. Upon single ingestion, solely calciferol-based baits are considered generally safer to birds than second generation anticoagulants or acute toxicants. Treatment in pets is mostly supportive, with intravenous fluids and pamidronate disodium. The hormone calcitonin is no longer commonly used.
Other
Other chemical poisons include:
ANTU (α-naphthylthiourea; specific against the brown rat, Rattus norvegicus)
Arsenic trioxide
Barium carbonate (sometimes called Witherite)
Chloralose (a narcotic prodrug)
Crimidine (inhibits metabolism of vitamin B6)
1,3-Difluoro-2-propanol ("Gliftor")
Endrin (organochlorine insecticide, used in the past for extermination of voles in fields)
Fluoroacetamide ("1081")
Phosacetim (a delayed-action acetylcholinesterase inhibitor)
Phosphorus allotropes
Pyrinuron (a urea derivative)
Scilliroside and other cardiac glycosides like oleandrin or digoxin
Sodium fluoroacetate ("1080")
Strychnine (A naturally occurring convulsant and stimulant)
Tetramethylenedisulfotetramine ("tetramine") - Deadly toxic to humans so use should be avoided
Thallium sulfate
Mitochondrial toxins like bromethalin and 2,4-dinitrophenol (cause high fever and brain swelling)
Zyklon B/Uragan D2 (hydrogen cyanide gas absorbed in an inert carrier)
Combinations
In some countries, fixed three-component rodenticides, i.e., anticoagulant + antibiotic + vitamin D, are used. Associations of a second-generation anticoagulant with an antibiotic and/or vitamin D are considered to be effective even against most resistant strains of rodents, though some second generation anticoagulants (namely brodifacoum and difethialone), in bait concentrations of 0.0025% to 0.005% are so toxic that resistance is unknown, and even rodents resistant to other rodenticides are reliably exterminated by application of these most toxic anticoagulants.
Low-toxicity/Eco-friendly rodenticides
Powdered corn cob and corn meal gluten have been developed as rodenticides. They were approved in the EU and patented in the US in 2013. These preparations rely on dehydration and electrolyte imbalance to cause death.
Inert gas killing of burrowing pest animals is another method with no impact on scavenging wildlife. One such method has been commercialized and sold under the brand name Rat Ice.
Non-target issues
Secondary poisoning and risks to wildlife
One of the potential problems when using rodenticides is that dead or weakened rodents may be eaten by other wildlife, either predators or scavengers. Members of the public deploying rodenticides may not be aware of this or may not follow the product's instructions closely enough. There is evidence of secondary poisoning being caused by exposure to prey.
The faster a rodenticide acts, the more critical this problem may be. For the fast-acting rodenticide bromethalin, for example, there is no diagnostic test or antidote.
This has led environmental researchers to conclude that low strength, long duration rodenticides (generally first generation anticoagulants) are the best balance between maximum effect and minimum risk.
Proposed US legislation change
In 2008, after assessing human health and ecological effects, as well as benefits, the US Environmental Protection Agency (EPA) announced measures to reduce risks associated with ten rodenticides. New sale and distribution restrictions, minimum package size requirements, use-site restrictions, and tamper-resistant product requirements would have taken effect in 2011. The regulations were delayed pending a legal challenge by manufacturer Reckitt-Benkiser.
Notable rat eradications
The entire rat populations of several islands have been eradicated, most notably New Zealand's Campbell Island, Hawadax Island, Alaska (formerly known as Rat Island), Macquarie Island and Canna, Scotland (declared rat-free in 2008). According to the Friends of South Georgia Island, all of the rats have been eliminated from South Georgia.
Alberta, Canada, through a combination of climate and control, is also believed to be rat-free.
| Technology | Pest and disease control | null |
991210 | https://en.wikipedia.org/wiki/Divisibility%20rule | Divisibility rule | A divisibility rule is a shorthand and useful way of determining whether a given integer is divisible by a fixed divisor without performing the division, usually by examining its digits. Although there are divisibility tests for numbers in any radix, or base, and they are all different, this article presents rules and examples only for decimal, or base 10, numbers. Martin Gardner explained and popularized these rules in his September 1962 "Mathematical Games" column in Scientific American.
Divisibility rules for numbers 1−30
The rules given below transform a given number into a generally smaller number, while preserving divisibility by the divisor of interest. Therefore, unless otherwise noted, the resulting number should be evaluated for divisibility by the same divisor. In some cases the process can be iterated until the divisibility is obvious; for others (such as examining the last n digits) the result must be examined by other means.
For divisors with multiple rules, the rules are generally ordered first for those appropriate for numbers with many digits, then those useful for numbers with fewer digits.
To test the divisibility of a number by a power of 2 or a power of 5 (2^n or 5^n, in which n is a positive integer), one only needs to look at the last n digits of that number.
To test divisibility by any number expressed as a product of prime powers, we can separately test for divisibility by each prime to its appropriate power. For example, testing divisibility by 24 is equivalent to testing divisibility by 8 (2^3) and 3 simultaneously, thus we need only show divisibility by 8 and by 3 to prove divisibility by 24.
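A small Python sketch (illustrative, not from the original article) of this factor-by-factor test for 24:

```python
def divisible_by_8(n):
    # A number is divisible by 8 exactly when its last three digits are.
    return int(str(abs(n))[-3:]) % 8 == 0

def divisible_by_3(n):
    # A number is divisible by 3 exactly when its digit sum is.
    return sum(int(d) for d in str(abs(n))) % 3 == 0

def divisible_by_24(n):
    # 24 = 8 * 3 with coprime factors, so test each factor separately.
    return divisible_by_8(n) and divisible_by_3(n)

print(divisible_by_24(744))   # True:  744 = 24 * 31
print(divisible_by_24(730))   # False
```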
Step-by-step examples
Divisibility by 2
First, take any number (for this example it will be 376) and note the last digit in the number, discarding the other digits. Then take that digit (6) while ignoring the rest of the number and determine if it is divisible by 2. If it is divisible by 2, then the original number is divisible by 2.
Example
376 (The original number)
37 6 (Take the last digit)
6 ÷ 2 = 3 (Check to see if the last digit is divisible by 2)
376 ÷ 2 = 188 (If the last digit is divisible by 2, then the whole number is divisible by 2)
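The rule translates directly into a few lines of Python (an illustrative sketch, not part of the original article):

```python
def divisible_by_2(n):
    # Look only at the last digit of the number.
    last_digit = int(str(abs(n))[-1])
    return last_digit % 2 == 0

print(divisible_by_2(376))   # True, since 6 is divisible by 2
print(divisible_by_2(377))   # False
```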
Divisibility by 3 or 9
First, take any number (for this example it will be 492) and add together each digit in the number (4 + 9 + 2 = 15). Then take that sum (15) and determine if it is divisible by 3. The original number is divisible by 3 (or 9) if and only if the sum of its digits is divisible by 3 (or 9).
Adding the digits of a number up, and then repeating the process with the result until only one digit remains, will give the remainder of the original number if it were divided by nine (unless that single digit is nine itself, in which case the number is divisible by nine and the remainder is zero).
This can be generalized to any standard positional system, in which the divisor in question then becomes one less than the radix; thus, in base-twelve, the digits will add up to the remainder of the original number if divided by eleven, and numbers are divisible by eleven only if the digit sum is divisible by eleven.
Example.
492 (The original number)
4 + 9 + 2 = 15 (Add each individual digit together)
15 is divisible by 3 at which point we can stop. Alternatively we can continue using the same method if the number is still too large:
1 + 5 = 6 (Add each individual digit together)
6 ÷ 3 = 2 (Check to see if the number received is divisible by 3)
492 ÷ 3 = 164 (If the number obtained by using the rule is divisible by 3, then the whole number is divisible by 3)
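A short Python sketch of the digit-sum test (illustrative; it reduces the sum repeatedly, as described above):

```python
def digit_sum(n):
    return sum(int(d) for d in str(abs(n)))

def divisible_by(n, d):
    """Digit-sum test for d = 3 or d = 9."""
    s = abs(n)
    while s >= 10:          # keep reducing until a single digit remains
        s = digit_sum(s)
    return s % d == 0

print(divisible_by(492, 3))   # True: 4 + 9 + 2 = 15, then 1 + 5 = 6
print(divisible_by(492, 9))   # False: 6 is not divisible by 9
```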
Divisibility by 4
The basic rule for divisibility by 4 is that if the number formed by the last two digits in a number is divisible by 4, the original number is divisible by 4; this is because 100 is divisible by 4 and so adding hundreds, thousands, etc. is simply adding another number that is divisible by 4. If any number ends in a two digit number that you know is divisible by 4 (e.g. 24, 04, 08, etc.), then the whole number will be divisible by 4 regardless of what is before the last two digits.
Alternatively, one can just add half of the last digit to the penultimate digit (or the remaining number). If that number is an even natural number, the original number is divisible by 4.
Also, one can simply divide the number by 2, and then check the result to find if it is divisible by 2. If it is, the original number is divisible by 4. In addition, the result of this test is the same as the original number divided by 4.
Example.
General rule
2092 (The original number)
20 92 (Take the last two digits of the number, discarding any other digits)
92 ÷ 4 = 23 (Check to see if the number is divisible by 4)
2092 ÷ 4 = 523 (If the number that is obtained is divisible by 4, then the original number is divisible by 4)
Second method
6174 (the original number)
check that last digit is even, otherwise 6174 can't be divisible by 4.
61 7 4 (Separate the last 2 digits from the rest of the number)
4 ÷ 2 = 2 (last digit divided by 2)
7 + 2 = 9 (Add half of last digit to the penultimate digit)
Since 9 isn't even, 6174 is not divisible by 4
Third method
1720 (The original number)
1720 ÷ 2 = 860 (Divide the original number by 2)
860 ÷ 2 = 430 (Check to see if the result is divisible by 2)
1720 ÷ 4 = 430 (If the result is divisible by 2, then the original number is divisible by 4)
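The three methods above can each be written in a couple of lines of Python (an illustrative sketch, not from the original article):

```python
def by_last_two_digits(n):
    # General rule: check the number formed by the last two digits.
    return int(str(abs(n))[-2:]) % 4 == 0

def by_half_last_digit(n):
    # Second method: add half of the last digit to the penultimate digit;
    # the last digit must be even, and the resulting sum must be even.
    digits = str(abs(n)).rjust(2, "0")
    last, penultimate = int(digits[-1]), int(digits[-2])
    return last % 2 == 0 and (penultimate + last // 2) % 2 == 0

def by_halving_twice(n):
    # Third method: halve the number and check that the result is even.
    return n % 2 == 0 and (n // 2) % 2 == 0

print(by_last_two_digits(2092), by_half_last_digit(2092), by_halving_twice(2092))  # all True
print(by_last_two_digits(6174), by_half_last_digit(6174), by_halving_twice(6174))  # all False
```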
Divisibility by 5
Divisibility by 5 is easily determined by checking the last digit in the number (475), and seeing if it is either 0 or 5. If the last number is either 0 or 5, the entire number is divisible by 5.
If the last digit in the number is 0, then the result will be the remaining digits multiplied by 2. For example, the number 40 ends in a zero, so take the remaining digits (4) and multiply that by two (4 × 2 = 8). The result is the same as the result of 40 divided by 5 (40/5 = 8).
If the last digit in the number is 5, then the result will be the remaining digits multiplied by two, plus one. For example, the number 125 ends in a 5, so take the remaining digits (12), multiply them by two (12 × 2 = 24), then add one (24 + 1 = 25). The result is the same as the result of 125 divided by 5 (125/5=25).
Example.
If the last digit is 0
110 (The original number)
11 0 (Take the last digit of the number, and check if it is 0 or 5)
11 0 (If it is 0, take the remaining digits, discarding the last)
11 × 2 = 22 (Multiply the result by 2)
110 ÷ 5 = 22 (The result is the same as the original number divided by 5)
If the last digit is 5
85 (The original number)
8 5 (Take the last digit of the number, and check if it is 0 or 5)
8 5 (If it is 5, take the remaining digits, discarding the last)
8 × 2 = 16 (Multiply the result by 2)
16 + 1 = 17 (Add 1 to the result)
85 ÷ 5 = 17 (The result is the same as the original number divided by 5)
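An illustrative Python sketch of the rule, including the shortcut for computing the quotient (added here; not part of the original article):

```python
def divisible_by_5(n):
    return str(abs(n))[-1] in ("0", "5")

def quotient_by_5(n):
    """Quotient n / 5 using the trick above; n must end in 0 or 5."""
    s = str(abs(n))
    remaining = int(s[:-1] or "0")    # digits with the last one removed
    if s[-1] == "0":
        return remaining * 2
    return remaining * 2 + 1          # last digit is 5

print(divisible_by_5(110), quotient_by_5(110))  # True 22
print(divisible_by_5(85), quotient_by_5(85))    # True 17
```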
Divisibility by 6
Divisibility by 6 is determined by checking the original number to see if it is both an even number (divisible by 2) and divisible by 3.
If the final digit is even the number is divisible by two, and thus may be divisible by 6. If it is divisible by 2 continue by adding the digits of the original number and checking if that sum is a multiple of 3. Any number which is both a multiple of 2 and of 3 is a multiple of 6.
Example.
324 (The original number)
Final digit 4 is even, so 324 is divisible by 2, and may be divisible by 6.
3 + 2 + 4 = 9 which is a multiple of 3. Therefore the original number is divisible by both 2 and 3 and is divisible by 6.
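Combining the rules for 2 and 3 gives a self-contained Python sketch (illustrative, not from the original article):

```python
def divisible_by_6(n):
    n = abs(n)
    ends_even = int(str(n)[-1]) % 2 == 0       # divisible by 2?
    digit_sum = sum(int(d) for d in str(n))
    return ends_even and digit_sum % 3 == 0    # ... and by 3?

print(divisible_by_6(324))   # True: ends in 4 and 3 + 2 + 4 = 9
print(divisible_by_6(322))   # False: even, but digit sum 7
```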
Divisibility by 7
Divisibility by 7 can be tested by a recursive method. A number of the form 10x + y is divisible by 7 if and only if x − 2y is divisible by 7. In other words, subtract twice the last digit from the number formed by the remaining digits. Continue to do this until a number is obtained for which it is known whether it is divisible by 7. The original number is divisible by 7 if and only if the number obtained using this procedure is divisible by 7. For example, the number 371: 37 − (2×1) = 37 − 2 = 35; 3 − (2 × 5) = 3 − 10 = −7; thus, since −7 is divisible by 7, 371 is divisible by 7.
Similarly a number of the form 10x + y is divisible by 7 if and only if x + 5y is divisible by 7. So add five times the last digit to the number formed by the remaining digits, and continue to do this until a number is obtained for which it is known whether it is divisible by 7.
Another method is multiplication by 3. A number of the form 10x + y has the same remainder when divided by 7 as 3x + y. One must multiply the leftmost digit of the original number by 3, add the next digit, take the remainder when divided by 7, and continue from the beginning: multiply by 3, add the next digit, etc. For example, the number 371: 3×3 + 7 = 16 remainder 2, and 2×3 + 1 = 7. This method can be used to find the remainder of division by 7.
A more complicated algorithm for testing divisibility by 7 uses the fact that 10^0 ≡ 1, 10^1 ≡ 3, 10^2 ≡ 2, 10^3 ≡ 6, 10^4 ≡ 4, 10^5 ≡ 5, 10^6 ≡ 1, ... (mod 7). Take each digit of the number (371) in reverse order (173), multiplying them successively by the digits 1, 3, 2, 6, 4, 5, repeating with this sequence of multipliers as long as necessary (1, 3, 2, 6, 4, 5, 1, 3, 2, 6, 4, 5, ...), and adding the products (1×1 + 7×3 + 3×2 = 1 + 21 + 6 = 28). The original number is divisible by 7 if and only if the number obtained using this procedure is divisible by 7 (hence 371 is divisible by 7 since 28 is).
This method can be simplified by removing the need to multiply. All it would take with this simplification is to memorize the sequence above (132645...), and to add and subtract, but always working with one-digit numbers.
The simplification goes as follows:
Take for instance the number 371
Change all occurrences of 7, 8 or 9 into 0, 1 and 2, respectively. In this example, we get: 301. This second step may be skipped, except for the left most digit, but following it may facilitate calculations later on.
Now convert the first digit (3) into the following digit in the sequence 13264513... In our example, 3 becomes 2.
Add the result in the previous step (2) to the second digit of the number, and substitute the result for both digits, leaving all remaining digits unmodified: 2 + 0 = 2. So 301 becomes 21.
Repeat the procedure until you have a recognizable multiple of 7, or to make sure, a number between 0 and 6. So, starting from 21 (which is a recognizable multiple of 7), take the first digit (2) and convert it into the following in the sequence above: 2 becomes 6. Then add this to the second digit: 6 + 1 = 7.
If at any point the first digit is 8 or 9, it becomes 1 or 2, respectively. But if it is a 7, it should become 0 only if no other digits follow; otherwise, it should simply be dropped. This is because that 7 would have become 0, and numbers with at least two digits before the decimal point do not begin with 0, so the leading 0 is useless. Accordingly, our 7 becomes 0.
If through this procedure you obtain a 0 or any recognizable multiple of 7, then the original number is a multiple of 7. If you obtain any number from 1 to 6, that will indicate how much you should subtract from the original number to get a multiple of 7. In other words, you will find the remainder of dividing the number by 7. For example, take the number 186:
First, change the 8 into a 1: 116.
Now, change 1 into the following digit in the sequence (3), add it to the second digit, and write the result instead of both: 3 + 1 = 4. So 116 becomes now 46.
Repeat the procedure, since the number is greater than 7. Now, 4 becomes 5, which must be added to 6. That is 11.
Repeat the procedure one more time: 1 becomes 3, which is added to the second digit (1): 3 + 1 = 4.
Now we have a number smaller than 7, and this number (4) is the remainder of dividing 186/7. So 186 minus 4, which is 182, must be a multiple of 7.
Note: The reason why this works is that if we have: a+b=c and b is a multiple of any given number n, then a and c will necessarily produce the same remainder when divided by n. In other words, in 2 + 7 = 9, 7 is divisible by 7. So 2 and 9 must have the same remainder when divided by 7. The remainder is 2.
Therefore, if a number n is a multiple of 7 (i.e.: the remainder of n/7 is 0), then adding (or subtracting) multiples of 7 cannot change that property.
What this procedure does, as explained above for most divisibility rules, is simply subtract multiples of 7 from the original number, little by little, until reaching a number that is small enough for us to remember whether it is a multiple of 7. If 1 becomes a 3 in the following decimal position, that is just the same as converting 10×10^n into 3×10^n. And that is actually the same as subtracting 7×10^n (clearly a multiple of 7) from 10×10^n.
Similarly, when you turn a 3 into a 2 in the following decimal position, you are turning 30×10^n into 2×10^n, which is the same as subtracting 30×10^n − 2×10^n = 28×10^n, and this is again subtracting a multiple of 7. The same reasoning applies to all the remaining conversions:
20×10^n − 6×10^n = 14×10^n
60×10^n − 4×10^n = 56×10^n
40×10^n − 5×10^n = 35×10^n
50×10^n − 1×10^n = 49×10^n
First method example
1050 → 105 − (2×0) = 105 → 10 − (2×5) = 10 − 10 = 0. ANSWER: 1050 is divisible by 7.
Second method example
1050 → 0501 (reverse) → 0×1 + 5×3 + 0×2 + 1×6 = 0 + 15 + 0 + 6 = 21 (multiply and add). ANSWER: 1050 is divisible by 7.
Vedic method of divisibility by osculation
Divisibility by seven can be tested by multiplication by the Ekhādika. Convert the divisor seven to the nines family by multiplying by seven: 7×7 = 49. Add one, drop the units digit, and take the 5, the Ekhādika, as the multiplier. Start on the right: multiply the units digit by 5, add the product to the next digit to the left, and set down that result on a line below that digit. Then repeat: multiply the units digit of that result by five, add the remaining tens, and add the next digit to the left; write the result below that digit. Continue to the end. If the final result is zero or a multiple of seven, then the number is divisible by seven; otherwise, it is not. This follows the Vedic ideal of one-line notation.
Vedic method example:
Is 438,722,025 divisible by seven? Multiplier = 5.
4 3 8 7 2 2 0 2 5
42 37 46 37 6 40 37 27
YES
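The running osculation above can be written as a short loop. A minimal Python sketch (the function name is illustrative): at each step the units digit of the running result is multiplied by the Ekhādika 5, the remaining tens are added back, and the next digit to the left is added.

```python
def divisible_by_7_osculation(n: int) -> bool:
    """Vedic osculation with the Ekhadika 5, working from the units digit leftward."""
    digits = [int(d) for d in str(n)]
    result = digits[-1]                  # start on the right
    for d in reversed(digits[:-1]):
        result = 5 * (result % 10) + result // 10 + d
    return result % 7 == 0

assert divisible_by_7_osculation(438722025)       # final result 42, a multiple of 7
assert not divisible_by_7_osculation(438722026)
```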
Pohlman–Mass method of divisibility by 7
The Pohlman–Mass method provides a quick solution that can determine if most integers are divisible by seven in three steps or less. This method could be useful in a mathematics competition such as MATHCOUNTS, where time is a factor to determine the solution without a calculator in the Sprint Round.
Step A:
If the integer is 1000 or less, subtract twice the last digit from the number formed by the remaining digits. If the result is a multiple of seven, then so is the original number (and vice versa). For example:
112 -> 11 − (2×2) = 11 − 4 = 7 YES
98 -> 9 − (8×2) = 9 − 16 = −7 YES
634 -> 63 − (4×2) = 63 − 8 = 55 NO
Because 1001 is divisible by seven, an interesting pattern develops for repeating sets of 1, 2, or 3 digits that form 6-digit numbers (leading zeros are allowed) in that all such numbers are divisible by seven. For example:
001 001 = 1,001 / 7 = 143
010 010 = 10,010 / 7 = 1,430
011 011 = 11,011 / 7 = 1,573
100 100 = 100,100 / 7 = 14,300
101 101 = 101,101 / 7 = 14,443
110 110 = 110,110 / 7 = 15,730
01 01 01 = 10,101 / 7 = 1,443
10 10 10 = 101,010 / 7 = 14,430
111,111 / 7 = 15,873
222,222 / 7 = 31,746
999,999 / 7 = 142,857
576,576 / 7 = 82,368
For all of the above examples, subtracting the first three digits from the last three results in a multiple of seven. Notice that leading zeros are permitted to form a 6-digit pattern.
This phenomenon forms the basis for Steps B and C.
Step B:
If the integer is between 1001 and one million, find a repeating pattern of 1, 2, or 3 digits that forms a 6-digit number that is close to the integer (leading zeros are allowed and can help you visualize the pattern). If the positive difference is less than 1000, apply Step A. This can be done by subtracting the first three digits from the last three digits. For example:
341,355 − 341,341 = 14 -> 1 − (4×2) = 1 − 8 = −7 YES
67,326 − 067,067 = 259 -> 25 − (9×2) = 25 − 18 = 7 YES
The fact that 999,999 is a multiple of 7 can be used for determining divisibility of integers larger than one million by reducing the integer to a 6-digit number, which can then be handled with Step B. This can be done easily by adding the digits to the left of the last six digits onto the last six digits and then continuing with Step B.
Step C:
If the integer is larger than one million, subtract the nearest multiple of 999,999 and then apply Step B. For even larger numbers, use larger sets such as 12-digits (999,999,999,999) and so on. Then, break the integer into a smaller number that can be solved using Step B. For example:
22,862,420 − (999,999 × 22) = 22,862,420 − 21,999,978 -> 862,420 + 22 = 862,442
862,442 -> 862 − 442 (Step B) = 420 -> 42 − (0×2) (Step A) = 42 YES
This allows adding and subtracting alternating sets of three digits to determine divisibility by seven. Understanding these patterns allows you to quickly calculate divisibility by seven, as seen in the following examples:
Pohlman–Mass method of divisibility by 7, examples:
Is 98 divisible by seven?
98 -> 9 − (8×2) = 9 − 16 = −7 YES (Step A)
Is 634 divisible by seven?
634 -> 63 − (4×2) = 63 − 8 = 55 NO (Step A)
Is 355,341 divisible by seven?
355,341 − 341,341 = 14,000 (Step B) -> 014 − 000 (Step B) -> 14 = 1 − (4×2) (Step A) = 1 − 8 = −7 YES
Is 42,341,530 divisible by seven?
42,341,530 -> 341,530 + 42 = 341,572 (Step C)
341,572 − 341,341 = 231 (Step B)
231 -> 23 − (1×2) = 23 − 2 = 21 YES (Step A)
Using quick alternating additions and subtractions:
42,341,530 -> 530 − 341 + 42 = 189 + 42 = 231 -> 23 − (1×2) = 21 YES
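A sketch of this alternating-blocks shortcut in Python (the function name is illustrative); it alternately adds and subtracts three-digit groups taken from the right, then finishes with the Step A reduction:

```python
def divisible_by_7_blocks(n: int) -> bool:
    """Alternately add and subtract 3-digit groups from the right (valid because
    1001 = 7 * 11 * 13), then reduce with Step A: rest minus twice the last digit."""
    s, sign, total = str(n), 1, 0
    while s:
        s, group = s[:-3], s[-3:]
        total += sign * int(group)
        sign = -sign
    total = abs(total)
    while total >= 100:                  # repeat Step A until the number is small
        total = abs(total // 10 - 2 * (total % 10))
    return total % 7 == 0

assert divisible_by_7_blocks(42341530)   # 530 - 341 + 42 = 231 -> 23 - 2 = 21
assert not divisible_by_7_blocks(634)
```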
Multiplication by 3 method of divisibility by 7, examples:
Is 98 divisible by seven?
98 -> 9 remainder 2 -> 2×3 + 8 = 14 YES
Is 634 divisible by seven?
634 -> 6×3 + 3 = 21 -> remainder 0 -> 0×3 + 4 = 4 NO
Is 355,341 divisible by seven?
3 × 3 + 5 = 14 -> remainder 0 -> 0×3 + 5 = 5 -> 5×3 + 3 = 18 -> remainder 4 -> 4×3 + 4 = 16 -> remainder 2 -> 2×3 + 1 = 7 YES
Find remainder of 1036125837 divided by 7
1×3 + 0 = 3
3×3 + 3 = 12 remainder 5
5×3 + 6 = 21 remainder 0
0×3 + 1 = 1
1×3 + 2 = 5
5×3 + 5 = 20 remainder 6
6×3 + 8 = 26 remainder 5
5×3 + 3 = 18 remainder 4
4×3 + 7 = 19 remainder 5
Answer is 5
Finding remainder of a number when divided by 7
Minimum magnitude sequence: (1, 3, 2, −1, −3, −2; the cycle repeats for the next six digits.) Period: 6 digits. Recurring numbers: 1, 3, 2, −1, −3, −2
Positive sequence: (1, 3, 2, 6, 4, 5; the cycle repeats for the next six digits.) Period: 6 digits. Recurring numbers: 1, 3, 2, 6, 4, 5
Multiply the rightmost digit of the number by the first number in the sequence, the second-rightmost digit by the second number in the sequence, and so on. Then compute the sum of all the products and reduce it modulo 7; the result is the remainder.
Example: What is the remainder when 1036125837 is divided by 7?
Multiplication of the rightmost digit = 1 × 7 = 7
Multiplication of the second rightmost digit = 3 × 3 = 9
Third rightmost digit = 8 × 2 = 16
Fourth rightmost digit = 5 × −1 = −5
Fifth rightmost digit = 2 × −3 = −6
Sixth rightmost digit = 1 × −2 = −2
Seventh rightmost digit = 6 × 1 = 6
Eighth rightmost digit = 3 × 3 = 9
Ninth rightmost digit = 0
Tenth rightmost digit = 1 × −1 = −1
Sum = 33
33 modulus 7 = 5
Remainder = 5
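The same bookkeeping can be done in a few lines of Python (the function name is illustrative), using the minimum magnitude sequence as the digit weights:

```python
def remainder_mod_7_weights(n: int) -> int:
    """Weight the digits, from the ones place leftward, by the repeating sequence
    1, 3, 2, -1, -3, -2 (the powers of 10 modulo 7), then reduce the sum mod 7."""
    weights = [1, 3, 2, -1, -3, -2]
    total = sum(int(d) * weights[i % 6] for i, d in enumerate(reversed(str(n))))
    return total % 7

assert remainder_mod_7_weights(1036125837) == 5   # weighted sum 33; 33 mod 7 = 5
```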
Digit pair method of divisibility by 7
This method uses the 1, −3, 2 pattern on the digit pairs. That is, the divisibility of any number by seven can be tested by first separating the number into digit pairs, and then applying the algorithm on three digit pairs (six digits). When the number has fewer than six digits, append zeros on the right side until there are six digits. When the number is larger than six digits, repeat the cycle on the next six-digit group and then add the results. Repeat the algorithm until the result is a small number. The original number is divisible by seven if and only if the number obtained using this algorithm is divisible by seven. This method is especially suitable for large numbers.
Example 1:
The number to be tested is 157514.
First we separate the number into three digit pairs: 15, 75 and 14.
Then we apply the algorithm: 1 × 15 − 3 × 75 + 2 × 14 = −182. Since the sign does not affect divisibility, we continue with 182.
Because the resulting 182 has fewer than six digits, we add zeros to the right side until it is six digits (182000).
Then we apply our algorithm again: 1 × 18 − 3 × 20 + 2 × 0 = −42
The result −42 is divisible by seven, thus the original number 157514 is divisible by seven.
Example 2:
The number to be tested is 15751537186.
(1 × 15 − 3 × 75 + 2 × 15) + (1 × 37 − 3 × 18 + 2 × 60) = −180 + 103 = −77
The result −77 is divisible by seven, thus the original number 15751537186 is divisible by seven.
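The procedure can be sketched in a few lines of Python (the function name and the choice to stop once the result drops below 1000 are implementation details, not part of the method):

```python
def divisible_by_7_digit_pairs(n: int) -> bool:
    """Split into 2-digit pairs from the left (padding with zeros on the right),
    weight each group of three pairs by 1, -3, 2, sum, and repeat on the result."""
    while abs(n) >= 1000:
        s = str(abs(n))
        if len(s) % 2:
            s += "0"                           # complete the final pair
        pairs = [int(s[i:i + 2]) for i in range(0, len(s), 2)]
        while len(pairs) % 3:
            pairs.append(0)                    # complete the last group of three pairs
        weights = [1, -3, 2] * (len(pairs) // 3)
        n = sum(w * p for w, p in zip(weights, pairs))
    return n % 7 == 0

assert divisible_by_7_digit_pairs(157514)       # 15 - 225 + 28 = -182 -> divisible
assert divisible_by_7_digit_pairs(15751537186)  # -180 + 103 = -77 -> divisible
```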
Another digit pair method of divisibility by 7
Method
This is a non-recursive method to find the remainder left by a number on dividing by 7:
Separate the number into digit pairs starting from the ones place. Prepend the number with 0 to complete the final pair if required.
Calculate the remainders left by each digit pair on dividing by 7.
Multiply the remainders with the appropriate multiplier from the sequence 1, 2, 4, 1, 2, 4, ... : the remainder from the digit pair consisting of ones place and tens place should be multiplied by 1, hundreds and thousands by 2, ten thousands and hundred thousands by 4, million and ten million again by 1 and so on.
Calculate the remainders left by each product on dividing by 7.
Add these remainders.
The remainder of the sum when divided by 7 is the remainder of the given number when divided by 7.
For example:
The number 194,536 leaves a remainder of 6 on dividing by 7.
The number 510,517,813 leaves a remainder of 1 on dividing by 7.
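A short Python sketch of this non-recursive remainder computation (the function name is illustrative):

```python
def remainder_mod_7_pairs(n: int) -> int:
    """Digit pairs from the ones place, weighted by 1, 2, 4, 1, 2, 4, ...
    (the powers of 100 modulo 7); the reduced sum is the remainder."""
    s = str(n)
    if len(s) % 2:
        s = "0" + s                            # prepend 0 to complete the final pair
    pairs = [int(s[i:i + 2]) for i in range(len(s) - 2, -1, -2)]   # ones pair first
    weights = [1, 2, 4]
    total = sum((p % 7) * weights[i % 3] for i, p in enumerate(pairs))
    return total % 7

assert remainder_mod_7_pairs(194536) == 6
assert remainder_mod_7_pairs(510517813) == 1
```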
Proof of correctness of the method
The method is based on the observation that 100 leaves a remainder of 2 when divided by 7. And since we are breaking the number into digit pairs we essentially have powers of 100.
1 mod 7 = 1
100 mod 7 = 2
10,000 mod 7 = 2^2 mod 7 = 4
1,000,000 mod 7 = 2^3 mod 7 = 8 mod 7 = 1
100,000,000 mod 7 = 2^4 mod 7 = 16 mod 7 = 2
10,000,000,000 mod 7 = 2^5 mod 7 = 32 mod 7 = 4
And so on.
The correctness of the method is then established by the following chain of equalities:
Let N be the given number .
Divisibility by 11
Method
In order to check divisibility by 11, consider the alternating sum of the digits. For example, with 907,071:
9 − 0 + 7 − 0 + 7 − 1 = 22 = 2 × 11,
so 907,071 is divisible by 11.
We can either start with a plus sign or with a minus sign, since multiplying the whole sum by −1 does not change its divisibility.
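In code, the alternating sum is a one-liner. A minimal Python sketch (the function name is illustrative), starting with a plus sign at the ones place so that the sum is congruent to the original number modulo 11:

```python
def alternating_digit_sum(n: int) -> int:
    """Alternating sum of the decimal digits, starting with + at the ones place."""
    return sum(int(d) if i % 2 == 0 else -int(d)
               for i, d in enumerate(reversed(str(n))))

assert alternating_digit_sum(907071) == -22            # a multiple of 11
assert 907071 % 11 == alternating_digit_sum(907071) % 11 == 0
```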
Proof of correctness of the method
Considering that 10 ≡ −1 (mod 11), we can write for any integer with decimal digits d_k … d_1 d_0:
d_k·10^k + … + d_1·10 + d_0 ≡ d_0 − d_1 + d_2 − … + (−1)^k·d_k (mod 11),
so the number and the alternating sum of its digits leave the same remainder upon division by 11.
Divisibility by 13
Remainder Test
13 (1, −3, −4, −1, 3, 4, cycle goes on.)
If you are not comfortable with negative numbers, then use this sequence. (1, 10, 9, 12, 3, 4)
Multiply the rightmost digit of the number by the first (leftmost) number in the sequence shown above, the second-rightmost digit by the second number in the sequence, and so on; the cycle then repeats.
Example: What is the remainder when 321 is divided by 13?
Using the first sequence,
Ans: 1 × 1 + 2 × −3 + 3 × −4 = −17
Remainder = −17 mod 13 = 9
Example: What is the remainder when 1234567 is divided by 13?
Using the second sequence,
Answer: 7 × 1 + 6 × 10 + 5 × 9 + 4 × 12 + 3 × 3 + 2 × 4 + 1 × 1 = 178, and 178 mod 13 = 9
Remainder = 9
A recursive method can be derived using the fact that 10 ≡ −3 (mod 13) and that 4 × 10 = 40 ≡ 1 (mod 13). The first fact implies that a number is divisible by 13 iff removing the first digit and subtracting 3 times that digit from the new first digit yields a number divisible by 13. The second gives the rule that 10x + y is divisible by 13 iff x + 4y is divisible by 13. For example, to test the divisibility of 1761 by 13 we can reduce this to the divisibility of 461 by the first rule. Using the second rule, this reduces to the divisibility of 50, and doing that again yields 5. So, 1761 is not divisible by 13.
Testing 871 this way reduces it to the divisibility of 91 using the second rule, and then 13 using that rule again, so we see that 871 is divisible by 13.
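The reduction 10x + y → x + 4y is easy to iterate mechanically. A minimal Python sketch (the function name is illustrative) applies the second rule until the number is small, then checks directly:

```python
def divisible_by_13(n: int) -> bool:
    """Repeatedly replace 10x + y (y the last digit) by x + 4y, which preserves
    divisibility by 13 because 40 is congruent to 1 modulo 13."""
    n = abs(n)
    while n >= 100:
        n = n // 10 + 4 * (n % 10)
    return n % 13 == 0

assert divisible_by_13(871)        # 871 -> 91 -> 13
assert not divisible_by_13(1761)
```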
Beyond 30
Divisibility properties of numbers can be determined in two ways, depending on the type of the divisor.
Composite divisors
A number is divisible by a given divisor if it is divisible by the highest power of each of its prime factors. For example, to determine divisibility by 36, check divisibility by 4 and by 9. Note that checking 3 and 12, or 2 and 18, would not be sufficient. A table of prime factors may be useful.
A composite divisor may also have a rule formed using the same procedure as for a prime divisor, given below, with the caveat that the manipulations involved may not introduce any factor which is present in the divisor. For instance, one cannot make a rule for 14 that involves multiplying the equation by 7. This is not an issue for prime divisors because they have no smaller factors.
Prime divisors
The goal is to find an inverse to 10 modulo the prime under consideration (does not work for 2 or 5) and use that as a multiplier to make the divisibility of the original number by that prime depend on the divisibility of the new (usually smaller) number by the same prime.
Using 31 as an example, since 10 × (−3) = −30 = 1 mod 31, we get the rule for using y − 3x in the table below. Likewise, since 10 × (28) = 280 = 1 mod 31 also, we obtain a complementary rule y + 28x of the same kind - our choice of addition or subtraction being dictated by arithmetic convenience of the smaller value. In fact, this rule for prime divisors besides 2 and 5 is really a rule for divisibility by any integer relatively prime to 10 (including 33 and 39; see the table below). This is why the last divisibility condition in the tables above and below for any number relatively prime to 10 has the same kind of form (add or subtract some multiple of the last digit from the rest of the number).
Generalized divisibility rule
To test for divisibility by D, where D ends in 1, 3, 7, or 9, the following method can be used. Find any multiple of D ending in 9. (If D ends respectively in 1, 3, 7, or 9, then multiply by 9, 3, 7, or 1.) Then add 1 and divide by 10, denoting the result as m. Then a number N = 10t + q is divisible by D if and only if mq + t is divisible by D. If the number is too large, you can also break it down into several strings with e digits each, satisfying either 10^e ≡ 1 or 10^e ≡ −1 (mod D). The sum (or alternating sum) of the numbers has the same divisibility as the original one.
For example, to determine whether 913 = 10 × 91 + 3 is divisible by 11, find that m = (11 × 9 + 1) ÷ 10 = 10. Then mq + t = 10 × 3 + 91 = 121; this is divisible by 11 (with quotient 11), so 913 is also divisible by 11. As another example, to determine whether 689 = 10 × 68 + 9 is divisible by 53, find that m = (53 × 3 + 1) ÷ 10 = 16. Then mq + t = 16 × 9 + 68 = 212, which is divisible by 53 (with quotient 4); so 689 is also divisible by 53.
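The general recipe (find m, then iterate N = 10t + q → mq + t) can be sketched in Python; the function name and the loop bound are illustrative implementation choices, not part of the rule itself:

```python
def divisible_by(n: int, d: int) -> bool:
    """Generalized rule for a divisor d ending in 1, 3, 7 or 9: pick the factor that
    makes d * factor end in 9, set m = (d * factor + 1) // 10, and repeatedly
    replace N = 10t + q by m*q + t, which preserves divisibility by d."""
    assert d % 10 in (1, 3, 7, 9)
    factor = {1: 9, 3: 3, 7: 7, 9: 1}[d % 10]
    m = (d * factor + 1) // 10
    while n >= 10 * d:                   # reduce until the number is small
        n = m * (n % 10) + n // 10
    return n % d == 0

assert divisible_by(913, 11)    # m = 10: 10*3 + 91 = 121 = 11*11
assert divisible_by(689, 53)    # m = 16: 16*9 + 68 = 212 = 53*4
assert not divisible_by(690, 53)
```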
Alternatively, any number Q = 10c + d is divisible by n = 10a + b, where n is coprime to 10 (that is, gcd(n, 10) = 1), if c + D(n)·d = An for some integer A, where
The first few terms of the sequence, generated by D(n), are 1, 1, 5, 1, 10, 4, 12, 2, ... .
The piecewise form of D(n) and the sequence generated by it were first published by Bulgarian mathematician Ivan Stoykov in March 2020.
Proofs
Proof using basic algebra
Many of the simpler rules can be produced using only algebraic manipulation, creating binomials and rearranging them. By writing a number as the sum of each digit times a power of 10 each digit's power can be manipulated individually.
Case where all digits are summed
This method works for divisors that are factors of 10 − 1 = 9.
Using 3 as an example, 3 divides 9 = 10 − 1. That means 10 ≡ 1 (mod 3) (see modular arithmetic). The same holds for all the higher powers of 10: they are all congruent to 1 modulo 3. Since two things that are congruent modulo 3 are either both divisible by 3 or both not, we can interchange values that are congruent modulo 3. So, in a number such as 100·a + 10·b + c, we can replace all the powers of 10 by 1:
100·a + 10·b + c ≡ 1·a + 1·b + 1·c = a + b + c (mod 3),
which is exactly the sum of the digits.
Case where the alternating sum of digits is used
This method works for divisors that are factors of 10 + 1 = 11.
Using 11 as an example, 11 divides 11 = 10 + 1. That means 10 ≡ −1 (mod 11). For the higher powers of 10, they are congruent to 1 for even powers and congruent to −1 for odd powers:
10^(2k) ≡ 1 (mod 11), 10^(2k+1) ≡ −1 (mod 11).
Like the previous case, we can substitute powers of 10 with congruent values:
1000·a + 100·b + 10·c + d ≡ −a + b − c + d (mod 11),
which is also the difference between the sum of digits at odd positions and the sum of digits at even positions.
Case where only the last digit(s) matter
This applies to divisors that are a factor of a power of 10. This is because sufficiently high powers of the base are multiples of the divisor, and can be eliminated.
For example, in base 10, the factors of 10^1 include 2, 5, and 10. Therefore, divisibility by 2, 5, and 10 only depends on whether the last 1 digit is divisible by those divisors. The factors of 10^2 include 4 and 25, and divisibility by those only depends on the last 2 digits.
Case where only the last digit(s) are removed
Most numbers do not divide 9 or 10 evenly, but do divide a higher power 10^n or 10^n − 1. In this case the number is still written in powers of 10, but not fully expanded.
For example, 7 does not divide 9 or 10, but does divide 98, which is close to 100. Thus, proceed from
N = 100·a + b,
where in this case a is any integer, and b can range from 0 to 99. Next,
N = (98 + 2)·a + b,
and again expanding
N = 98·a + 2·a + b,
and after eliminating the known multiple of 7 (namely 98·a), the result is
2·a + b,
which is the rule "double the number formed by all but the last two digits, then add the last two digits".
Case where the last digit(s) is multiplied by a factor
The representation of the number may also be multiplied by any number relatively prime to the divisor without changing its divisibility. After observing that 7 divides 21, we can perform the following with a number written as 10·x + y (y being the last digit):
10·x + y;
after multiplying by 2, this becomes
20·x + 2·y,
and then
21·x − x + 2·y.
Eliminating the 21·x gives
−x + 2·y,
and multiplying by −1 gives
x − 2·y.
Either of the last two rules may be used, depending on which is easier to perform. They correspond to the rule "subtract twice the last digit from the rest".
Proof using modular arithmetic
This section will illustrate the basic method; all the rules can be derived following the same procedure. The following requires a basic grounding in modular arithmetic; for divisibility other than by 2's and 5's the proofs rest on the basic fact that 10 mod m is invertible if 10 and m are relatively prime.
For 2n or 5n
Only the last n digits need to be checked.
Representing x as
x = 10^n · y + z,
where z consists of the last n digits of x and y is the remaining leading part, the term 10^n · y is divisible by 2^n (and by 5^n), and the divisibility of x is the same as that of z.
For 7
Since 10 × 5 ≡ 10 × (−2) ≡ 1 (mod 7), we can do the following:
Representing x as
x = 10·y + z,
where z is the last digit, and multiplying by −2 gives −2·x = −20·y − 2·z ≡ y − 2·z (mod 7),
so x is divisible by 7 if and only if y − 2z is divisible by 7.
| Mathematics | Basics | null |
10851309 | https://en.wikipedia.org/wiki/Acidity%20function | Acidity function | An acidity function is a measure of the acidity of a medium or solvent system, usually expressed in terms of its ability to donate protons to (or accept protons from) a solute (Brønsted acidity). The pH scale is by far the most commonly used acidity function, and is ideal for dilute aqueous solutions. Other acidity functions have been proposed for different environments, most notably the Hammett acidity function, H0, for superacid media and its modified version H− for superbasic media. The term acidity function is also used for measurements made on basic systems, and the term basicity function is uncommon.
Hammett-type acidity functions are defined in terms of a buffered medium containing a weak base B and its conjugate acid BH+:
H0 = pKa + log([B]/[BH+]),
where pKa is the dissociation constant of BH+. They were originally measured by using nitroanilines as weak bases or acid-base indicators and by measuring the concentrations of the protonated and unprotonated forms with UV-visible spectroscopy. Other spectroscopic methods, such as NMR, may also be used. The function H− is defined similarly for strong bases:
H− = pKa + log([B−]/[BH]).
Here BH is a weak acid used as an acid-base indicator, and B− is its conjugate base.
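Assuming the logarithmic definition given above, a measured indicator ratio converts directly to H0. The short Python sketch below (the function name and the numbers are invented for illustration only) shows the arithmetic:

```python
import math

def hammett_h0(pKa_BH_plus: float, ratio_B_to_BHplus: float) -> float:
    """H0 = pKa(BH+) + log10([B]/[BH+]) for an indicator base B."""
    return pKa_BH_plus + math.log10(ratio_B_to_BHplus)

# Hypothetical indicator with pKa(BH+) = -3.0 that is 99% protonated in the medium:
print(hammett_h0(-3.0, 1 / 99))   # about -5.0, i.e. a strongly acidic medium
```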
Comparison of acidity functions with aqueous acidity
In dilute aqueous solution, the predominant acid species is the hydrated hydrogen ion H3O+ (or more accurately [H(OH2)n]+). In this case H0 and H− are equivalent to pH values determined by the buffer equation or Henderson-Hasselbalch equation.
However, an H0 value of −21 (a 25% solution of SbF5 in HSO3F) does not imply a hydrogen ion concentration of 10^21 mol/dm^3: such a "solution" would have a density more than a hundred times greater than a neutron star. Rather, H0 = −21 implies that the reactivity (protonating power) of the solvated hydrogen ions is 10^21 times greater than the reactivity of the hydrated hydrogen ions in an aqueous solution of pH 0. The actual reactive species are different in the two cases, but both can be considered to be sources of H+, i.e. Brønsted acids. The hydrogen ion H+ never exists on its own in a condensed phase, as it is always solvated to a certain extent. The high negative value of H0 in SbF5/HSO3F mixtures indicates that the solvation of the hydrogen ion is much weaker in this solvent system than in water. Another way of expressing the same phenomenon is to say that SbF5·FSO3H is a much stronger proton donor than H3O+.
| Physical sciences | Concepts | Chemistry |
10854684 | https://en.wikipedia.org/wiki/Karnaugh%20map | Karnaugh map | A Karnaugh map (KM or K-map) is a diagram that can be used to simplify a Boolean algebra expression. Maurice Karnaugh introduced it in 1953 as a refinement of Edward W. Veitch's 1952 Veitch chart, which itself was a rediscovery of Allan Marquand's 1881 logical diagram (aka. Marquand diagram). It is also useful for understanding logic circuits. Karnaugh maps are also known as Marquand–Veitch diagrams, Svoboda charts -(albeit only rarely)- and Karnaugh–Veitch maps (KV maps).
Definition
A Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition capability. It also permits the rapid identification and elimination of potential race conditions.
The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps, the cells are ordered in Gray code, and each cell position represents one combination of input conditions. Cells are also known as minterms, while each cell value represents the corresponding output value of the Boolean function. Optimal groups of 1s or 0s are identified, which represent the terms of a canonical form of the logic in the original truth table. These terms can be used to write a minimal Boolean expression representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using the minimal number of logic gates. A sum-of-products expression (SOP) can always be implemented using AND gates feeding into an OR gate, and a product-of-sums expression (POS) leads to OR gates feeding an AND gate. The POS expression gives a complement of the function (if F is the function so its complement will be F'). Karnaugh maps can also be used to simplify logic expressions in software design. Boolean conditions, as used for example in conditional statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised, canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic operators.
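When a Karnaugh-map simplification is done by hand, it can be convenient to confirm that the minimized expression really matches the original one. The Python sketch below (names and the three-variable example are purely hypothetical, not taken from this article) brute-forces all input combinations:

```python
from itertools import product

def equivalent(f, g, n_vars: int) -> bool:
    """Check that two Boolean functions agree on every combination of inputs."""
    return all(f(*bits) == g(*bits) for bits in product((False, True), repeat=n_vars))

# Hypothetical example: a sum of three minterms and its simplified sum-of-products form.
original   = lambda a, b, c: (a and b and not c) or (a and b and c) or (a and not b and c)
simplified = lambda a, b, c: (a and b) or (a and c)
print(equivalent(original, simplified, 3))   # True
```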
Example
Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the Boolean function described by the following truth table.
Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean variables , , , and their inverses.
where are the minterms to map (i.e., rows that have output 1 in the truth table).
where are the maxterms to map (i.e., rows that have output 0 in the truth table).
Construction
In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows, and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 × 4 grid.
The row and column indices (shown across the top and down the left side of the Karnaugh map) are ordered in Gray code rather than binary numerical order. Gray code ensures that only one variable changes between each pair of adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output for that combination of inputs.
Grouping
After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms — a canonical form — for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify the expression. The minterms ('minimal terms') for the final expression are found by encircling groups of 1s in the map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8...). Minterm rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and green groups overlap. The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is indicated in brown.
The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For example, AD would mean a cell which covers the 2×2 area where A and D are true, i.e. the cells numbered 13, 9, 15, 11 in the diagram above. On the other hand, AD′ would mean the cells where A is true and D is false (that is, D′ is true).
The grid is toroidally connected, which means that rectangular groups can wrap across the edges (see picture). Cells on the extreme right are actually 'adjacent' to those on the far left, in the sense that the corresponding input values only differ by one bit; similarly, so are those at the very top and those at the bottom. Therefore, AD′ can be a valid term—it includes cells 12 and 8 at the top, and wraps to the bottom to include cells 10 and 14—as is B′D′, which includes the four corners.
Solution
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic minterms can be found by examining which variables stay the same within each box.
For the red grouping:
A is the same and is equal to 1 throughout the box, therefore it should be included in the algebraic representation of the red minterm.
B does not maintain the same state (it shifts from 1 to 0), and should therefore be excluded.
C does not change. It is always 0, so its complement, NOT-C, should be included. Thus, C′ should be included.
D changes, so it is excluded.
Thus the first minterm in the Boolean sum-of-products expression is AC′.
For the green grouping, A and B maintain the same state, while C and D change. B is 0 and has to be negated before it can be included. The second term is therefore AB′. Note that it is acceptable that the green grouping overlaps with the red one.
In the same way, the blue grouping gives the term .
The solutions of each grouping are combined: the normal form of the circuit is .
Thus the Karnaugh map has guided a simplification of
It would also have been possible to derive this simplification by carefully applying the axioms of Boolean algebra, but the time it takes to do that grows exponentially with the number of terms.
Inverse
The inverse of a function is solved in the same way by grouping the 0s instead.
The three terms to cover the inverse are all shown with grey boxes with different colored borders:
This yields the inverse:
Through the use of De Morgan's laws, the product of sums can be determined:
Don't cares
Karnaugh maps also allow easier minimizations of functions whose truth tables include "don't care" conditions. A "don't care" condition is a combination of inputs for which the designer doesn't care what the output is. Therefore, "don't care" conditions can either be included in or excluded from any rectangular group, whichever makes it larger. They are usually indicated on the map with a dash or X.
The example on the right is the same as the example above but with the value of f(1,1,1,1) replaced by a "don't care". This allows the red term to expand all the way down and, thus, removes the green term completely.
This yields the new minimum equation:
Note that the first term is just , not . In this case, the don't care has dropped a term (the green rectangle); simplified another (the red one); and removed the race hazard (removing the yellow term as shown in the following section on race hazards).
The inverse case is simplified as follows:
Through the use of De Morgan's laws, the product of sums can be determined:
Race hazards
Elimination
Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions circumscribed on the map. However, because of the nature of Gray coding, adjacent has a special definition explained above – we're in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides.
In the example above, a potential race condition exists when C is 1 and D is 0, A is 1, and B changes from 1 to 0 (moving from the blue state to the green state). For this case, the output is defined to remain unchanged at 1, but because this transition is not covered by a specific term in the equation, a potential for a glitch (a momentary transition of the output to 0) exists.
There is a second potential glitch in the same example that is more difficult to spot: when D is 0 and A and B are both 1, with C changing from 1 to 0 (moving from the blue state to the red state). In this case the glitch wraps around from the top of the map to the bottom.
Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to worry about it depends on the application. In clocked logic, it is enough that the logic settles on the desired value in time to meet the timing deadline. In our example, we are not considering clocked logic.
In our case, an additional term of would eliminate the potential race hazard, bridging between the green and blue output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom to the top of the right half) in the adjacent diagram.
The term is redundant in terms of the static logic of the system, but such redundant, or consensus terms, are often needed to assure race-free dynamic performance.
Similarly, an additional term of must be added to the inverse to eliminate another potential race hazard. Applying De Morgan's laws creates another product of sums expression for f, but with a new factor of .
2-variable map examples
The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each is the minterms as a function of and the race hazard free (see previous section) minimum equation. A minterm is defined as an expression that gives the most minimal form of expression of the mapped variables. All possible horizontal and vertical interconnected blocks can be formed. These blocks must be of the size of the powers of 2 (1, 2, 4, 8, 16, 32, ...). These expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to be mapped. Here are all the blocks with one field.
A block can be continued across the bottom, top, left, or right of the chart. That can even wrap beyond the edge of the chart for variable minimization. This is because each logic variable corresponds to each vertical column and horizontal row. A visualization of the k-map can be considered cylindrical. The fields at edges on the left and right are adjacent, and the top and bottom are adjacent. K-Maps for four variables must be depicted as a donut or torus shape. The four corners of the square drawn by the k-map are adjacent. Still more complex maps are needed for 5 variables and more.
Related graphical methods
Related graphical minimization methods include:
Marquand diagram (1881) by Allan Marquand (1853–1924)
Veitch chart (1952) by Edward W. Veitch (1924–2013)
Svoboda chart (1956) by Antonín Svoboda (1907–1980)
Mahoney map (M-map, designation numbers, 1963) by Matthew V. Mahoney (a reflection-symmetrical extension of Karnaugh maps for larger numbers of inputs)
Reduced Karnaugh map (RKM) techniques (from 1969) like infrequent variables, map-entered variables (MEV), variable-entered map (VEM) or variable-entered Karnaugh map (VEKM) by G. W. Schultz, Thomas E. Osborne, Christopher R. Clare, J. Robert Burgoon, Larry L. Dornhoff, William I. Fletcher, Ali M. Rushdi and others (several successive Karnaugh map extensions based on variable inputs for a larger numbers of inputs)
Minterm-ring map (MRM, 1990) by Thomas R. McCalla (a three-dimensional extension of Karnaugh maps for larger numbers of inputs)
| Mathematics | Mathematical logic | null |
3010589 | https://en.wikipedia.org/wiki/Flory%E2%80%93Huggins%20solution%20theory | Flory–Huggins solution theory | Flory–Huggins solution theory is a lattice model of the thermodynamics of polymer solutions which takes account of the great dissimilarity in molecular sizes in adapting the usual expression for the entropy of mixing. The result is an equation for the Gibbs free energy change for mixing a polymer with a solvent. Although it makes simplifying assumptions, it generates useful results for interpreting experiments.
Theory
The thermodynamic equation for the Gibbs energy change accompanying mixing at constant temperature and (external) pressure is
ΔG_mix = ΔH_mix − T·ΔS_mix.
A change, denoted by , is the value of a variable for a solution or mixture minus the values for the pure components considered separately. The objective is to find explicit formulas for and , the enthalpy and entropy increments associated with the mixing process.
The result obtained by Flory and Huggins is
The right-hand side is a function of the number of moles and volume fraction of solvent (component ), the number of moles and volume fraction of polymer (component ), with the introduction of a parameter to take account of the energy of interdispersing polymer and solvent molecules. is the gas constant and is the absolute temperature. The volume fraction is analogous to the mole fraction, but is weighted to take account of the relative sizes of the molecules. For a small solute, the mole fractions would appear instead, and this modification is the innovation due to Flory and Huggins. In the most general case the mixing parameter, , is a free energy parameter, thus including an entropic component.
Derivation
We first calculate the entropy of mixing, the increase in the uncertainty about the locations of the molecules when they are interspersed. In the pure condensed phases – solvent and polymer – everywhere we look we find a molecule. Of course, any notion of "finding" a molecule in a given location is a thought experiment since we can't actually examine spatial locations the size of molecules. The expression for the entropy of mixing of small molecules in terms of mole fractions is no longer reasonable when the solute is a macromolecular chain. We take account of this dissymmetry in molecular sizes by assuming that individual polymer segments and individual solvent molecules occupy sites on a lattice. Each site is occupied by exactly one molecule of the solvent or by one monomer of the polymer chain, so the total number of sites is
where is the number of solvent molecules and is the number of polymer molecules, each of which has segments.
For a random walk on a lattice we can calculate the entropy change (the increase in spatial uncertainty) as a result of mixing solute and solvent.
where is the Boltzmann constant. Define the lattice volume fractions and
These are also the probabilities that a given lattice site, chosen at random, is occupied by a solvent molecule or a polymer segment, respectively. Thus
For a small solute whose molecules occupy just one lattice site, equals one, the volume fractions reduce to molecular or mole fractions, and we recover the usual entropy of mixing.
In addition to the entropic effect, we can expect an enthalpy change. There are three molecular interactions to consider: solvent-solvent , monomer-monomer (not the covalent bonding, but between different chain sections), and monomer-solvent . Each of the last occurs at the expense of the average of the other two, so the energy increment per monomer-solvent contact is
The total number of such contacts is
where is the coordination number, the number of nearest neighbors for a lattice site, each one occupied either by one chain segment or a solvent molecule. That is, is the total number of polymer segments (monomers) in the solution, so is the number of nearest-neighbor sites to all the polymer segments. Multiplying by the probability that any such site is occupied by a solvent molecule, we obtain the total number of polymer-solvent molecular interactions. An approximation following mean field theory is made by following this procedure, thereby reducing the complex problem of many interactions to a simpler problem of one interaction.
The enthalpy change is equal to the energy change per polymer monomer-solvent interaction multiplied by the number of such interactions
The polymer-solvent interaction parameter chi is defined as
It depends on the nature of both the solvent and the solute, and is the only material-specific parameter in the model. The enthalpy change becomes
Assembling terms, the total free energy change is
where we have converted the expression from molecules and to moles and by transferring the Avogadro constant to the gas constant .
The value of the interaction parameter can be estimated from the Hildebrand solubility parameters and
where is the actual volume of a polymer segment.
In the most general case the interaction and the ensuing mixing parameter, , is a free energy parameter, thus including an entropic component. This means that aside to the regular mixing entropy there is another entropic contribution from the interaction between solvent and monomer. This contribution is sometimes very important in order to make quantitative predictions of thermodynamic properties.
More advanced solution theories exist, such as the Flory–Krigbaum theory.
Liquid-liquid phase separation
Polymers can separate out from the solvent, and do so in a characteristic way. The Flory–Huggins free energy per unit volume, for a polymer with monomers, can be written in a simple dimensionless form
for the volume fraction of monomers, and . The osmotic pressure (in reduced units) is
.
The polymer solution is stable with respect to small fluctuations when the second derivative of this free energy is positive. This second derivative is
and the solution first becomes unstable when this and the third derivative
are both equal to zero. A little algebra then shows that the polymer solution first becomes unstable at a critical point at
This means that for all values of the monomer-solvent effective interaction is weakly repulsive, but this is too weak to cause liquid/liquid separation. However, when , there is separation into two coexisting phases, one richer in polymer but poorer in solvent, than the other.
The unusual feature of the liquid/liquid phase separation is that it is highly asymmetric: the volume fraction of monomers at the critical point is approximately , which is very small for large polymers. The amount of polymer in the solvent-rich/polymer-poor coexisting phase is extremely small for long polymers. The solvent-rich phase is close to pure solvent. This is peculiar to polymers, a mixture of small molecules can be approximated using the Flory–Huggins expression with , and then and both coexisting phases are far from pure.
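The location of the critical point can also be checked numerically. The sketch below assumes the standard dimensionless Flory–Huggins free energy f(φ) = (φ/N)·ln φ + (1 − φ)·ln(1 − φ) + χ·φ·(1 − φ) (the displayed form of the equation is not reproduced above, so this is an assumption) and evaluates the spinodal condition f″ = 0 together with the critical composition φ_c = 1/(1 + √N):

```python
import math

def spinodal_chi(phi: float, N: int) -> float:
    """chi at which f''(phi) = 1/(N*phi) + 1/(1 - phi) - 2*chi vanishes."""
    return 0.5 * (1.0 / (N * phi) + 1.0 / (1.0 - phi))

def critical_point(N: int):
    """Critical point where f'' = f''' = 0, giving phi_c = 1/(1 + sqrt(N))."""
    phi_c = 1.0 / (1.0 + math.sqrt(N))
    return phi_c, spinodal_chi(phi_c, N)

phi_c, chi_c = critical_point(1000)
print(phi_c, chi_c)   # roughly 0.031 and 0.532: chi_c just above 1/2, phi_c small
```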
Polymer blends
Synthetic polymers rarely consist of chains of uniform length in solvent. The Flory–Huggins free energy density can be generalized to an N-component mixture of polymers with lengths by
For a binary polymer blend, where one species consists of monomers and the other monomers this simplifies to
As in the case for dilute polymer solutions, the first two terms on the right-hand side represent the entropy of mixing. For large polymers of and these terms are negligibly small. This implies that for a stable mixture to exist , so for polymers A and B to blend their segments must attract one another.
Limitations
Flory–Huggins theory tends to agree well with experiments in the semi-dilute concentration regime and can be used to fit data for even more complicated blends with higher concentrations. The theory qualitatively predicts phase separation, the tendency for high molecular weight species to be immiscible, the interaction-temperature dependence and other features commonly observed in polymer mixtures. However, unmodified Flory–Huggins theory fails to predict the lower critical solution temperature observed in some polymer blends and the lack of dependence of the critical temperature on chain length . Additionally, it can be shown that for a binary blend of polymer species with equal chain lengths the critical concentration should be ; however, polymers blends have been observed where this parameter is highly asymmetric. In certain blends, mixing entropy can dominate over monomer interaction. By adopting the mean-field approximation, parameter complex dependence on temperature, blend composition, and chain length was discarded. Specifically, interactions beyond the nearest neighbor may be highly relevant to the behavior of the blend and the distribution of polymer segments is not necessarily uniform, so certain lattice sites may experience interaction energies disparate from that approximated by the mean-field theory.
One well-studied effect on interaction energies neglected by unmodified Flory–Huggins theory is chain correlation. In dilute polymer mixtures, where chains are well separated, intramolecular forces between monomers of the polymer chain dominate and drive demixing leading to regions where polymer concentration is high. As the polymer concentration increases, chains tend to overlap and the effect becomes less important. In fact, the demarcation between dilute and semi-dilute solutions is commonly defined by the concentration where polymers begin to overlap which can be estimated as
Here, m is the mass of a single polymer chain, and is the chain's radius of gyration.
| Physical sciences | Thermodynamics | Chemistry |
3011098 | https://en.wikipedia.org/wiki/Euler%20method | Euler method | In mathematics and computational science, the Euler method (also called the forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method. The Euler method is named after Leonhard Euler, who first proposed it in his book Institutionum calculi integralis (published 1768–1770).
The Euler method is a first-order method, which means that the local error (error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size.
The Euler method often serves as the basis to construct more complex methods, e.g., predictor–corrector method.
Geometrical description
Purpose and why it works
Consider the problem of calculating the shape of an unknown curve which starts at a given point and satisfies a given differential equation. Here, a differential equation can be thought of as a formula by which the slope of the tangent line to the curve can be computed at any point on the curve, once the position of that point has been calculated.
The idea is that while the curve is initially unknown, its starting point, which we denote by is known (see Figure 1). Then, from the differential equation, the slope to the curve at can be computed, and so, the tangent line.
Take a small step along that tangent line up to a point Along this small step, the slope does not change too much, so will be close to the curve. If we pretend that is still on the curve, the same reasoning as for the point above can be used. After several steps, a polygonal curve () is computed. In general, this curve does not diverge too far from the original unknown curve, and the error between the two curves can be made small if the step size is small enough and the interval of computation is finite.
First-order process
Suppose we are given the values t_0 and y(t_0), and the derivative of y is a given function of t and y, denoted f(t, y). Begin the process by setting y_0 = y(t_0). Next, choose a value h for the size of every step along the t-axis, and set t_n = t_0 + n·h (or equivalently t_{n+1} = t_n + h). Now, one step of the Euler method finds y_{n+1} from y_n and t_n:
y_{n+1} = y_n + h·f(t_n, y_n).
The value of y_n is an approximation of the solution at time t_n, i.e., y_n ≈ y(t_n). The Euler method is explicit, i.e. the solution y_{n+1} is an explicit function of y_i for i ≤ n.
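A minimal Python implementation of this step (the names are illustrative) advances the approximation one step at a time, for instance with y′ = y and y(0) = 1:

```python
def euler(f, t0: float, y0: float, h: float, n_steps: int):
    """Forward Euler: repeatedly apply y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    values = [(t, y)]
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
        values.append((t, y))
    return values

# Four unit steps for y' = y, y(0) = 1:
print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 4)[-1])   # (4.0, 16.0)
```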
Higher-order process
While the Euler method integrates a first-order ODE, any ODE of order can be represented as a system of first-order ODEs. When given the ODE of order defined as
as well as , , and , we implement the following formula until we reach the approximation of the solution to the ODE at the desired time:
These first-order systems can be handled by Euler's method or, in fact, by any other scheme for first-order systems.
First-order example
Given the initial value problem
y′ = y, y(0) = 1,
we would like to use the Euler method to approximate y(4).
Using step size equal to 1 ()
The Euler method is
y_{n+1} = y_n + h·f(t_n, y_n),
so first we must compute f(t_0, y_0). In this simple differential equation, the function f is defined by f(t, y) = y. We have
f(t_0, y_0) = f(0, 1) = 1.
By doing the above step, we have found the slope of the line that is tangent to the solution curve at the point (0, 1). Recall that the slope is defined as the change in y divided by the change in t, or Δy/Δt.
The next step is to multiply the above value by the step size h, which we take equal to one here:
h·f(t_0, y_0) = 1 × 1 = 1.
Since the step size is the change in t, when we multiply the step size and the slope of the tangent, we get a change in y value. This value is then added to the initial y value to obtain the next value to be used for computations.
The above steps should be repeated to find y_1, y_2, y_3 and y_4.
Due to the repetitive nature of this algorithm, it can be helpful to organize computations in a chart form, as seen below, to avoid making errors.
{| class="wikitable"
|-
! n !! y_n !! t_n !! f(t_n, y_n) !! h !! Δy !! y_{n+1}
|-
| 0 || 1 || 0 || 1 || 1 || 1 || 2
|-
| 1 || 2 || 1 || 2 || 1 || 2 || 4
|-
| 2 || 4 || 2 || 4 || 1 || 4 || 8
|-
| 3 || 8 || 3 || 8 || 1 || 8 || 16
|}
The conclusion of this computation is that y_4 = 16. The exact solution of the differential equation is y(t) = e^t, so y(4) = e^4 ≈ 54.598. Although the approximation of the Euler method was not very precise in this specific case, particularly due to the large step size h = 1, its behaviour is qualitatively correct, as the figure shows.
Using other step sizes
As suggested in the introduction, the Euler method is more accurate if the step size is smaller. The table below shows the result with different step sizes. The top row corresponds to the example in the previous section, and the second row is illustrated in the figure.
{| class="wikitable"
|-
! step size !! result of Euler's method !! error
|-
| 1 || 16.00 || 38.60
|-
| 0.25 || 35.53 || 19.07
|-
| 0.1 || 45.26 || 9.34
|-
| 0.05 || 49.56 || 5.04
|-
| 0.025 || 51.98 || 2.62
|-
| 0.0125 || 53.26 || 1.34
|}
The error recorded in the last column of the table is the difference between the exact solution at and the Euler approximation. In the bottom of the table, the step size is half the step size in the previous row, and the error is also approximately half the error in the previous row. This suggests that the error is roughly proportional to the step size, at least for fairly small values of the step size. This is true in general, also for other equations; see the section Global truncation error for more details.
Other methods, such as the midpoint method also illustrated in the figures, behave more favourably: the global error of the midpoint method is roughly proportional to the square of the step size. For this reason, the Euler method is said to be a first-order method, while the midpoint method is second order.
We can extrapolate from the above table that the step size needed to get an answer that is correct to three decimal places is approximately 0.00001, meaning that we need 400,000 steps. This large number of steps entails a high computational cost. For this reason, higher-order methods are employed such as Runge–Kutta methods or linear multistep methods, especially if a high accuracy is desired.
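A few lines of Python reproduce the step-size table above for the same example (y′ = y, y(0) = 1, integrated up to t = 4); the error shrinks roughly in proportion to h, as described:

```python
import math

exact = math.exp(4)                     # y(4) = e^4 for y' = y, y(0) = 1
for h in (1, 0.25, 0.1, 0.05, 0.025, 0.0125):
    y = 1.0
    for _ in range(round(4 / h)):       # forward Euler steps up to t = 4
        y += h * y
    print(h, round(y, 2), round(exact - y, 2))
```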
Higher-order example
For this third-order example, assume that the following information is given:
From this we can isolate y''' to get the equation:
Using that we can get the solution for :And using the solution for , we can get the solution for :We can continue this process using the same formula as long as necessary to find whichever desired.
Derivation
The Euler method can be derived in a number of ways.
(1) Firstly, there is the geometrical description above.
(2) Another possibility is to consider the Taylor expansion of the function around :
The differential equation states that . If this is substituted in the Taylor expansion and the quadratic and higher-order terms are ignored, the Euler method arises.
The Taylor expansion is used below to analyze the error committed by the Euler method, and it can be extended to produce Runge–Kutta methods.
(3) A closely related derivation is to substitute the forward finite difference formula for the derivative,
in the differential equation . Again, this yields the Euler method.
A similar computation leads to the midpoint method and the backward Euler method.
(4) Finally, one can integrate the differential equation from to and apply the fundamental theorem of calculus to get:
Now approximate the integral by the left-hand rectangle method (with only one rectangle):
Combining both equations, one finds again the Euler method.
This line of thought can be continued to arrive at various linear multistep methods.
Local truncation error
The local truncation error of the Euler method is the error made in a single step. It is the difference between the numerical solution after one step, , and the exact solution at time . The numerical solution is given by
For the exact solution, we use the Taylor expansion mentioned in the section Derivation above:
The local truncation error (LTE) introduced by the Euler method is given by the difference between these equations:
LTE = y(t_0 + h) − y_1 = (1/2)·h^2·y″(t_0) + O(h^3).
This result is valid if has a bounded third derivative.
This shows that for small , the local truncation error is approximately proportional to . This makes the Euler method less accurate than higher-order techniques such as Runge-Kutta methods and linear multistep methods, for which the local truncation error is proportional to a higher power of the step size.
A slightly different formulation for the local truncation error can be obtained by using the Lagrange form for the remainder term in Taylor's theorem. If has a continuous second derivative, then there exists a such that
In the above expressions for the error, the second derivative of the unknown exact solution can be replaced by an expression involving the right-hand side of the differential equation. Indeed, it follows from the equation that
Global truncation error
The global truncation error is the error at a fixed time , after however many steps the method needs to take to reach that time from the initial time. The global truncation error is the cumulative effect of the local truncation errors committed in each step. The number of steps is easily determined to be , which is proportional to , and the error committed in each step is proportional to (see the previous section). Thus, it is to be expected that the global truncation error will be proportional to .
This intuitive reasoning can be made precise. If the solution has a bounded second derivative and is Lipschitz continuous in its second argument, then the global truncation error (denoted as ) is bounded by
where is an upper bound on the second derivative of on the given interval and is the Lipschitz constant of . Or more simply, when , the value (such that is treated as a constant). In contrast, where function is the exact solution which only contains the variable.
The precise form of this bound is of little practical importance, as in most cases the bound vastly overestimates the actual error committed by the Euler method. What is important is that it shows that the global truncation error is (approximately) proportional to . For this reason, the Euler method is said to be first order.
Example
If we have the differential equation , and the exact solution , and we want to find and for when . Thus we can find the error bound at t=2.5 and h=0.5:
Notice that t0 is equal to 2 because it is the lower bound for t in .
Numerical stability
The Euler method can also be numerically unstable, especially for stiff equations, meaning that the numerical solution grows very large for equations where the exact solution does not. This can be illustrated using the linear equation
The exact solution is , which decays to zero as . However, if the Euler method is applied to this equation with step size , then the numerical solution is qualitatively wrong: It oscillates and grows (see the figure). This is what it means to be unstable. If a smaller step size is used, for instance , then the numerical solution does decay to zero.
If the Euler method is applied to the linear equation , then the numerical solution is unstable if the product is outside the region
illustrated on the right. This region is called the (linear) stability region. In the example, , so if then which is outside the stability region, and thus the numerical solution is unstable.
This limitation — along with its slow convergence of error with — means that the Euler method is not often used, except as a simple example of numerical integration. Frequently models of physical systems contain terms representing fast-decaying elements (i.e. with large negative exponential arguments). Even when these are not of interest in the overall solution, the instability they can induce means that an exceptionally small timestep would be required if the Euler method is used.
Rounding errors
In step of the Euler method, the rounding error is roughly of the magnitude where is the machine epsilon. Assuming that the rounding errors are independent random variables, the expected total rounding error is proportional to . Thus, for extremely small values of the step size the truncation error will be small but the effect of rounding error may be big. Most of the effect of rounding error can be easily avoided if compensated summation is used in the formula for the Euler method.
Modifications and extensions
A simple modification of the Euler method which eliminates the stability problems noted above is the backward Euler method:
This differs from the (standard, or forward) Euler method in that the function is evaluated at the end point of the step, instead of the starting point. The backward Euler method is an implicit method, meaning that the formula for the backward Euler method has on both sides, so when applying the backward Euler method we have to solve an equation. This makes the implementation more costly.
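For the linear test equation y′ = k·y the implicit backward Euler equation can be solved in closed form, y_{n+1} = y_n / (1 − h·k), which makes the stability difference easy to demonstrate. The values k = −15 and h = 0.25 below are illustrative only (chosen so that h·k lies outside the forward Euler stability region):

```python
def forward_euler_linear(k: float, y0: float, h: float, n: int) -> float:
    y = y0
    for _ in range(n):
        y = y + h * k * y            # y_{n+1} = (1 + h*k) * y_n
    return y

def backward_euler_linear(k: float, y0: float, h: float, n: int) -> float:
    y = y0
    for _ in range(n):
        y = y / (1 - h * k)          # solve y_{n+1} = y_n + h*k*y_{n+1}
    return y

k, h, n = -15.0, 0.25, 20
print(forward_euler_linear(k, 1.0, h, n))    # huge in magnitude: unstable
print(backward_euler_linear(k, 1.0, h, n))   # tiny and positive: decays like the exact solution
```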
Other modifications of the Euler method that help with stability yield the exponential Euler method or the semi-implicit Euler method.
More complicated methods can achieve a higher order (and more accuracy). One possibility is to use more function evaluations. This is illustrated by the midpoint method which is already mentioned in this article:
.
This leads to the family of Runge–Kutta methods.
The other possibility is to use more past values, as illustrated by the two-step Adams–Bashforth method:
This leads to the family of linear multistep methods. There are other modifications which use techniques from compressive sensing to minimize memory usage.
In popular culture
In the film Hidden Figures, Katherine Goble resorts to the Euler method in calculating the re-entry of astronaut John Glenn from Earth orbit.
| Mathematics | Differential equations | null |
3011436 | https://en.wikipedia.org/wiki/Florigen | Florigen | Florigens (or flowering hormone) are proteins capable of inducing flowering time in angiosperms. The prototypical florigen is encoded by the FT gene and its orthologs in Arabidopsis and other plants. Florigens are produced in the leaves, and act in the shoot apical meristem of buds and growing tips.
Mechanism
For a plant to begin flowering, it must undergo changes in its shoot apical meristem (SAM). However, there are multiple environmental factors affecting the plant even before it begins this process — in particular, light. It is through "the evolution of both internal and external control systems that enables plants to precisely regulate flowering so that it occurs at the optimal time for reproductive success." The way the plant determines this optimal time is through day-night periods through the use of photoperiodism. Although it was originally thought that the accumulation of photosynthetic products controlled the flowering of plants, two men by the names of Wightman Garner and Henry Allard proved it was not. They instead found that it was a matter of day length rather than the accumulation of the products within the plants that affected their flowering abilities.
Flowering plants fall into two main photoperiodic response categories:
"Short-day plants (SDPs) flower only in short days (qualitative SDPs), or their flowering is accelerated by short days (quantitative SDPs)"
"Long-day plants (LDPs) flower only in long days (qualitative LDPs), or their flowering is accelerated by long days (quantitative LDPs)"
These types of flowering plants are differentiated by whether the day has exceeded some duration - usually calculated over 24-hour cycles - known as the critical day length. It is also important to note that there is no absolute value for the minimum day length, as it varies greatly among species. Until the correct day length is reached, the plants ensure that no flowering results. They do so through adaptations like preventing immature plants from responding to inadequate day lengths. Plants also have the ability to delay the response to the photoperiodic stimulus until a certain temperature is reached. Species like winter wheat rely on just that: the wheat requires a cold period before being able to respond to the photoperiod. This is known as vernalization or overwintering.
This ebb-and-flow of flowering in plants is essentially controlled by an internal clock known as the endogenous oscillator. It is thought that these internal pacemakers "are regulated by the interaction of four sets of genes expressed in the dawn, morning, afternoon, and evening hours [and that] light may augment the amplitude of the oscillations by activating the morning and evening genes." The rhythms between these different genes are generated internally in the plant, starting in the leaves, but require an environmental stimulus such as light. The light essentially stimulates the transmission of a floral stimulus (florigen) to the shoot apex when the correct day length is perceived. This process is known as photoperiodic induction and is a photoperiod-regulated process that is also dependent on the endogenous oscillator.
The current model suggests the involvement of multiple different factors. Research into florigen is predominately centred on the model organism and long day plant, Arabidopsis thaliana. Whilst much of the florigen pathways appear to be well conserved in other studied species, variations do exist. The mechanism may be broken down into three stages: photoperiod-regulated initiation, signal translocation via the phloem, and induction of flowering at the shoot apical meristem.
Initiation
In Arabidopsis thaliana, the signal is initiated by the production of messenger RNA (mRNA) coding a transcription factor called CONSTANS (CO). CO mRNA is produced approximately 12 hours after dawn, a cycle regulated by the plant's circadian rhythms, and is then translated into CO protein. However, CO protein is stable only in light, so levels stay low throughout short days and are only able to peak at dusk during long days when there is still some light. CO protein promotes transcription of another gene called FLOWERING LOCUS T (FT). By this mechanism, CO protein may only reach levels capable of promoting FT transcription when exposed to long days. Hence, the transmission of florigen, and thus the induction of flowering, relies on a comparison between the plant's perception of day/night and its own internal biological clock.
Translocation
The FT protein resulting from the short period of CO transcription factor activity is then transported via the phloem to the shoot apical meristem.
Flowering
Florigen is a systemically mobile signal that is synthesized in leaves and then transported via the phloem to the shoot apical meristem (SAM), where it initiates flowering. In Arabidopsis, the FLOWERING LOCUS T (FT) gene encodes the flowering hormone, and in rice the hormone is encoded by the Hd3a gene, making these genes orthologs. It was found through the use of transgenic plants that the Hd3a promoter in rice is active in the phloem of the leaf, where the Hd3a mRNA is also found. However, the Hd3a protein is found in neither of these places but instead accumulates in the SAM, which shows that Hd3a protein is first translated in leaves and then transported to the SAM via the phloem, where floral transition is initiated; the same results were obtained when Arabidopsis was examined. These results indicate that FT/Hd3a is the florigen signal that induces floral transition in plants.
Following this conclusion, it became important to understand the process by which the FT protein causes floral transition once it reaches the SAM. The first clue came from models in Arabidopsis which suggested that a bZIP domain-containing transcription factor, FD, somehow interacts with FT to form a transcriptional complex that activates floral genes. Studies using rice found that there is an interaction between Hd3a and OsFD1, homologs of FT and FD respectively, that is mediated by the 14-3-3 protein GF14c. The 14-3-3 protein acts as an intracellular florigen receptor that interacts directly with Hd3a and OsFD1 to form a tri-protein complex called the florigen activation complex (FAC), so named because it is essential for florigen function. The FAC works to activate genes needed to initiate flowering at the SAM; flowering genes in Arabidopsis include AP1, SOC1 and several SPL genes, which are targeted by a microRNA, and in rice the flowering gene is OsMADS15 (a homolog of AP1).
Antiflorigen
Florigen is regulated by the action of an antiflorigen. Antiflorigens are hormones that are encoded by the same genes for florigen that work to counteract its function. The antiflorigen in Arabidopsis is TERMINAL FLOWER1 (TFL1) and in tomato it is SELF PRUNING (SP).
Research history
Florigen was first described by Soviet Armenian plant physiologist Mikhail Chailakhyan, who in 1937 demonstrated that floral induction can be transmitted through a graft from an induced plant to one that has not been induced to flower. Anton Lang showed that several long-day plants and biennials could be made to flower by treatment with gibberellin, even when grown under a non-flower-inducing (or non-inducing) photoperiod. This led to the suggestion that florigen may be made up of two classes of flowering hormones: Gibberellins and Anthesins. It was later postulated that during non-inducing photoperiods, long-day plants produce anthesin, but no gibberellin, while short-day plants produce gibberellin, but no anthesin. However, these findings did not account for the fact that short-day plants grown under non-inducing conditions (thus producing gibberellin) will not cause flowering of grafted long-day plants that are also under noninductive conditions (thus producing anthesin).
As a result of the problems with isolating florigen, and of the inconsistent results acquired, it has been suggested that florigen does not exist as an individual substance; rather, florigen's effect could be the result of a particular ratio of other hormones. However, more recent findings indicate that florigen does exist and is produced, or at least activated, in the leaves of the plant, and that this signal is then transported via the phloem to the growing tip at the shoot apical meristem, where it acts by inducing flowering. In Arabidopsis thaliana, some researchers have identified this signal as the mRNA coded by the FLOWERING LOCUS T (FT) gene, others as the resulting FT protein. The first report of FT mRNA being the signal transducer that moves from leaf to shoot apex came from a publication in Science. However, in 2007 another group of scientists reported that it is not the mRNA but the FT protein that is transmitted from leaves to shoot, possibly acting as florigen. The initial article that described FT mRNA as the flowering stimulus was later retracted by the authors themselves.
Triggers of gene transcription
Three genes are involved in the clock-controlled flowering pathway: GIGANTEA (GI), CONSTANS (CO), and FLOWERING LOCUS T (FT). Constant overexpression of GI from the Cauliflower mosaic virus 35S promoter causes early flowering under short days, so an increase in GI mRNA expression induces flowering. GI also increases the expression of FT and CO mRNA, and FT and CO mutants showed later flowering times than the GI mutant. In other words, functional FT and CO genes are required for flowering under short days. In addition, these flowering genes accumulate during the light phase and decline during the dark phase, as measured using green fluorescent protein, so their expression oscillates during the 24-hour light-dark cycle. In conclusion, the accumulation of GI mRNA alone, or of GI, FT, and CO mRNA together, promotes flowering in Arabidopsis thaliana, and these genes are expressed in the temporal sequence GI-CO-FT.
Action potentials trigger calcium flux into neurons in animals and into root apex cells in plants. Intracellular calcium signals are responsible for regulating many biological functions in organisms. For instance, Ca2+ binding to calmodulin, a Ca2+-binding protein found in animals and plants, controls gene transcription.
Flowering mechanism
A biological mechanism has been proposed based on the information above. Light is the flowering signal of Arabidopsis thaliana. Light activates photoreceptors and triggers signal cascades in plant cells of apical or lateral meristems. An action potential spreads via the phloem to the root, and more voltage-gated calcium channels are opened along the stem. This causes an influx of calcium ions in the plant. These ions bind to calmodulin, and the Ca2+/CaM signaling system triggers the expression of GI mRNA or FT and CO mRNA. The accumulation of GI mRNA or GI-CO-FT mRNA during the day causes the plant to flower.
| Biology and health sciences | Plant hormone | Biology |
3012322 | https://en.wikipedia.org/wiki/Glucose%20test | Glucose test | Many types of glucose tests exist and they can be used to estimate blood sugar levels at a given time or, over a longer period of time, to obtain average levels or to see how fast the body is able to normalize changed glucose levels. Eating food for example leads to elevated blood sugar levels. In healthy people, these levels quickly return to normal via increased cellular glucose uptake which is primarily mediated by increase in blood insulin levels.
Glucose tests can reveal temporary/long-term hyperglycemia or hypoglycemia. These conditions may not have obvious symptoms and can damage organs in the long-term. Abnormally high/low levels, slow return to normal levels from either of these conditions and/or inability to normalize blood sugar levels means that the person being tested probably has some kind of medical condition like type 2 diabetes which is caused by cellular insensitivity to insulin. Glucose tests are thus often used to diagnose such conditions.
Testing methods
Tests that can be performed at home are used in blood glucose monitoring for illnesses that have already been diagnosed medically so that these illnesses can be maintained via medication and meal timing. Some of the home testing methods include
fingerprick glucose meter - requires pricking one's own finger, 8-12 times a day.
continuous glucose monitor (CGM) - monitors glucose levels approximately every 5 minutes.
Laboratory tests are often used to diagnose illnesses and such methods include
fasting blood sugar (FBS), fasting plasma glucose (FPG): 10–16 hours after eating
glucose tolerance test: continuous testing
postprandial glucose test (PC): 2 hours after eating
random glucose test
Some laboratory tests don't measure glucose levels directly from body fluids or tissues but still indicate elevated blood sugar levels. Such tests measure the levels of glycated hemoglobin, other glycated proteins, 1,5-anhydroglucitol etc. from blood.
Use in medical diagnosis
Glucose testing can be used to diagnose or indicate certain medical conditions.
High blood sugar may indicate
gestational diabetes. This temporary form of diabetes appears during pregnancy, and symptoms can be improved with glucose-controlling medication or insulin.
type 1 and type 2 diabetes or prediabetes. If diagnosed with diabetes, regular glucose tests can help manage the condition. Type 1 diabetes is commonly seen in children or teenagers whose bodies do not produce enough insulin. Type 2 diabetes is typically seen in adults who are overweight; their insulin either does not work normally or is not produced in sufficient amounts.
Low blood sugar may indicate
insulin overuse
starvation
underactive thyroid
Addison's disease
insulinoma
kidney disease
Preparing for testing
Fasting prior to glucose testing may be required with some test types. Fasting blood sugar test, for example, requires 10–16 hour-long period of not eating before the test.
Blood sugar levels can be affected by some drugs, and prior to some glucose tests these medications should be temporarily discontinued or their dosages decreased. Such drugs may include salicylates (aspirin), birth control pills, corticosteroids, tricyclic antidepressants, lithium, diuretics and phenytoin.
Some foods contain caffeine (coffee, tea, colas, energy drinks etc.). Blood sugar levels of healthy people are generally not significantly changed by caffeine, but in diabetics caffeine intake may elevate these levels via its ability to stimulate the adrenergic nervous system.
Reference ranges
Fasting blood sugar
A level below 5.6 mmol/L (100 mg/dL) after 10–16 hours without eating is normal. 5.6–6 mmol/L (100–109 mg/dL) may indicate prediabetes, and an oral glucose tolerance test (OGTT) should be offered to high-risk individuals (old people, those with high blood pressure etc.). 6.1–6.9 mmol/L (110–125 mg/dL) means an OGTT should be offered even if other indicators of diabetes are not present. 7 mmol/L (126 mg/dL) and above indicates diabetes, and the fasting test should be repeated.
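For readers converting between the two units quoted above, the following is a small sketch (not part of the original article); it assumes the standard molar mass of glucose, roughly 180.16 g/mol, so 1 mmol/L corresponds to about 18 mg/dL.

```python
# Minimal sketch (not from the article): converting glucose concentrations
# between mmol/L and mg/dL using the molar mass of glucose (~180.16 g/mol).

GLUCOSE_MG_PER_MMOL = 18.016   # 180.16 mg/mmol divided by 10 dL per L

def mmol_to_mgdl(mmol_per_l: float) -> float:
    return mmol_per_l * GLUCOSE_MG_PER_MMOL

def mgdl_to_mmol(mg_per_dl: float) -> float:
    return mg_per_dl / GLUCOSE_MG_PER_MMOL

print(round(mmol_to_mgdl(5.6)))      # ~101 mg/dL, near the normal fasting cut-off
print(round(mgdl_to_mmol(126), 1))   # ~7.0 mmol/L, the diabetes threshold
```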
Glucose tolerance test
Postprandial glucose test
Random glucose test
| Biology and health sciences | Diagnostics | Health |
3012682 | https://en.wikipedia.org/wiki/Rheic%20Ocean | Rheic Ocean | The Rheic Ocean (; ) was an ocean which separated two major paleocontinents, Gondwana and Laurussia (Laurentia-Baltica-Avalonia). One of the principal oceans of the Paleozoic, its sutures today stretch from Mexico to Turkey and its closure resulted in the assembly of the supercontinent Pangaea and the formation of the Variscan–Alleghenian–Ouachita orogenies.
Etymology
The ocean located between Gondwana and Laurentia in the Early Cambrian was named for Iapetus, in Greek mythology the father of Atlas (from which source the Atlantic Ocean ultimately gets its name), just as the Iapetus Ocean was the predecessor of the Atlantic Ocean. The ocean between Gondwana and Laurussia (Laurentia–Baltica–Avalonia) that existed from the Early Ordovician to the Early Carboniferous was named the Rheic Ocean after Rhea, sister of Iapetus.
Geodynamic evolution
At the beginning of the Paleozoic Era, about 540 million years ago, most of the continental mass on Earth was clustered around the south pole as the paleocontinent Gondwana. The exception was formed by a number of smaller continents, such as Laurentia and Baltica. The Paleozoic ocean between Gondwana, Laurentia and Baltica is called the Iapetus Ocean. The northern edge of Gondwana had been dominated by the Cadomian orogeny during the Ediacaran period. This orogeny formed a cordillera-type volcanic arc where oceanic crust subducted below Gondwana. When a mid-oceanic ridge subducted at an oblique angle, extensional basins developed along the northern margin of Gondwana. During the late Cambrian to Early Ordovician these extensional basins had evolved into a rift running along the northern edge of Gondwana. The rift in its turn evolved into a mid-oceanic ridge that separated small continental fragments such as Avalonia and Carolina from the main Gondwanan land mass, leading to the formation of the Rheic Ocean in the Early Ordovician.
As Avalonia-Carolina drifted north from Gondwana, the Rheic Ocean grew and reached its maximum width in the Silurian. In this process, the Iapetus Ocean closed as Avalonia-Carolina collided with Laurentia and the Appalachian orogeny formed.
The closure of the Rheic began in the Early Devonian and was completed in the Mississippian when Gondwana and Laurentia collided to form Pangaea. This closure resulted in the largest collisional orogen of the Palaeozoic: the Variscan and Alleghanian orogens between Gondwana's West African margin and southern Baltica and eastern Laurentia and the Ouachita orogeny between the Amazonian margin of Gondwana and southern Laurentia.
Effects on life
The Prague Basin, which was an archipelago of humid volcanic islands in the Rheic Ocean on the outer edges of what was then the Gondwanan shelf during the Silurian, was a major hotspot of plant biodiversity during the early stages of the Silurian-Devonian Terrestrial Revolution. The geologically rapid environmental changes associated with the formation and erosion of volcanic islands and high rates of endemism associated with island ecosystems likely played an important role in driving the rapid early diversification of vascular plants.
It is believed that the closure of the Rheic, alongside the simultaneous onset of the Late Palaeozoic Ice Age, may have sparked the Carboniferous-Earliest Permian Biodiversification Event, an evolutionary radiation of marine life dominated by increase in species richness of fusulinids and brachiopods.
| Physical sciences | Paleogeography | Earth science |
5508267 | https://en.wikipedia.org/wiki/Teleosauridae | Teleosauridae | Teleosauridae is a family of extinct typically marine crocodylomorphs similar to the modern gharial that lived during the Jurassic period. Teleosaurids were thalattosuchians closely related to the fully aquatic metriorhynchoids, but were less adapted to an open-ocean, pelagic lifestyle. The family was originally coined to include all the semi-aquatic (i.e. non-metriorhynchoid) thalattosuchians and was equivalent to the modern superfamily Teleosauroidea. However, as teleosauroid relationships and diversity were better studied in the 21st century, the division of teleosauroids into two distinct evolutionary lineages led to the establishment of Teleosauridae as a more restrictive family within the group, together with its sister family Machimosauridae.
Amongst teleosauroids, teleosaurids were generally smaller and less common than machimosaurids, suggesting the two families occupied different niches, similar to modern species of crocodilians. However, teleosaurids were more diverse than machimosaurids, with generalist coastal predators (Mystriosaurus), long-snouted marine piscivores (Bathysuchus), and potentially even long-snouted, semi-terrestrial predators (Teleosaurus). Additionally, teleosaurids occupied a wider range of habitats than machimosaurids, from semi-marine coasts and estuaries, the open-ocean, freshwater, and potentially even semi-terrestrial environments.
Classification
Teleosauridae is phylogenetically defined in the PhyloCode by Mark T. Young and colleagues as "the largest clade within Teleosauroidea containing Teleosaurus cadomensis but not Machimosaurus hugii". Teleosauridae is split into two subfamilies, the Teleosaurinae and the Aeolodontinae, the former defined in the PhyloCode as "the largest clade within Teleosauroidea containing Teleosaurus cadomensis, but not Aeolodon priscus" and the latter defined in the PhyloCode as "the largest clade within Teleosauroidea containing Aeolodon priscus, but not Teleosaurus cadomensis".
Palaeobiology
Teleosaurids were originally regarded as marine analogues to modern gharials, as they both typically share long, tubular snouts and narrow teeth. However, differences in the jaws, teeth, and skeleton of different teleosaurids suggest that they were more ecologically diverse than this. Earlier teleosaurids were coastal semi-aquatic generalists, while the two subfamilies were more specialised. Teleosaurines appear to have been semi-terrestrial, as they were more heavily armoured and had forward-facing nostrils. In contrast, aeolodontines have been found in deep marine waters and had reduced armour, implying that they were open water predators similar to metriorhynchoids (although the oldest aeolodontine, Mycterosuchus, appears to have been semi-terrestrial, similar to teleosaurines).
Palaeoecology
Distribution
Definitive fossils of teleosaurids are restricted to Laurasia, with material found in Europe (England, France, Germany, Italy, Portugal, Russia and Switzerland) and Asia (China and Thailand, and possibly India).
| Biology and health sciences | Other prehistoric archosaurs | Animals |
5512111 | https://en.wikipedia.org/wiki/Lygaeidae | Lygaeidae | The Lygaeidae are a family in the Hemiptera (true bugs), with more than 110 genera in four subfamilies. The family is commonly referred to as seed bugs, and less commonly, milkweed bugs, or ground bugs. Many species feed on seeds, some on sap or seed pods, others are omnivores and a few, such as the wekiu bug, are insectivores. Insects in this family are distributed across the world.
The family was formerly vastly larger, but numerous former subfamilies have been removed and given independent family status, including the Artheneidae, Blissidae, Cryptorhamphidae, Cymidae, Geocoridae, Heterogastridae, Ninidae, Oxycarenidae and Rhyparochromidae, which together constituted well over half of the former family.
The bizarre and mysterious beetle-like Psamminae were formerly often placed in the Piesmatidae, but this is almost certainly incorrect. Their true affiliations are not entirely resolved.
Distinguishing characteristics
Lygaeidae are oval or elongate in body shape and have four-segmented antennae. Lygaeidae can be distinguished from Miridae (plant bugs) by the presence of ocelli, or simple eyes. They are distinguished from Coreidae (squash bugs) by the number of veins in the membrane of the front wings, as Lygaeidae have only four or five veins.
Subfamilies and selected genera
An incomplete list of Lygaeidae genera is subdivided as:
subfamily Ischnorhynchinae Stål, 1872
Crompus Stål, 1874
Kleidocerys Stephens, 1829
subfamily Lygaeinae Schilling, 1829
Lygaeus Fabricius, 1794
Oncopeltus Stål, 1868
Melanocoryphus Stål, 1872
Spilostethus Stål, 1868
Tropidothorax Bergroth, 1894
subfamily Orsillinae Stål, 1872
Nysius Dallas, 1852
Orsillus Dallas, 1852
subfamily † Lygaenocorinae
Unplaced genera
Lygaeites Heer, 1853
The Pachygronthinae Stål, 1865 (type genus Pachygrontha Germar, 1840) may be placed here or elevated to the family Pachygronthidae.
Gallery
| Biology and health sciences | Hemiptera (true bugs) | Animals |
5512223 | https://en.wikipedia.org/wiki/Egyptian%20mongoose | Egyptian mongoose | The Egyptian mongoose (Herpestes ichneumon), also known as ichneumon (), is a mongoose species native to the tropical and subtropical grasslands, savannas, and shrublands of Africa and around the Mediterranean Basin in North Africa, the Middle East and the Iberian Peninsula. Whether it is introduced or native to the Iberian Peninsula is in some doubt. Because of its widespread occurrence, it is listed as Least Concern on the IUCN Red List.
Characteristics
The Egyptian mongoose's long, coarse fur is grey to reddish brown and ticked with brown and yellow flecks. Its snout is pointed and its ears are small. Its slender body is long with a long black-tipped tail. Its hind feet and a small area around the eyes are furless. It has 35–40 teeth, with highly developed carnassials used for shearing meat. It weighs .
Sexually dimorphic Egyptian mongooses were observed in Portugal, where some females are smaller than males.
Female Egyptian mongooses have 44 chromosomes, and males 43, as one Y chromosome is translocated to an autosome.
Distribution and habitat
The Egyptian mongoose lives in swampy and marshy habitats near streams, rivers, lakes and in coastal areas. Where it inhabits maquis shrubland in the Iberian Peninsula, it prefers areas close to rivers with dense vegetation. It does not occur in deserts.
It has been recorded in Portugal from north of the Douro River to the south, and in Spain from the central plateau, Andalucía to the Strait of Gibraltar.
In North Africa, it occurs along the coast of the Mediterranean Sea and the Atlas Mountains from Western Sahara, Morocco, Algeria and Tunisia into Libya, and from northern Egypt across the Sinai Peninsula.
In Egypt, one individual was observed in Faiyum Oasis in 1993. In the same year, its tracks were recorded in sand dunes close to the coast near Sidi Barrani.
An individual was observed on an island in Lake Burullus in the Nile Delta during an ecological survey in the late 1990s.
In the Palestinian territories, it was recorded in the Gaza Strip and Jericho Governorate in the West Bank during surveys carried out between 2012 and 2016.
In western Syria, it was observed in the Latakia Governorate between 1989 and 1995; taxidermied specimens were offered in local shops.
In southern Turkey, it was recorded in the Hatay and Adana Provinces.
In Sudan, it is present in the vicinity of human settlements along the Rahad River and in Dinder National Park. It was also recorded in the Dinder–Alatash protected area complex during surveys between 2015 and 2018. In Ethiopia, the Egyptian mongoose was recorded at elevations of in the Ethiopian Highlands.
In Senegal, it was observed in 2000 in Niokolo-Koba National Park, which mainly encompasses open habitat dominated by grasses.
In Guinea's National Park of Upper Niger, the occurrence of the Egyptian mongoose was first documented during surveys in spring 1997. Surveyors found dead individuals on bushmeat markets in villages located in the vicinity of the park.
In Gabon's Moukalaba-Doudou National Park, it was recorded only in savanna habitats.
In the Republic of Congo, it was repeatedly observed in the Western Congolian forest–savanna mosaic of Odzala-Kokoua National Park during surveys in 2007.
In the 1990s, it was considered a common species in Tanzania's Mkomazi National Park.
Occurrence in Iberian Peninsula
Several hypotheses were proposed to explain the occurrence of the Egyptian mongoose in the Iberian Peninsula:
Traditionally, it was thought to have been introduced following the Muslim invasion in the 8th century.
Bones of Egyptian mongoose excavated in Spain and Portugal were radiocarbon dated to the first century. The scientists therefore suggested an introduction during the Roman Hispania era and use for eliminating rats and mice in domestic areas.
Other authors proposed a natural colonisation of the Iberian Peninsula during the Pleistocene across a land bridge when sea levels were low between glacial and interglacial periods. This population would have remained isolated from populations in Africa after the Last Glacial Period.
Behaviour and ecology
The Egyptian mongoose is diurnal.
In Doñana National Park, single Egyptian mongooses, pairs and groups of up to five individuals were observed. Adult males showed territorial behaviour, and shared their home ranges with one or several females. The home ranges of adult females overlapped to some degree, except in core areas where they raised their offspring.
It preys on rodents, fish, birds, reptiles, amphibians, and insects. It also feeds on fruit and eggs. To crack eggs open, it throws them between its legs against a rock or wall.
In Doñana National Park, 30 Egyptian mongooses were radio-tracked in 1985 and their faeces collected. These samples contained remains of European rabbit (Oryctolagus cuniculus), sand lizards (Psammodromus), Iberian spadefoot toad (Pelobates cultripes), greater white-toothed shrew (Crocidura russula), three-toed skink (Chalcides chalcides), dabbling ducks (Anas), western cattle egret (Bubulcus ibis), wild boar (Sus scrofa) meat, Algerian mouse (Mus spretus) and rat species (Rattus).
Research in southeastern Nigeria revealed that it also feeds on giant pouched rats (Cricetomys), Temminck's mouse (Mus musculoides), Tullberg's soft-furred mouse (Praomys tulbergi), Nigerian shrew (Crocidura nigeriae), Hallowell's toad (Amietophrynus maculatus), African brown water snake (Afronatrix anoscopus), and Mabuya skinks.
It attacks and feeds on venomous snakes, and is resistant to the venom of Palestine viper (Daboia palaestinae), black desert cobra (Walterinnesia aegyptia) and black-necked spitting cobra (Naja nigricollis).
In Spain, it has been recorded less frequently in areas where the Iberian lynx was reintroduced.
Reproduction
Captive males and females reach sexual maturity at the age of two years. In Doñana National Park, courtship and mating happens in spring between February and June. Two to three pups are born between mid April and mid August after a gestation of 11 weeks. They are hairless at first, and open their eyes after about a week. Females take care of them for up to one year, occasionally also longer. They start foraging on their own at the age of four months, but compete for food brought back to them after that age. In the wild, Egyptian mongooses probably reach 12 years of age. A captive Egyptian mongoose was over 20 years old.
Its generation length is 7.5 years.
Taxonomy
In 1758, Carl Linnaeus described an Egyptian mongoose from the area of the Nile River in Egypt in his work Systema Naturae and gave it the scientific name Viverra ichneumon.
H. i. ichneumon (Linnaeus, 1758) is the nominate subspecies. The following zoological specimen were described between the late 18th century and the early 1930s as subspecies:
Viverra cafra (Gmelin, 1788) − based on a description of a specimen from the Cape of Good Hope.
Herpestes ichneumon numidicus F. G. Cuvier, 1834 − two individuals from Algiers in Algeria kept in the menagerie of the Muséum d'Histoire Naturelle, France
Herpestes ichneumon widdringtonii Gray, 1842 − a specimen from Sierra Morena in Spain
Herpestes angolensis (Bocage, 1890) − a male specimen from Quissange in Angola
Mungos ichneumon parvidens (Lönnberg, 1908) − three specimens collected near the lower Congo River in Congo Free State
Mungos ichneumon funestus (Osgood, 1910) − a specimen from Naivasha in British East Africa
Mungos ichneumon centralis (Lönnberg, 1917) − two specimens from Beni, Democratic Republic of the Congo
Herpestes ichneumon sangronizi Cabrera, 1924 − a specimen from Mogador in Morocco
Herpestes caffer sabiensis (Roberts, 1926) − a specimen from Sabi Sand Game Reserve in Southern Africa
Herpestes cafer mababiensis (Roberts, 1932) − a specimen from Mababe in northern Bechuanaland
In 1811, Johann Karl Wilhelm Illiger subsumed the ichneumon to the genus Herpestes.
Threats
A survey of poaching methods in Israel carried out in autumn 2000 revealed that the Egyptian mongoose is affected by snaring in agricultural areas. Most of the traps found were set up by Thai guest workers.
Numerous dried heads of Egyptian mongooses were found in 2007 at the Dantokpa Market in southern Benin, suggesting that it is used as fetish in animal rituals.
Conservation
The Egyptian mongoose is listed on Appendix III of the Berne Convention, and Annex V of the European Union Habitats and Species Directive.
In Israel, wildlife is protected by law, and hunting allowed only with a permit.
In culture
Mummified remains of four Egyptian mongooses were excavated in the catacombs of Anubis at Saqqara during works started in 2009.
At the cemetery of Beni Hasan, an Egyptian mongoose on a leash is depicted in the tomb of Baqet I dating to the Eleventh Dynasty of Egypt.
The American poet John Greenleaf Whittier wrote a poem as an elegy for an ichneumon, which had been brought to Haverhill Academy in Haverhill, Massachusetts, in 1830. The long lost poem was published in the November 1902 issue of "The Independent" magazine.
The Sherlock Holmes canon also features an ichneumon in the short story The Adventure of the Crooked Man, though due to Watson's description of its appearance and its owner's history in India, it is likely to actually be an Indian grey mongoose.
| Biology and health sciences | Other carnivora | Animals |
5512913 | https://en.wikipedia.org/wiki/Toxicodendron%20vernicifluum | Toxicodendron vernicifluum | Toxicodendron vernicifluum (formerly Rhus verniciflua), also known by the common name Chinese lacquer tree, is an Asian tree species of genus Toxicodendron native to China and the Indian subcontinent, and cultivated in regions of China, Japan and Korea. Other common names include Japanese lacquer tree, Japanese sumac, and varnish tree. The trees are cultivated and tapped for their toxic sap, which is used as a highly durable lacquer to make Chinese, Japanese, and Korean lacquerware.
The trees grow up to 20 metres tall with large leaves, each containing from 7 to 19 leaflets (most often 11–13). The sap contains the allergenic compound urushiol, which gets its name from this species' Japanese name urushi (); "urushi" is also used in English as a collective term for all kinds of Asian lacquerware made from the sap of this and related Asian tree species, as opposed to European "lacquer" or Japanning made from other materials. Urushiol is also the oil found in poison ivy and poison oak that causes a rash.
Uses
Lacquer
Sap, containing urushiol (an allergenic irritant), is tapped from the trunk of the Chinese lacquer tree to produce lacquer. This is done by cutting 5 to 10 horizontal lines on the trunk of a 10-year-old tree, and then collecting the greyish yellow sap that exudes. The sap is then filtered, heat-treated, or coloured before applying onto a base material that is to be lacquered. Curing the applied sap requires "drying" it in a warm, humid chamber or closet for 12 to 24 hours where the urushiol polymerizes to form a clear, hard, and waterproof surface. In its liquid state, urushiol can cause extreme rashes, even from vapours. Once hardened, reactions are possible but less common.
Products coated with lacquer are recognizable by an extremely durable and glossy finish. Lacquer has many uses; some common applications include tableware, musical instruments, fountain pens, jewelry, and bows for archery. There are various types of lacquerware. The cinnabar-red is highly regarded. Unpigmented lacquer is dark brown but the most common colors of urushiol finishes are black and red, from powdered iron oxide pigments of ferrous-ferric oxide (magnetite) and ferric oxide (rust), respectively. Lacquer is painted on with a brush and is cured in a warm and humid environment.
The leaves, seeds, and the resin of the Chinese lacquer tree are sometimes used in Chinese medicine for the treatment of internal parasites and for stopping bleeding. Compounds butein and sulfuretin are antioxidants, and have inhibitory effects on aldose reductase and advanced glycation processes.
Buddhist monks who practiced the art of Sokushinbutsu would use the tree's sap in their ceremony.
Wax
The fruits of T. vernicifluum can also be processed to produce a waxy substance known as Japan wax used for numerous purposes including varnishing furniture and producing candles. The fruits of the trees are harvested, dried, steamed, and pressed to extract the wax, which hardens when cooled.
| Biology and health sciences | Sapindales | Plants |
8686104 | https://en.wikipedia.org/wiki/Anthracotheriidae | Anthracotheriidae | Anthracotheriidae is a paraphyletic family of extinct, hippopotamus-like artiodactyl ungulates related to hippopotamuses and whales. The oldest genus, Elomeryx, first appeared during the middle Eocene in Asia. They thrived in Africa and Eurasia, with a few species ultimately entering North America during the Oligocene. They died out in Europe and Africa during the Miocene, possibly due to a combination of climatic changes and competition with other artiodactyls, including pigs and true hippopotamuses. The youngest genus, Merycopotamus, died out in Asia during the late Pliocene, possibly for the same reasons. The family is named after the first genus discovered, Anthracotherium, which means "coal beast", as the first fossils of it were found in Paleogene-aged coal beds in France. Fossil remains of the anthracothere genus were discovered by the Harvard University and Geological Survey of Pakistan joint research project (Y-GSP) in the well-dated middle and late Miocene deposits of the Pothohar Plateau in northern Pakistan.
In life, the average anthracothere would have resembled a skinny hippopotamus with a comparatively small, narrow head and most likely pig-like in general appearance. They had four or five toes on each foot, and broad feet suited to walking on soft mud. They had full sets of about 44 teeth with five semicrescentric cusps on the upper molars, which, in some species, were adapted for digging up the roots of aquatic plants.
Evolutionary relationships
Some skeletal characters of anthracotheres suggest they are related to hippos.
The nature of the sediments in which they are fossilized implies they were amphibious, which supports the view, based on anatomical evidence, that they were ancestors of the hippopotamuses. In many respects, especially the anatomy of the lower jaw, Anthracotherium, as with other members of the family, is allied to the hippopotamus, of which it is probably an ancestral form. However, one study suggests that instead of anthracotheres, another pig-like group of artiodactyls, the palaeochoerids, are the true stem group of Hippopotamidae.
Recent evidence, gained from comparative gene sequencing, further suggests that hippos are the closest living relatives of whales, so, if anthracotheres are stem hippos, they would also be related to whales in a clade provisionally called Whippomorpha.
However, the earliest known anthracotheres appear in the fossil record in the middle Eocene, well after the archaeocetes had already taken up totally aquatic lifestyles. Although phylogenetic analyses of molecular data on extant animals strongly support the notion that hippopotamids are the closest relatives of cetaceans (whales, dolphins and porpoises), the two groups are unlikely to be closely related when extant and extinct artiodactyls are analyzed. Cetaceans originated about 50 million years ago in the Tethys Sea between India and China, whereas the family Hippopotamidae is only 15 million years old, and the first Asian hippopotamids are only 6 million years old. Yet, analyses of fossil clades have not resolved the issue of cetacean relations.
Another study has offered a suggestion that anthracotheres are part of a clade that also consists of entelodonts (and even Andrewsarchus) and that is a sister clade to other cetancodonts, with Siamotherium as the most basal member of the clade Cetacodontamorpha.
| Biology and health sciences | Other artiodactyla | Animals |
4131678 | https://en.wikipedia.org/wiki/Hammond%27s%20postulate | Hammond's postulate | Hammond's postulate (or alternatively the Hammond–Leffler postulate), is a hypothesis in physical organic chemistry which describes the geometric structure of the transition state in an organic chemical reaction. First proposed by George Hammond in 1955, the postulate states that:
If two states, as, for example, a transition state and an unstable intermediate, occur consecutively during a reaction process and have nearly the same energy content, their interconversion will involve only a small reorganization of the molecular structures.
Therefore, the geometric structure of a state can be predicted by comparing its energy to the species neighboring it along the reaction coordinate. For example, in an exothermic reaction the transition state is closer in energy to the reactants than to the products. Therefore, the transition state will be more geometrically similar to the reactants than to the products. In contrast, however, in an endothermic reaction the transition state is closer in energy to the products than to the reactants. So, according to Hammond’s postulate the structure of the transition state would resemble the products more than the reactants. This type of comparison is especially useful because most transition states cannot be characterized experimentally.
Hammond's postulate also helps to explain and rationalize the Bell–Evans–Polanyi principle. Namely, this principle describes the experimental observation that the rate of a reaction, and therefore its activation energy, is affected by the enthalpy of that reaction. Hammond's postulate explains this observation by describing how varying the enthalpy of a reaction would also change the structure of the transition state. In turn, this change in geometric structure would alter the energy of the transition state, and therefore the activation energy and reaction rate as well.
The postulate has also been used to predict the shape of reaction coordinate diagrams. For example, electrophilic aromatic substitution involves a distinct intermediate and two less well defined states. By measuring the effects of aromatic substituents and applying Hammond's postulate it was concluded that the rate-determining step involves formation of a transition state that should resemble the intermediate complex.
History
During the 1940s and 1950s, chemists had trouble explaining why even slight changes in the reactants caused significant differences in the rate and product distributions of a reaction. In 1955 George Hammond, a young professor at Iowa State University, postulated that transition-state theory could be used to qualitatively explain the observed structure-reactivity relationships. Notably, John E. Leffler of Florida State University proposed a similar idea in 1953. However, Hammond's version has received more attention since its qualitative nature was easier to understand and employ than Leffler's complex mathematical equations. Hammond's postulate is sometimes called the Hammond–Leffler postulate to give credit to both scientists.
Interpreting the postulate
Effectively, the postulate states that the structure of a transition state resembles that of the species nearest to it in free energy. This can be explained with reference to potential energy diagrams:
In case (a), which is an exothermic reaction, the energy of the transition state is closer in energy to that of the reactant than that of the intermediate or the product. Therefore, from the postulate, the structure of the transition state also more closely resembles that of the reactant. In case (b), the energy of the transition state is close to neither the reactant nor the product, making none of them a good structural model for the transition state. Further information would be needed in order to predict the structure or characteristics of the transition state. Case (c) depicts the potential diagram for an endothermic reaction, in which, according to the postulate, the transition state should more closely resemble that of the intermediate or the product.
Another significance of Hammond’s postulate is that it permits us to discuss the structure of the transition state in terms of the reactants, intermediates, or products. In the case where the transition state closely resembles the reactants, the transition state is called “early” while a “late” transition state is the one that closely resembles the intermediate or the product.
An example of an “early” transition state is chlorination. Chlorination favors the products because it is an exothermic reaction, which means that the products are lower in energy than the reactants. When looking at the adjacent diagram (a representation of an “early” transition state), one must focus on the transition state, which cannot be observed experimentally. To understand what is meant by an “early” transition state, the Hammond postulate can be represented as a curve showing the kinetics of the reaction. Since the reactants are higher in energy, the transition state appears to occur shortly after the reaction starts.
An example of a “late” transition state is bromination. Bromination favors the reactants because it is an endothermic reaction, which means that the reactants are lower in energy than the products. Since the transition state is hard to observe, the postulate applied to bromination helps to picture the “late” transition state (see the representation of the "late" transition state). Since the products are higher in energy, the transition state appears to occur just before the reaction is complete.
One other useful interpretation of the postulate often found in textbooks of organic chemistry is the following:
Assume that the transition states for reactions involving unstable intermediates can be closely approximated by the intermediates themselves.
This interpretation ignores extremely exothermic and endothermic reactions which are relatively unusual and relates the transition state to the intermediates which are usually the most unstable.
Structure of transition states
SN1 reactions
Hammond's postulate can be used to examine the structure of the transition states of a SN1 reaction. In particular, the dissociation of the leaving group is the first transition state in a SN1 reaction. The stabilities of the carbocations formed by this dissociation are known to follow the trend tertiary > secondary > primary > methyl.
Therefore, since the tertiary carbocation is relatively stable and therefore close in energy to the R-X reactant, then the tertiary transition state will have a structure that is fairly similar to the R-X reactant. In terms of the graph of reaction coordinate versus energy, this is shown by the fact that the tertiary transition state is further to the left than the other transition states. In contrast, the energy of a methyl carbocation is very high, and therefore the structure of the transition state is more similar to the intermediate carbocation than to the R-X reactant. Accordingly, the methyl transition state is very far to the right.
SN2 reactions
Bimolecular nucleophilic substitution (SN2) reactions are concerted reactions where both the nucleophile and substrate are involved in the rate limiting step. Since this reaction is concerted, the reaction occurs in one step, where the bonds are broken, while new bonds are formed. Therefore, to interpret this reaction, it is important to look at the transition state, which resembles the concerted rate limiting step. In the "Depiction of SN2 Reaction" figure, the nucleophile forms a new bond to the carbon, while the halide (L) bond is broken.
E1 reactions
An E1 reaction consists of a unimolecular elimination, where the rate determining step of the mechanism depends on the removal of a single molecular species. This is a two-step mechanism. The more stable the carbocation intermediate is, the faster the reaction will proceed, favoring the products. Stabilization of the carbocation intermediate lowers the activation energy. The reactivity order is (CH3)3C- > (CH3)2CH- > CH3CH2- > CH3-.
Furthermore, studies describe a typical kinetic resolution process that starts out with two enantiomers that are energetically equivalent and, in the end, forms two energy-inequivalent intermediates, referred to as diastereomers. According to Hammond's postulate, the more stable diastereomer is formed faster.
E2 reactions
Elimination bimolecular (E2) reactions are one-step, concerted reactions in which both base and substrate participate in the rate-limiting step. In an E2 mechanism, a base takes a proton near the leaving group, forcing the electrons down to make a double bond and forcing off the leaving group, all in one concerted step. The rate law depends on the first-order concentration of two reactants, making it a second-order (bimolecular) elimination reaction. Factors that affect the rate-determining step are stereochemistry, leaving groups, and base strength.
A theory for E2 reactions proposed by Joseph Bunnett suggests that the lowest pass through the energy barrier between reactants and products is attained by an adjustment between the degrees of Cβ-H and Cα-X rupture at the transition state. The adjustment involves much breaking of the bond that is more easily broken, and a small amount of breaking of the bond that requires more energy. This conclusion by Bunnett appears to contradict the Hammond postulate: in the transition state of a bond-breaking step, the Hammond postulate implies little breaking when the bond is easily broken and much breaking when it is difficult to break. Despite these differences, the two postulates are not in conflict, since they are concerned with different sorts of processes. Hammond focuses on reaction steps in which one bond is made or broken, or in which the breaking of two or more bonds occurs simultaneously. Bunnett's E2 transition-state theory concerns a process in which bond formation and breaking are not simultaneous.
Kinetics and the Bell–Evans–Polanyi principle
Technically, Hammond's postulate only describes the geometric structure of a chemical reaction. However, Hammond's postulate indirectly gives information about the rate, kinetics, and activation energy of reactions. Hence, it gives a theoretical basis for understanding the Bell–Evans–Polanyi principle, which describes the experimental observation that the enthalpies and rates of similar reactions are usually correlated.
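For reference, the principle is often written as a linear relation between the activation energy and the reaction enthalpy for a family of closely related reactions; the following form is a standard textbook statement added here for clarity, not a formula taken from the original text.

```latex
% Common form of the Bell-Evans-Polanyi relation: E_0 and \alpha are empirical
% constants for a family of related reactions, and \alpha (between 0 and 1) is
% often read as the position of the transition state along the reaction
% coordinate, in the spirit of Hammond's postulate.
E_a \approx E_0 + \alpha \, \Delta H_r , \qquad 0 \le \alpha \le 1
```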
The relationship between Hammond's postulate and the BEP principle can be understood by considering a SN1 reaction. Although two transition states occur during a SN1 reaction (dissociation of the leaving group and then attack by the nucleophile), the dissociation of the leaving group is almost always the rate-determining step. Hence, the activation energy and therefore rate of the reaction will depend only upon the dissociation step.
First, consider the reaction at secondary and tertiary carbons. As the BEP principle notes, experimentally SN1 reactions at tertiary carbons are faster than at secondary carbons. Therefore, by definition, the transition state for tertiary reactions will be at a lower energy than for secondary reactions. However, the BEP principle cannot justify why the energy is lower.
Using Hammond's postulate, the lower energy of the tertiary transition state means that its structure is relatively closer to its reactants R(tertiary)-X than to the carbocation product when compared to the secondary case. Thus, the tertiary transition state will be more geometrically similar to the R(tertiary)-X reactants than the secondary transition state is to its R(secondary)-X reactants. Hence, if the tertiary transition state is close in structure to the (low energy) reactants, then it will also be lower in energy because structure determines energy. Likewise, if the secondary transition state is more similar to the (high energy) carbocation product, then it will be higher in energy.
Applying the postulate
Hammond's postulate is useful for understanding the relationship between the rate of a reaction and the stability of the products.
While the rate of a reaction depends just on the activation energy (often represented in organic chemistry as ΔG‡, “delta G double dagger”), the final ratios of products at chemical equilibrium depend only on the standard free-energy change ΔG (“delta G”). The ratio of the final products at equilibrium corresponds directly with the stability of those products.
Hammond's postulate connects the rate of a reaction process with the structural features of those states that form part of it, by saying that the molecular reorganizations have to be small in those steps that involve two states that are very close in energy. This gave birth to the structural comparison between the starting materials, products, and the possible "stable intermediates" that led to the understanding that the most stable product is not always the one that is favored in a reaction process.
Explaining seemingly contradictory results
Hammond's postulate is especially important when looking at the rate-limiting step of a reaction. However, one must be cautious when examining a multistep reaction or one with the possibility of rearrangements during an intermediate stage. In some cases, the final products appear in skewed ratios in favor of a more unstable product (called the kinetic product) rather than the more stable product (the thermodynamic product). In this case one must examine the rate-limiting step and the intermediates. Often, the rate-limiting step is the initial formation of an unstable species such as a carbocation. Then, once the carbocation is formed, subsequent rearrangements can occur. In these kinds of reactions, especially when run at lower temperatures, the reactants simply react before the rearrangements necessary to form a more stable intermediate have time to occur. At higher temperatures, when microscopic reversal is easier, the more stable thermodynamic product is favored because these intermediates have time to rearrange. Whether run at high or low temperatures, the mixture of the kinetic and thermodynamic products eventually reaches the same ratio, one in favor of the more stable thermodynamic product, when given time to equilibrate due to microscopic reversal.
| Physical sciences | Kinetics | Chemistry |
4136723 | https://en.wikipedia.org/wiki/Ultrafast%20laser%20spectroscopy | Ultrafast laser spectroscopy | Ultrafast laser spectroscopy is a category of spectroscopic techniques using ultrashort pulse lasers for the study of dynamics on extremely short time scales (attoseconds to nanoseconds). Different methods are used to examine the dynamics of charge carriers, atoms, and molecules. Many different procedures have been developed spanning different time scales and photon energy ranges; some common methods are listed below.
Attosecond-to-picosecond spectroscopy
Dynamics on the femtosecond time scale are in general too fast to be measured electronically. Most measurements are done by employing a sequence of ultrashort light pulses to initiate a process and record its dynamics. The temporal width (duration) of the light pulses has to be on the same scale as the dynamics that are to be measured or even shorter.
Light sources
Titanium-sapphire laser
Ti:sapphire lasers are tunable lasers that emit red and near-infrared light (700 nm - 1100 nm). Ti:sapphire laser oscillators use Ti-doped sapphire crystals as a gain medium and Kerr-lens mode-locking to achieve sub-picosecond light pulses. Typical Ti:sapphire oscillator pulses have nJ energy and repetition rates of 70-100 MHz. Chirped pulse amplification through regenerative amplification can be used to attain higher pulse energies. For amplification, laser pulses from the Ti:sapphire oscillator must first be stretched in time to prevent damage to optics, and are then injected into the cavity of another laser where pulses are amplified at a lower repetition rate. Regeneratively amplified pulses can be further amplified in a multi-pass amplifier. Following amplification, the pulses are recompressed to pulse widths similar to the original pulse widths.
Dye laser
A dye laser is a four-level laser that uses an organic dye as the gain medium. Pumped by a laser with a fixed wavelength, dye lasers can emit beams at different wavelengths depending on the dye used. A ring laser design is most often used in a dye laser system. Tuning elements, such as a diffraction grating or prism, are usually incorporated in the cavity. This allows only light in a very narrow frequency range to resonate in the cavity and be emitted as laser emission. The wide tunability range, high output power, and pulsed or CW operation make the dye laser particularly useful in many physical and chemical studies.
Fiber laser
A fiber laser is usually pumped by a laser diode, which couples light into a fiber where it is confined. Different wavelengths can be achieved with the use of doped fiber: the pump light excites a state in the doped fiber, which then drops in energy, causing a specific wavelength to be emitted. This wavelength may be different from that of the pump light and more useful for a particular experiment.
X-ray generation
Ultrafast optical pulses can be used to generate x-ray pulses in multiple ways. An optical pulse can excite an electron pulse via the photoelectric effect, and acceleration across a high potential gives the electrons kinetic energy. When the electrons hit a target they generate both characteristic x-rays and bremsstrahlung. A second method is via laser-induced plasma. When very high-intensity laser light is incident on a target, it strips electrons off the target, creating a negatively charged plasma cloud. The strong Coulomb force due to the ionized material in the center of the cloud quickly accelerates the electrons back to the nuclei left behind. Upon collision with the nuclei, bremsstrahlung and characteristic x-rays are given off. This method of x-ray generation scatters photons in all directions, but also generates picosecond x-ray pulses.
Conversion and characterization
Pulse characterization
For accurate spectroscopic measurements to be made, several characteristics of the laser pulse need to be known: pulse duration, pulse energy, spectral phase, and spectral shape are among these. Information about pulse duration can be determined through autocorrelation measurements, or from cross-correlation with another well-characterized pulse. Methods allowing for complete characterization of pulses include frequency-resolved optical gating (FROG) and spectral phase interferometry for direct electric-field reconstruction (SPIDER).
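As an illustration (not from the article), the following minimal Python sketch shows the relationship exploited in an intensity autocorrelation measurement: for an assumed Gaussian pulse, the autocorrelation is wider than the pulse by a known factor (√2), which is how a measured autocorrelation width is converted back into a pulse duration.

```python
# A minimal sketch, assuming a Gaussian pulse shape; the 30 fs duration is a placeholder.
import numpy as np

fwhm_fs = 30.0                                 # assumed pulse duration (fs)
t = np.linspace(-200, 200, 4001)               # time / lag axis in fs
sigma = fwhm_fs / (2 * np.sqrt(2 * np.log(2)))
intensity = np.exp(-t**2 / (2 * sigma**2))     # Gaussian intensity envelope

# Intensity autocorrelation A(tau) = integral of I(t) * I(t - tau) dt
autocorr = np.correlate(intensity, intensity, mode="same")

def fwhm(x, y):
    """Full width at half maximum of a sampled, single-peaked curve."""
    above = x[y >= y.max() / 2]
    return above[-1] - above[0]

ratio = fwhm(t, autocorr) / fwhm(t, intensity)
print(f"autocorrelation FWHM / pulse FWHM = {ratio:.3f}")  # ~1.414 for a Gaussian
```

For other pulse shapes the deconvolution factor differs, which is one reason complete characterization methods such as FROG and SPIDER are preferred when the pulse shape is not known in advance.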
Pulse shaping
Pulse shaping is the modification of the pulses from the source in a well-defined manner, including manipulation of the pulse's amplitude, phase, and duration.
To increase the pulse intensity, chirped pulse amplification is generally employed, which comprises a pulse stretcher, amplifier, and compressor; ideally it does not change the duration or phase of the pulse during the amplification. Pulse compression (shortening of the pulse duration) is achieved by first chirping the pulse in a nonlinear material and broadening the spectrum, followed by a compressor that compensates the chirp. A fiber compressor is generally used in this case.
Pulse shapers usually refer to optical modulators that act on the spectral components of a laser pulse in the Fourier domain. Depending on which property of light is controlled, modulators are called intensity modulators, phase modulators, polarization modulators, or spatial light modulators. Depending on the modulation mechanism, optical modulators are divided into acousto-optic modulators, electro-optic modulators, liquid-crystal modulators, etc. Each is dedicated to different applications.
High harmonic generation
High harmonic generation (HHG) is a nonlinear process where intense laser radiation is converted from one fixed frequency to high harmonics of that frequency by ionization and recollision of an electron. It was first observed in 1987 by McPherson et al. who successfully generated harmonic emission up to the 17th order at 248 nm in neon gas.
HHG is observed by focusing an ultrafast, high-intensity, near-IR pulse into a noble gas at intensities of 10¹³–10¹⁴ W/cm², and it generates coherent pulses in the XUV to soft X-ray (100–1 nm) region of the spectrum. It is realizable on a laboratory scale (table-top systems) as opposed to large free-electron laser facilities.
High harmonic generation in atoms is well understood in terms of the three-step model (ionization, propagation, and recombination); a simple estimate of the resulting cutoff photon energy is sketched after the list below.
Ionization: The intense laser field modifies the Coulomb potential of the atom; the electron tunnels through the barrier and ionizes.
Propagation: The freed electron accelerates in the laser field and gains momentum.
Recombination: When the field reverses, the electron is accelerated back toward the ionic parent and releases a photon with very high energy.
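As a rough, hedged illustration of how the three-step model constrains the emitted photon energies, the sketch below evaluates the semiclassical cutoff law E_cutoff ≈ I_p + 3.17 U_p for assumed, typical table-top parameters (an 800 nm Ti:sapphire driver at 10¹⁴ W/cm² in neon); these numbers are placeholders, not values from the article.

```python
# A minimal sketch of the three-step-model cutoff estimate, with assumed parameters.

def ponderomotive_energy_eV(intensity_W_cm2, wavelength_um):
    """Up in eV from the standard practical formula Up ≈ 9.33e-14 * I * lambda^2."""
    return 9.33e-14 * intensity_W_cm2 * wavelength_um**2

Ip_neon_eV = 21.6          # approximate ionization potential of neon
I = 1e14                   # assumed peak intensity in W/cm^2
lam = 0.8                  # Ti:sapphire wavelength in micrometres

Up = ponderomotive_energy_eV(I, lam)
E_cutoff = Ip_neon_eV + 3.17 * Up
print(f"Up ≈ {Up:.1f} eV, cutoff photon energy ≈ {E_cutoff:.0f} eV")
```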
Frequency conversion techniques
Different spectroscopy experiments require different excitation or probe wavelengths. For this reason, frequency conversion techniques are commonly used to extend the operational spectrum of existing laser light sources.
The most widespread conversion techniques rely on using crystals with second-order non-linearity to perform either parametric amplification or frequency mixing.
Frequency mixing works by superimposing two beams of equal or different wavelengths to generate a signal which is a higher harmonic or the sum frequency of the first two.
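As a small illustrative sketch (not from the article), the photon-energy bookkeeping behind sum-frequency mixing can be written directly in code; the input wavelengths below are assumed Ti:sapphire-based examples.

```python
# A minimal sketch of sum-frequency bookkeeping: 1/lambda_sum = 1/lambda_1 + 1/lambda_2.
def sum_frequency_nm(lam1_nm, lam2_nm):
    """Wavelength generated by sum-frequency mixing of two input wavelengths."""
    return 1.0 / (1.0 / lam1_nm + 1.0 / lam2_nm)

shg = sum_frequency_nm(800, 800)    # second harmonic of an assumed 800 nm pulse
sfg = sum_frequency_nm(800, shg)    # mixing the fundamental with its second harmonic
print(f"SHG: {shg:.0f} nm, SFG(800 nm + SHG): {sfg:.1f} nm")  # 400 nm and ~267 nm
```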
Parametric amplification overlaps a weak probe beam with a higher-energy pump beam in a non-linear crystal such that the weak beam is amplified and the remaining energy exits as a new beam called the idler. This approach is capable of generating output pulses that are shorter than the input ones. Different schemes of this approach have been implemented. Examples are the optical parametric oscillator (OPO), the optical parametric amplifier (OPA), and the non-collinear optical parametric amplifier (NOPA).
Techniques
Ultrafast transient absorption
This method is typical of 'pump-probe' experiments, where a pulsed laser is used to excite the electrons in a material (such as a molecule or semiconducting solid) from their ground states to higher-energy excited states. A probing light source, typically a xenon arc lamp or broadband laser pulse created by supercontinuum generation, is used to obtain an absorption spectrum of the compound at various times following its excitation. As the excited molecules absorb the probe light, they are further excited to even higher states or induced to return to the ground state radiatively through stimulated emission. After passing through the sample, the unabsorbed probe light continues to a photodetector such as an avalanche photodiode array or CMOS camera, and the data is processed to generate an absorption spectrum of the excited state. Since not all the molecules or excitation sites in the sample undergo the same dynamics simultaneously, this experiment must be carried out many times (where each "experiment" comes from a single pair of pump and probe laser pulse interactions), and the data must be averaged to generate spectra with accurate intensities and peaks. Because photobleaching and other photochemical or photothermal reactions can happen to the samples, this method requires evaluating these effects by measuring the same sample at the same location many times at different pump and probe intensities. Liquid samples are usually stirred during measurement, which makes relatively long-time kinetics difficult to measure due to flow and diffusion. Unlike time-correlated single photon counting (TCSPC), this technique can be carried out on non-fluorescent samples. It can also be performed on non-transmissive samples in a reflection geometry.
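As an illustration of the bookkeeping behind such measurements (not a description of any specific instrument), the following sketch computes a differential absorbance ΔA = −log10(I_on/I_off) from assumed probe intensities recorded with the pump on and off; the wavelength grid and signal shape are placeholders.

```python
# A minimal sketch of the pump-on / pump-off analysis; all input arrays are assumed.
import numpy as np

wavelengths = np.linspace(400, 700, 256)             # probe wavelengths (nm)
I_pump_off = np.random.uniform(0.9, 1.0, 256)        # probe through unexcited sample
I_pump_on = I_pump_off * (1 - 0.02 * np.exp(-((wavelengths - 550) / 30) ** 2))

# Differential absorbance: positive bands indicate excited-state absorption,
# negative bands indicate ground-state bleach or stimulated emission.
delta_A = -np.log10(I_pump_on / I_pump_off)

print(f"peak |ΔA| = {np.abs(delta_A).max():.4f} at "
      f"{wavelengths[np.abs(delta_A).argmax()]:.0f} nm")
```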
Ultrafast transient absorption can use almost any probe light, so long as the probe is of a pertinent wavelength or set of wavelengths. A monochromator and photomultiplier tube in place of the avalanche photodiode array allows observation of a single probe wavelength, and thus allows probing of the decay kinetics of the excited species. The purpose of this setup is to take kinetic measurements of species that are otherwise nonradiative; specifically, it is useful for observing species that have short-lived and non-phosphorescent populations within the triplet manifold as part of their decay path. The pulsed laser in this setup is used both as a primary excitation source and as a clock signal for the ultrafast measurements. Although laborious and time-consuming, shifting the monochromator position also allows absorbance decay profiles to be constructed, ultimately to the same effect as the above method.
The data from ultrafast transient absorption measurements are usually reconstructed as absorption spectra sequenced over the delay time between the pump and probe. Each spectrum resembles a normal steady-state absorption profile of the sample at the given delay after excitation, with a time resolution given by the convolution of the pump and probe time resolutions. The region around the excitation wavelength is dominated by scattered pump light and is cut out. The rest of the spectra usually have a few bands such as ground-state absorption, excited-state absorption, and stimulated emission. Under normal conditions spontaneous emission is randomly oriented and is not detected in the absorption geometry, but in a transient absorption measurement the stimulated emission resembles the lasing effect, is highly directional, and is detected. This emission often overlaps with the absorption bands and needs to be deconvoluted for quantitative analysis. The relationship and correlation among these bands can be visualized using classical spectroscopic two-dimensional correlation analysis.
Time-resolved photoelectron spectroscopy and two-photon photoelectron spectroscopy
Time-resolved photoelectron spectroscopy and two-photon photoelectron spectroscopy (2PPE) combine a pump-probe scheme with angle-resolved photoemission. A first laser pulse is used to excite a material, and a second laser pulse ionizes the system. The kinetic energy of the electrons from this process is then detected through various methods, including energy mapping and time-of-flight measurements. As above, the process is repeated many times, with different time delays between the probe pulse and the pump pulse. This builds up a picture of how the molecule relaxes over time.
A variation of this method looks at the positive ions created in this process and is called time-resolved photo-ion spectroscopy (TRPIS).
Multidimensional spectroscopy
Using the same principles pioneered by 2D-NMR experiments, multidimensional optical or infrared spectroscopy is possible using ultrafast pulses. Different frequencies can probe various dynamic molecular processes to differentiate between inhomogeneous and homogeneous line broadening as well as identify coupling between the measured spectroscopic transitions. If two oscillators are coupled together, be it intramolecular vibrations or intermolecular electronic coupling, the added dimensionality will resolve anharmonic responses not identifiable in linear spectra. A typical 2D pulse sequence consists of an initial pulse to pump the system into a coherent superposition of states, followed by a phase conjugate second pulse that pushes the system into a non-oscillating excited state, and finally, a third pulse that converts back to a coherent state that produces a measurable pulse. A 2D frequency spectrum can then be recorded by plotting the Fourier transform of the delay between the first and second pulses on one axis, and the Fourier transform of the delay between a detection pulse relative to the signal-producing third pulse on the other axis. 2D spectroscopy is an example of a four-wave mixing experiment, and the wavevector of the signal will be the sum of the three incident wavevectors used in the pulse sequence. Multidimensional spectroscopies exist in infrared and visible variants as well as combinations using different wavelength regions.
2D spectroscopy using ultrafast pulses can be combined with complementary experimental methods to characterize the system under study. Photoelectrochemical measurements of photosynthetic complexes have been correlated with ultrafast pulses to stimulate and probe chromophores involved in photosynthesis and to characterize the charge transfer processes in photosynthetic reaction centers. Since charge separation and transfer is the final, biologically relevant process (in contrast to intermediate excitations and relaxations of the chromophores), the combination of photoelectrochemistry and 2D spectroscopy (PEC2DES) can be considered a form of "action spectroscopy".
Ultrafast imaging
Most ultrafast imaging techniques are variations on standard pump-probe experiments. Some commonly used techniques are electron diffraction imaging, Kerr-gated microscopy, imaging with ultrafast electron pulses, and terahertz imaging.
Such techniques are of particular interest in the biomedical community, where safe and non-invasive methods of diagnosis are always sought. Terahertz imaging has recently been used to identify areas of decay in tooth enamel and to image the layers of the skin. Additionally, it has been shown to be able to successfully distinguish a region of breast carcinoma from healthy tissue.
Another technique, called serial time-encoded amplified microscopy, has been shown to have the capability of even earlier detection of trace amounts of cancer cells in the blood. Other non-biomedical applications include ultrafast imaging around corners or through opaque objects.
Femtosecond up-conversion
Femtosecond up-conversion is a pump-probe technique that uses nonlinear optics to combine the fluorescence signal and probe signal to create a signal with a new frequency via photon upconversion, which is subsequently detected. The probe scans through delay times after the pump excites the sample, generating a plot of intensity over time.
Applications
Applications of femtosecond spectroscopy to biochemistry
Ultrafast processes are found throughout biology. Until the advent of femtosecond methods, many of the mechanisms of such processes were unknown. Examples of these include the cis-trans photoisomerization of the rhodopsin chromophore retinal, excited state and population dynamics of DNA, and the charge transfer processes in photosynthetic reaction centers. Charge transfer dynamics in photosynthetic reaction centers have a direct bearing on our ability to develop light-harvesting technology, while the excited state dynamics of DNA have implications in diseases such as skin cancer. Advances in femtosecond methods are crucial to the understanding of ultrafast phenomena in nature.
Photodissociation and femtosecond probing
Photodissociation is a chemical reaction in which a chemical compound is broken down by photons. It is defined as the interaction of one or more photons with one target molecule. Any photon with sufficient energy, whether visible light, ultraviolet light, X-rays, or gamma rays, can affect the chemical bonds of a chemical compound. The technique of probing chemical reactions has been successfully applied to unimolecular dissociations. The possibility of using a femtosecond technique to study bimolecular reactions at the individual collision level is complicated by the difficulties of spatial and temporal synchronization. One way to overcome this problem is through the use of Van der Waals complexes of weakly bound molecular clusters. Femtosecond techniques are not limited to the observation of chemical reactions, but can even be exploited to influence the course of a reaction. This can open new relaxation channels or increase the yield of certain reaction products.
Picosecond-to-nanosecond spectroscopy
Streak camera
Unlike attosecond and femtosecond pulses, pulses on the nanosecond timescale are long enough to be measured through electronic means. Streak cameras translate the temporal profile of a pulse into a spatial profile; that is, photons that arrive on the detector at different times arrive at different locations on the detector.
Time-correlated single photon counting
Time-correlated single photon counting (TCSPC) is used to analyze the relaxation of molecules from an excited state to a lower energy state. Since various molecules in a sample will emit photons at different times following their simultaneous excitation, the decay must be thought of as having a certain rate rather than occurring at a specific time after excitation. The experimental setup is adjusted to detect about 1 photon per 100 excitation pulses. In other words, less than one emitted photon is detected per laser pulse, and the process is repeated many times to get an average value. The measured quantity is the time difference (Δt) between the excitation pulse and the detection of the photon. The fluorescence decay curve is obtained by plotting the measured time on the x-axis and the number of photons detected on the y-axis. It is difficult to monitor multiple molecules simultaneously; instead, individual excitation-relaxation events are recorded and then averaged to generate the curve.
This technique analyzes the time difference between the excitation of the sample molecule and the release of energy as another photon. Repeating this process many times gives a decay profile. Pulsed lasers or LEDs can be used as the excitation source. Part of the light passes through the sample, while the other part goes to the electronics as a "sync" signal. The light emitted by the sample molecule is passed through a monochromator to select a specific wavelength. The light is then detected and amplified by a photomultiplier tube (PMT). The emitted light signal as well as the reference light signal are processed through a constant fraction discriminator (CFD), which suppresses timing jitter. After passing through the CFD, the reference pulse activates a time-to-amplitude converter (TAC) circuit. The TAC charges a capacitor which holds the signal until the next electrical pulse. In reverse TAC mode the "sync" signal stops the TAC. This data is then further processed by an analog-to-digital converter (ADC) and a multi-channel analyzer (MCA) to get a data output. To make sure that the decay is not biased towards early-arriving photons, the photon count rate is kept low (usually less than 1% of the excitation rate).
This electrical pulse comes after the second laser pulse excites the molecule to a higher energy state, and a photon is eventually emitted from a single molecule upon returning to its original state. Thus, the longer a molecule takes to emit a photon, the higher the voltage of the resulting pulse. The central concept of this technique is that only a single photon is needed to discharge the capacitor. Thus, this experiment must be repeated many times to gather the full range of delays between excitation and emission of a photon. After each trial, a pre-calibrated computer converts the voltage sent out by the TAC into a time and records the event in a histogram of time since excitation. Since the probability that no molecule will have relaxed decreases with time, a decay curve emerges that can then be analyzed to find out the decay rate of the event.
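A minimal sketch of this bookkeeping (not from the article) is shown below: simulated single-photon delays are histogrammed and a single-exponential lifetime is extracted from the slope of the logarithmic decay. The 4 ns lifetime, photon count, and binning are assumed values, and the instrument response function is ignored for simplicity.

```python
# A minimal TCSPC-style sketch under assumed parameters (no IRF, single exponential).
import numpy as np

rng = np.random.default_rng(0)
true_lifetime_ns = 4.0
delays_ns = rng.exponential(true_lifetime_ns, size=100_000)  # one delay per detected photon

counts, edges = np.histogram(delays_ns, bins=256, range=(0, 50))
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit ln(counts) = ln(N0) - t/tau over bins with enough counts.
mask = counts > 20
slope, intercept = np.polyfit(centers[mask], np.log(counts[mask]), 1)
print(f"fitted lifetime ≈ {-1.0 / slope:.2f} ns")
```

In a real measurement the histogram is the convolution of the true decay with the instrument response function, which is why the IRF described below must be recorded and deconvolved for accurate lifetimes.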
Three curves are associated with the observed decay intensity in a fluorescence decay experiment: the measured data, the instrument response function (IRF), and the calculated decay. The IRF, which represents the shortest time profile the instrument can detect, serves as a reference for accurately deconvolving the measured data. This allows for a more precise determination of the fluorescence decay time by accounting for the system's inherent response. As the term implies, this curve illustrates the response of the instrument to a sample with zero lifetime. Usually, dilute scattering solutions, such as Ludox (colloidal silica) or titanium dioxide, are used to collect this curve. The measured intensity indicates the number of photons detected within a given time interval, while the calculated decay curve, also known as the fitted curve, represents the convolution of the IRF with the impulse response function.
A major complicating factor is that many decay processes involve multiple energy states, and thus multiple rate constants. Though non-linear least squares analysis can usually detect the different rate constants, determining the processes involved is often very difficult and requires the combination of multiple ultra-fast techniques. Even more complicating is the presence of inter-system crossing and other non-radiative processes in a molecule. A limiting factor of this technique is that it is limited to studying energy states that result in fluorescent decay. The technique can also be used to study relaxation of electrons from the conduction band to the valence band in semiconductors.
TCSPC has extensive applications in fluorescence spectroscopy, microscopy (FLIM), and optical tomography. Over the years, this technique has gained significant attention for studying the fluorescence decay of various classes of molecules, including the fluorescence decay of residues in biological systems. The modulation of the fluorescence of the biological sample provides a better understanding of the complex system. TCSPC is widely used to study the intensity decay of green fluorescent proteins (GFP), chlorophyll aggregates in hexane, proteins containing a single fluorescent amino acid, and dinucleotides such as FAD. It is also used to study the bandwidth in semiconductors.
| Physical sciences | Spectroscopy | Chemistry |
4137557 | https://en.wikipedia.org/wiki/Pyeong | Pyeong | A pyeong (abbreviation: py) is a Korean unit of area and floorspace, equal to a square kan or 36 square Korean feet. The ping and tsubo are its equivalent Taiwanese and Japanese units, similarly based on a square bu (ja:步) or ken, equivalent to 36 square Chinese or Japanese feet.
Current use
Korea
In Korea, the period of Japanese occupation produced a pyeong of about 3.3058 m2. It is the standard traditional measure for real estate floorspace, with an average house reckoned as about 25 pyeong, a studio apartment as 8–12 py, and a garret as 1½ py. In South Korea, the unit has been officially banned since 1961 but with little effect prior to the criminalization of its commercial use effective 1 July 2007. Informal use continues, however, including in the form of real estate use of unusual fractions of meters equivalent to unit amounts of pyeong. Real estate listings on major websites such as Daum show measurements in square meters with the pyeong equivalent.
Taiwan
In Taiwan, the Taiwanese ping was introduced during the period of Taiwan under Japanese rule; it remains in fairly common use and is about 3.305 m2.
Japan
In Japan, the usual measure of real estate floorspace is the tatami and the tsubo is reckoned as two tatami. The tatami varies by region but the modern standard is usually taken to be the Nagoya tatami of about 1.653 m2, producing a tsubo of 3.306 m2. It is sometimes reckoned as comprising 10 gō.
China
In China, the metrication of traditional units would produce a ping of 4 m2, but it is almost unknown, with most real estate floorspace simply reckoned in square meters. The longer length of the Hong Kong foot produces a larger ping of almost 5 m2, but it is similarly uncommon.
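For illustration (not part of the article), the approximate conversion factors quoted above can be applied directly; the example areas below are assumed values.

```python
# A minimal sketch converting between these traditional floorspace units and square metres.
PYEONG_M2 = 3.3058   # Korean pyeong (approximate)
PING_M2 = 3.305      # Taiwanese ping (approximate)
TSUBO_M2 = 3.306     # Japanese tsubo, reckoned as two standard tatami (approximate)

def pyeong_to_m2(p):
    return p * PYEONG_M2

def m2_to_pyeong(area_m2):
    return area_m2 / PYEONG_M2

print(f"25 pyeong ≈ {pyeong_to_m2(25):.1f} m²")   # an average house in the text above
print(f"84 m² ≈ {m2_to_pyeong(84):.1f} pyeong")   # an assumed example apartment size
```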
| Physical sciences | Area | Basics and measurement |
1533184 | https://en.wikipedia.org/wiki/Chemical%20decomposition | Chemical decomposition | Chemical decomposition, or chemical breakdown, is the process or effect of simplifying a single chemical entity (normal molecule, reaction intermediate, etc.) into two or more fragments. Chemical decomposition is usually regarded and defined as the exact opposite of chemical synthesis. In short, the chemical reaction in which two or more products are formed from a single reactant is called a decomposition reaction.
The details of a decomposition process are not always well defined. Nevertheless, some activation energy is generally needed to break the involved bonds and, as such, higher temperatures generally accelerate decomposition. The net reaction can be an endothermic process, or, in the case of spontaneous decompositions, an exothermic process.
The stability of a chemical compound is eventually limited when exposed to extreme environmental conditions such as heat, radiation, humidity, or the acidity of a solvent. Because of this, chemical decomposition is often an undesired chemical reaction. However, chemical decomposition can be desirable, such as in various waste treatment processes.
For example, this method is employed for several analytical techniques, notably mass spectrometry, traditional gravimetric analysis, and thermogravimetric analysis. Additionally, decomposition reactions are used today for a number of other reasons in the production of a wide variety of products. One of these is the explosive breakdown reaction of sodium azide (NaN3) into nitrogen gas (N2) and sodium (Na). It is this process which powers the life-saving airbags present in virtually all of today's automobiles.
Decomposition reactions can generally be classed into three categories: thermal, electrolytic, and photolytic decomposition reactions.
Reaction formula
In the breakdown of a compound into its constituent parts, the generalized reaction for chemical decomposition is:
AB → A + B (AB represents the reactant that begins the reaction, and A and B represent the products of the reaction)
An example is the electrolysis of water to the gases hydrogen and oxygen:
2 H2O(l) → 2 H2(g) + O2(g)
Additional examples
An example of a spontaneous (without addition of an external energy source) decomposition is that of hydrogen peroxide which slowly decomposes into water and oxygen (see video at right):
2 H2O2 → 2 H2O + O2
This reaction is one of the exceptions to the endothermic nature of decomposition reactions.
Other reactions involving decomposition do require the input of external energy. This energy can be in the form of heat, radiation, electricity, or light. The latter is the reason some chemical compounds, such as many prescription medicines, are kept and stored in dark bottles, which reduce or eliminate the possibility of light reaching them and initiating decomposition.
When heated, carbonates will decompose. A notable exception is carbonic acid (H2CO3), which needs no heating: commonly seen as the "fizz" in carbonated beverages, carbonic acid will spontaneously decompose over time into carbon dioxide and water. The reaction is written as:
H2CO3 → H2O + CO2
Other carbonates will decompose when heated to produce their corresponding metal oxide and carbon dioxide. The following equation is an example, where M represents the given metal:
MCO3 → MO + CO2
A specific example is that involving calcium carbonate:
CaCO3 → CaO + CO2
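As an illustration of the mass bookkeeping implied by this equation (not from the article), a short sketch using approximate atomic masses:

```python
# A minimal stoichiometry sketch for the calcium carbonate decomposition above.
ATOMIC_MASS = {"Ca": 40.08, "C": 12.011, "O": 15.999}   # approximate g/mol

m_CaCO3 = ATOMIC_MASS["Ca"] + ATOMIC_MASS["C"] + 3 * ATOMIC_MASS["O"]
m_CaO = ATOMIC_MASS["Ca"] + ATOMIC_MASS["O"]
m_CO2 = ATOMIC_MASS["C"] + 2 * ATOMIC_MASS["O"]

sample_g = 100.0                                          # assumed mass of CaCO3 heated
moles = sample_g / m_CaCO3
print(f"{sample_g} g CaCO3 -> {moles * m_CaO:.1f} g CaO + {moles * m_CO2:.1f} g CO2")
```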
Metal chlorates also decompose when heated. In this type of decomposition reaction, a metal chloride and oxygen gas are the products. Here, again, M represents the metal:
2 MClO3 → 2 MCl + 3 O2
A common decomposition of a chlorate is the reaction of potassium chlorate, where potassium chloride and oxygen are the products. This can be written as:
2 KClO3 → 2 KCl + 3 O2
| Physical sciences | Other reactions | Chemistry |
1533521 | https://en.wikipedia.org/wiki/Dreyse%20needle%20gun | Dreyse needle gun | The needle-gun was a 19th-century military breech-loading rifle, as well as the first breech-loading rifle to use a bolt action to open and close the chamber. It was used as the main infantry weapon of the Prussians in the Wars of German Unification. It was invented in 1836 by the German gunsmith Johann Nikolaus von Dreyse (1787–1867), who had been conducting numerous design experiments since 1824.
The name "ignition needle rifle" () was based on its firing pin, since it passed like a needle through the paper cartridge to strike a percussion cap at the base of the bullet. However, to conceal the revolutionary nature of the design, the rifle entered military service in 1841 as the (). It had a rate of fire of about six rounds per minute.
History
The first types of needle gun made by Johann Nikolaus von Dreyse were muzzle-loading, with a firing pin consisting of a long needle driven by a coiled conchoidal spring that fired the internal percussion cap on the base of the sabot. His adoption of the bolt-action breech-loading principle combined with this igniter system gave the rifle its military potential, as these factors allowed a much faster rate of fire.
After successful testing in 1840, the Prussian king Friedrich Wilhelm IV ordered 60,000 of the new rifles. Dreyse set up the factory in Sömmerda with the help of state loans to ramp up production. It was accepted for service in 1841 as the , but only 45,000 units had been produced by 1848. It was used in combat for the first time during the German revolutions of 1848–49 and proved its combat superiority in street fighting during the May Uprising in Dresden in 1849. Many German states subsequently adopted the weapon. The Sömmerda factory could not meet demand and produced only 30,000 rifles a year. Most of the Prussian infantry in the 1850s were still equipped with the obsolete 1839 Model Potsdam musket, a smoothbore weapon whose range and accuracy was far inferior to the French Minié and Austrian Lorenz rifle. The Prussian Army's low level of funding resulted in just 90 battalions being equipped with the weapon in 1855. Dreyse consented to state manufacture of the rifle to increase production. The Royal Prussian Rifle Factory at the Spandau Arsenal began production in 1853, followed by Danzig, and Erfurt. At first, the Spandau factory produced 12,000 Dreyse needle guns a year, rising to 48,000 in 1867.
The British Army evaluated the Dreyse needle gun in 1849–1851. In the British trials, the Dreyse was shown to be capable of six rounds per minute, and to maintain accuracy at . The trials suggested that the Dreyse was "too complicated and delicate" for service use. The French muzzle-loading rifle was judged to be a better weapon, and an improved version was adopted as the Pattern 1851 Minié-type muzzle-loading rifle.
After the Prussian army received a 25% increase in funding and was reformed by Wilhelm I, Albrecht von Roon and Helmuth von Moltke the Elder from 1859 to 1863, the Dreyse needle gun played an important role in the Austro-Prussian victory in the Second Schleswig War against Denmark in 1864. The introduction of cast steel barrels made industrial mass production of the weapon possible in the early 1860s. The new 1862 model and the enhanced M/55 ammunition type expedited the use and widespread adoption of the weapon in the 1860s. The success of German private industry in delivering the necessary amount of armaments for the army marked the definite end of government-owned army workshops. The Prussian Army infantry had 270,000 Dreyse needle guns by the outbreak of the Austro-Prussian War in 1866. The employment of the needle-gun changed military tactics in the 19th century, as a Prussian soldier could fire five (or more) shots, even while lying on the ground, in the time that it took his Austrian muzzle-loading counterpart to reload while standing. Production was ramped up after the war against Austria and when the Franco-Prussian War broke out in 1870, the Prussian Army had 1,150,000 needle guns in its inventory.
In 1867, Romania purchased 20,000 rifles and 11,000 carbines from the Prussian government. These were used to great effect in the Romanian War of Independence.
Sometime in the late 1860s, Japan acquired an unknown number of Model 1862 rifles and bayonets. These were marked with the imperial chrysanthemum stamp. China also acquired Dreyse rifles for the modernisation of their armed forces.
Ammunition and mechanism
The cartridge used with this rifle consisted of the paper case, the bullet, the percussion cap and the black powder charge. The 15.4 mm (0.61 in) bullet was shaped like an acorn, with the broader end forming a point and the primer attached to its base. The bullet was held in a paper case known as a sabot, which separated from the bullet as it exited the muzzle. Between this inner lining and the outer case was the powder charge, consisting of 4.8 g (74 grains) of black powder.
The upper end of the paper case is rolled up and tied. Upon release of the trigger, the point of the needle pierces the rear of the cartridge, passes through the powder and hits the primer fixed to the base of the sabot. Thus the burn-front in the black powder charge passes from the front to the rear. This front-to-rear burn pattern minimizes the effect seen in rear-igniting cartridges where a portion of the powder at the front of the charge is wasted, as it is forced down and out of the barrel and burns in the air as muzzle flash. It also ensures that the whole charge burns under the highest possible pressure, theoretically minimising unburnt residues. Consequently, a smaller charge can be used to obtain the same velocity as a rear-ignited charge of the same bullet calibre and weight. It also increases the handling security of the cartridge, since it is virtually impossible to set the primer off accidentally.
There was also a blank cartridge developed for the needle gun. It was shorter and lighter than the live round, since it lacked the projectile, but was otherwise similar in construction and powder load.
Limitations
British trials in 1849–1851 showed that:
The spring that drove the needle was delicate.
When the needle was dirty, the rifle tended to misfire. Colonel Hawker considered that a new needle was required every 12 shots.
When the gun was heated and foul, operating the bolt required much strength.
The barrel tended to wear at the junction with the cylinder.
The escape of gas at the breech got worse as firing continued.
Its effective range was less than that of the Chassepot, against which it was fielded during the Franco-Prussian War. This was mainly because a sizable amount of gas escaped at the breech when the rifle was fired with a paper cartridge. An improved model, giving greater muzzle velocity and increased speed in loading, was introduced later, but it was replaced shortly thereafter by the Mauser Model 1871 rifle.
The placement of the primer directly behind the bullet meant the firing needle was enclosed in black powder when the gun was fired, causing stress to the pin, which could break over time and render the rifle useless until it could be replaced. Soldiers were provided with two replacement needles for that purpose. The needle could be easily replaced in under 30 seconds, even in the field. Because the rifle used black powder, residue accumulated at the back of the barrel, making cleaning necessary after about 60–80 shots. This was not a large problem because the individual soldier carried fewer cartridges than that and Dreyse created an "air chamber" by having a protruding needle tube. (The Chassepot also had this, but it was more likely to jam after fewer shots because its chamber had a smaller diameter.) A soldier trained before the Austro-Prussian War of 1866 had to finish field cleaning in less than 10 minutes.
Comparison with contemporary rifles
| Technology | Specific firearms | null |
1536137 | https://en.wikipedia.org/wiki/Ammonium%20sulfate | Ammonium sulfate | Ammonium sulfate (American English and international scientific usage; ammonium sulphate in British English), (NH4)2SO4, is an inorganic salt with a number of commercial uses. The most common use is as a soil fertilizer. It contains 21% nitrogen and 24% sulfur.
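As an illustration (not from the article), the quoted nitrogen and sulfur contents follow directly from the molar mass of (NH4)2SO4; the short sketch below checks them with approximate atomic masses.

```python
# A minimal composition check for ammonium sulfate, (NH4)2SO4, from approximate atomic masses.
ATOMIC_MASS = {"N": 14.007, "H": 1.008, "S": 32.06, "O": 15.999}

molar_mass = 2 * (ATOMIC_MASS["N"] + 4 * ATOMIC_MASS["H"]) + ATOMIC_MASS["S"] + 4 * ATOMIC_MASS["O"]
n_fraction = 2 * ATOMIC_MASS["N"] / molar_mass
s_fraction = ATOMIC_MASS["S"] / molar_mass
print(f"M = {molar_mass:.2f} g/mol, N = {n_fraction:.1%}, S = {s_fraction:.1%}")
# Roughly 21% nitrogen and 24% sulfur, consistent with the figures quoted above.
```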
Uses
Agriculture
The primary use of ammonium sulfate is as a fertilizer for alkaline soils. In the soil, the ammonium ion is released and forms a small amount of acid, lowering the pH of the soil, while contributing essential nitrogen for plant growth. One disadvantage to the use of ammonium sulfate is its low nitrogen content relative to ammonium nitrate, which elevates transportation costs.
It is also used as an agricultural spray adjuvant for water-soluble insecticides, herbicides, and fungicides. There, it functions to bind iron and calcium cations that are present in both well water and plant cells. It is particularly effective as an adjuvant for 2,4-D (amine), glyphosate, and glufosinate herbicides.
Laboratory use
Ammonium sulfate precipitation is a common method for protein purification by precipitation. As the ionic strength of a solution increases, the solubility of proteins in that solution decreases. Being extremely soluble in water, ammonium sulfate can "salt out" (precipitate) proteins from aqueous solutions. Precipitation by ammonium sulfate is a result of a reduction in solubility rather than protein denaturation, thus the precipitated protein can be resolubilized through the use of standard buffers. Ammonium sulfate precipitation provides a convenient and simple means to fractionate complex protein mixtures.
In the analysis of rubber lattices, volatile fatty acids are analyzed by precipitating rubber with a 35% ammonium sulfate solution, which leaves a clear liquid from which volatile fatty acids are regenerated with sulfuric acid and then distilled with steam. Selective precipitation with ammonium sulfate, opposite to the usual precipitation technique which uses acetic acid, does not interfere with the determination of volatile fatty acids.
Food additive
As a food additive, ammonium sulfate is considered generally recognized as safe (GRAS) by the U.S. Food and Drug Administration, and in the European Union it is designated by the E number E517. It is used as an acidity regulator in flours and breads.
Other uses
Ammonium sulfate is a precursor to other ammonium salts, especially ammonium persulfate.
Ammonium sulfate is listed as an ingredient for many United States vaccines per the Centers for Disease Control.
Ammonium sulfate has also been used in flame retardant compositions acting much like diammonium phosphate. As a flame retardant, it increases the combustion temperature of the material, decreases maximum weight loss rates, and causes an increase in the production of residue or char.
Preparation
Ammonium sulfate is made by treating ammonia with sulfuric acid:
A mixture of ammonia gas and water vapor is introduced into a reactor that contains a saturated solution of ammonium sulfate and about 2% to 4% of free sulfuric acid at 60 °C. Concentrated sulfuric acid is added to keep the solution acidic, and to retain its level of free acid. The heat of reaction keeps reactor temperature at 60 °C. Dry, powdered ammonium sulfate may be formed by spraying sulfuric acid into a reaction chamber filled with ammonia gas. The heat of reaction evaporates all water present in the system, forming a powdery salt. Approximately 6,000 million tons were produced in 1981.
Ammonium sulfate also is manufactured from gypsum (CaSO4·2H2O). Finely divided gypsum is added to an ammonium carbonate solution. Calcium carbonate precipitates as a solid, leaving ammonium sulfate in the solution.
Ammonium sulfate occurs naturally as the rare mineral mascagnite in volcanic fumaroles and due to coal fires on some dumps.
Ammonium sulfate is a byproduct in the production of methyl methacrylate.
Properties
Ammonium sulfate becomes ferroelectric at temperatures below –49.5 °C. At room temperature it crystallises in the orthorhombic system, with cell dimensions of a = 7.729 Å, b = 10.560 Å, c = 5.951 Å. When chilled into the ferroelectric state, the symmetry of the crystal changes to space group Pna21.
Reactions
Ammonium sulfate decomposes upon heating, first forming ammonium bisulfate. Heating at higher temperatures results in decomposition into ammonia, nitrogen, sulfur dioxide, and water. The ammonia produced has a pungent smell and is toxic.
As a salt of a strong acid (H2SO4) and a weak base (NH3), its solution is acidic; the pH of a 0.1 M solution is 5.5. In aqueous solution the reactions are those of the ammonium and sulfate ions. For example, addition of barium chloride precipitates out barium sulfate. The filtrate on evaporation yields ammonium chloride.
Ammonium sulfate forms many double salts (ammonium metal sulfates) when its solution is mixed with equimolar solutions of metal sulfates and the solution is slowly evaporated. With trivalent metal ions, alums such as ferric ammonium sulfate are formed. Double metal sulfates include ammonium cobaltous sulfate, ferrous diammonium sulfate, and ammonium nickel sulfate, which are known as Tutton's salts, as well as ammonium ceric sulfate. Anhydrous double sulfates of ammonium also occur in the langbeinite family.
Airborne particles of evaporated ammonium sulfate comprise approximately 30% of fine particulate pollution worldwide.
It reacts with additional sulfuric acid to give triammonium hydrogen disulfate.
Legislation and control
In November 2009, a ban on ammonium sulfate, ammonium nitrate and calcium ammonium nitrate fertilizers was imposed in the former Malakand Division—comprising the Upper Dir, Lower Dir, Swat, Chitral and Malakand districts of the North West Frontier Province (NWFP) of Pakistan, by the NWFP government, following reports that they were used by militants to make explosives. In January 2010, these substances were also banned in Afghanistan for the same reason.
| Physical sciences | Salts | null |
1539785 | https://en.wikipedia.org/wiki/Dark%20matter%20halo | Dark matter halo | In modern models of physical cosmology, a dark matter halo is a basic unit of cosmological structure. It is a hypothetical region that has decoupled from cosmic expansion and contains gravitationally bound matter.
A single dark matter halo may contain multiple virialized clumps of dark matter bound together by gravity, known as subhalos.
Modern cosmological models, such as ΛCDM, propose that dark matter halos and subhalos may contain galaxies. The dark matter halo of a galaxy envelops the galactic disc and extends well beyond the edge of the visible galaxy. Thought to consist of dark matter, halos have not been observed directly. Their existence is inferred through observations of their effects on the motions of stars and gas in galaxies and gravitational lensing. Dark matter halos play a key role in current models of galaxy formation and evolution. Theories that attempt to explain the nature of dark matter halos with varying degrees of success include cold dark matter (CDM), warm dark matter, and massive compact halo objects (MACHOs).
Rotation curves as evidence of a dark matter halo
The presence of dark matter (DM) in the halo is inferred from its gravitational effect on a spiral galaxy's rotation curve. Without large amounts of mass throughout the (roughly spherical) halo, the rotational velocity of the galaxy would decrease at large distances from the galactic center, just as the orbital speeds of the outer planets decrease with distance from the Sun. However, observations of spiral galaxies, particularly radio observations of line emission from neutral atomic hydrogen (known in astronomical parlance as the 21 cm hydrogen line or H I line, pronounced "H one"), show that the rotation curve of most spiral galaxies flattens out, meaning that rotational velocities do not decrease with distance from the galactic center. The absence of any visible matter to account for these observations implies either that unobserved (dark) matter, first proposed by Ken Freeman in 1970, exists, or that the theory of motion under gravity (general relativity) is incomplete. Freeman noticed that the expected decline in velocity was not present in NGC 300 nor M33, and considered an undetected mass to explain it. The dark matter hypothesis has been reinforced by several studies.
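As an illustration of this argument (not from the article), the sketch below contrasts the Keplerian decline expected from the luminous mass alone with the roughly flat curve obtained when an extended halo whose enclosed mass grows with radius is added; all masses and radii are assumed, illustrative values.

```python
# A minimal rotation-curve sketch under assumed, illustrative masses and radii.
import numpy as np

G = 4.30091e-6              # gravitational constant in kpc (km/s)^2 / Msun
M_visible = 9e10            # assumed luminous mass (solar masses)
r = np.linspace(2, 40, 20)  # galactocentric radius in kpc

v_visible = np.sqrt(G * M_visible / r)         # point-mass (Keplerian) approximation

# Simple isothermal-like halo: enclosed halo mass grows linearly with r, so v stays ~flat.
halo_mass_per_kpc = 1.0e10                     # assumed Msun per kpc
v_with_halo = np.sqrt(G * (M_visible + halo_mass_per_kpc * r) / r)

for ri, v1, v2 in zip(r[::5], v_visible[::5], v_with_halo[::5]):
    print(f"r = {ri:5.1f} kpc: visible only {v1:6.1f} km/s, with halo {v2:6.1f} km/s")
```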
Formation and structure of dark matter halos
The formation of dark matter halos is believed to have played a major role in the early formation of galaxies. During initial galactic formation, the temperature of the baryonic matter should have still been much too high for it to form gravitationally self-bound objects, thus requiring the prior formation of dark matter structure to add additional gravitational interactions. The current hypothesis for this is based on cold dark matter (CDM) and its formation into structure early in the universe.
The hypothesis for CDM structure formation begins with density perturbations in the Universe that grow linearly until they reach a critical density, after which they would stop expanding and collapse to form gravitationally bound dark matter halos. The spherical collapse framework analytically models the formation and growth of such halos. These halos would continue to grow in mass (and size), either through accretion of material from their immediate neighborhood, or by merging with other halos. Numerical simulations of CDM structure formation have been found to proceed as follows: A small volume with small perturbations initially expands with the expansion of the Universe. As time proceeds, small-scale perturbations grow and collapse to form small halos. At a later stage, these small halos merge to form a single virialized dark matter halo with an ellipsoidal shape, which reveals some substructure in the form of dark matter sub-halos.
The use of CDM overcomes issues associated with the normal baryonic matter because it removes most of the thermal and radiative pressures that were preventing the collapse of the baryonic matter. The fact that the dark matter is cold compared to the baryonic matter allows the DM to form these initial, gravitationally bound clumps. Once these subhalos formed, their gravitational interaction with baryonic matter is enough to overcome the thermal energy, and allow it to collapse into the first stars and galaxies. Simulations of this early galaxy formation match the structure observed by galactic surveys as well as observations of the Cosmic Microwave Background.
Density profiles
A commonly used model for galactic dark matter halos is the pseudo-isothermal halo:
ρ(r) = ρ₀ / [1 + (r/r_c)²]

where ρ₀ denotes the finite central density and r_c the core radius. This provides a good fit to most rotation curve data. However, it cannot be a complete description, as the enclosed mass fails to converge to a finite value as the radius tends to infinity. The isothermal model is, at best, an approximation. Many effects may cause deviations from the profile predicted by this simple model. For example, (i) collapse may never reach an equilibrium state in the outer region of a dark matter halo, (ii) non-radial motion may be important, and (iii) mergers associated with the (hierarchical) formation of a halo may render the spherical-collapse model invalid.
Numerical simulations of structure formation in an expanding universe lead to the empirical NFW (Navarro–Frenk–White) profile:
ρ(r) = δ_c ρ_crit / [ (r/r_s) (1 + r/r_s)² ]

where r_s is a scale radius, δ_c is a characteristic (dimensionless) density, and ρ_crit = 3H²/(8πG) is the critical density for closure. The NFW profile is called 'universal' because it works for a large variety of halo masses, spanning four orders of magnitude, from individual galaxies to the halos of galaxy clusters. This profile has a finite gravitational potential even though the integrated mass still diverges logarithmically. It has become conventional to refer to the mass of a halo at a fiducial point that encloses an overdensity 200 times greater than the critical density of the universe, though mathematically the profile extends beyond this notational point. It was later deduced that the density profile depends on the environment, with the NFW appropriate only for isolated halos. NFW halos generally provide a worse description of galaxy data than does the pseudo-isothermal profile, leading to the cuspy halo problem.
Higher resolution computer simulations are better described by the Einasto profile:
ρ(r) = ρ_e exp{ −d_n [ (r/r_e)^(1/n) − 1 ] }

where r is the spatial (i.e., not projected) radius. The term d_n is a function of n such that ρ_e is the density at the radius r_e that defines a volume containing half of the total mass. While the addition of a third parameter provides a slightly improved description of the results from numerical simulations, it is not observationally distinguishable from the two-parameter NFW halo, and does nothing to alleviate the cuspy halo problem.
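For illustration (not from the article), the NFW profile and its analytically known enclosed mass can be evaluated directly; the scale radius, characteristic overdensity, and critical density used below are assumed, order-of-magnitude values.

```python
# A minimal NFW sketch with assumed, illustrative parameters.
import numpy as np

rho_crit = 1.4e2          # critical density, roughly, in Msun / kpc^3 (assumed value)
delta_c = 2.0e4           # assumed characteristic overdensity
r_s = 20.0                # assumed scale radius in kpc

def rho_nfw(r):
    x = r / r_s
    return delta_c * rho_crit / (x * (1 + x) ** 2)

def mass_nfw(r):
    """Mass enclosed within r: 4*pi*delta_c*rho_crit*r_s^3 * [ln(1+x) - x/(1+x)]."""
    x = r / r_s
    return 4 * np.pi * delta_c * rho_crit * r_s**3 * (np.log(1 + x) - x / (1 + x))

for r in (10, 100, 1000, 10000):   # the enclosed mass keeps growing (logarithmically)
    print(f"r = {r:6d} kpc: rho = {rho_nfw(r):9.3e} Msun/kpc^3, M(<r) = {mass_nfw(r):9.3e} Msun")
```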
Shape
The collapse of overdensities in the cosmic density field is generally aspherical. So, there is no reason to expect the resulting halos to be spherical. Even the earliest simulations of structure formation in a CDM universe emphasized that the halos are substantially flattened. Subsequent work has shown that halo equidensity surfaces can be described by ellipsoids characterized by the lengths of their axes.
Because of uncertainties in both the data and the model predictions, it is still unclear whether the halo shapes inferred from observations are consistent with the predictions of ΛCDM cosmology.
Halo substructure
Up until the end of the 1990s, numerical simulations of halo formation revealed little substructure. With increasing computing power and better algorithms, it became possible to use greater numbers of particles and obtain better resolution. Substantial amounts of substructure are now expected. When a small halo merges with a significantly larger halo it becomes a subhalo orbiting within the potential well of its host. As it orbits, it is subjected to strong tidal forces from the host, which cause it to lose mass. In addition the orbit itself evolves as the subhalo is subjected to dynamical friction which causes it to lose energy and angular momentum to the dark matter particles of its host. Whether a subhalo survives as a self-bound entity depends on its mass, density profile, and its orbit.
Angular momentum
As originally pointed out by Hoyle and first demonstrated using numerical simulations by Efstathiou & Jones, asymmetric collapse in an expanding universe produces objects with significant angular momentum.
Numerical simulations have shown that the spin parameter distribution for halos formed by dissipation-less hierarchical clustering is well fit by a log-normal distribution, the median and width of which depend only weakly on halo mass, redshift, and cosmology:
p(λ) dλ = [1 / (√(2π) σ)] exp[ −ln²(λ/λ̄) / (2σ²) ] dλ/λ

with median λ̄ and width σ determined from the simulations. At all halo masses, there is a marked tendency for halos with higher spin to be in denser regions and thus to be more strongly clustered.
Milky Way dark matter halo
The visible disk of the Milky Way Galaxy is thought to be embedded in a much larger, roughly spherical halo of dark matter. The dark matter density drops off with distance from the galactic center. It is now believed that about 95% of the galaxy is composed of dark matter, a type of matter that does not seem to interact with the rest of the galaxy's matter and energy in any way except through gravity. The luminous matter makes up approximately solar masses. The dark matter halo is likely to include around to solar masses of dark matter. A 2014 Jeans analysis of stellar motions calculated the dark matter density at the Sun's distance from the galactic centre to be 0.0088 (+0.0024, −0.0018) solar masses per cubic parsec.
| Physical sciences | Basics_2 | Astronomy |
9337873 | https://en.wikipedia.org/wiki/Madtsoiidae | Madtsoiidae | Madtsoiidae is an extinct family of mostly Gondwanan snakes with a fossil record extending from early Cenomanian (Upper Cretaceous) to late Pleistocene strata located in South America, Africa, India, Australia and Southern Europe. Madtsoiidae include very primitive snakes, which like extant boas and pythons would likely dispatch their prey by constriction. Genera include some of the longest snakes known, such as Vasuki, measuring at least long, and the Australian Wonambi and Yurlunggur. As a grouping of basal forms, the composition and even the validity of Madtsoiidae are in a state of flux as new pertinent finds are described, with more recent evidence suggesting that it is paraphyletic as previously defined.
Although madtsoiids persisted in Australia until the Pleistocene, they largely went extinct elsewhere during the Eocene. However, some species persisted in South America and India through the Oligocene.
Description
Madtsoiidae was first classified as a subfamily of Boidae, Madtsoiinae, in Hoffstetter (1961). Further study and new finds allowed ranking the group as a distinct family in Linnaean systems. With the recent use of cladistics to unravel phylogeny, various analyses have posited Madtsoiidae as a likely clade within Serpentes, or possible paraphyletic stem group outside Serpentes and within a more inclusive Ophidia.
Madtsoiid snakes ranged in size from less than (estimated total length) to over , and are thought to have been constrictors analogous to modern pythons and boas, but with more primitive jaw structures less highly adapted for swallowing large prey. Specific anatomical features diagnose members of this family, such as the presence of hypapophyses only in the anterior trunk, and middle and posterior trunk vertebrae that possess a moderately or well-developed haemal keel, except for a few near the cloacal region, often with short laterally paired projections on the posterior part of the keel. Also, all trunk and caudal vertebrae have at least one parazygantral foramen, sometimes several, located in a more or less distinct fossa lateral to each zygantral facet. Additional features are the absence of prezygapophyseal processes, the presence of paracotylar foramina, and relatively wide diapophyses, exceeding the width across the prezygapophyses at least in the posterior trunk vertebrae. (Scanlon 2005)
Like most fossil snakes, the majority of madtsoiids are known only from isolated vertebrae, but several (Madtsoia bai, M. camposi, Wonambi naracoortensis, Nanowana spp., unnamed Yurlunggur spp., Najash rionegrina) have associated or articulated parts of skeletons. Of the genera listed below, all have been referred to Madtsoiidae in all recent classifications except Najash rionegrina, which is included here based on diagnostic vertebral characters described by Apesteguía and Zaher (2006). These authors did not include Najash among madtsoiids because they consider that madtsoiids are a paraphyletic assemblage of basal macrostomatans related to Madtsoia bai and, consequently, not related to the Cretaceous alethinophidians from southern continents.
Rieppel et al. (2002) classified Wonambi naracoortensis within the extant radiation (crown group) of snakes as Macrostomata incertae sedis, but many of their character state attributions for this species have been criticised or refuted by Scanlon (2005) and the better-preserved skulls of Yurlunggur sp./spp. have numerous characters apparently more plesiomorphic than any macrostomatans (Scanlon, 2006). The partial skull attributed to Najash rionegrina (Apesteguía and Zaher 2006) resembles that of the non-madtsoiid Dinilysia patagonica, and vertebrae support that they are related. The type material of Najash is the only possible madtsoiid specimen retaining evidence of pelvic and hindlimb elements, which are claimed to be more plesiomorphic than other Cretaceous limbed snakes, such as Pachyrhachis, Haasiophis or Eupodophis, in retaining a sacro-iliac contact and well-developed limbs, with a huge and well-defined trochanter. The sacro iliac contact is perhaps misleadingly described by Apesteguía and Zaher as unique possession of a sacrum, whereas it has rarely been questioned that the cloacal vertebrae in snakes are homologous to the sacrals of limbed squamates (i.e. the sacrum is present but has lost contact with the reduced ilia in other taxa). It would be unsurprising if other madtsoiids also possessed hindlimbs as complete as those of Najash.
Several madtsoiid genera have been named using indigenous words for legendary Rainbow Serpents or dragons, including Wonambi (Pitjantjatjara), Yurlunggur (Yolngu) and Nanowana (Ancient Greek nano-, 'dwarf' + Warlpiri Wana) in Australia, and Herensugea (Basque) in Europe. G.G. Simpson (1933) apparently started this trend by compounding Madtsoia from indigenous roots. In this particular case these originated from the Tehuelche language, although the reference made was geographic rather than mythological, the derivation being from that language's terms mad, "valley" and tsoi, "cow" as a rough translation from Spanish name of the type locality, Cañadón Vaca.
A 2022 morphological study found Madtsoiidae to be paraphyletic, with Sanajeh being found to be the most basal member of the Ophidia, whereas the Cenozoic Australian madtsoiids were basal alethinophidians.
Classification
Gigantophis Andrews, 1901
Gigantophis garstini Andrews, 1901 (Andrews, 1906; Hoffstetter, 1961b; Paleogene, Late Eocene; Egypt (Birket Qarun and Qasr el-Sagha Formations), Libya)
Madtsoia Simpson, 1933
Madtsoia bai Simpson, 1933 (Paleogene, Early Eocene Sarmiento Formation; Argentina)
Madtsoia cf. M. bai (Simpson, 1935; Hoffstetter, 1960; Paleogene, Late Paleocene Las Flores Formation; Argentina)
Madtsoia madagascariensis Hoffstetter, 1961a (Piveteau, 1933; Cretaceous, Maastrichtian Maevarano Formation; Madagascar)
Madtsoia aff. madagascariensis (de Broin et al., 1974; Cretaceous, Coniacian or Santonian In Beceten Formation, Niger)
Madtsoia camposi Rage, 1998 (Paleogene, middle Paleocene Itaboraí Formation; Brazil)
Madtsoia pisdurensis Mohabey et al., 2011 (Cretaceous, Maastrichtian Lameta Formation; India)
Wonambi Smith, 1976
Wonambi naracoortensis Smith, 1976 (Scanlon and Lee, 2000; Scanlon, 2005; Neogene, Pliocene to Pleistocene; Australia)
Wonambi barriei Scanlon in Scanlon and Lee, 2000 (Neogene, early Miocene; Australia)
Patagoniophis Albino, 1986
Patagoniophis parvus Albino, 1986 (Cretaceous, Campanian or Maastrichtian Los Alamitos Formation; Argentina)
Patagoniophis australiensis Scanlon, 2005 (Scanlon, 1993; Paleogene, early Eocene; Australia)
Alamitophis Albino, 1986
Alamitophis argentinus Albino, 1986 (Cretaceous, Campanian or Maastrichtian Los Alamitos and La Colonia Formations; Argentina)
Alamitophis elongatus Albino, 1994 (Cretaceous, Campanian or Maastrichtian Allen Formation; Argentina)
Alamitophis tingamarra Scanlon, 2005 (Scanlon, 1993; Paleogene, early Eocene; Australia)
Rionegrophis Albino, 1986
Rionegrophis madtsoioides Albino, 1986 (Cretaceous, Campanian or Maastrichtian Los Alamitos Formation; Argentina)
Yurlunggur Scanlon, 1992
Yurlunggur camfieldensis Scanlon, 1992 (Neogene, middle Miocene Bullock Creek (Northern Territory); Australia)
Yurlunggur spp. (Scanlon, 2004; 2006; Paleogene-Neogene, Oligocene to Miocene; Australia)
Herensugea Rage, 1996
Herensugea caristiorum Rage, 1996 (Cretaceous, Campanian or Maastrichtian Vitoria Formation; Spain)
Nanowana Scanlon, 1997
Nanowana godthelpi Scanlon, 1997 (Neogene, early-to-middle Miocene Australian Fossil Mammal Sites (Riversleigh); Australia)
Nanowana schrenki Scanlon, 1997 (Neogene, early-to-middle Miocene; Australia)
Sanajeh Wilson et al., 2010
Sanajeh indicus Wilson et al., 2010 (Cretaceous, Maastrichtian Lameta Formation; India)
Menarana Laduke et al., 2010
Menarana nosymena Laduke et al., 2010 (Late Cretaceous, Maastrichtian Maevarano Formation; Madagascar)
Menarana laurasiae Rage, 1996 (Astibia et al., 1990; Cretaceous, Campanian or Maastrichtian; Spain)
Nidophis Vasile et al., 2013
Nidophis insularis Vasile et al., 2013 (Late Cretaceous, Maastrichtian Densus-Ciula Formation; Romania)
Adinophis Pritchard et al., 2014
Adinophis fisaka Pritchard et al., 2014 (Late Cretaceous, Maastrichtian Maevarano Formation; Madagascar)
Platyspondylophis Smith et al., 2016
Platyspondylophis tadkeshwarensis Smith et al., 2016 (Paleogene, Eocene Cambay Shale; India)
Eomadtsoia Gómez et al., 2019
Eomadtsoia ragei Gómez et al., 2019 (Cretaceous, Maastrichtian La Colonia Formation; Argentina)
Powellophis Garberoglio et al., 2022
Powellophis andina Garberoglio et al., 2022 (Paleogene, Paleocene Mealla Formation; Argentina)
Vasuki Datta & Bajpai, 2024
Vasuki indicus Datta & Bajpai, 2024 (Paleogene, Eocene Naredi Formation; India)
Unnamed specimens
Madtsoiidae indet. (Rage, 1987; Paleogene, Paleocene; Morocco)
Madtsoiidae indet. (Werner and Rage, 1994, Rage and Werner 1999; Cretaceous, Cenomanian; Sudan)
?Madtsoiid (Rage and Prasad, 1992; Cretaceous, Maastrichtian; India)
?Madtsoiid (Rage, 1991; Paleogene, early Paleocene Santa Lucía Formation; Bolivia)
?Madtsoiidae indet. cf. Madtsoia sp. (Scanlon, 2005; Paleogene, early Eocene; Australia)
Madtsoiidae indet. (Folie and Codrea, 2005; Cretaceous, Maastrichtian; Romania)
Madtsoiidae nov. (Gomez and Baez, 2006; Cretaceous, late Campanian or early Maastrichtian; Argentina)
Madtsoiidae indet. (Wazir et al., 2022; late Oligocene; India)
Phylogeny
According to a cladistic analysis by Scanlon (2006), Wonambi and Yurlunggur, as representative genera of Madtsoiidae, form a monophyletic assemblage. However, as Madtsoia was not included in the analysis, its grouping in the same family remains questionable.
| Biology and health sciences | Prehistoric squamates | Animals |
9339120 | https://en.wikipedia.org/wiki/Magnesium%20citrate | Magnesium citrate | Magnesium citrates are metal-organic compounds (salts) formed from citrate and magnesium ions. One form is the 1:1 salt of magnesium with citric acid (one magnesium atom per citrate molecule). It contains 11.33% magnesium by weight. Magnesium citrate (sensu lato) is used medicinally as a saline laxative and to empty the bowel before major surgery or a colonoscopy. It is available without a prescription, both as a generic and under various brand names. It is also used in pill form as a magnesium dietary supplement. As a food additive, magnesium citrate is used to regulate acidity and is known as E number E345.
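The quoted magnesium content can be checked with a short mass-fraction calculation. This is only a sketch: it assumes the anhydrous 1:1 salt has the empirical formula MgC6H6O7 (one magnesium per citrate unit, implied by the 1:1 ratio above but not stated explicitly here) and uses standard atomic masses.

\[ w_{\mathrm{Mg}} = \frac{M(\mathrm{Mg})}{M(\mathrm{MgC_6H_6O_7})} = \frac{24.305}{24.305 + 6(12.011) + 6(1.008) + 7(15.999)} = \frac{24.305}{214.41} \approx 0.113 \]

i.e. about 11.3% magnesium by weight, consistent with the 11.33% figure quoted above; hydrated forms of the salt would contain a correspondingly lower percentage.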
Structures
The structures of solid magnesium citrates have been characterized by X-ray crystallography. In the 1:1 salt, only one carboxylate of citrate is deprotonated. The other form of magnesium citrate consists of the citrate dianion (both carboxylic acids are deprotonated). Thus, it is clear that the name "magnesium citrate" is ambiguous and sometimes may refer to other salts such as trimagnesium dicitrate, which has a magnesium:citrate ratio of 3:2, or monomagnesium dicitrate, with a ratio of 1:2, or to a mixture of two or three of the salts of magnesium and citric acid.
Mechanism of action
Magnesium citrate works by attracting water through the tissues by a process known as osmosis. Once in the intestine, it can attract enough water into the intestine to induce defecation. The additional water stimulates bowel motility. This means it can also be used to treat rectal and colon problems. Magnesium citrate functions best on an empty stomach, and should always be followed with a full (eight-ounce or 250 ml) glass of water or juice to help counteract water loss and aid in absorption. Magnesium citrate solutions generally produce bowel movement in one-half to three hours.
Use and dosage
The maximum upper tolerance limit (UTL) for magnesium in supplement form for adults is 350 mg of elemental magnesium per day, according to the National Institutes of Health (NIH). In addition, according to the NIH, the total dietary requirement for magnesium from all sources (in other words, food and supplements) is 320–420 mg of elemental magnesium per day, though there is no UTL for dietary magnesium.
Laxative
Magnesium citrate is used as a laxative agent. It is not recommended for use in children and infants two years of age or less.
Magnesium deficiency treatment
Although less common, as a magnesium supplement the citrate form is sometimes used because it is believed to be more bioavailable than other common pill forms, such as magnesium oxide. But, according to one study, magnesium gluconate was found to be marginally more bioavailable than even magnesium citrate.
Potassium-magnesium citrate, as a supplement in pill form, is useful for the prevention of kidney stones.
Side effects
Magnesium citrate is generally not a harmful substance, but care should be taken by consulting a healthcare professional if any adverse health problems are suspected or experienced. Extreme magnesium overdose can result in serious complications such as slow heartbeat, low blood pressure, nausea, drowsiness, etc. If severe enough, an overdose can even result in coma or death. However, a moderate overdose will be excreted through the kidneys, unless one has serious kidney problems. Rectal bleeding or failure to have a bowel movement after use could be signs of a serious condition.
| Physical sciences | Citrates | Chemistry |
9348093 | https://en.wikipedia.org/wiki/Energy%20engineering | Energy engineering | Energy engineering is a multidisciplinary field of engineering that focuses on optimizing energy systems, developing renewable energy technologies, and improving energy efficiency to meet the world's growing demand for energy in a sustainable manner. It encompasses areas such as energy harvesting and storage, energy conversion, energy materials, energy systems, energy efficiency, energy services, facility management, plant engineering, energy modelling, and environmental compliance. As one of the most recent engineering disciplines to emerge, energy engineering plays a critical role in addressing global challenges like climate change, carbon reduction, and the transition from fossil fuels to renewable energy sources and sustainable energy.
Energy engineering is one of the most recent engineering disciplines to emerge. Energy engineering combines knowledge from the fields of physics, math, and chemistry with economic and environmental engineering practices. Energy engineers apply their skills to increase efficiency and further develop renewable sources of energy. The main job of energy engineers is to find the most efficient and sustainable ways to operate buildings and manufacturing processes. Energy engineers audit the use of energy in those processes and suggest ways to improve the systems. This may mean suggesting advanced lighting, better insulation, and more efficient heating and cooling systems for buildings. Although an energy engineer is concerned about obtaining and using energy in the most environmentally friendly ways, their field is not limited to strictly renewable energy like hydro, solar, biomass, or geothermal. Energy engineers are also employed by the fields of oil and natural gas extraction.
Purpose
The primary purpose of energy engineering is to optimize the production and use of energy resources while minimizing energy waste and reducing environmental impact. This discipline is vital for designing systems that consume less energy, meet carbon reduction targets, and improve the energy efficiency of processes in industrial, commercial, and residential sectors. In building design, heavy consideration is given to HVAC, lighting, and refrigeration, both to reduce energy loads and to increase the efficiency of current systems. Energy engineering is increasingly seen as a major step forward in meeting carbon reduction targets. Since buildings and houses consume over 40% of the energy used in the United States, the services an energy engineer performs are in demand.
History
Human civilizations have long relied on the conversion of energy for various purposes, from the use of fire to the development of water wheels, windmills, and, eventually, electricity generation. The formalization of energy engineering began during the industrial revolution and accelerated in the mid-20th century with advancements in electrical power systems, nuclear energy, and renewable energy technologies. The oil crisis of 1973 highlighted the need for increased energy efficiency and energy independence, leading to the establishment of new government programs and industry standards. In addition, the energy crisis of 1979 brought to light the need to get more work out of less energy. The United States government passed several laws to promote increased energy efficiency, such as United States public law 94-413, the Federal Clean Car Incentive Program.
Power engineering
Power engineering, often viewed as a subset of electrical engineering, focuses on the generation, transmission, distribution, and utilization of electrical power. This subfield covers critical infrastructure such as power plants, electric grids, and energy storage systems, ensuring the efficient and reliable delivery of energy across various sectors. Emerging technologies in power engineering include the development of smart grids, microgrids, and advanced energy storage systems like lithium-ion batteries and hydrogen fuel cells, which are central to the future of renewable energy integration.
Leadership in Energy and Environmental Design
Leadership in Energy and Environmental Design (LEED) is a program created by the United States Green Building Council (USGBC) in March 2000. LEED is a program that encourages green building and promotes sustainability in the construction of buildings and the efficiency of the utilities in the buildings.
In 2012 the United States Green Building Council asked the independent firm Booz Allen Hamilton to conduct a study on the effectiveness of the LEED program. "This study confirmed that green buildings generate substantial energy savings. From 2000–2008, green construction and renovation generated $1.3 billion in energy savings. Of that $1.3 billion, LEED-certified buildings accounted for $281 million." The study also found that green construction as a whole supported 2.4 million jobs.
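Taking the quoted figures at face value, LEED-certified buildings accounted for roughly a fifth of the reported savings; this is a simple proportion worked from the numbers above, not a figure reported in the study itself.

\[ \frac{\$281\ \text{million}}{\$1{,}300\ \text{million}} \approx 0.216 \approx 22\% \]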
Energy efficiency
Energy efficiency can be viewed in two ways: either more work is done with the same amount of energy, or the same amount of work is accomplished with less energy. One way to get more work out of less energy is to "Reduce, Reuse, and Recycle" the materials used in daily life. Advances in technology have also created new uses for waste, such as waste-to-energy facilities that convert solid waste into liquid fuels for burning through gasification or pyrolysis. The Environmental Protection Agency stated that the United States produced 250 million tons of municipal waste in 2010; of that amount, roughly 54% was sent to landfills, 33% was recycled, and 13% went to energy recovery plants. European countries that pay more for fuel, such as Denmark, where the price of gas neared in 2010, have more fully developed waste-to-energy facilities. In 2010 Denmark sent 7% of its waste to landfills, recycled 69%, and sent 24% to waste-to-energy facilities. Several other developed Western European countries have also taken energy engineering into consideration; one example is Germany's "Energiewende", a policy that set the goal of meeting 80% of electricity needs from renewable sources by 2050.
Statistics
As of 2023, the median annual salary for energy engineers in the U.S. ranges from $75,000 to $95,000, depending on experience and location. Energy engineers with expertise in renewable energy and energy storage tend to receive higher salaries due to the growing demand for sustainable solutions. The gender distribution in the field remains heavily skewed, with around 80% of engineers being male, though efforts to increase diversity are underway through scholarships and mentorship programs. The job market for energy engineers is expected to grow rapidly over the next decade, driven by the shift towards clean energy and sustainable solutions to modern climate issues.
Education
To become an energy engineer, a bachelor's degree in energy engineering or related fields such as mechanical, electrical, or environmental engineering is typically required. Many universities now offer specialized energy engineering programs with a focus on renewable energy, energy storage, and grid management. Advanced certifications like the Certified Energy Manager (CEM) credential, offered by the Association of Energy Engineers, and graduate programs in sustainable energy systems further improve career prospects. In addition, several universities across the world have established departments or centers offering energy engineering degrees to better prepare future engineers for their careers. One such program is the IEP PEM Certification offered at Virginia Tech University.
Emerging Technologies
Emerging technologies in energy engineering are reshaping the way energy is produced, stored, and consumed. Innovations such as next-generation solar panels, modern wind turbine innovations, energy storage systems (such as flow batteries and hydrogen fuel cells), and smart grid technologies are paving the way for a more sustainable energy future. These technologies are critical in reducing reliance on fossil fuels and ensuring the stability of renewable energy systems. Other advances include artificial intelligence and machine learning applications for optimizing energy use in real-time, and carbon capture and storage (CCS) systems to mitigate emissions from existing power plants.
Energy Engineering in Policy and Society
Energy engineers play a key role in shaping energy policies and regulations worldwide. Their expertise is essential in setting standards for energy efficiency, renewable energy integration, and reducing carbon footprints. Global initiatives like the Paris Agreement and the European Green Deal are influencing energy engineering practices, pushing the field toward more sustainable and equitable energy solutions. Additionally, energy engineers are increasingly involved in public and private sector collaborations, working with governments and corporations to design and implement large-scale energy infrastructure projects which would have both societal and political impacts.
| Technology | Disciplines | null |
47325 | https://en.wikipedia.org/wiki/Perch | Perch | Perch is a common name for freshwater fish from the genus Perca, which belongs to the family Percidae of the large order Perciformes. The name ultimately derives from Greek. The type species of this genus is the European perch (P. fluviatilis).
Many species of freshwater game fish more or less resemble perch, but belong to different genera. In fact, the exclusively saltwater-dwelling red drum (which belongs to a different order, Acanthuriformes) is often referred to as a "red perch", though by definition perch are freshwater species. Though many fish are referred to as perch as a common name, to be considered a true perch, the fish must be of the family Percidae.
Species
Most authorities recognize three species within the perch genus:
The European perch (P. fluviatilis) is primarily found in Europe, but a few can also be found in South Africa, and even as far east in the Southern Hemisphere as Australia. This species is typically greenish in color, with dark vertical bars on its sides and a red or orange coloring in the tips of its fins. The European perch has been successfully introduced in New Zealand and Australia, where it is known as the redfin perch or English perch. In Australia, larger specimens have been bred, but the species rarely grows heavier than .
The Balkhash perch (P. schrenkii) is found in Kazakhstan (in Lake Balkhash and Lake Alakol), Uzbekistan, and China. It has a dark gray/black color on its dorsal side, but the ventral areas of the fish are a lighter silver or sometimes even green color. The Balkhash perch also displays the vertical bars on its sides, similar to the European and yellow perches. In the latter half of the 20th century, the Balkhash perch was successfully introduced into the basins of the Nuru and Chu rivers; because of this success, the species is now rarer in Lake Balkhash itself. They are similar in size to the yellow and European perches, weighing around .
The yellow perch (P. flavescens), smaller and paler than the European perch (but otherwise nearly identical), is found in North America. In northern areas, it is sometimes referred to as the lake perch. This species is prized for its food quality and has often been raised in hatcheries and introduced into areas in which it is not native. These fish typically only reach a size of about and .
Anatomy
External anatomy
Perch have a long and round body shape which allows for fast swimming in the water. True perch have "rough" or ctenoid scales. Perch have paired pectoral and pelvic fins, and two dorsal fins, the first one spiny and the second soft. These two fins can be separate or joined. The head consists of the skull (formed from loosely connected bones), eyes, mouth, operculum, gills, and a pair of nostrils (which has no connection to the oral cavity). They have small brush-like teeth across their jaws and on the roof of their mouth. The gills are located under the operculum on both sides of the head and are used to extract oxygen molecules from water and expel carbon dioxide; the gills have gill rakers inside the mouth.
External anatomy can be used to determine the sex of perch in multiple ways. Perch have two posterior openings located on their abdomen, the anal and urogenital. In males, the shape of the urogenital opening is round and larger than the anal opening. In females, the urogenital opening is often a V- or U-shape which is a similar size to the anal opening. Also, males usually have a more brown-red colored urogenital opening compared to females.
Internal anatomy
The esophagus is a flexible tube that goes from the mouth to the stomach. The stomach is connected to the intestine via the pyloric sphincter. The intestines of perch consist of the small intestine and large intestine; the intestines have many pyloric caeca and a spiral valve, and the small intestine includes a section called the duodenum. The spleen is located after the stomach and before the spiral valve. The spleen is connected to the circulatory system and is not part of the digestive tract. The liver is composed of three lobes: one small lobe (which includes the gall bladder) and two large lobes. Perch have long and narrow kidneys that contain clusters of nephrons which empty into the mesonephric duct. They have a two-chambered heart consisting of four compartments: the sinus venosus, one atrium, one ventricle, and the conus. Perch have a swim bladder that helps control buoyancy, or floating within the water; the swim bladder is found only in bony fish. In perch, the duct connecting the swim bladder to the pharynx is closed, so air is unable to pass through from the mouth; such fish are called physoclists. Specifically in perch, the gas in the bladder can vary from 12% to 25% oxygen and from 1.4% to 2.9% carbon dioxide. Perch reproductive organs include either a pair of testes (sperm-producing) or a pair of ovaries (egg-producing).
Habitats
Perch are classified as carnivores, choosing waters where smaller fish, shellfish, zooplankton, and insect larvae are abundant. The yellow perch can be found in the central parts of the United States in freshwater ponds, lakes, streams, or rivers. These fish can be found in freshwater all over the world, and are known to inhabit the Great Lakes region, in particular Lake Erie. They favor bodies of water where vegetation and debris are readily accessible. In the spring, when perch spawn, they use vegetation to conceal their eggs from predators.
Fishing
Perch are a popular sport fish species. They are known to put up a fight, and to be good for eating. They can be caught with a variety of methods, including float fishing, lure fishing, and legering. Fly fishing for perch using patterns that imitate small fry or invertebrates can be successful. The record weight for this fish in Britain is , the Netherlands , and in America . The biggest recorded catch in Sweden is 3.15 kg (6lb 15oz) in 1985.
Perch grow to around and or more, but the most common size caught is around and or less, and anything over and is considered a prize catch.
| Biology and health sciences | Acanthomorpha | null |
47326 | https://en.wikipedia.org/wiki/Trout | Trout | Trout (pl.: trout) is a generic common name for numerous species of carnivorous freshwater ray-finned fishes belonging to the genera Oncorhynchus, Salmo and Salvelinus, all of which are members of the subfamily Salmoninae in the family Salmonidae. The word trout is also used for some similar-shaped but non-salmonid fish, such as the spotted seatrout/speckled trout (Cynoscion nebulosus, which is actually a croaker).
Trout are closely related to salmon and have similar migratory life cycles. Most trout are strictly potamodromous, spending their entire lives exclusively in freshwater lakes, rivers and wetlands and migrating upstream to spawn in the shallow gravel beds of smaller headwater creeks. The hatched fry and juvenile trout, known as alevin and parr, will stay upstream growing for years before migrating down to larger waterbodies as maturing adults. There are some anadromous species of trout, such as the steelhead (a coastal subspecies of rainbow trout) and sea trout (the sea-run subspecies of brown trout), that can spend up to three years of their adult lives at sea before returning to freshwater streams for spawning, in the same fashion as a salmon run. Brook trout and three other extant species of North American trout, despite the names, are actually char (or charr), which are salmonids also closely related to trout and salmon.
Trout are classified as oily fish and have been important food fish for humans. As mid-level predators, trout prey upon smaller aquatic animals including crustaceans, insects, worms, baitfish and tadpoles, and are themselves in turn important staple prey for many wild animals including brown bears, otters, raccoons, birds of prey (e.g. sea eagles, ospreys, fish owls), gulls, cormorants and kingfishers, and other large aquatic predators. Discarded remains of trout also provide a source of nutrients for scavengers, detritivores and riparian flora, making trout keystone species across aquatic and terrestrial ecosystems.
Species
The name "trout" is commonly used for many (if not most) species in three of the seven genera in the subfamily Salmoninae: Salmo (Atlantic), Oncorhynchus (Pacific) and Salvelinus (circum-arctic). Fish species referred to as trout include:
Genus Salmo, all extant species except Atlantic salmon
Adriatic trout, Salmo obtusirostris
Brown trout, Salmo trutta
River trout, S. t. morpha fario
Lake trout/Lacustrine trout, S. t. morpha lacustris
Sea trout, S. t. morpha trutta
Flathead trout, Salmo platycephalus
Marble trout, Soca River trout or Soča trout – Salmo marmoratus
Ohrid trout, Salmo letnica, S. balcanicus (extinct), S. lumi, and S. aphelios
Sevan trout, Salmo ischchan
Genus Oncorhynchus, six of the 12 extant species
Apache trout, Oncorhynchus apache
Biwa trout, Oncorhynchus masou rhodurus
Cutthroat trout, Oncorhynchus clarki
Coastal cutthroat trout, O. c. clarki
Crescenti trout, O. c. c. f. crescenti
Alvord cutthroat trout, O. c. alvordensis (extinct)
Bonneville cutthroat trout, O. c. utah
Humboldt cutthroat trout, O. c. humboldtensis
Lahontan cutthroat trout, O. c. henshawi
Whitehorse Basin cutthroat trout
Paiute cutthroat trout, O. c. seleniris
Snake River fine-spotted cutthroat trout, O. c. behnkei
Westslope cutthroat trout, O. c. lewisi
Yellowfin cutthroat trout, O. c. macdonaldi (extinct)
Yellowstone cutthroat trout, O. c. bouvieri
Colorado River cutthroat trout, O. c. pleuriticus
Greenback cutthroat trout, O. c. stomias
Rio Grande cutthroat trout, O. c. virginalis
Gila trout, Oncorhynchus gilae
Rainbow trout, Oncorhynchus mykiss
Kamchatkan rainbow trout, Oncorhynchus mykiss mykiss
Columbia River redband trout, Oncorhynchus mykiss gairdneri
Coastal rainbow trout (steelhead), Oncorhynchus mykiss irideus
Beardslee trout, Oncorhynchus mykiss irideus var. beardsleei
Great Basin redband trout, Oncorhynchus mykiss newberrii
Golden trout, Oncorhynchus mykiss aguabonita
Kern River rainbow trout, Oncorhynchus mykiss aguabonita var. gilberti
Sacramento golden trout, Oncorhynchus mykiss aguabonita var. stonei
Little Kern golden trout, Oncorhynchus mykiss aguabonita var. whitei
Kamloops rainbow trout, Oncorhynchus mykiss kamloops
Baja California rainbow trout, Nelson's trout, or San Pedro Martir trout, Oncorhynchus mykiss nelsoni
Eagle Lake trout, Oncorhynchus mykiss aquilarum
McCloud River redband trout, Oncorhynchus mykiss stonei
Sheepheaven Creek redband trout
Mexican golden trout, Oncorhynchus chrysogaster
Genus Salvelinus, five of the 52 extant species
Brook trout, Salvelinus fontinalis
Aurora trout, S. f. timagamiensis
Bull trout, Salvelinus confluentus
Dolly Varden trout, Salvelinus malma
Lake trout, Salvelinus namaycush
Silver trout, † Salvelinus agassizi (extinct)
Hybrids
Tiger trout, Salmo trutta X Salvelinus fontinalis (infertile)
Speckled Lake (Splake) trout, Salvelinus namaycush X Salvelinus fontinalis (fertile)
Fish from other families
Pseudaphritidae
Genus Pseudaphritis
Sand trout, Pseudaphritis urvillii
Sciaenidae
Genus Cynoscion
Spotted sea-trout, Cynoscion nebulosus
Anatomy
Trout that live in different environments can have dramatically different colorations and patterns. Mostly, these colors and patterns form as camouflage, based on the surroundings, and will change as the fish moves to different habitats. Trout in, or newly returned from the sea, can look very silvery, while the same fish living in a small stream or in an alpine lake could have pronounced markings and more vivid coloration; it is also possible that in some species, this signifies that they are ready to mate. In general, trout that are about to breed have extremely intense coloration and can look like an entirely different fish outside of spawning season. It is virtually impossible to define a particular color pattern as belonging to a specific breed; however, in general, wild fish are claimed to have more vivid colors and patterns.
Trout have fins entirely without spines, and all of them have a small adipose fin along the back, near the tail. The pelvic fins sit well back on the body, on each side of the anus. The swim bladder is connected to the esophagus, allowing for gulping or rapid expulsion of air; fish with this condition are known as physostomes. Unlike many other physostome fish, trout do not use their bladder as an auxiliary device for oxygen uptake, relying solely on their gills.
There are many species, and even more populations, that are isolated from each other and morphologically different. However, since many of these distinct populations show no significant genetic differences, what may appear to be a large number of species is considered a much smaller number of distinct species by most ichthyologists. The trout found in the eastern United States are a good example of this. The brook trout, the aurora trout, and the (extinct) silver trout all have physical characteristics and colorations that distinguish them, yet genetic analysis shows that they are one species, Salvelinus fontinalis.
Lake trout (Salvelinus namaycush), like brook trout, belong to the char genus. Lake trout inhabit many of the larger lakes in North America, and live much longer than rainbow trout, which have an average maximum lifespan of seven years. Lake trout can live many decades, and can grow to more than .
Habitat
As salmonids, trout are coldwater fish that are usually found in cool (), clear streams, wetlands and lakes, although many of the species have anadromous populations as well. Juvenile trout are referred to as troutlet, troutling or parr. They are distributed naturally throughout North America, northern Asia and Europe. Several species of trout were introduced to Australia and New Zealand by amateur fishing enthusiasts in the 19th century, effectively displacing and endangering several upland native fish species. The introduced species included brown trout from England and rainbow trout from California. The rainbow trout has a steelhead subspecies, generally accepted as coming from Sonoma Creek. The rainbow trout of New Zealand still show the steelhead tendency to run up rivers in winter to spawn.
In Australia, the rainbow trout was introduced in 1894 from New Zealand and is an extremely popular gamefish in recreational angling.
Despite severely impacting the distribution and abundance of native Australian fish, such as the climbing galaxias, millions of rainbow and other trout species are released annually from government and private hatcheries.
The closest resemblance to the seema trout and other members of the trout family can be found in the Himalayan region of India, Nepal, Bhutan and Pakistan, and in the Tian Shan mountains of Kyrgyzstan.
Diet
Trout generally feed on other fish, and soft-bodied aquatic invertebrates, such as flies, mayflies, caddisflies, stoneflies, mollusks and dragonflies. In lakes, various species of zooplankton often form a large part of the diet. In general, trout longer than about prey almost exclusively on fish, where they are available. Adult trout will devour smaller fish up to one-third of their length. Trout may feed on shrimp, mealworms, bloodworms, insects, small animal parts, and eel.
Stream-dwelling trout feed on terrestrial animals, aquatic life, and flies. Most of their diet comes from macroinvertebrates, animals without a backbone such as snails, worms, and insects. Because flies make up such a large part of the diet, many anglers use lures that mimic them. Trout also take terrestrial insects such as grasshoppers, and will eat small animals such as mice that fall into the water, although only large trout have mouths big enough to do so. They also consume aquatic prey such as minnows and crayfish. Overall, trout have a diverse diet.
Trout as food
Compared to other salmonids, trout are somewhat more bony, but the flesh is generally considered delicious, and the texture is often indistinguishable from that of salmon. The flavor of the flesh is heavily influenced by the diet of the fish. For example, trout that have been feeding on crustaceans tend to be more flavorful than those feeding primarily on insects and larvae. Because of their popularity, trout are often raised on fish farms and then stocked into heavily fished waters, in an effort to mask the effects of overfishing. Farmed trout are also sold commercially as seafood, although they are not saltwater fish. Trout meat is typically prepared the same way as salmon, often by smoking.
In Mainland China, farm-raised rainbow trout from Qinghai was officially sanctioned to be labeled and sold domestically as salmon, which caused much controversy regarding food safety and consumer rights violations, as raw fish dishes such as yusheng using Atlantic salmon are gaining popularity in southern China. Farmed rainbow trout is much cheaper than imported Atlantic salmon and the meat is indistinguishable to the untrained eye, and the news of trout being sold as salmon triggered public scrutiny accusing seafood suppliers of bait-and-switch and unethical business practices. Also, many people believe freshwater trout are more prone to parasites than oceanic salmon (even though both live in freshwater for significant periods of their life cycles) and thus unsafe to eat raw.
Nutritional value
One fillet of trout (about ) contains:
Energy:
Fat (g): 5.22
Carbohydrates (g): 0
Fibers (g): 0
Protein (g): 16.41
Cholesterol (mg): 46
Trout fishing
Trout are very popular freshwater game fish highly prized especially by creek fishermen, because they generally put up a good fight when caught with a hook and line. As trout are predatory fish, lure fishing (which use replica baits called lures to imitate live prey) is the predominant form of sport fishing involving trout, although traditional bait fishing techniques using floats and/or sinkers (particularly with moving live baits such as baitfish, crayfish or aquatic insects) are also successful, especially against stocked trout that are hatchery/farm-raised and thus more accustomed to artificial feeds.
Many species of trout, most noticeably rainbow trout and brown trout, have been widely introduced into waterbodies outside of their native ranges purely for the sake of recreational fishing, and some of these introduced populations have even become invasive in the new habitats.
River fishing
While trout can be caught with a normal rod and reel, fly fishing is a distinctive lure fishing method developed for trout, and now extended to other species. Due to the high proportion of insects and small crustaceans within the trout's diet, small lures made of hand-tied hairs and threads are often used to imitate these aquatic invertebrates that the trout prey upon. These ultralight fly lures cannot be cast adequately by conventional techniques, and a specialized heavy line (i.e. fly line) is needed to launch the lure.
Understanding how moving water shapes the stream channel makes it easier to find trout. In most streams, the current creates a riffle-run-pool pattern that repeats itself over and over. A deep pool may hold a big brown trout, but rainbow trout and smaller brown trout are likely found in runs. Riffles are where fishers will find small trout, called troutlet, during the day and larger trout crowding in during morning and evening feeding periods.
Riffles have a fast current and shallow water. This gives way to a bottom of gravel, rubble or boulder. Riffles are morning and evening feeding areas. Trout usually spawn just above or below riffles, but may spawn right in them.
Runs are deeper than riffles with a moderate current and are found between riffles and pools. The bottom is made up of small gravel or rubble. These hot spots hold trout almost anytime, if there is sufficient cover.
Pools are smoother and look darker than the other areas of the stream. The deep, slow-moving water generally has a bottom of silt, sand, or small gravel. Pools make good midday resting spots for medium to large trout.
It is recommended that, when fishing for trout, anglers use line in the 4–8 lb test range for stream fish, and stronger line of the same diameter for trout from the sea or from a large lake, such as Lake Michigan. A hook size of 8–5 is also recommended for trout of all kinds. Trout, especially farm-raised ones, tend to take salmon roe, worms, minnows, cut bait, maize, or marshmallows.
Ice fishing
Fishing for trout under the ice generally occurs in depths of . Because trout are cold water fish, during the winter they move up from deeper water to the shallows, replacing the small fish that inhabit the area during the summer. Trout in winter constantly cruise at shallow depths looking for food, usually traveling in groups, although bigger fish may travel alone and in water that is somewhat deeper, around . Rainbow, brown, and brook trout are the most common trout species caught through the ice.
Trout fishing records
By information from International Game Fish Association (IGFA), the most outstanding records are:
Brook trout caught by Dr. W. Cook in the Nipigon River, Canada, on July 1, 1916, that weighed
Cutthroat trout caught by John Skimmerhorn in Pyramid Lake located in Nevada, US, on December 1, 1925, that weighed
Bull trout caught by N. Higgins in Lake Pend Oreille located in Idaho, US, on October 27, 1949, that weighed
Golden trout caught by Chas Reed in Cooks Lake located in Wyoming, US, on August 5, 1948, that weighed
Rainbow trout caught by Sean Konrad in Lake Diefenbaker, Canada, on September 5, 2009, that weighed
Lake trout caught by Lloyd Bull in Great Bear Lake, Canada, on August 19, 1995, that weighed
Baits
Declines in native trout populations
Salmonid populations in general have been declining due to numerous factors, including invasive species, hybridization, wildfires, and climate change. Native salmonid fish in the western and southwestern United States are threatened by non-native species that were introduced decades ago. Non-native salmonids were introduced to enrich recreational fishing; however, they quickly started outcompeting and displacing native salmonids upon their arrival. Non-native, invasive species are quick to adapt to their new environment and learn to outcompete any native species, making them a serious threat to native salmon and trout. Not only do the non-native fish drive the native fish to occupy new niches, but they also try to hybridize with them, contaminating the native gene pool. As more hybrids between native and non-native fish are formed, the lineage of the pure fish is continuously being contaminated by other species and soon may no longer represent the sole native species. The Rio Grande cutthroat trout (Oncorhynchus clarki virginalis) are susceptible to hybridization with other salmonids such as rainbow trout (Oncorhynchus mykiss) and yield a new "cutbow" trout, which is a contamination of both lineages’ genes. One solution to this issue is implemented by New Mexico Department of Game and Fish hatcheries: stocking only sterile fish in streams. Hatcheries serve as a reservoir of fish for recreational activities, but growing and stocking non-sterile fish would worsen the hybridization issue on a quicker, more magnified time scale. By stocking sterile fish, the native salmonids can't share genes with the non-native hatchery fish, thus preventing further gene contamination of the native trout in New Mexico. Fire is also a factor in deteriorating Gila trout (Oncorhynchus gilae) populations because of the ash and soot that can enter streams following fires. The ash lowers water quality, making it more difficult for the Gila trout to survive. In some New Mexico streams, the native Gila trout will be evacuated from streams that are threatened by nearby fires and be reintroduced after the threat is resolved.
Climate change is also reducing native salmonid populations. Global warming continually affects various cold-water fish such as trout, especially as inland waterbodies are more prone to warming than oceans. Increases in temperature, along with changes in spawning river flow, negatively affect many trout species. In the past, a mere increase was predicted to eliminate half of the native brook trout in the Southern Appalachian Mountains. Trout generally prefer streams with colder water () to spawn and thrive, but rising water temperatures are altering this ecosystem and further deteriorating native populations.
| Biology and health sciences | Salmoniformes | null |
47332 | https://en.wikipedia.org/wiki/Mackerel | Mackerel | Mackerel is a common name applied to a number of different species of pelagic fish, mostly from the family Scombridae. They are found in both temperate and tropical seas, mostly living along the coast or offshore in the oceanic environment.
Mackerel species typically have deeply forked tails and vertical "tiger-like" stripes on their backs with an iridescent green-blue quality. Many are restricted in their distribution ranges and live in separate populations or fish stocks based on geography. Some stocks migrate in large schools along the coast to suitable spawning grounds, where they spawn in fairly shallow waters. After spawning they return the way they came in smaller schools to suitable feeding grounds, often near an area of upwelling. From there they may move offshore into deeper waters and spend the winter in relative inactivity. Other stocks migrate across oceans.
Smaller mackerel are forage fish for larger predators, including larger mackerel and Atlantic cod. Flocks of seabirds, whales, dolphins, sharks, and schools of larger fish such as tuna and marlin follow mackerel schools and attack them in sophisticated and cooperative ways. Mackerel flesh is high in omega-3 oils and is intensively harvested by humans. In 2009, over 5 million tons were landed by commercial fishermen. Sport fishermen value the fighting abilities of the king mackerel.
Species
Over 30 different species, principally belonging to the family Scombridae, are commonly referred to as mackerel. The term "mackerel" is derived from Old French and may have originally meant either "marked, spotted" or "pimp, procurer". The latter connection is not altogether clear, but mackerel spawn enthusiastically in shoals near the coast, and medieval ideas on animal procreation were creative.
Scombroid mackerels
About 21 species in the family Scombridae are commonly called mackerel. The type species for the scombroid mackerel is the Atlantic mackerel, Scomber scombrus. Until recently, Atlantic chub mackerel and Indo-Pacific chub mackerel were thought to be subspecies of the same species. In 1999, Collette established, on molecular and morphological considerations, that these are separate species. Mackerel are smaller with shorter lifecycles than their close relatives, the tuna, which are also members of the same family.
Scombrini, the true mackerels
The true mackerels belong to the tribe Scombrini. The tribe consists of seven species, each belonging to one of two genera: Scomber or Rastrelliger.
Scomberomorini, the Spanish mackerels
The Spanish mackerels belong to the tribe Scomberomorini, which is the "cousin tribe" of the true mackerels. This tribe consists of 21 species in all—18 of those are classified into the genus Scomberomorus, two into Grammatorcynus, and a single species into the monotypic genus Acanthocybium.
Other mackerel
In addition, a number of species with mackerel-like characteristics in the families Carangidae, Hexagrammidae and Gempylidae are commonly referred to as mackerel. Some confusion had occurred between the Pacific jack mackerel (Trachurus symmetricus) and the heavily harvested Chilean jack mackerel (T. murphyi). These have been thought at times to be the same species, but are now recognised as separate species.
The term "mackerel" is also used as a modifier in the common names of other fish, sometimes indicating the fish has vertical stripes similar to a scombroid mackerel:
Mackerel icefish—Champsocephalus gunnari
Mackerel pike—Cololabis saira
Mackerel scad—Decapterus macarellus
Mackerel shark—several species
Shortfin mako shark—Isurus oxyrinchus
Mackerel tuna—Euthynnus affinis
Mackerel tail goldfish—Carassius auratus
By extension, the term is applied also to other species such as the mackerel tabby cat, and to inanimate objects such as the altocumulus mackerel sky cloud formation.
Characteristics
Most mackerel belong to the family Scombridae, which also includes tuna and bonito. Generally, mackerel are much smaller and slimmer than tuna, though in other respects, they share many common characteristics. Their scales, if present at all, are extremely small. Like tuna and bonito, mackerel are voracious feeders, and are swift and manoeuvrable swimmers, able to streamline themselves by retracting their fins into grooves on their bodies. Like other scombroids, their bodies are cylindrical with numerous finlets on the dorsal and ventral sides behind the dorsal and anal fins, but unlike the deep-bodied tuna, they are slim.
The type species for scombroid mackerels is the Atlantic mackerel, Scomber scombrus. These fish are iridescent blue-green above with a silvery underbelly and near-vertical wavy black stripes running along their upper bodies.
The prominent stripes on the back of mackerels seemingly are there to provide camouflage against broken backgrounds. That is not the case, though, because mackerel live in midwater pelagic environments which have no background. However, fish have an optokinetic reflex in their visual systems that can be sensitive to moving stripes. For fish to school efficiently, they need feedback mechanisms that help them align themselves with adjacent fish, and match their speed. The stripes on neighbouring fish provide "schooling marks", which signal changes in relative position.
A layer of thin, reflecting platelets is seen on some of the mackerel stripes. In 1998, E J Denton and D M Rowe argued that these platelets transmit additional information to other fish about how a given fish moves. As the orientation of the fish changes relative to another fish, the amount of light reflected to the second fish by this layer also changes. This sensitivity to orientation gives the mackerel "considerable advantages in being able to react quickly while schooling and feeding."
Mackerel range in size from small forage fish to larger game fish. Coastal mackerel tend to be small. The king mackerel is an example of a larger mackerel. Most fish are cold-blooded, but exceptions exist. Certain species of fish maintain elevated body temperatures. Endothermic bony fishes are all in the suborder Scombroidei and include the butterfly mackerel, a species of primitive mackerel.
Mackerel are strong swimmers. They have been called "punctualis piscis", translating to "punctual fish", in reference to the punctuality of their migration from warm to cold waters during the mating season. Atlantic mackerel can swim at a sustained speed of 0.98 m/sec with a burst speed of 5.5 m/sec, while chub mackerel can swim at a sustained speed of 0.92 m/sec with a burst speed of 2.25 m/sec.
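For a rough sense of scale, these speeds can be converted to kilometres per hour by multiplying by 3.6; this is a simple unit conversion, not a figure taken from the studies that reported the speeds.

\[ 0.98\ \mathrm{m/s} \times 3.6 \approx 3.5\ \mathrm{km/h}, \qquad 5.5\ \mathrm{m/s} \times 3.6 \approx 19.8\ \mathrm{km/h} \]

for Atlantic mackerel, and roughly 3.3 km/h sustained and 8.1 km/h burst for chub mackerel.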
Distribution
Most mackerel species have restricted distribution ranges.
Some mackerel species migrate vertically. Adult snake mackerel conduct a diel vertical migration, staying in deeper water during the day and rising to the surface at night to feed. The young and juveniles also migrate vertically, but in the opposite direction, staying near the surface during the day and moving deeper at night.
Lifecycle
Mackerel are prolific broadcast spawners, and must breed near the surface of the water because the eggs of the females float. Individual females lay between 300,000 and 1,500,000 eggs. Their eggs and larvae are pelagic, that is, they float free in the open sea. The larvae and juvenile mackerel feed on zooplankton. As adults, they have sharp teeth, and hunt small crustaceans such as copepods, forage fish, shrimp, and squid. In turn, they are hunted by larger pelagic animals such as tuna, billfish, sea lions, sharks, and pelicans.
Off Madagascar, spinner sharks follow migrating schools of mackerel. Bryde's whales feed on mackerel when they can find them. They use several feeding methods, including skimming the surface, lunging, and bubble nets.
Fisheries
Chub mackerel, Scomber japonicus, are the most intensively fished scombroid mackerel. They account for about half the total capture production of scombroid mackerels. As a species, they are easily confused with Atlantic mackerel. Chub mackerel migrate long distances in oceans and across the Mediterranean. They can be caught with drift nets and suitable trawls, but are most usually caught with surround nets at night by attracting them with lampara lamps.
The remaining catch of scombroid mackerels is divided equally between the Atlantic mackerel and all other scombroid mackerels.
Just these two species (Chub mackerel and Atlantic mackerel) account for about 75% of the total catch of scombroid mackerels.
Chilean jack mackerel are the most commonly fished nonscombroid mackerel, fished as heavily as chub mackerel. The species has been overfished, and its fishery may now be in danger of collapsing.
Smaller mackerel behave like herrings, and are captured in similar ways. Fish species like these, which school near the surface, can be caught efficiently by purse seining. Huge purse-seine vessels use spotter planes to locate the schooling fish. Then they close in using sophisticated sonar to track the shape of the school, which is then encircled with fast auxiliary boats that deploy purse seines as they speed around the school.
Suitably designed trollers can also catch mackerels effectively when they swim near the surface. Trollers typically have several long booms which they lift and drop with "topping lifts". They haul their lines with electric or hydraulic reels. Fish aggregating devices are also used to target mackerel.
Management
The North Sea has been overfished to the point where the ecological balance has become disrupted and many jobs in the fishing industry have been lost.
The Southeast US region spans the Gulf of Mexico, the Caribbean Sea, and the US Southeast Atlantic. Overfishing of king and Spanish mackerel occurred in the 1980s. Regulations were introduced to restrict the size, fishing locations, and bag limits for recreational fishers and commercial fishers. Gillnets were banned in waters off Florida. By 2001, the mackerel stocks had bounced back.
As food
Mackerel is an important food fish that is consumed worldwide. As an oily fish, it is a rich source of omega-3 fatty acids. The flesh of mackerel spoils quickly, especially in the tropics, and can cause scombroid food poisoning. Accordingly, it should be eaten on the day of capture, unless properly refrigerated or cured.
Mackerel preservation is not simple. Before the 19th-century development of canning and the widespread availability of refrigeration, salting and smoking were the principal preservation methods available. Historically in England, this fish was not preserved, but was consumed only in its fresh form. However, spoilage was common, leading the authors of The Cambridge Economic History of Europe to remark: "There are more references to stinking mackerel in English literature than to any other fish!" In France, mackerel was traditionally pickled with large amounts of salt, which allowed it to be sold widely across the country.
For many years mackerel was regarded as 'unclean' in the UK and other places due to folklore which suggested that the fish fed on the corpses of dead sailors. A 1976 survey of housewives in Britain undertaken by the White Fish Authority indicated a reluctance to depart from buying the traditional staples of cod, haddock or salmon. Fewer than 10% of the survey's 1,931 respondents had ever bought mackerel, and only 3% did so regularly. As a result of this trend, many UK fishmongers during the 1970s did not display or even stock mackerel.
| Biology and health sciences | Acanthomorpha | null |
47335 | https://en.wikipedia.org/wiki/Catfish | Catfish | Catfish (or catfishes; order Siluriformes or Nematognathi) are a diverse group of ray-finned fish. Named for their prominent barbels, which resemble a cat's whiskers, catfish range in size and behavior from the three largest species alive, the Mekong giant catfish from Southeast Asia, the wels catfish of Eurasia, and the piraíba of South America, to detritivores (species that eat dead material on the bottom), and even to a tiny parasitic species commonly called the candiru, Vandellia cirrhosa. Neither the armour-plated types nor the naked types have scales. Despite their name, not all catfish have prominent barbels or "whiskers". Members of the Siluriformes order are defined by features of the skull and swimbladder. Catfish are of considerable commercial importance; many of the larger species are farmed or fished for food. Many of the smaller species, particularly the genus Corydoras, are important in the aquarium hobby. Many catfish are nocturnal, but others (many Auchenipteridae) are crepuscular or diurnal (most Loricariidae or Callichthyidae, for example).
Taxonomy
Molecular evidence suggests that in spite of the great morphological diversity in the order, all catfish form a monophyletic group. Catfish belong to a superorder called the Ostariophysi, which also includes the Cypriniformes (carps and minnows), Characiformes (characins and tetras), Gonorynchiformes (milkfish and beaked salmons) and Gymnotiformes (South American knifefish), a superorder characterized by the Weberian apparatus. Some place Gymnotiformes as a sub-order of Siluriformes; however, this is not as widely accepted. Currently, the Siluriformes are said to be the sister group to the Gymnotiformes, though this has been debated due to more recent molecular evidence. About thirty-six extant catfish families are recognized, and about 3,093 extant species have been described. This makes the catfish order the second or third most diverse vertebrate order; in fact, one out of every twenty vertebrate species is a catfish.
Catfish are believed to have a Gondwanan origin primarily centered around South America, as the most basal living catfish groups are known from there. The earliest known definitive members lived in the Americas from the Campanian to Maastrichtian stages of the Late Cretaceous, including the Andinichthyidae, Vorhisia vulpes and possibly Arius. A potential fossil record is known from the earlier Coniacian-Santonian stages in Niger of West Africa, though this has been considered unreliable, and the putative earliest armored catfish known from the fossil record, Afrocascudo, lived during the Cenomanian age of the Late Cretaceous in Morocco of North Africa (Kem Kem Group). The describers of Afrocascudo claimed that the presence of a derived loricariid so early on would indicate the extensive diversification of catfish, or at least loricarioids, prior to the beginning of the Late Cretaceous. As extant loricariids are only known from South America, much of this diversification must have occurred on the supercontinent of West Gondwana prior to its fragmentation into South America and Africa. Britz and colleagues suggested that Afrocascudo instead represents a juvenile obaichthyid lepisosteiform, possibly a junior synonym of Obaichthys. The authors of the original study still stood by their original conclusion based on the absence of important holostean characters, and noted that it could not be a juvenile, since the bones were completely ossified.
The taxonomy of catfish is quickly changing. In a 2007 and 2008 paper, Horabagrus, Phreatobius, and Conorhynchos were not classified under any current catfish families. There is disagreement on the family status of certain groups; for example, Nelson (2006) lists Auchenoglanididae and Heteropneustidae as separate families, while the All Catfish Species Inventory (ACSI) includes them under other families. FishBase and the Integrated Taxonomic Information System lists Parakysidae as a separate family, while this group is included under Akysidae by both Nelson (2006) and ACSI. Many sources do not list the recently revised family Anchariidae. The family Horabagridae, including Horabagrus, Pseudeutropius, and Platytropius, is not shown by some authors but presented by others as a true group. Thus, the actual number of families differs between authors. The species count is in constant flux due to taxonomic work as well as description of new species. Between 2003 and 2005, over one hundred species were named, a rate three times faster than that of the past century. In June 2005, researchers named the newest family of catfish, Lacantuniidae, only the third new family of fish distinguished in the last seventy years, the others being the coelacanth in 1938 and the megamouth shark in 1983. The new species in Lacantuniidae, Lacantunia enigmatica, was found in the Lacantun river in the Mexican state of Chiapas.
The higher-level phylogeny of Siluriformes has gone through several recent changes, mainly due to molecular phylogenetic studies. While most studies, both morphological and molecular, agree that catfishes are arranged into three main lineages, the relationship among these lineages has been a contentious point in which these studies, performed for example by Rui Diogo, differ. The three main lineages in Siluriformes are the family Diplomystidae, the denticulate catfish suborder Loricarioidei (containing the Neotropical "suckermouth" catfishes), and the suborder Siluroidei, which contains the remaining families of the order. According to morphological data, Diplomystidae is usually considered to be the earliest branching catfish lineage and the sister group to the other two lineages, Loricarioidei and Siluroidei. Molecular evidence usually contrasts with this hypothesis, and shows the suborder Loricarioidei as the earliest branching catfish lineage, and sister to a clade that includes the Diplomystidae and Siluroidei; this phylogeny has been obtained in numerous studies based on genetic data. However, it has been suggested that these molecular results are errors as a result of long branch attraction, incorrectly placing Loricarioidei as the earliest-branching catfish lineage. When a data filtering method was used to reduce lineage rate heterogeneity (the potential source of bias) on their dataset, a final phylogeny was recovered which showed the Diplomystidae are the earliest-branching catfish, followed by Loricarioidei and Siluroidei as sister lineages, providing both morphological and molecular support for Diplomystidae being the earliest branching catfish.
Below is a list of family relationships by different authors. Lacantuniidae is included in the Sullivan scheme based on recent evidence that places it sister to Claroteidae.
Phylogeny
Phylogeny of living Siluriformes based on a 2017 study, and of extinct families based on Nelson, Grande & Wilson (2016).
Unassigned families:
Bachmanniidae†
Scoloplacidae (Loricarioidei)
Akysidae (Sisoroidea)
Amblycipitidae (Sisoroidea)
Anchariidae (Arioidea)
Ariidae (Arioidea)
Amphiliidae (Big African catfishes)
Austroglanididae (Arioidea)
Chacidae (Siluroidei)
Conorhynchos (Pimelodoidea)
Cranoglanididae (Ictaluroidea)
Heteropneustidae (Clarioidea)
Horabagridae (Sisoroidea)
Kryptoglanidae (Siluroidea)
Lacantuniidae (Big African catfishes)
Malapteruridae (Big African catfishes)
Phreatobiidae (Pimelodoidea)
Rita (Sisoroidea)
Schilbeidae (Big African catfishes)
Ecology
Distribution and habitat
Extant catfish species live inland or in coastal waters of every continent except Antarctica. Catfish have inhabited all continents at one time or another. They are most diverse in tropical South America, Asia, and Africa, with one family native to North America and one family in Europe. More than half of all catfish species live in the Americas. They are the only ostariophysans that have entered freshwater habitats in Madagascar, Australia, and New Guinea.
They are found in freshwater and brackish environments, though most inhabit shallow, running water. Representatives of at least eight families are hypogean (live underground), with three families that are also troglobitic (inhabiting caves). One such species is Phreatobius cisternarum, known to live underground in phreatic habitats. Numerous species from the families Ariidae and Plotosidae, and a few species from among the Aspredinidae and Bagridae, are found in salt water.
In the Southern United States, catfish species may be known by a variety of slang names, such as "mud cat", "polliwogs", or "chuckleheads". These nicknames are not standardized, so one area may call a bullhead catfish by the nickname "chucklehead", while in another state or region, that nickname refers to the blue catfish.
As invasive species
Representatives of the genus Ictalurus have been introduced into European waters in the hope of obtaining a sporting and food resource, but the European stock of American catfishes has not achieved the dimensions these fish reach in their native waters and has only increased the ecological pressure on native European fauna. Walking catfish have also been introduced in the freshwater areas of Florida, where the voracious catfish has become a major alien pest. The flathead catfish, Pylodictis olivaris, is also a North American pest on Atlantic slope drainages. Pterygoplichthys species, released by aquarium fishkeepers, have also established feral populations in many warm waters around the world.
Physical characteristics
External anatomy of catfish
Most catfish are bottom feeders. In general, they are negatively buoyant, which means that they usually sink rather than float due to a reduced gas bladder and a heavy, bony head. Catfish have a variety of body shapes, though most have a cylindrical body with a flattened ventrum to allow for benthic feeding.
A flattened head allows for digging through the substrate, as well as perhaps serving as a hydrofoil. Some have a mouth that can expand to a large size and contains no incisiform teeth; catfish generally feed through suction or gulping rather than biting and cutting prey. Some families, though, notably the Loricariidae and Astroblepidae, have a suckermouth that allows them to fasten themselves to objects in fast-moving water. Catfish also have a maxilla reduced to a support for barbels; this means that they are unable to protrude their mouths as other fish, such as carp, can.
Catfish may have up to four pairs of barbels: nasal, maxillary (on each side of the mouth), and two pairs of chin barbels, though pairs of barbels may be absent depending on the species. Catfish barbels always occur in pairs. Many larger catfish also have chemoreceptors across their entire bodies, which means they "taste" anything they touch and "smell" any chemicals in the water. "In catfish, gustation plays a primary role in the orientation and location of food". Because their barbels and chemoreception are more important in detecting food, the eyes of catfish are generally small. Like other ostariophysans, they are characterized by the presence of a Weberian apparatus. Their well-developed Weberian apparatus and reduced gas bladder allow for improved hearing and sound production.
Catfish do not have scales; their bodies are often naked. In some species, their mucus-covered skin is used in cutaneous respiration, where the fish breathes through its skin. In some catfish, the skin is covered in bony plates called scutes; some form of body armor appears in various ways within the order. In loricarioids and in the Asian genus Sisor, the armor is primarily made up of one or more rows of free dermal plates. Similar plates are found in large specimens of Lithodoras. These plates may be supported by vertebral processes, as in scoloplacids and in Sisor, but the processes never fuse to the plates or form any external armor. By contrast, in the subfamily Doumeinae (family Amphiliidae) and in hoplomyzontines (Aspredinidae), the armor is formed solely by expanded vertebral processes that form plates. Finally, the lateral armor of doradids, Sisor, and hoplomyzontines consists of hypertrophied lateral line ossicles with dorsal and ventral lamina.
All catfish, other than members of the Malapteruridae (electric catfish), possess a strong, hollow, bony, leading spine-like ray on their dorsal and pectoral fins. As a defense, these spines may be locked into place so that they stick outwards, enabling them to inflict severe wounds. In numerous catfish species, these fin rays can be used to deliver a stinging protein if the fish is irritated; as many as half of all catfish species may be venomous in this fashion, making the Siluriformes overwhelmingly the vertebrate order with the largest number of venomous species. This venom is produced by glandular cells in the epidermal tissue covering the spines. In members of the family Plotosidae and of the genus Heteropneustes, this protein is so strong it may hospitalize humans who receive a sting; in Plotosus lineatus, the stings can be lethal. The dorsal- and pectoral-fin spines are two of the most conspicuous features of siluriforms, and differ from those in other fish groups. Despite the widespread use of the spines in taxonomic and phylogenetic studies, the field has struggled to use this information effectively because of inconsistent nomenclature; a general standard for the descriptive anatomy of catfish spines was proposed in 2022 to resolve this problem.
Juvenile catfish, like most fish, have relatively large heads, eyes, and posterior median fins in comparison to larger, more mature individuals. These juveniles can be readily placed in their families, particularly those with highly derived fin or body shapes; in some cases, identification of the genus is possible. As far as is known for most catfish, features that are often characteristic of species, such as mouth and fin positions, fin shapes, and barbel lengths, show little difference between juveniles and adults. For many species, pigmentation pattern is also similar in juveniles and adults. Thus, juvenile catfish generally resemble and develop smoothly into their adult form without distinct juvenile specializations. Exceptions to this are the ariid catfishes, in which the young retain yolk sacs late into juvenile stages, and many pimelodids, which may have elongated barbels and fin filaments or coloration patterns.
Sexual dimorphism is reported in about half of all families of catfish. The modification of the anal fin into an intromittent organ (in internal fertilizers) as well as accessory structures of the reproductive apparatus (in both internal and external fertilizers) have been described in species belonging to 11 different families.
Size
Catfish have one of the largest ranges in size within a single order of bony fish. Many catfish have a maximum length of under . Some of the smallest species of the Aspredinidae and Trichomycteridae reach sexual maturity at only .
The wels catfish, Silurus glanis, and the much smaller related Aristotle's catfish are the only catfish indigenous to Europe; the former ranges throughout Europe, while the latter is restricted to Greece. Mythology and literature record wels catfish of astounding proportions, though these have yet to be proven scientifically. The typical size of the species is about , and fish of more than are rare. However, they are known to exceed in length and in weight. In July 2009, a catfish weighing was caught in the River Ebro, Spain, by an 11-year-old British schoolgirl.
In North America, the largest Ictalurus furcatus (blue catfish), caught in the Missouri River on 20 July 2010, weighed . The largest flathead catfish, Pylodictis olivaris, ever caught was in Independence, Kansas, weighing .
These records pale in comparison to a Mekong giant catfish caught in northern Thailand on 1 May 2005, and reported to the press almost 2 months later, that weighed . This is the largest giant Mekong catfish caught since Thai officials started keeping records in 1981. Also in Asia, Jeremy Wade caught a goonch following three fatal attacks on humans in the Kali River on the India-Nepal border. Wade was of the opinion that the offending fish must have been significantly larger than this to have taken an 18-year-old boy, as well as a water buffalo.
Piraíba (Brachyplatystoma filamentosum) can grow exceptionally large and are native to the Amazon Basin. They can occasionally grow to , as evidenced by numerous catches. Deaths from being swallowed by these fish have been reported in the region.
Internal anatomy
In many catfish, the "humeral process" is a bony process extending backward from the pectoral girdle immediately above the base of the pectoral fin. It lies beneath the skin, where its outline may be determined by dissecting the skin or probing with a needle.
The retinae of catfish are composed of single cones and large rods. Many catfish have a tapetum lucidum, which may help enhance photon capture and increase low-light sensitivity. Double cones, though present in most teleosts, are absent from catfish.
The anatomical organization of the testes in catfish is variable among the families of catfish, but the majority of them present fringed testes: Ictaluridae, Clariidae, Auchenipteridae, Doradidae, Pimelodidae, and Pseudopimelodidae. In the testes of some species of Siluriformes, organs and structures such as a spermatogenic cranial region and a secretory caudal region are observed, in addition to the presence of seminal vesicles in the caudal region. The total number of fringes and their length differ between the caudal and cranial portions among species. Fringes of the caudal region may present tubules, in which the lumen is filled by secretion and spermatozoa. Spermatocysts are formed from cytoplasmic extensions of Sertoli cells; the release of spermatozoa is allowed by breaking of the cyst walls.
The occurrence of seminal vesicles, in spite of their interspecific variability in size, gross morphology, and function, has not been related to the mode of fertilization. They are typically paired, multichambered, and connected with the sperm duct, and have been reported to play glandular and storage functions. Seminal vesicle secretion may include steroids and steroid glucuronides, with hormonal and pheromonal functions, but it appears to be primarily constituted of mucoproteins, acid mucopolysaccharides, and phospholipids.
Fish ovaries may be of two types: gymnovarian or cystovarian. In the first type, the oocytes are released directly into the coelomic cavity and then eliminated. In the second type, the oocytes are conveyed to the exterior through the oviduct. Many catfish are cystovarian, including Pseudoplatystoma corruscans, P. fasciatum, Lophiosilurus alexandri, and Loricaria lentiginosa.
Communication
Catfish can produce different types of sounds and also have well-developed auditory reception used to discriminate between sounds with different pitches and velocities. They are also able to determine the distance of the sound's origin and from what direction it originated. This is a very important fish communication mechanism, especially during agonistic and distress behaviors. Catfish are able to produce a variety of sounds for communication that can be classified into two groups: drumming sounds and stridulation sounds. The variability in catfish sound signals differs due to a few factors: the mechanism by which the sound is produced, the function of the resulting sound, and physiological differences such as size, sex, and age. To create a drumming sound, catfish use an indirect vibration mechanism using a swimbladder. In these fishes, sonic muscles insert on the ramus Mulleri, also known as the elastic spring. The sonic muscles pull the elastic spring forward and extend the swimbladder. When the muscles relax, the tension in the spring quickly returns the swimbladder to its original position, which produces the sound.
Catfish also have a sound-generating mechanism in their pectoral fins. Many catfish species possess an enhanced first pectoral fin ray, called the spine, which can be moved by large abductor and adductor muscles. The base of the catfish's spine has a sequence of ridges, and the spine normally slides within a groove on the fish's pectoral girdle during routine movement; pressing the ridges on the spine against the girdle's groove, however, creates a series of short pulses. The movement is analogous to a finger moving down the teeth of a comb, and consequently a series of sharp taps is produced.
Sound-generating mechanisms are often different between the sexes. In some catfish, the pectoral fins are longer in males than in females of similar length, and differences in the characteristics of the sounds produced have also been observed. Comparisons between families of the same order demonstrated family- and species-specific patterns of vocalization, according to a study by Maria Clara Amorim. During courtship behavior in three species of Corydoras catfish, all males actively produced stridulation sounds before egg fertilization, and the species' songs differed in pulse number and sound duration.
Sound production in catfish may also be correlated with fighting and alarm calls. According to a study by Kaatz, sounds for disturbance (e.g. alarm) and agonistic behavior were not significantly different, which suggests distress sounds can be used to sample variation in agonistic sound production. However, in a comparison of a few different species of tropical catfish, some fish put under distress conditions produced a higher intensity of stridulatory sounds than drumming sounds. Differences in the proportion of drumming versus stridulation sounds depend on morphological constraints, such as different sizes of drumming muscles and pectoral spines. Due to these constraints, some fish may not even be able to produce a specific sound. In several different species of catfish, aggressive sound production occurs during cover site defense or during threats from other fish. More specifically, in long-whiskered catfish, drumming sounds are used as a threatening signal and stridulations are used as a defense signal. Kaatz investigated 83 species from 14 families of catfish, and determined that catfish produce more stridulatory sounds in disturbance situations and more swimbladder sounds in intraspecific conflicts.
Economic importance
Aquaculture
Catfish are easy to farm in warm climates, leading to inexpensive and safe food at local grocers. About 60% of U.S. farm-raised catfish are grown within a 65-mile (100-km) radius of Belzoni, Mississippi. Channel catfish (Ictalurus punctatus) supports a $450 million/yr aquaculture industry. The largest producers are located in the Southern United States, including Mississippi, Alabama, and Arkansas.
Catfish raised in inland tanks or channels are usually considered safe for the environment, since their waste and disease should be contained and not spread to the wild.
In Asia, many catfish species are important as food. Several airbreathing catfish (Clariidae) and shark catfish (Pangasiidae) species are heavily cultured in Africa and Asia. Exports of one particular shark catfish species from Vietnam, Pangasius bocourti, have met with pressure from the U.S. catfish industry. In 2003, the United States Congress passed a law preventing the imported fish from being labeled as catfish. As a result, the Vietnamese exporters of this fish now label their products sold in the U.S. as "basa fish." Trader Joe's has labeled frozen fillets of Vietnamese Pangasius hypophthalmus as "striper."
There is a large and growing ornamental fish trade, with hundreds of species of catfish, such as Corydoras and armored suckermouth catfish (often called plecos), being a popular component of many aquaria. Other catfish commonly found in the aquarium trade are banjo catfish, talking catfish, and long-whiskered catfish.
Catfish as food
Catfish have widely been caught and farmed for food for thousands of years in Africa, Asia, Europe, and North America. Judgments as to the quality and flavor vary, with some food critics considering catfish excellent to eat, while others dismiss them as watery and lacking in flavor. Catfish is high in vitamin D. Farm-raised catfish contains low levels of omega-3 fatty acids and a much higher proportion of omega-6 fatty acids.
In Central Europe, catfish were often viewed as a delicacy to be enjoyed on feast days and holidays. Migrants from Europe and Africa to the United States brought along this tradition, and in the Southern United States, catfish is an extremely popular food.
The most commonly eaten species in the United States are the channel catfish and the blue catfish, both of which are common in the wild and increasingly widely farmed. Farm-raised catfish became such a staple of the U.S. diet that President Ronald Reagan proclaimed National Catfish Day on June 25, 1987, to recognize "the value of farm-raised catfish."
Catfish is eaten in a variety of ways. In Europe, it is often cooked in similar ways to carp, but in the United States it is popularly crumbed with cornmeal and fried.
In Indonesia, catfish is usually served fried or grilled in street stalls called warung and eaten with vegetables, sambal (a spicy relish or sauce), and usually nasi uduk (traditional coconut rice). The dish is called or . is the Indonesian word for catfish. The same dish can also be called (squashed catfish) if the fish is lightly squashed along with sambal with a stone mortar and pestle. The or version presents the fish on a separate plate, while the mortar is used solely for the sambal.
In Malaysia, catfish is called ikan keli and is fried with spices or grilled and eaten with tamarind and Thai chili gravy and is also often eaten with steamed rice.
In Bangladesh and the Indian states of Odisha, West Bengal and Assam, catfish (locally known as magur) is eaten as a favored delicacy during the monsoons. In the Indian state of Kerala, the local catfish, known as thedu or etta in Malayalam, is also popular.
In Hungary, catfish is often cooked in paprika sauce (Harcsapaprikás) typical of Hungarian cuisine. It is traditionally served with pasta smothered with curd cheese (túrós csusza).
In Myanmar (formerly Burma), catfish is usually used in mohinga, a traditional noodle fish soup cooked with lemongrass, ginger, garlic, pepper, banana stem, onions, and other local ingredients.
Vietnamese catfish, of the genus Pangasius, cannot legally be marketed as catfish in the United States, and so are referred to as swai or basa. Only fish of the family Ictaluridae may be marketed as catfish in the United States. See Piazza's Seafood World, LLC v. Odom, 448 F. 3d 744 (5th Cir. 2006), citing Kerrilee E. Kobbeman, "Legislative Note, Hook, Line and Sinker: How Congress Swallowed the Domestic Catfish Industry's Narrow Definition of this Ubiquitous Bottomfeeder," 57 ARK. L. REV. 407, 411-18 (2004). In the UK, Vietnamese catfish is sometimes sold as "Vietnamese river cobbler", although more commonly as basa.
In Nigeria, catfish is often cooked in a variety of stews. It is particularly cooked in a delicacy popularly known as "catfish pepper soup" which is enjoyed throughout the nation.
In Jewish dietary law, known as kashrut, fish must have fins and scales to be kosher. Since catfish lack scales, they are not kosher.
Mythology
In the mythology of the Japanese Shinto religion, natural phenomena are caused by kami. Earthquakes are caused by a giant catfish called Namazu, though other kami are also associated with earthquakes; in Kyoto it is usually an eel. After the 1855 Edo earthquake, however, catfish prints were widely produced, giving greater popularity to the catfish kami, which had been known since the 16th-century Otsu-e folk pictures. In one catfish print, the divine white horse of Amaterasu is depicted knocking down the earthquake-causing catfish.
Dangers to humans
While the vast majority of catfish are harmless to humans, a few species are known to present some risk. Many catfish species have "stings" (actually non-venomous in most cases) embedded behind their fins; thus precautions must be taken when handling them. Stings by the venomous striped eel catfish have killed people in rare cases.
Catfish fishing records
According to information from the International Game Fish Association (IGFA), the most outstanding record is:
The biggest flathead catfish ever caught was taken by Ken Paulie in the Elk City Reservoir in Kansas, US, on 19 May 1998, and weighed
| Biology and health sciences | Siluriformes | null |
47337 | https://en.wikipedia.org/wiki/Fishing%20rod | Fishing rod | A fishing rod is a long, thin rod used by anglers to catch fish by manipulating a line ending in a hook (formerly known as an angle, hence the term "angling"). At its most basic, a fishing rod is a straight, rigid stick or pole with a line fastened to one end (as seen in traditional bamboo rod fishing such as Tenkara fishing); however, modern rods are usually more elastic and generally have the line stored in a reel mounted at the rod handle, which is hand-cranked and controls line retrieval, as well as numerous line-restricting rings (also known as line guides) that distribute bending stress along the rod and help dampen or prevent line whipping and entanglement. To better entice fish, baits or lures are dressed onto the hook attached to the line, and a bite indicator (e.g. a float) is typically used, some of which (e.g. the quiver tip) may be incorporated as part of the rod itself.
Fishing rods act as an extended lever and allow the angler to amplify line movements while luring and pulling the fish. They also enhance casting distance by increasing the launch speed of the terminal tackle (the hook, bait/lure, and other accompanying attachments such as the float and sinker/feeder), as a longer swing radius (compared to that of a human arm) corresponds to a greater arc speed at the tip under the same angular velocity. The length of fishing rods usually varies between and depending on the style of angling, while the Guinness World Record is .
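The relationship between rod length and launch speed follows directly from circular motion: at the same angular velocity, the tip speed scales linearly with the swing radius. The short sketch below illustrates this with made-up rod lengths and an arbitrary swing rate; the numbers are illustrative only and are not taken from the article.

```python
def tip_speed(rod_length_m: float, angular_velocity_rad_s: float) -> float:
    """Linear speed of the rod tip for a given swing rate: v = omega * r."""
    return angular_velocity_rad_s * rod_length_m

# Illustrative values only: the same 10 rad/s swing with a shorter and a longer rod.
omega = 10.0
for length in (1.8, 2.7):
    print(f"{length} m rod -> tip speed {tip_speed(length, omega):.1f} m/s")
```

Doubling the effective radius doubles the tip speed for the same swing, which is why longer rods (and longer arms of motion) cast terminal tackle farther, all else being equal.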
Traditional fishing rods are made from a single piece of hardwood (such as ash and hickory) or bamboo, while contemporary rods are usually made from alloys (such as aluminium) or, more often, high-tensile synthetic composites (such as fibreglass or carbon fiber), and may come in multi-piece (joined via ferrules) or telescoping forms that are more portable and storage-friendly. Most fishing rods are tapered towards the tip to reduce the gravitational leverage in front of the handle that an angler has to overcome when lifting the rod. Many modern rods are also constructed from hollow blanks to increase the specific strength of the design and reduce the overall weight.
In contrast with fishing nets and traps, which are usually used in subsistence and commercial fishing, angling with rods is a far less efficient method of catching fish, and is used more often in recreational fishing and competitive casting, which focus less on the yield and more on the experience. Fishing rods also come in many sizes, actions, hardnesses and configurations depending on whether they are to be used for small, medium or large fish, in fresh or saltwater situations, or for different angling styles. Various types of fishing rods are designed for specific subtypes of angling; for instance, spin fishing rods (both spinning and baitcasting rods) are optimized for frequent, repeated casting, and are usually lighter and have faster action; fly rods are designed to sling heavy lines and ultralight artificial flies, and are usually much more flexible; surfcasting rods are designed to cast baits or lures far out over the surf zone, and tend to be quite long; ice fishing rods are designed to fish through small drilled holes in ice-covered lakes, and are usually very short; and trolling rods are designed to drag heavy bait or lures through water while boat fishing, and usually have greater ultimate tensile strength due to the frequently large sizes of the target fish.
History
Fly fishing
The art of fly fishing took a great leap forward after the English Civil War, when a newly found interest in the activity left its mark on the many books and treatises written on the subject at the time. The renowned officer in the Parliamentary army, Robert Venables, published in 1662 The Experienced Angler, or Angling improved, being a general discourse of angling, imparting many of the aptest ways and choicest experiments for the taking of most sorts of fish in pond or river. The Compleat Angler was written by Izaak Walton in 1653 (although Walton continued to add to it for a quarter of a century) and described the fishing in the Derbyshire Wye. It was a celebration of the art and spirit of fishing in prose and verse; six verses were quoted from John Dennys's earlier work. A second part to the book was added by Walton's friend Charles Cotton.
The 18th century was mainly an era of consolidation of the techniques developed in the previous century. Running rings began to appear along the fishing rods, which gave anglers greater control over the cast line. The rods themselves were also becoming increasingly sophisticated and specialized for different roles. Jointed rods became common from the middle of the century and bamboo came to be used for the top section of the rod, giving it a much greater strength and flexibility.
The industry also became commercialized – rods and tackle were sold at haberdashers' stores. After the Great Fire of London in 1666, artisans moved to Redditch, which became a centre of production of fishing-related products from the 1730s. Onesimus Ustonson established his trading shop in 1761, and his establishment remained a market leader for the next century. He received a Royal Warrant from three successive monarchs starting with King George IV.
Technological improvements
The impact of the Industrial Revolution was first felt in the manufacture of fly lines. Instead of anglers twisting their own lines, a laborious and time-consuming process, the new textile spinning machines allowed for a variety of tapered lines to be easily manufactured and marketed.
The material used for the rod itself changed from the heavy woods native to England, to lighter and more elastic varieties imported from abroad, especially from South America and the West Indies. Bamboo rods became the generally favored option from the mid 19th century, and several strips of the material were cut from the cane, milled into shape, and then glued together to form light, strong, hexagonal rods with a solid core that were superior to anything that preceded them.
Other materials used were Tonkin bamboo, Calcutta reed, ash wood, hickory, ironwood, maple, lancewood, and malacca cane. These products were light, tough, and pliable. Rods were generally made in three pieces, called the butt, midsection, and tip. The butts were frequently made of maple with a bored bottom; such a butt outlasted several tops. Midsections were generally made from ironwood because it was a thick, strong wood. Tips were generally made from bamboo for its elasticity, which could throw the bait further and more accurately. Handles and grips were generally of cork, wood, or wrapped cane. Many different types of glue held these sections together, most commonly Irish glue and bone glue, until hilton glue, or cement glue, was introduced because of its waterproof qualities. Even today, Tonkin split-bamboo rods remain popular in fly fishing.
Until the mid 19th century, rods were generally made in England. This changed in 1846, when the American Samuel Phillippe introduced an imported fishing rod, the first made of six strips of Calcutta cane, built in Bavaria, from where Phillippe had been importing violins, and passed it off as his own handiwork. Split-cane rods were later produced independently, after Phillippe began selling the imported rods to a New York retailer, and were then copied by the Americans Charles Orvis and Hiram Leonard and the Englishman William Hardy in the 1870s; mass-production methods made these rods accessible to the public. The Horton Manufacturing Company first introduced an all-steel rod in 1913, but these rods were heavy and flexible and did not satisfy many customers. The next major development was the fiberglass rod, introduced in the 1940s and developed by Robert Gayle and a Mr. McGuire.
Boron and graphite rods came around in the 1960s and 1970s, when the United States and United Kingdom invested considerable research into developing the new technologies. Hewitt and Howald were the first to devise a way to lay the fibers into the shape of a fishing rod, by wrapping them around a piece of balsa wood. However, by 1977, boron fibre technology had been muscled out by the cheaper material graphite and was no longer competitive in the market.
Rods for travelers were made with nickel-silver metal joints, or ferrules, that could be inserted into one another forming the rod. Some of them were made to be used as a walking cane until needed for sport. Since the 1980s, with the advent of flexible, yet stiff graphite ferrules, travel rod technology has greatly advanced, and multi-piece travel rods that can be transported in a suitcase or backpack constitute a large share of the market.
Modern design
In theory, an ideal rod should gradually taper from butt to tip, be tight in all its joints (if any), and have a smooth, progressive taper, without 'dead spots'. Modern design and fabrication techniques, along with advanced materials such as graphite, boron, magnesium alloy and fiberglass composites as well as stainless steel (see Emmrod) – have allowed rod makers to tailor both the shape and action of fishing rods for greater casting distance, accuracy, and fish-fighting qualities. Today, fishing rods are identified by their weight (meaning the weight of line or lure required to flex a fully loaded rod) and action (describing the speed with which the rod returns to its neutral position).
Generally, three types of rods are used today: graphite, fiberglass, and bamboo rods. Bamboo rods are the heaviest of the three, but people still use them for their feel. Fiberglass rods are the heavier of the two synthetic materials. They are mostly popular with new and young anglers, as well as anglers who cannot afford the generally more expensive graphite rods. They are more commonly found among anglers that fish in rugged areas, such as on rocks or piers, where knocking the rod on hard objects is a greater possibility; this may potentially cause breakage, making a fiberglass rod preferable for some anglers due to its higher durability and affordability compared to graphite rods. Today's most popular rod tends to be graphite, for its light weight and its ability to allow further and more accurate casts. Graphite rods also tend to be more sensitive, allowing the user to feel bites from fish more easily.
Modern fishing rods retain cork as a common material for grips. Cork is light, durable, and warm to the touch. EVA foam and carbon fiber grips are also used. Reel seats are often of graphite-reinforced plastic, aluminium, or wood. Guides are available in steel and titanium with a wide variety of high-tech ceramic and metal alloy inserts replacing the classic agate inserts of earlier rods.
Back- or butt-rests can also be used with modern fishing rods to make it easier to fight large game fish. These are fork-like supports that help keep the rod in position, providing leverage and counteracting tensions caused by a caught fish.
Rod making bench
An old rod-making bench would generally consist of a bench, vice, a drawing knife, a jack, a fore plane, large coarse flat file, sand paper, and several strips of wood about long with different size grooves in them. Newer rod building benches are smaller versions of lathes powered by small motors that turn the rod as thread is applied to secure the guides. The motor is controlled by a foot operated rheostat, similar to that found on a sewing machine. A low rpm motor can be used to apply rod finish, typically a two-part resin, to protect the threads.
Specifications
There are several specifications manufacturers use to delineate rod uses. These include power, action, line weight, lure weight, and number of pieces.
Power value
Also known as "rod weight", the power value of a fishing rod indicates its stiffness by broadly describing the force needed to produce a certain degree of flexure in the rod, and may be classified as ultra-light (UL), light (L), medium-light (ML), medium (M), medium-heavy (MH), heavy (H), extra-heavy (XH), or other similar combinations. It is often an indicator of what styles of fishing, species of target fish, or size of fish a particular rod may be best suited for. The heavier the power of a rod, the more weight it can lift easily without snapping. However, stiffer rods are also less sensitive, as light forces (such as vibrations from fish touching the hook) do not transmit well through a stiffer rod, though bites from larger fish on heavy lures tend not to be hard to detect. Ultra-light rods are suitable for catching baitfish and small panfish, for situations where rod responsiveness is critical, or for casting very light tackle. Heavy and extra-heavy rods are used in deep-sea fishing, surf fishing, or big-game boat fishing.
While manufacturers use various designations for a rod's power value, there is no consensus or industry standard, hence the application of a particular weight tag by a manufacturer is somewhat subjective. Any fish can theoretically be caught with any rod, but catching panfish on a heavy rod offers no sport whatsoever, and successfully landing a large fish on an ultralight rod requires supreme rod-handling skills and more frequently ends in broken tackle and a lost fish. It is generally advised to "pick the right tool for the job" and choose rod weights that are best suited to the intended type of fishing.
Action
The action of a fishing rod refers to the speed with which it elastically returns to the neutral (straightened) position after a flexional load is removed (i.e. the "recoil" or "rebound" speed), and is generally described as being "slow", "medium", "fast", or anything in between (e.g. "medium-fast") or beyond (e.g. "extra-fast"). Contrary to how it is often colloquially described, action does not refer to the bending characteristics (shape of the "curve") of the rod — a fast-action rod can as easily have a more evenly progressive bending curve (from tip to butt) as opposed to a more tip-bending curve, although tip-bending rods do inherently tend to have faster action. The action can also be influenced by the length of a rod, the tapering profile, and the blank materials used. Typically a rod that uses a fiberglass composite blank has slower action than one that uses a carbon fiber composite blank.
Action, however, is also often a subjective description by a manufacturer. Very often, action is misused to denote the bending curve instead of the speed. Some manufacturers list the power value of the rod as its action. A "medium" action bamboo rod may have a faster action than a "fast" fibreglass rod. Action is also used subjectively by anglers, as an angler might describe a given rod as "faster" or "slower" than a different rod.
A rod's action and power may change when the load is greater or lesser than the rod's specified casting weight. When the load used greatly exceeds a rod's specifications, the rod may break during casting if the line does not break first. When the load is significantly less than the rod's recommended range, the casting distance is significantly reduced, as the rod's action cannot launch the load; it acts like a stiff pole. In fly rods, exceeding the weight rating may warp the blank or cause casting difficulties when the rod is improperly loaded.
Rods with a fast action combined with a full progressive bending curve allow the fisherman to make longer casts, provided that the cast weight and line diameter are correct. When a cast weight slightly exceeds the specifications, the rod becomes slower, slightly reducing the distance. When a cast weight is slightly less than the specified casting weight, the distance is slightly reduced as well, as the rod's action is only partially used.
Rod sensitivity
A rod's blank determines the amount of sensitivity an angler feels. Fishing rods made of graphite are the most sensitive, because they transfer vibrations better than rods made of fiberglass. The blank is not the only factor in sensitivity, however; the rod's design also affects how well an angler feels a fish's bite or the bottom of the lake, stream, reservoir, creek, or river. The more sensitive a rod is, the more likely the angler is to feel the bite and achieve a good hookset to land the fish.
Bending curvature and tapering
A fishing rod's main function is to bend and deliver a certain resistance or power. While casting, the rod acts as a catapult: as the rod shaft moves forward, the inertia of the terminal tackle and of the distal portion of the rod itself loads (bends) the rod tip backwards, and the subsequent forward elastic rebound slings out the lure or bait. When a bite is registered and the fisherman jerks the rod, the bending of the rod realigns the pull along the line and helps set the hook properly. When fighting a fish, the elasticity of the rod not only enables the fisherman to keep the line under tension, but also dampens the shock of the struggling fish and avoids line snapping, which helps to exhaust the fish and enables the fisherman to reel it in. The bending of the rod also lessens the torque the fisherman has to overcome when fighting the fish, by shortening the practical lever arm through which the line pulls on the rod; a stiffer rod keeps more distance between the tip and the handle even when bent, which actually means less effective force is transmitted because the mechanical advantage favors the fish. In comparison, a deeper-bending rod demands less power from the fisherman but delivers more fighting power to the fish. In practice, this leverage effect often misleads fishermen: it is commonly believed that a hard, stiff rod puts more control and power on the fish, while it is actually the fish that is putting the power on the fisherman. In commercial fishing practice, large fish are often pulled in on the line itself without much effort, which is possible because of the absence of the leverage effect.
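The leverage argument above can be made concrete with a simple moment calculation: the torque the angler must resist is roughly the line tension multiplied by the effective lever arm between the handle and the bent rod tip, so a rod that bends more deeply presents a shorter lever arm and demands less holding effort. The sketch below uses arbitrary illustrative numbers (tension and lever-arm lengths are assumptions, not values from the text).

```python
def angler_torque(line_tension_n: float, effective_lever_arm_m: float) -> float:
    """Torque (N*m) the angler must resist: line tension times the tip-to-handle lever arm."""
    return line_tension_n * effective_lever_arm_m

# Illustrative only: the same 50 N of line tension on a stiff rod (long effective arm)
# versus a deeply bent rod (shorter effective arm).
tension = 50.0
for label, arm in (("stiff rod", 2.2), ("deeply bent rod", 1.2)):
    print(f"{label}: about {angler_torque(tension, arm):.0f} N*m to hold")
```

Under these assumed numbers the stiffer rod nearly doubles the torque the angler must counter for the same pull, which matches the point that the "power" of a stiff rod is largely applied to the fisherman rather than to the fish.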
A rod can bend in different curves. Traditionally the bending curve is mainly determined by the rod's tapering. In simplified terms, a fast taper bends much more in the tip area and not much in the butt, while a slow taper tends to bend too much at the butt, delivering a weak rod. A progressive taper loads smoothly from tip to butt, adding power the deeper the rod is bent. In practice, the tapers of quality rods are often curved or stepped to achieve the right action and bending curve for the type of fishing the rod is built for. In today's practice, different fibres with different properties can be used in a single rod, so there is no longer a straightforward relationship between the actual tapering and the bending curve.
The bending curve is not easily described in simple terms. However, some rod and blank manufacturers try to simplify things for their customers by describing the bending curve in terms of action. The term fast action is then used for rods where only the tip bends, and slow action for rods bending from tip to butt. In practice, this is misleading, as top-quality rods are very often fast-action rods that bend from tip to butt, while the so-called "fast-action" rods described this way are stiff rods (with little real action) that end in a soft or slow tip section. A progressive-bending, fast-action rod is more difficult and more expensive to construct. Common terms to describe the bending curve, or the properties that influence it, are: progressive taper/loading/curve/bending, fast taper, heavy progressive (denoting a bending curve close to progressive but tending towards fast taper), tip action (also referred to as "umbrella" action), and broom action (referring to the previously mentioned stiff "fast-action" rods with a soft tip). A parabolic action is often used to denote a progressive bending curve; in fact, this term comes from a series of split-cane fly rods built by Pezon & Michel in France from the late 1930s, which had a progressive bending curve. Sometimes the term parabolic is used more specifically to denote the particular type of progressive bending curve found in that Parabolic series.
A common way today to describe a rod's bending properties is the Common Cents System, which is "a system of objective and relative measurement for quantifying rod power, action and even this elusive thing ... fishermen like to call feel."
The bending curve determines the way a rod builds up and releases its power. This influences not only the casting and the fish-fighting properties, but also the sensitivity to strikes when fishing lures, the ability to set a hook (which is also related to the mass of the rod), the control over the lure or bait, the way the rod should be handled and how the power is distributed over the rod. On a full progressive rod, the power is distributed most evenly over the whole rod.
Line weight
The line weight of a fishing rod describes the optimal tension along the fishing line that the rod is designed to handle, usually expressed in pounds or kilograms. A fishing line's "breaking weight" describes the maximum tensile force that can be exerted before the line breaks apart, while the line weight of a rod describes the bending force that the rod can support. Fly rod weights are typically expressed as a number from 1 to 12 written as "N"wt (e.g. 6wt), and each weight represents a standard weight in grains for the first of the fly line, established by the American Fishing Tackle Manufacturers Association. For example, the first of a 6wt fly line should weigh between , with the optimal weight being . In casting and spinning rods, designations such as "8-15 lb line weight" are typical.
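Because fly line weights are specified in grains, a unit unfamiliar to many readers, the small sketch below converts grains to grams (1 grain is exactly 64.79891 mg). The 160-grain head weight used in the example is only an illustrative value chosen for the demonstration, not a figure taken from the standard quoted above.

```python
MG_PER_GRAIN = 64.79891  # exact definition of the grain in milligrams

def grains_to_grams(grains: float) -> float:
    """Convert a weight in grains to grams."""
    return grains * MG_PER_GRAIN / 1000.0

# Illustrative value only: a fly line head weighing 160 grains.
print(f"160 grains = {grains_to_grams(160):.1f} g")  # about 10.4 g
```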
Lure weight
The lure weight of a fishing rod describes the optimal weight range of terminal tackle (mainly the bait and hook/lure, and any attached float, sinker, swivel and/or heavy leader), usually expressed in ounces or grams, that the rod is designed to handle in order to achieve good casting outcome. Casting lures heavier than the designated weights might result in the rod tip breaking, while lures that are too light might have trouble with casting distance and accuracy.
Number of sections
Rods that are one piece from butt to tip are considered to have the most natural "feel" due to the theoretically uninterrupted transfer of vibrations to the angler's hand, and are preferred by many. However, the difficulty of transporting a one-piece rod safely becomes an increasing problem with increasing rod length. Two-piece rods, joined by a ferrule, are very common, and if well engineered (especially with tubular glass or carbon fibre rods) sacrifice very little in the way of natural feel. Some fishermen do feel a difference in sensitivity with two-piece rods, but most do not.
Some rods are joined through a metal "bus". This adds mass to the rod, which helps in setting the hook and helps activate the rod from tip to butt when casting, resulting in a better casting experience; some anglers consider this kind of fitting superior to a one-piece rod. Such fittings are found on specialized hand-built rods. Apart from adding the correct mass for the kind of rod, this fitting is also the strongest known, but it is likewise the most expensive, and for that reason it is almost never found on commercial fishing rods.
Types
Fishing rods can be constructed out of a vast number of materials. Generally they are made with either fiberglass, graphite, or a new generation composite, also known as carbon fibre. Many times carbon fibre and graphite are used together in the rod making process.
Carbon fibre rods
A carbon fibre rod is not necessarily better than a glass fibre rod; the two fibres have different properties, with their own tradeoffs. Carbon fibre is less flexible (stiffer) than glass fibre and more brittle and prone to breakage when misused, but it allows for longer and faster rods. Carbon fibre also allows for a smaller-diameter rod that is more sensitive than a glass fibre rod, and a carbon fibre rod is much lighter than a glass fibre rod, allowing for longer days of fishing. Each has its purpose in the fishing industry, and both improve an angler's chances of success when the blanks are used for the right purposes.
Fly rods
Fly rods are thin, flexible fishing rods designed to cast an artificial fly, usually consisting of a hook tied with fur, feathers, foam, or other lightweight material; more modern flies are also tied with synthetic materials. Originally made of yew, greenheart, and later split bamboo (Tonkin cane), most modern fly rods are constructed from man-made composite materials, including fibreglass, carbon/graphite, or graphite/boron composites. Split bamboo rods are generally considered the most beautiful and the most "classic"; they are also generally the most fragile, and they require a great deal of care to last well. Instead of a weighted lure, a fly rod uses the weight of the fly line for casting, and lightweight rods are capable of casting the very smallest and lightest flies. Typically, a monofilament segment called a "leader" is tied to the fly line on one end and the fly on the other.
Each rod is sized to the fish being sought, the wind and water conditions and also to a particular weight of line: larger and heavier line sizes will cast heavier, larger flies. Fly rods come in a wide variety of line sizes, from size #000 to #0 rods for the smallest freshwater trout and pan fish up to and including #16 rods for large saltwater game fish. Fly rods tend to have a single, large-diameter line guide (called a stripping guide), with a number of smaller looped guides (aka snake guides) spaced along the rod to help control the movement of the relatively thick fly line. To prevent interference with casting movements, most fly rods usually have little or no butt section (handle) extending below the fishing reel. However, the Spey rod, a fly rod with an elongated rear handle, is often used for fishing either large rivers for salmon and Steelhead or saltwater surf casting, using a two-handed casting technique.
Fly rods are, in modern manufacture, almost always built out of carbon graphite. The graphite fibres are laid down in increasingly sophisticated patterns to keep the rod from flattening when stressed (usually referred to as hoop strength). The rod tapers from one end to the other, and the degree of taper determines how much of the rod flexes when stressed. The larger the portion of the rod that flexes, the "slower" the rod. Slower rods are easier to cast and create lighter presentations, but produce a wider loop on the forward cast that reduces casting distance and is more subject to the effects of wind. Furthermore, the process of wrapping graphite fibre sheets to build a rod creates imperfections that result in rod twist during casting. Rod twist is minimized by orienting the rod guides along the side of the rod with the most "give"; this is done by flexing the rod and feeling for the point of most give, or by using computerized rod testing.
Custom rod building is an active hobby among fly fishermen. See Fly rod building.
Tenkara rods
Tenkara rods are a type of fly rod, traditionally made of bamboo, used for tenkara fishing in Japan. A mixture of the rods in the other categories, they are carbon rods, fly rods and telescopic rods all in one: ultra-light and very portable telescopic rods (see Telescopic rods below). Their extended length normally ranges from , and they have a very soft action. The action of tenkara rods has been standardized as a ratio of how many parts are stiffer to how many tip parts bend more easily. The standard actions are 5:5, 6:4, 7:3, and 8:2, with 5:5 being a softer/slower rod and 8:2 being a stiffer rod. Like western fly rods, tenkara rods have cork, and sometimes even wooden, handles, with wooden handles (such as red pine and phoenix-tree wood) being the more prized due to their increased sensitivity to fish bites and the heavier feel that helps balance the rods. Tenkara rods have no guides. Tenkara is a fixed-line fishing method in which no reel is used; the line is tied directly to the tip of the rod. Like the carbon rods mentioned above, this allows for "very precise positioning of the fly which in turn enables huge catches of fish with accurate feeding". One of the most common flies used in tenkara fishing is the Sakasa Kebari. Tenkara fishing is very popular in Japan, where these rods can be found in every major tackle shop; in the US, tenkara is beginning to grow in popularity.
Spin casting rods
Spin casting rods are rods designed to hold a spin casting reel, which are normally mounted above the handle. Spin casting rods also have small eyes and, frequently, a forefinger grip trigger. They are very similar to bait casting rods, to the point where either type of reel may be used on a particular rod. While rods were at one time offered as specific "spin casting" or "bait casting" rods, this has become uncommon, as the rod design is suited to either fishing style. Today they are simply called "casting rods", and are usually offered with no distinction as to which style they are best suited for in use.
Baitcasting rods
While easy-to-use spin casting rods are often used by novice anglers, baitcasting rods and reels are generally more difficult to use. Professional anglers, however, often prefer baitcasting rod and reel combinations because baitcasting reels grant more accuracy in casting. Casting rods are typically viewed as somewhat more powerful than their spinning rod counterparts: they can use heavier line and can handle heavier cover.
Spinning rods
Spinning rods are made from graphite or fiberglass with a cork or PVC foam handle, and tend to be between in length. Typically, spinning rods have anywhere from 5–8 guides arranged along the underside of the rod to help control the line. The eyes decrease in size from the handle to the tip, with the one nearest the handle usually much larger than the rest to allow less friction as the coiled line comes off the reel, and to gather the very large loops of line that come off the spinning reel's spool. Unlike bait casting and spin casting reels, the spinning reel hangs beneath the rod rather than sitting on top, and is held in place with a sliding or locking reel seat. The fisherman's second and third fingers straddle the "leg" of the reel where it is attached to the reel seat on the rod, and the weight of the reel hangs beneath the rod, which makes for a more comfortable way to fish for extended periods. This also allows the rod to be held in the fisherman's dominant hand (the handle on most modern spinning reels is reversible) which greatly increases control and nuance applied to the rod itself. Spinning rods and reels are widely used in fishing for popular North American sport fish including bass, trout, pike and walleye. Popular targets for spinning in the UK and European continent are pike, perch, eel and zander (walleye). Longer spinning rods with elongated grip handles for two-handed casting are frequently employed for saltwater or steelhead and salmon fishing. Spinning rods are also widely used for trolling and still fishing with live bait.
Ultra-light rods
These rods are used to fish for smaller species, to provide more sport with larger fish, or to enable fishing with lighter line and smaller lures. Though the term is commonly used to refer to spinning or spin-cast rods and tackle, fly rods in smaller line weights (size #0–#3) have also long been utilized for ultra-light fishing, as well as to protect the thin-diameter, lightweight end section of leader, or tippet, used in this type of angling.
Ultra-light spinning and casting rods are generally shorter ( is common), lighter, and more limber than normal rods. Tip actions vary from slow to fast, depending upon intended use. These rods usually carry test fishing line. Some ultra-light rods are capable of casting lures as light as – typically small spinners, wet flies, crappie jigs, tubes, or bait such as trout worms. Originally produced to bring more excitement to the sport, ultra-light spin fishing is now widely used for crappie, trout, bass, bluegill, roach, perch, bream, pumpkinseed, tench and other types of panfish.
Ice rods
Modern ice rods are typically very short spinning rods, varying between in length. Classic ice rods – still widely used – are simply stiff rod-like pieces of wood, usually with a carved wooden handle, a couple of line guides, and two opposing hooks mounted ahead of the handle to hand-wind the line around. Ice rods are used to fish through holes in the cover ice of frozen lakes and ponds.
Sea rods
Sea rods are designed for use with fish from the ocean. They are long (around on average), extremely thick, and feature huge and heavy tips, eyes, and handles. The largest sea rods are for use with sport fishing boats. Some of these are specialized rods, including shark rods and marlin rods, for use with very heavy equipment.
Surf rods
The most common type of sea rod is the surf casting rod. Surf casting rods resemble oversized spinning or bait casting rods with long grip handles intended for two-handed casting techniques. Generally between in length, surf casting rods need to be long enough for the user to cast the lure or bait beyond the breaking surf, where fish tend to congregate, and sturdy enough to cast the heavily weighted lures or bait needed to hold the bottom in rough water. They are almost always used in shore fishing (sea fishing from the shoreline) from the beach, rocks or other shore features. Some surfcasters use powerful rods to cast up to or more of lead weight, artificial lures, and/or bait over .
Trolling rods
Trolling is a fishing method of casting the lure or bait to the side of, or behind, a moving boat, and letting the motion of the boat pull the bait through the water. In theory, for light and medium freshwater gamefishing, any casting or spinning rod (with the possible exception of ultralight rods) can be used for trolling. In the last 30 years, most manufacturers have developed a complete line of generally long, heavily built rods sold as "Trolling Rods", and aimed generally at ocean anglers and Great Lakes salmon and steelhead fishermen. A rod effective for trolling should have relatively fast action, as a very "whippy" slow action rod is extremely frustrating to troll with, and a fast action (fairly stiff) rod is generally much easier to work with when fishing by this method. Perhaps the extreme in this philosophy was reached during the 1940s and early 1950s, when the now-defunct True Temper corporation – a maker of garden tools – marketed a line of trolling rods of length made of tempered steel which were square in cross section. They acted as excellent trolling rods, though the action was much too stiff for sportsmanlike playing of fish once hooked. As Great Lakes sportfishing in particular becomes more popular with each passing year, all rod manufacturers continue to expand their lines of dedicated "trolling" rods, though as noted, for most inland lake and stream fishing, a good casting or spinning rod is perfectly adequate for trolling.
Telescopic rods
Telescopic fishing rods are designed to collapse down to a short length and open to a long rod, making them very easy to transport to remote areas or to carry on compact cars, buses, and subways. Telescopic fishing rods are made from the same materials as conventional multi-piece rods. Graphite, carbon, and sometimes fibreglass, or composites of these materials, are designed to slip into each other so that the sections open and close. The eyes on spinning rods are generally, but not always, of a special design that helps make the end of each section stronger. The various grades of eyes available on conventional rods are also available on telescopic fishing rods. The eyeless Tenkara-style rods are also of this type and are typically made from carbon and/or graphite.
Care for telescopic fishing rods is much the same as for other rods. The only difference is that one should not open the rod in a manner that whips a closed rod into the open position rapidly. Whipping or flinging a telescopic fishing rod open may, and likely will, cause it to become difficult to close. When closing the rod, make a slight twisting motion while pushing the sections together. The rods often come with tip covers to protect the tip and guides. Additionally, extra care must be taken not to get dirt or sand in the joints; due to their design, this can easily damage this style of rod.
Telescopic rods are popular among surf fishermen. Carrying around a surf fishing rod, even in two pieces, is cumbersome. The shorter the sections, the more compactly the rod closes, the more eyes it has, and the better its power curve. More eyes mean better weight and stress distribution throughout the parabolic arc. This translates into longer casts, stronger fish-fighting ability, and less breakage of the rod.
| Technology | Hunting and fishing | null |
47338 | https://en.wikipedia.org/wiki/Fishing%20reel | Fishing reel | A fishing reel is a hand-cranked reel used in angling to wind and stow fishing line, typically mounted onto a fishing rod, but may also be used on compound bows or crossbows to retrieve tethered arrows when bowfishing.
Modern recreational fishing reels usually have fittings aiding in casting for distance and accuracy, as well as controlling the speed and tension of line retrieval to avoid line snap and hook dislodgement. Fishing reels are traditionally used in angling and competitive casting. They are typically attached near the handle of a fishing rod, though some specialized reels with pressure sensors for immediate retrieval are equipped on downrigger systems which are mounted directly to an ocean-going sport boat's gunwales or transoms and are used for "deep drop" and trolling.
The fishing reel was invented in China no later than the Song dynasty, as shown by detailed illustrations of anglers fishing with reels in Chinese paintings and records beginning about 1195 AD, although sporadic textual descriptions of line wheels used for angling had existed since the 3rd century. These early fishing reel designs were likely derived from winches/windlasses and roughly resemble modern centerpin reels.
Fishing reels first appeared in the Western world in England around 1650 AD. An early reference appears in an excerpt from author Thomas Barker's book, The Art of Angling: wherein are discovered many rare secrets, very necessary to be knowne by all that delight in that recreation.
In the 1760s, London tackle shops were advertising multiplying or gear-retrieved reels. The first popular American fishing reel appeared in the United States around 1820. During the second half of the 20th century, Japanese and Scandinavian reel makers such as Shimano, Daiwa and ABU Garcia, all previously precision engineering manufacturers of bicycle components and watchmaking equipment, rose to dominate the world market.
History
Origins in China
In literary records, the earliest evidence of the fishing reel comes from a 3rd-century AD Chinese work entitled Lives of Famous Immortals, where the term "angling lathe" (釣車) was used. Tang dynasty poet Lu Guimeng (?–881) and his friend Pi Rixiu (834–883), both avid anglers, frequently mentioned "angling lathe" and "angle-fishing wheel" (釣魚輪) in their fishing poems, with Pi even describing a gift reel he received as "an angle-handled wheel [that] is smooth and light" (角柄孤輪細膩輕). Song dynasty poets, such as Huang Tingjian (1045–1105) and Yang Wanli (1127–1206), also made reference to "angling lathe" in lyrics involving lakes and fishing boats. Northern Song scientist Shen Kuo (1031–1095) even once wrote in a travel book that "angling uses wheeled rod, rod uses purple bamboo, the wheel is not to be large, the rod shouldn't be long, but [you] can angle if the line is long" (釣用輪竿,竿用紫竹,輪不欲大,竿不宜長,但絲長則可釣耳).
The earliest known graphical depiction of a fishing reel, according to Joseph Needham, comes from a Southern Song (1127–1279) painting done in 1195 by Ma Yuan (c. 1160–1225) called "Angler on a Wintry Lake". The painting, currently in the collection of the Tokyo National Museum after the looting of the Old Summer Palace, shows a man sitting on a small sampan boat while casting out his fishing line. Another fishing reel was featured in a painting by Wu Zhen (1280–1354). The book Tianzhu lingqian (Holy Lections from Indian Sources), printed sometime between 1208 and 1224, features two different woodblock print illustrations of fishing reels being used. An Armenian parchment Gospel of the 13th century shows a reel (though not as clearly depicted as the Chinese ones). The Sancai Tuhui, a Chinese encyclopedia published in 1609, features the next known picture of a fishing reel and vividly shows the windlass pulley of the device. These five pictures are the only ones which feature fishing reels before the year 1651.
Development in England
The first English book on fishing is A Treatise of Fishing with an Angle, published in 1496 (its spelling in the manner of the date being The Treatyse of Fysshynge with an Angle). However, the book did not mention a reel. A primitive reel was first cited in the book The Art of Angling by Thomas Barker (fl. 1591–1651), first published in 1651. Fishing reels first appeared in England around the 1650s, a time of growing interest in fly fishing.
The fishing industry became commercialized in the 18th century, with rods and tackle being sold at haberdashers' shops. After the Great Fire of London in 1666, artisans moved to Redditch, which became a center of fishing-related products from the 1730s. Onesimus Ustonson established his trading shop in 1761, and his establishment remained a market leader for the next century. He received a Royal Warrant from three successive monarchs, starting with King George IV.
Some have credited Onesimus with the invention of the fishing reel – he was undoubtedly the first to advertise its sale. Early multiplying reels were wide and had a small diameter, and their gears, made of brass, often wore down after extensive use. His earliest advertisement in the form of a trading card dates from 1768 and was entitled To all lovers of angling. A full list of the tackles he sold included artificial flies and 'the best sort of multiplying brass winches both stop and plain.' The commercialization of the industry came at a time of expanded interest in fishing as a recreational hobby for members of the aristocracy.
Modern reel design began in England during the latter part of the 18th century, and the predominant model in use was known as the 'Nottingham reel'. The reel was a wide drum that spooled out freely, and was ideal for allowing the bait to drift a long way out with the current.
Tackle design began to improve from the 1880s. The introduction of new woods to the manufacture of fly rods made it possible to cast flies into the wind on silk lines, instead of horse hair. These lines allowed for a much greater casting distance. A negative consequence of this was that it became easy for the much longer line to get into a tangle. This problem spurred the invention of the regulator to evenly spool the line out and prevent tangling.
Albert Illingworth, 1st Baron Illingworth, a textiles magnate, patented the modern form of fixed-spool spinning reel in 1905. When casting Illingworth's reel design, the line was drawn off the leading edge of the spool, but was restrained and rewound by a line pickup, a device which orbits around the stationary spool. Because the line did not have to pull against a rotating spool, much lighter lures could be cast than with conventional reels.
Development in the United States
Geared multiplying reels failed to gain traction in Britain but had more success in the United States, where English models were modified by George W. Snyder (c.1780–1841), a skillful watchmaker and silversmith in Paris, Kentucky, into his own bait-casting reel named the Kentucky Reel, the first American-made design in 1810. Snyder's first reel was made for his own angling use, but afterward, he made reels for members of his club. Without patent or trademark protection, Snyder's Kentucky Reel was quickly copied by many others, including Meek, Milam, Sage, Hardman and Gayle. These artisans were trained in jewelry fabrication and were experienced in cutting gears, constructing small parts, and doing precision work. In time, the Kentucky Reel was mass-produced by the emerging factories located in the Northeast, where they could be produced at a fraction of the cost and time required for hand-built construction. The availability of more affordable fly reels greatly stimulated the sales and popularity of fly fishing equipment. Mass production was soon applied to bait casting reels as well, resulting in a surge in the popularity of fishing as a pastime among all levels of American society.
The American, Charles F. Orvis, designed and distributed a novel reel and fly design in 1874, described by reel historian Jim Brown as the "benchmark of American reel design," and the first fully modern fly reel. The founding of The Orvis Company helped institutionalize fly fishing by supplying angling equipment via the circulation of his tackle catalogs, distributed to a small but devoted customer list.
Types
Fishing reels can be classified into two design groups: rotary-spool and fixed-spool. Rotary-spool designs are essentially similar to a spinning wheel or windlass, where the spool actively rotates to wind the line around itself. Fixed-spool designs, on the other hand, behave like a spindle and have no rotating motion of the spool, instead using a separate spinning mechanism that revolves around the spool to drag and wrap the line around onto the spool.
Rotary-spool
Rotary-spool reels can be further subdivided into two types: single-action and multiplier. Single-action reels have a synchronous rotating action between the crank handle and the spool (hence the name, "single[-ratio] action"), and quite often the handle is mounted directly on the spool frame (in which case, the spool frame itself becomes the crank). Multiplier reels, on the other hand, have an internal gear train design that amplifies the number of spool turns for every turn of the crank handle, allowing much faster line retrievals. The spool on multiplier reels also spins in the opposite direction to that of single-action spools.
With larger-capacity spools (typically in multiplier reels), there is usually a slider mechanism in front of the spool – known as the line guide — that pushes the line side-to-side in an oscillating motion, which allows the line winding to be more evenly distributed across the spool instead of bunching up at one section.
Centrepin reel
The centrepin reel (or centerpin, center pin, or float reel) is a single-action reel which runs freely on its axle ("centrepin"). The centrepin reel is the earliest fishing reel design invented by humans, and is historically and currently used for coarse fishing. Instead of a mechanical drag, the angler's thumb is typically used to control the run of a fish. Fishing in the margins for carp or other heavy fish with relatively light tackle is very popular with a 'pin', which is often used for 'trotting', a method in which a float on the line suspends a bait at a certain depth to flow with the current along the waterway. During the 1950s and 1960s, many anglers in England began fishing with a centrepin reel. Despite this, the centrepin is today mostly used by coarse anglers, who remain a small proportion of the general fishing population.
A special class of centrepin reel known as the fly reel, used specifically for fly fishing, is normally operated by manually stripping the line off the reel with one hand, while casting the rod with the other hand. The main purpose of a fly reel is to help cast ultralight fly lures and provide smooth uninterrupted tension (drag) when a fish makes a long run, and counterbalance the weight of the fly rod when casting. When used in fly fishing, the fly reel or fly casting reel has traditionally been rather simple in terms of mechanical construction, and little has changed from the design patented by Charles F. Orvis of Vermont in 1874. Orvis first introduced the idea of using light metals with multiple perforated holes to construct the housing, resulting in a lighter reel that also allowed the spooled fly line to dry more quickly than a conventional, solid-sided design. Early fly reels placed the crank handle on the right side of the reel. Most had no drag mechanism, but were fitted with a click/pawl mechanism intended to keep the reel from overrunning when line was pulled from the spool. To slow a fish, the angler simply applied hand pressure to the rim of the revolving spool (known as "palming the rim"). Later, these click/pawl mechanisms were modified to provide a limited adjustable drag of sorts. Although adequate for smaller fish, these did not possess a wide adjustment range or the power to slow larger fish.
At one time, multiplier fly reels were widely available. These reels had a geared line retrieve of 2:1 or 3:1 that allowed faster retrieval of the fly line. However, their additional weight, complexity and expense did not justify the advantage of faster line retrieval in the eyes of many anglers. As a result, today they are rarely used, and have largely been replaced by large-arbor designs with large diameter spools for faster line retrieval.
Automatic fly reels use a coiled spring mechanism that pulls the line into the reel with the flick of a lever. Automatic reels tend to be heavy for their size, and have limited line capacity. Automatic fly reels peaked in popularity during the 1960s, and since that time they have been outsold many times over by manual fly reels.
Modern fly reels typically have more sophisticated disc-type drag systems made of composite materials that feature increased adjustment range, consistency, and resistance to high temperatures from drag friction. Most of these fly reels also feature large-arbor spools designed to reduce line memory, maintain consistent drag and assist the quick retrieval of slack line in the event a hooked fish makes a sudden run towards the angler. Most modern fly reels are ambidextrous, allowing the angler to place the crank handle of the reel on either the right or the left side as desired.
Saltwater fly reels are designed specifically for use in an ocean environment. Saltwater fly reels are normally large-arbor designs, having a much larger diameter spool than most freshwater fly reels. These large arbor reels provide an improved retrieve ratio and considerably more line and backing capacity, optimizing the design for the long runs of powerful ocean game fish. To prevent corrosion, saltwater fly reels often use aerospace aluminum frames and spools, electroplated and/or stainless steel components, with sealed and waterproof bearing and drive mechanisms.
Fly reel operation
Fly reels are normally manual, single-action designs. Rotating a handle on the side of the reel rotates the spool which retrieves the line, usually at a 1:1 ratio (i.e., one complete revolution of the handle equals one revolution of the spool). Fly reels are among the simplest reels and have far fewer parts than a spinning reel. The larger the fish, the more important the reel becomes. On the outside of the reel there are two knobs: the spool release and the drag adjustment.
Fly reel drag systems
Fly-reel drag systems have two purposes: first, to prevent spool overrun when stripping line from the reel while casting, and second, to tire out a running fish by exerting pressure on the line running in the opposite direction. There are four main drag systems used with fly reels: ratchet-and-pawl, caliper drags, disc drags, and center-line drags. The ratchet-and-pawl drag clicks automatically while the spool is spinning. The caliper drag causes the calipers to brush up against the reel spool. A disc drag applies pressure on plates which in turn apply pressure on the spool. Center-line drags are often regarded as the most effective kind of drag because the pressure is applied directly to the spool close to the axis of rotation.
Sidecast reel
The sidecast reel takes elements of the design of the centrepin reel, but adds a bracket that allows the reel to be rotated 90° for casting and then returned to the original position to retrieve line. In the casting position, the spool face is perpendicular to the rod and the axle is parallel, and the line is free to slide off the side of the spool as on a spinning reel. The advantage of such a design is that the reel is direct-driven, and during casting the line release is as smooth as that of a spinning reel, but it does require an extra hand movement to start reeling.
Sidecast reels are popular with anglers in Australia for all forms of freshwater and saltwater fishing. Most common is their use for surf fishing (beachcasting), or off the rocks, often with a larger diameter spool and paired with a surfcasting rod. The most famous brand of sidecast reels is Alvey Reels, a Brisbane-based fishing tackle manufacturer established in 1920.
Conventional reel
The conventional reel, also known as the trolling reel (due to its popularity in recreational boat trolling) or "drum reel" (due to its often drum-like cylindrical shape), is the most classical design of multiplier reels. It can be mounted (more often) above or below the rod handle, with the spool axis being perpendicular to the rod. In such a setup the line does not go over the end of the spool like it does with a spinning reel. Most modern conventional reels have a line guide that slides left and right when cranking to ensure a more even wrapping of the line onto the spool.
There are two types of trolling reels depending on the drag system design, namely the star drag reels and lever drag reels. Star drag reels are like most baitcasters, because they have a star-shaped drag control knob used to apply drag as well as a little lever to put them into free spool. The lever drag reel uses a drag lever to perform both functions as it can apply drag and put the reel into free spool. With either type, care must be taken to prevent backlash while they are in free spool. Keeping a thumb on the spool is one way to prevent a free spool backlash. Some smaller sizes of conventional reels can be cast, but large conventional reels are not meant for casting; the larger they are, the more difficult they become to cast. Conventional reels are intended for very large fish and are usually used offshore. As tools for deep-sea fishing, they are mostly designed for trolling but can also be used for drift fishing, butterfly jigging and "deep drop" fishing. They are usually mounted on short, often very stiff rods called "boat" rods.
Baitcasting reel
The baitcasting reel or baitcaster is a multiplying reel modified from the conventional reel, but with a lighter spool and a higher, more forwardly positioned line guide to facilitate farther and smoother casting, hence the name. The baitcasting reel is always mounted above the rod handle (of what is known as a "casting rod"), hence the other name given to it in New Zealand and Australia, the overhead reel. The line is stored on a bearing-supported, more freely revolving spool that is geared so that a single revolution of the crank handle results in multiple (usually 4× or more) revolutions of the spool. The baitcasting reel design will operate well with a wide variety of fishing lines ranging from braided multifilament, heat-fused "Superlines", copolymer, fluorocarbon and nylon monofilaments (see Fishing line). Most baitcasting reels can also easily be palmed or thumbed to increase the drag, set the hook, or to accurately halt the lure at a given point in the cast.
The baitcasting reel dates from at least the mid-17th century, but came into wide use by amateur anglers during the 1870s. Early baitcasting reels were often constructed with brass or iron gears, with casings and spools made of brass, German silver or hard rubber. Featuring multiplying gears ranging from 2:1 to 4:1, these early reels had no drag mechanism, and anglers used their thumb on the spool to provide resistance to runs by a fish. As early as the 1870s, some models used bearings to mount the spool; as the free-spinning spool tended to cause backlash with strong pulls on the line, manufacturers soon incorporated a clicking pawl mechanism. This "clicker" mechanism was never intended as a drag, but used solely to keep the spool from overrunning, much like a fly reel. Baitcasting reel users soon discovered that the clicking noise of the pawls provided valuable audible warning that a fish had taken the live bait, allowing the rod and reel to be left in a rod holder while awaiting a strike by a fish.
Most fishing reels are suspended from the bottom side of the rod, since this position doesn't require wrist strength to overcome gravity while enabling the angler to cast and retrieve without changing hands. The baitcasting reel's unusual mounting position atop the rod is an accident of history. Baitcasting reels were originally designed to be cast when positioned atop the rod, then rotated upside-down to operate the crank handle while playing a fish or retrieving line. However, in practice most anglers preferred to keep the reel atop the rod for both cast and retrieve by simply transferring the rod to the left hand for the retrieve, then reverse-winding the crank handle. Because of this preference, mounting the crank handle on the right side of a bait casting reel (with standard clockwise crank handle rotation) has become customary, though models with left-hand retrieve have gained in popularity in recent years thanks to user familiarity with the spinning reel.
Many of today's baitcasting reels are constructed using aluminium alloy, stainless steel, synthetic composites such as fiberglass-reinforced plastic or carbon fiber, alone or in combination; newer but more expensive materials such as titanium and magnesium alloys can also be found occasionally. They call for a rod that has a trigger finger hook located in the handle area. They typically include a level-wind mechanism to prevent the line from being trapped under itself on the spool during rewind and interfering with subsequent casts. Many are also fitted with anti-reverse handles and drags designed to slow runs by large and powerful game fish. Because the baitcasting reel uses the weight and momentum of the lure to pull the line from the rotating spool, it normally requires lures weighing 1/4 oz. or more to cast a significant distance. Recent developments have seen baitcasting reels with gear ratios as high as 7.1:1. Higher gear ratios allow much faster retrieval of line, but sacrifice some amount of strength in exchange, since the additional gear teeth required reduce torque as well as the strength of the gear train. This could be a factor when fighting a large and powerful fish.
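As a rough numerical illustration of the gear-ratio trade-off described above, the following minimal Python sketch (all figures are hypothetical and not taken from any particular reel) estimates line retrieved per crank turn as the gear ratio multiplied by the spool circumference; actual retrieval also varies with how full the spool is.

    import math

    def line_per_crank_turn(gear_ratio, spool_diameter_cm):
        # Approximate line retrieved (cm) for one full turn of the crank handle.
        return gear_ratio * math.pi * spool_diameter_cm

    for ratio in (2.0, 4.0, 7.1):  # ratios mentioned in the text above
        print(f"{ratio}:1 ratio -> {line_per_crank_turn(ratio, 3.2):.1f} cm per turn")
    # A hypothetical 3.2 cm spool diameter is assumed; a higher ratio retrieves
    # more line per turn but, as noted above, transmits less torque to the spool.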
Two variations of the revolving spool bait casting reel are the conventional surf fishing reel and the big game reel. These are very large and robust fishing reels, designed and built for heavy saltwater species such as tuna, marlin, sailfish and sharks. Surf fishing reels are normally mounted to long, two-handed rods; these reels frequently omit level-wind and braking mechanisms to achieve extremely long casting distances. Big game reels are not designed for casting, but are instead used for trolling or fishing set baits and lures; they are ideal for fighting large and heavy fish off a pier or boat. These reels normally use sophisticated star or lever drags to play out huge saltwater gamefish.
Baitcasting reel operation
To cast a baitcasting rod and reel, the reel is turned on its side, the "free spool" feature engaged, and the thumb placed on the spool to hold the lure in position. The cast is performed by snapping the rod backward to the 2 o'clock position, then casting it forward in a smooth motion, allowing the lure to pull the line from the reel. The thumb is used to contact the line, moderating the revolutions of the spool and braking the lure when it reaches the desired aiming point. Though modern centrifugal and/or magnetic braking systems help to control backlash, using a bait casting reel still requires practice and a certain amount of finesse on the part of the fisherman for best results.
Advantages of baitcasting reels
While spincasting and spinning reels are easier to operate because fishing line leaves the spool freely during a cast, baitcasting reels have the potential to overrun: a casting issue in which the reel's spool does not spin at a rate equal to the speed of fishing line leaving the reel. Professional fishermen, however, prefer baitcasters because baitcasting reels allow anglers more control over their casts. Since a baitcaster's spool spins along with the fishing line leaving the reel, a simple flick of the thumb can stop a cast early or slow a lure while it is still in the air. This grants anglers such as bass fishermen more accuracy in their casts. Furthermore, a baitcaster's design allows a fisherman to make casts at a faster rate, even with heavier baits.
Disadvantages of baitcasting reels
Effective use of baitcasting reels requires prior experience and a developed skill set, making them unsuitable for beginners.
There is a higher risk of backlash during the cast without proper technique.
One must know about spool tension adjustment for different spool sizes.
Unsuitable for light lures.
More costly than spinning reels.
Fixed-spool
Fixed spool reels can have either an "open" design, where the spool is exposed to the outside; or an "enclosed" design, where the spool is concealed under an enclosure with a front hole that allows passage of the line. There is typically an internal axle that imparts a slight reciprocating motion to the spool, which allows the line to be wrapped in a more evenly distributed fashion.
Spinning reel
Spinning reels, also called fixed spool reels or "egg beaters", are open-design fixed-spool reels that were in use in North America as early as the 1870s. They were originally developed to allow the use of artificial flies, or other lures for trout or salmon, that were too light in weight to be easily cast by conventional or baitcasting reels. Spinning reels are normally mounted below the rod; this positioning conforms to gravity, requiring no wrist strength to maintain the reel in position. For right-handed persons, the spinning rod is held and cast by the strong right hand, leaving the left hand free to operate the crank handle mounted on the left side of the reel. Invention of the spinning reel solved the problem of backlash, since the reel had no rotating spool capable of overrunning and tangling the line.
The name of Albert Holden Illingworth, a textiles magnate, was first associated with the modern form of fixed-spool spinning reel. When casting the Illingworth reel, line was drawn off the leading edge of the spool, but was restrained and rewound by a line pickup, a device which orbits around the stationary spool. Because the line did not have to pull against a rotating spool, much lighter lures could be cast than with conventional reels.
In 1948, the Mitchell Reel Company of Cluses, France introduced the Mitchell 300, a spinning reel with a design that oriented the face of the fixed spool forward in a permanently fixed position below the fishing rod. The Mitchell reel was soon offered in a range of sizes for all fresh and saltwater fishing. A manual line pickup was used to retrieve the cast line, which eventually developed into a wire bail design that automatically recaptured the line upon cranking the retrieve handle. An anti-reverse lever prevented the crank handle from rotating while a fish was pulling line from the spool, and this pull can be altered with adjustable drag systems which allow the spool to rotate, but not the handle. With the use of light lines testing from two to six pounds, modern postwar spinning reels were capable of casting extremely light lures.
With all fixed-spool reels, the line is released in coils or loops from the leading edge of the non-rotating spool. To shorten or stop the outward cast of a lure or bait, the angler uses a finger or thumb placed in contact with the line and/or the leading edge of the spool to retard or stop the flight of the lure. Because of the design's tendency to twist and untwist the line as it is cast and retrieved, most spinning reels operate best with fairly limp and flexible fishing lines.
Though spinning reels do not suffer from backlash, line can occasionally be trapped underneath itself on the spool or even detach from the reel in loose loops of line. Some of these issues can be traced to overfilling the spool with line, while others are due to the way in which the line is wound onto the spool by the rotating bail or pickup. Various oscillating spool mechanisms have been introduced over the years in an effort to solve this problem. Spinning reels also tend to have more issues with twisting of the fishing line. Line twist in spinning reels can occur from the spin of an attached lure, the action of the wire bail against the line when engaged by the crank handle, or even retrieval of line that is under load (spinning reel users normally pump the rod up and down, then retrieve the slack line to avoid line twist and stress on internal components). To minimize line twist, many anglers who use a spinning reel manually reposition the bail after each cast so that the pickup is nearest the rod.
Fixed spool reel operation
Fixed spool reels are cast by grasping the line with the forefinger against the rod handle, opening the bail arm and then using a backward swing of the rod followed by a forward cast while releasing the line with the forefinger. The point of release should be found by trial to achieve the optimum casting angle. The forefinger is then placed in contact with the departing line and the leading edge of the spool to slow or stop the outward cast. On the retrieve, one hand operates the crank handle, while the large rotating wire cage or bail (either manually or trigger-operated) serves as the line pickup, restoring the line to its original position on the spool.
Fixed spool advantages
Spinning reels were originally developed to better cast light-weight lures and baits. Today, spinning reels continue to be an excellent alternative to baitcasters, reels which have difficulty casting lighter lures. Furthermore, because spinning reels do not suffer from backlash, spinning reels are easier and more convenient to use for some fishermen.
Spincast reel
Spincast reels are fixed-spool reels with the spool and line pickup mechanisms enclosed within a cylindrical or cylindroconoidal cover, which has a hole at the front to transmit the line. The first commercial spincast reels were introduced by the Denison-Johnson Reel Company and the Zero Hour Bomb Company (ZEBCO) in 1949. Spincast reels avoid the problem of backlash found in baitcast designs, while reducing line twist and snare complaints sometimes encountered with traditional spinning reel designs. Just as with the spinning reel, the line is thrown from a fixed spool and can therefore be used with relatively light lures and baits. However, the spincast reel eliminates the large wire bail and line roller of the spinning reel in favor of one or two simple pickup pins and a metal cup to wind the line on the spool. Traditionally mounted above the rod, the spincast reel is also fitted with an external nose cone that encloses and protects the fixed spool. Spincast reels may also be described as closed face reels.
With a fixed spool, spincast reels can cast lighter lures than bait cast reels, although friction of the nose cone guide and spool cup against the uncoiling line reduces casting distance compared to spinning reels. Spincast reel design requires the use of narrow spools with less line capacity than either baitcasting or spinning reels of equivalent size, and cannot be made significantly larger in diameter without making the reel too tall and unwieldy. These limitations severely restrict the use of spin cast reels in situations such as fishing at depth, when casting long distances, or where fish can be expected to make long runs. Like other types of reels, spin cast reels are frequently fitted with both anti-reverse mechanisms and friction drags, and some also have level-wind (oscillating spool) mechanisms. Most spin cast reels operate best with limp monofilament lines, though at least one spin cast reel manufacturer installs a thermally fused "superline" into one of its models as standard equipment. During the 1950s and into the mid-1960s, they were widely used and very popular, though the spinning reel has since eclipsed them in popularity in North America. They remain a favorite fishing tool for catfish fishing and also for young beginners in general.
Spincast reel operation
Pressing a button on the rear of the reel disengages the line pickup, and the button is then released during the forward cast to allow the line to fly off the spool. The button is pressed again to stop the lure at the position desired. Upon cranking the handle, the pickup pin immediately re-engages the line and spools it onto the reel.
Underspin reel
Underspin reels or triggerspin reels are variants of spincast reels that are designed for mounting underneath a standard spinning rod. The reel foot is located on top of the reel (like a spinning reel), and the line release button is replaced by a front lever. With the reel's weight suspended beneath the rod, underspin reels are generally more comfortable to cast and hold for long periods, and the ability to use all standard spinning rods greatly increases their versatility compared to traditional spin cast reels.
Underspin reel operation
When the line release lever/trigger is lifted up by the forefinger (usually the index finger of the rod-holding hand), the line catch inside the reel disengages and retracts, and the line is free to slide off the fixed spool. In some modern designs (e.g. the Pflueger "President" reel), keeping the lever fully pulled up will however protrude the whole spool forward and pinch the line against the enclosure interior, thus halting the line release. During line retrieval, the mechanism inside the reel will engage the line catch again, which protrudes out to "grab" the line and wrap it around the spool. When necessary, the lever can be activated once again to stop the lure at a given point in the retrieval.
Mechanisms
Reel mechanisms
Direct-drive reel
Direct-drive reels have the spool and handle directly coupled. When the angler reels in, the handle drives the spool; when line is going out, as when a fish is taking the bait, the spool drives the handle, which can be seen turning in reverse as the line unwinds. With a fast-running fish, this may have consequences for the angler's knuckles. Traditional fly reels are direct-drive.
Anti-reverse reel
In anti-reverse reels, a mechanism allows line to pay out while the handle remains stationary. Depending on the drag setting, line may also pay out, as with a running fish, while the angler reels in. Baitcasting reels and many modern saltwater fly reels are examples of this design.
The mechanism works with either a 'dog' or 'pawl' design that engages a cog wheel attached to the handle shaft. The latest design is Instant Anti-Reverse, or IAR. This system incorporates a one-way clutch bearing or centrifugal-force flywheel on the handle shaft to restrict handle movement to forward motion only.
Drag mechanisms
Drag systems are a mechanical means of applying variable pressure to the line spool or drive mechanism to act as a friction brake against outgoing spool rotation. Under normal load, the friction holds the spool and the gears in synchrony, allowing the user to reel in the line; if the tension along the fishing line exceeds the drag setting, the braking friction is overcome and the spool will reverse-rotate with resistance until the line tension drops back below the drag setting. Some designs also have an internal spring clicker that generates warning noises to remind the user whenever the line tension exceeds the drag setting. Such a mechanism serves to cap the maximum line tension and prevents the line from overloading and breaking when landing a strong or vigorously fighting fish. In combination with rod flexing and adequate angling techniques, the angler can catch fish much larger than the on-paper breaking strength of the line by "walking" and gradually tiring out the fish.
The mechanics of drag systems usually consist of a number of frictional discs (drag washers) arranged in a coaxial stack on the spool shaft, or in some cases, on the drive shaft. There is generally a screw or lever mechanism that presses perpendicularly against the washers, which creates friction as each washer slides against adjacent ones – the higher the pressure, the greater the resistance. Drag washers are commonly made of materials such as felt, Teflon, carbon fiber or other reinforced plastics, and usually have metallic (usually steel) washers stacked intermittently to help distribute shear stress more evenly. Since large fish can generate a lot of pulling power, reels with higher available drag forces for higher-test lines will generate greater heat, and therefore use stronger and more heat-resistant materials, often coated with specialty oil or grease to prevent burning and unwanted locking between adjacent washers. A good drag system is one that is durable and generates precise, consistent and smooth (with no jerkiness) resistance.
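The friction-stack idea above can be sketched numerically with a simple Coulomb-friction model, in which each washer interface contributes a torque proportional to the axial pressure. The coefficients and dimensions below are purely illustrative assumptions and do not describe any actual reel.

    def drag_line_tension(n_interfaces, mu, axial_force_n, washer_radius_m, spool_radius_m):
        # Friction torque of the washer stack (N*m), converted to the equivalent
        # pull (N) felt at the line on the spool surface.
        torque = n_interfaces * mu * axial_force_n * washer_radius_m
        return torque / spool_radius_m

    # Hypothetical values: 4 friction interfaces, carbon-fiber washers (mu ~ 0.25),
    # 40 N of axial pressure from the drag knob, 12 mm effective washer radius,
    # 30 mm spool radius.
    print(f"{drag_line_tension(4, 0.25, 40.0, 0.012, 0.030):.1f} N of resistance at the line")

In this simplified model, tightening the drag knob raises the axial force, and hence the drag, roughly in proportion.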
Spinning reels
Spinning reels have two types of drag design: front or rear. All spinning reels come with front drag, but rear drag, also called "bait runner" or "baitfeeder", is an additional feature.
Front drags are basically a screw knob mounted to the front end of the spool, which exerts direct graduated axial pressure on the drag washers on the main pinion. To adjust these, the user needs to reach around the front to turn and tighten/loosen the spool. Front drags are mechanically simpler and usually more consistent in performance and capable of higher drag forces.
Rear drags, on the other hand, have an adjustment screw on the back of the reel along with a separate lever to activate its use. The rear drag automatically disengages whenever the fisherman turns the spool-crank, at which point the front drag takes over and incorporates its setting into the fight. Manufacturers seldom offer more than about ten pounds of drag from the rear, and rear drags are said to be more complicated mechanically and usually not as precise or smooth as front drags, since the drag itself is often part of the drive shaft and not the spool. They are intended for the first moments of the encounter, when the fish has the bait in its mouth and is running with it before the hook is set. The rear drag stops working when the fisherman turns the spool-crank to engage the fish on the run and set the hook.
Casting reels
Conventional overhead, trolling or baitcasting type reels usually use one of two types of drags: star or lever. The most common and simplest mechanically is the so-called 'star drag' because the adjustor wheel looks like a star with rounded points. Star drags work by screw action to increase or decrease the pressure on the washer stack which is usually located on the main driving gear. Reels with star drags generally have a separate lever which allows the reel to go into "freespool" by disengaging the spool from the drive train completely and allowing it to spin freely with little resistance. The freespool position is used for casting, letting line out and/or allowing live bait to move freely.
Lever drags work through cam action to increase or decrease pressure on the drag washers located on the spool itself. Most lever drags offer preset drag positions for strike (reduced drag to avoid tearing the hook out of the fish), full (used once the hook is set) and freespool (see above). Lever drags are simpler and faster to adjust during the fight. And, since they use the spool for the washer stack rather than the drive gear, the washers can be larger, offering more resistance and smoother action. The disadvantage is that in freespool, there can be residual and unwanted resistance since the drag mechanism may not be completely out of the picture without resorting to more complex mechanics.
Setting the drag
Proper drag setting depends on fishing conditions, line test (break strength) and the size and type of fish being targeted. Often it is a matter of "feel" and knowing the setup to get the drag right.
With spinning reels, closed-face reels and conventional reels with star drags, a good starting point is to set the drag to about one-third to one-half the breaking strength of the line – that is, a drag setting that requires roughly one-third to one-half of the line's rated force to move the spool. This is only a rule of thumb. For lever drag reels with a strike position, most anglers start by setting the drag at the strike position to one-third the break strength of the line. This usually allows the full position to still be safely under the line rating while providing flexibility during the fight. Depending on the conditions, some anglers may leave their reels in freespool, then set the anti-reverse or engage the drag on hookup.
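The one-third to one-half rule of thumb above can be expressed as a short Python sketch; the line rating used here is a hypothetical example, not a value from the text.

    def drag_setting_range(line_test_lb):
        # Suggested drag range: one-third to one-half of the line's breaking strength.
        return line_test_lb / 3.0, line_test_lb / 2.0

    low, high = drag_setting_range(12.0)  # e.g. a hypothetical 12 lb test line
    print(f"Set the drag so that about {low:.1f}-{high:.1f} lb of pull moves the spool.")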
Braking mechanisms
When casting, the terminal tackles flying through the air will decelerate due to air resistance, causing the line release out of the reel (which is mainly driven by the forward momentum of the terminal tackles) to slow down exponentially. This is particularly apparent when casting lightweight and/or poorly aerodynamic baits/lures, or when casting against the wind. If the angler is using a multiplier reel, its rotary spool often still has sufficient rotational momentum to keep itself spinning with a far more gradual deceleration. This deceleration mismatch between the line release and the spool rotation causes the lagging line to inertially "float" off the spool in loose loops before it can exit the reel. Some of these floating loops eventually get large enough to be pulled into the narrow spaces between the spool and the reel chassis – a phenomenon known as a spool overrun or a backlash, which often snares the loops into a very messy tangle (colloquially called a "bird's nest" or "birdie") that is notoriously difficult to untangle. Such backlashing is unique to multiplier reels, particularly baitcasters, and is not present with fixed-spool reels such as a spinning reel.
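The mismatch described above can be caricatured with a toy Python model, with all constants invented purely for illustration: the lure's line demand collapses quickly under air resistance while the free-spinning spool keeps most of its momentum, and overrun becomes possible as soon as the spool releases line faster than the lure takes it.

    # Toy model of spool overrun ("backlash"); constants are illustrative only.
    lure_speed = 20.0           # m/s, line demand just after the cast
    spool_surface_speed = 20.0  # m/s, line paid out by the spinning spool
    dt = 0.05                   # s, simulation step

    for step in range(1, 21):
        lure_speed *= 0.85            # sharp deceleration from air resistance
        spool_surface_speed *= 0.97   # spool keeps most of its rotational momentum
        if spool_surface_speed > lure_speed:
            print(f"overrun risk from t = {step * dt:.2f} s: "
                  f"spool {spool_surface_speed:.1f} m/s > lure {lure_speed:.1f} m/s")
            break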
To deal with backlashing, most modern baitcasting reels have a so-called "cast control" that serves to reduce the incidence of spool overrun at the cost of sacrificing casting distances. Each time a different lure weight is attached, the cast control must be adjusted to calibrate for the difference in lure momentum and deceleration. The users are also required to learn the skill of "feathering the spool" with their thumb to apply direct tactile friction on the spool surface to slow down or even stop it from spinning.
Spool tension
Spool tension is an adjustable screw knob that is coaxial to the reel spool. When tightened, the knob exerts axial pressure on the spool gear and generates a consistent frictional resistance when the spool is free-spinning.
Centrifugal braking
Centrifugal braking uses a series of spring-loaded "blocks" on the spool, which can move radially outwards under centrifugal force when the spool is spinning rapidly. These blocks each have a rubber piece that can rub against the reel chassis, creating additional friction that slows down the spool until the blocks retract back under spring tension. Some reels, such as the Shimano SVS Infinity, have designs that allow each centrifugal block to be locked and temporarily disabled.
Magnetic braking
Magnetic braking incorporates the principles of Lenz's law to create a contactless resistance to the spool spinning. The reel chassis (usually on the side opposite to the crank handle) has a circularly arranged array of magnets creating a magnetic field. When the spool rotates, its metallic body cuts through the field lines and experiences an electromagnetic (eddy-current) resistance, which varies with the spool speed but persists as long as the spool is still moving.
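A simplified numerical sketch of this speed-dependent behaviour follows; the constants are chosen only for illustration and are not measured from any reel. Unlike a constant friction brake, the retarding torque here is proportional to spool speed and therefore fades away as the spool slows.

    k_eddy = 2e-5    # braking constant in N*m per (rad/s), hypothetical
    inertia = 5e-6   # spool moment of inertia in kg*m^2, hypothetical
    omega = 400.0    # initial spool speed in rad/s
    dt = 0.01        # s, simulation step

    for _ in range(5):
        braking_torque = k_eddy * omega           # torque proportional to speed
        omega -= (braking_torque / inertia) * dt  # decelerate the spool
        print(f"spool speed: {omega:.1f} rad/s")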
Electronic braking
Electronic braking uses an electronic circuit to monitor the speed of spool rotation and apply pre-calculated resistance via an internal actuator. The best-known example is the Shimano Curado DC ("Digital Control") series, first introduced in 2003; it remained the only electronically braked fishing reel in the world for two decades until early 2023, when two similar products, the Daiwa IMZ Limitbreaker and the KastKing iReel IFC ("Intelligent Frequency Control"), were announced.
Line guide mechanisms
Line guides are unique to multiplier reels, as more evenly wound lines on the spool allow the reel to function more smoothly and prevent unwanted "overspill" of line at either end of the spool. The vast majority of line guides are a simple ring or short cylinder with a narrow internal diameter, which slides horizontally along a spiral-groove shaft. The side-to-side motion of the line guide continuously pushes and realigns the line onto different sections of the spool, thus allowing a more even distribution of winding.
While line guides are crucial to reel operation during retrieval, during casting they become more of a hindrance because the line has to go through their narrow internal channel to leave the reel. This creates additional drag from friction, especially when the line kinks against the back rim of the guide or when loose line loops whip against the line guide. This exacerbates backlashing, as the narrow line guide channel often limits how fast the line can leave the reel, and is particularly a problem when the line guide stops near the side of the reel when the reel gears are disengaged during casting. There are design modifications on line guides aimed at minimizing resistance against line release, most of which involve a conical or funnel-shaped line guide that reduces the kinking angle between the line and the guide frame, which only partially resolves the drag issue. Another less successful modification involves having an open-and-shut "gate" as a line guide, which unfortunately can catch and trap the line between the gaps of the shutter.
In 2011, the Japanese fishing tackle brand Daiwa introduced its TWS (T-Wing System) design, which has a T-shaped, inverted-trianguloid line guide with a broad top section that presents a much wider channel for line exit, while during retrieval the top bar tilts back and down to push the line into the much narrower bottom section. The TWS has become a celebrated success for Daiwa, and remained a patented trademark that was largely unchallenged for years.
Another notable 21st century line guide design, patented by the Shenzhen/New York City-based Chinese-American brand KastKing in 2023, is named the "Axis Eye" or "willowleaf guide". It has a silicon nitride-coated, rounded rectangle frame with a slightly serpentine shaped top profile, which can horizontally rotate 90° to alternate between a wide and a narrow cross-sectional width. During casting, the line guide is rotated to a transverse orientation, which presents a wide line channel, allowing the line to exit with minimal drag; during retrieval, the line guide is rotated to a longitudinal orientation, which narrows the line channel down to less than 3 mm.
Notable brands
Japan
Shimano
Daiwa
United States
Pure Fishing
ABU Garcia
Penn Reels
Pflueger
Shakespeare Fishing Tackle
Orvis
Scientific Anglers
Australia
Alvey Reels
| Technology | Hunting and fishing | null |
47339 | https://en.wikipedia.org/wiki/Fishing%20line | Fishing line | A fishing line is any flexible, high-tensile cord used in angling to tether and pull in fish, in conjunction with at least one hook. Fishing lines are usually pulled by and stored in a reel, but can also be retrieved by hand, with a fixed attachment to the end of a rod, or via a motorized trolling outrigger.
Fishing lines generally resemble a long, ultra-thin rope, with important attributes including length, thickness, material and build. Other factors relevant to certain fishing practice include breaking strength, knot strength, UV resistance, castability, limpness, stretch, memory, abrasion resistance and visibility. Traditional fishing lines are made of silk, while most modern lines are made from synthetic polymers such as nylon, polyethylene or polyvinylidene fluoride ("fluorocarbon") and may come in monofilament or braided (multifilament) forms.
Terminology
Fishing with a hook-and-line setup is called angling. Fish are caught when they are drawn by the bait or lure dressed on the hook into swallowing it whole, causing the hook (usually barbed) to pierce the soft tissues and anchor into the mouthparts, gullet or gills, resulting in the fish becoming firmly tethered to the line. Another more primitive method is to use a straight gorge, which is buried longitudinally in the bait such that it will be swallowed end first; the tension along the line then fixes it cross-wise in the fish's stomach or gullet, so the capture is assured. Once the fish is hooked, the line can be used to pull it towards the angler and eventually fetch it out of the water (known as "landing" the fish). Heavier fish can be difficult to retrieve by dragging the line alone (as they might overwhelm and snap the line) and might need to be landed with a hand net (a.k.a. landing net) or a hooked pole called a gaff.
Trolling is a technique where one or more lines, each with at least one hooked fishing lure at the end, are dragged through the water, mimicking schooling forage fish. Trolling from a moving boat is used in both big-game and commercial fishing as a method of catching large open-water species such as tuna and marlin (which are instinctively drawn to schoolers), and can also be used when angling in freshwater as a way to catch salmon, northern pike, muskellunge and walleye. The technique allows anglers to cover a large body of water in a short time without having to cast and retrieve lures constantly.
Longline fishing and trotlining are commercial fishing techniques that use many secondary lines with baited hooks hanging perpendicularly from a single main line.
Snagging is a fishing technique where a large, sharp grappling hook is used to pierce the fish externally in the body instead of inside the fish's mouth, and is therefore not the same as angling. Generally, a large open-gaped treble hook with a heavy sinker is cast into a river containing a large amount of fish (such as salmon) and is quickly jerked and reeled in, which gives the snag hook a gaff-like "clawing" motion that can spear its sharp points past the scales and skin and deep into the body. Modern technologies such as underwater cameras are sometimes used to help improve the timing of snagging. Due to the mutilating nature of this technique (where the fish are typically too deeply injured to be released alive), snagging is frequently deemed an unethical and illegal method, and some snagging practitioners have added procedures to disguise the snagging practice, such as adding baits or jerking the line using a fishing rod, to make it look like angling.
Sections
Traditionally, only a single thread of line is used to connect the hook with the rod and reel. However, most modern angling setups use at least two sections of line (typically the mainline and the leader) joined with a bend knot (such as the famously named fisherman's knot). Occasionally a swivel might be used to join the lines and reduce the bait/lure spinning due to the inherent line twisting from a fixed-spool reel.
A typical modern angling setup can include the following line sections:
Backing is the rearmost section of the fishing line and is typically used only to "pad up" the spool of the fishing reel, in order to prevent unwanted slippage between the mainline and the (usually metallic and well polished) spool surface, increase the effective radius of the spooled line and hence the retrieval speed (i.e. inches per turn), and to shorten the "jump" distance during line release in spinning reels (a minimal numeric sketch of this padding effect follows this list). The backing can also act as a line reserve in case a powerful fish manages to overpower the drag mechanism of the reel and strip out the entire length of the mainline.
Mainline is the main section of the fishing line, and the portion that primarily interacts with the rod, line guides and reel. This is the section that handles most of the tensile stress when retrieving the line.
Leader is the frontmost section of the fishing line that is attached to the hook/lure, and the portion that most likely will be in actual physical contact with the fish. Many larger, feistier target fish warrant a strong mainline, which might make it too thick to thread through the eye of the hook, thus necessitating a thinner line to "lead" into the hook (hence the name). Leader lines usually use high-specific strength material with clear colors and water-like refractive indices (thus harder for the fish to spot) such as polyvinylidene fluoride (PVDF, commonly called "fluorocarbon"), or even stainless steel/titanium wires to reduce breakage due to abrasion damage or fish biting. The leader line can also serve as a sacrificial device, as having a leader rated at a designated breaking strength less than that of the rod and mainline helps to cap the transferred stress and protect those more costly gears/tackles from overloading and breaking (similar to how a fuse protects a circuit), which will minimize loss and cost of repairs/replacements if the fish manages to overpower the angler's gear setup.
Tippet or trace is used occasionally in fly fishing, and serves as a secondary leader that threads to the much smaller and more delicate fly hooks.
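As a minimal sketch of the spool-padding effect mentioned for the backing above (the spool dimensions are hypothetical), line retrieved per spool revolution equals the circumference at the current fill level, so a padded spool retrieves noticeably more line per turn.

    import math

    def line_per_spool_turn_cm(core_radius_cm, fill_depth_cm):
        # Circumference at the current line level on the spool.
        return 2 * math.pi * (core_radius_cm + fill_depth_cm)

    print(f"bare spool:   {line_per_spool_turn_cm(1.5, 0.0):.1f} cm per turn")
    print(f"padded spool: {line_per_spool_turn_cm(1.5, 0.8):.1f} cm per turn")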
History
Early lines
Leonard Mascall, in his 1596 book A Booke of fishing with Hooke and Line, and of all other instruments thereunto belonging, which followed in many ways after Dame Juliana Berners, has an excerpt establishing silk worms in the area of England at that time:
... ...
Another excerpt explains how to compile a silk leader-line for a catgut fly-line.
Thus, both silk and horse hair were used for angling lines at that time.
As written in 1667 by Samuel Pepys, the fishing lines in his time were made from catgut. Later, silk fishing lines were used around 1724.
Modern lines
Modern fishing lines intended for spinning, spin cast, or bait casting reels are almost entirely made from artificial substances, including nylon (typically 610 or 612), polyvinylidene fluoride (PVDF, also called fluorocarbon), polyethylene, Dacron and UHMWPE (Honeywell's Spectra or Dyneema). The most common type is monofilament, made of a single strand. Fishermen often use monofilament because of its buoyant characteristics and its ability to stretch under load. The line stretch has advantages, such as damping the force when setting the hook and when fighting strong fish. Over very long distances, however, the damping may become a disadvantage. Recently, other alternatives to standard nylon monofilament lines have been introduced, made of copolymers or fluorocarbon, or a combination of the two materials. Fluorocarbon fishing line is made of the fluoropolymer PVDF and is valued for its refractive index, which is similar to that of water, making it less visible to fish. Fluorocarbon is also a denser material, and therefore is not nearly as buoyant as monofilament. Anglers often utilize fluorocarbon when they need their baits to stay closer to the bottom without the use of heavy sinkers. There are also braided fishing lines, cofilament and thermally fused lines, also known as "superlines" for their small diameter, lack of stretch, and great strength relative to standard nylon monofilament lines. Braided, thermally fused, and chemically fused varieties of "superlines" are now readily available.
Specialty lines
Fly lines consist of a tough braided or monofilament core, wrapped in a thick waterproof plastic sheath, often of polyvinyl chloride (PVC). In the case of floating fly lines, the PVC sheath is usually embedded with many "microballoons", or air bubbles, and may also be impregnated with silicone or other lubricants to give buoyancy and reduce wear. In order to fill up the reel spool and ensure an adequate reserve in case of a run by a powerful fish, fly lines are usually attached to a secondary line at the butt section, called backing. Fly line backing is usually composed of braided dacron or gelspun monofilaments. All fly lines are equipped with a leader of monofilament or fluorocarbon fishing line, usually (but not always) tapered in diameter, and referred to by the "X-size" (0X, 2X, 4X, etc.) of its final tip section, or tippet. Tippet size is usually between 0X and 8X, where 0X is the thickest diameter, and 8X is the thinnest. There are exceptions to this, and tippet sizes do exist outside of the 0X–8X parameter.
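A common rule of thumb (sometimes called the "rule of 11") relates a tippet's X-size to its approximate diameter in thousandths of an inch; the sketch below assumes that convention and is only an estimate, since actual diameters vary by manufacturer.

```python
# Approximate tippet diameter from its "X" size using the common
# rule of thumb: diameter (inches) ~= (11 - X) / 1000.
# Actual diameters vary by manufacturer; this is only an estimate.
def tippet_diameter_inches(x_size: int) -> float:
    if not 0 <= x_size <= 8:
        raise ValueError("conventional tippet sizes run from 0X to 8X")
    return (11 - x_size) / 1000.0

print(tippet_diameter_inches(0))  # 0.011 -> thickest conventional size
print(tippet_diameter_inches(8))  # 0.003 -> thinnest conventional size
```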
Tenkara lines are special lines used for the fixed-line fishing method of tenkara. Traditionally these are furled lines the same length as the tenkara rod. Although original to Japan, these lines are similar to the British tradition of furled leader. They consist of several strands being twisted together in decreasing numbers toward the tip of the line, thus creating a taper that allows the line to cast the fly. It serves the same purpose as the fly-line, to propel a fly forward. They may be tied of various materials, but most commonly are made of monofilament.
Wire lines are frequently used as leaders to prevent the fishing line from being severed by toothy fish. Usually braided from several metal strands, wire lines may be made of stainless steel, titanium, or a combination of metal alloys coated with plastic.
Stainless-steel line leaders provide:
bite protection – it is extremely hard for fish to cut the steel wire, regardless of jaw and teeth strength and sharpness,
abrasion resistance – sharp rocks and objects can damage other lines, while steel wire resists cutting and abrasion from most materials,
single-wire (single-strand) leaders are not as flexible as multi-strand steel wire, but are extremely strong and tough,
multi-strand steel wire leaders are very flexible, but are somewhat more abrasive and more damage-prone than single-strand wires.
Titanium fishing leaders are actually titanium–nickel alloys that have several very important features:
titanium leader lines are very flexible, regardless of whether they are single- or multi-strand lines/wires,
these lines are very elastic – they can stretch up to 10% without permanent damage to the line itself – perfect for hook setting,
these lines can be knotted just like nylon monofilament lines,
surface is rather hard and abrasion-resistant – great for fishing toothy fish,
titanium wire is corrosion-resistant and can last for a long time, even surpassing stainless-steel wires,
due to the strength and elasticity, titanium wires are almost entirely kink-proof.
Copper, monel and lead-core fishing lines are used as heavy trolling main lines, usually followed by a fluorocarbon line near the lure or bait, with a fishing swivel between the lines. Due to their high density, these fishing lines sink rapidly in water and require less line to achieve the desired trolling depth. On the other hand, these lines are relatively thick for their strength, especially when compared with braided fishing lines, and often require reels with larger spools.
Environmental impact
Discarded monofilament fishing line takes up to 600 years to decompose. There have been several types of biodegradable fishing lines developed to minimize the impact on the environment.
| Technology | Hunting and fishing | null |
47357 | https://en.wikipedia.org/wiki/Hake | Hake | Hake is the common name for fish in the Merlucciidae family of the northern and southern oceans and the Phycidae family of the northern oceans. Hake is a commercially important fish in the same taxonomic order, Gadiformes, as cod and haddock.
Description
Hakes are medium-to-large fish averaging from in weight, with specimens as large as . The fish can grow up to in length with a lifespan of as long as 14 years.
Hake may be found in the Atlantic Ocean and Pacific Ocean in waters from deep. The fish stay in deep water during the day and come to shallower depths during the night. An undiscerning predator, hake feed on prey found near or on the bottom of the sea. Male and female hake are very similar in appearance.
After spawning, the hake eggs float on the surface of the sea where the larvae develop. After a certain period of time, the baby hake then migrate to the bottom of the sea, preferring depths of less than .
Merlucciidae
A total of 13 hake species are known in the family Merlucciidae:
Argentine hake (Merluccius hubbsi), found off Argentina
Benguela hake (Merluccius polli), found off South Africa
Deep-water hake (Merluccius paradoxus) found in the southern Atlantic Ocean
European hake (Merluccius merluccius), found off the Atlantic coast of Europe and western North Africa, in the Mediterranean Sea, and in the Black Sea
Gayi hake or South Pacific hake (Merluccius gayi), found in the Southeast Pacific off Chile and Peru
North Pacific hake (Merluccius productus), found in the North Pacific
Offshore hake (Merluccius albidus), found off the United States
Panama hake (Merluccius angustimanus), found in the Eastern Pacific
Senegalese hake (Merluccius senegalensis), found off the Atlantic coast of western North Africa
Shallow-water hake (Merluccius capensis), found in the southern Atlantic Ocean
Silver hake (Merluccius bilinearis), found in the Northwest Atlantic Ocean
Southern hake (Merluccius australis), found off Chile and off New Zealand
Commercial use
Not all hake species are viewed as commercially important, but the deep-water and shallow-water hakes are known to grow rapidly and make up the majority of harvested species. Indicators of quality in hake products for human consumption include white flesh free of signs of browning, dryness, or grayness, and with a fresh, seawater smell. Hake is sold fresh, frozen (as fillets or steaks), smoked, or salted.
Fisheries
Deep-water hake is caught primarily by trawling, while shallow-water hake is mostly caught by inshore trawling and longlining. Hake are mostly found in the Southwest Atlantic (Argentina and Uruguay), Southeast Pacific (Chile and Peru), Southeast Atlantic (Namibia and South Africa), Southwest Pacific (New Zealand), and Mediterranean and Black Sea (Italy, Portugal, Spain, Greece and France).
Over-exploitation
Due to over-fishing, Argentine hake catches have declined drastically; about 80% of adult hake has apparently disappeared from Argentine waters. Argentine hake is not expected to disappear entirely, but the stock may become so low that commercial fishing is no longer economical. This also harms Argentine employment, since many jobs depend on the fishing industry. Meanwhile, the scarcity has driven Argentine hake prices up, reducing exports and further affecting the economy.
In Chile, seafood exports, especially Chilean hake, have decreased dramatically. Hake export has decreased by almost 19 percent. The main cause of this decline is the February 2010 Chile earthquake and tsunami. These disasters destroyed most processing plants, especially manufacturing companies that produce fish meal and frozen fillets.
European hake catches are well below historical levels because of hake depletion in the Mediterranean and Black Sea. Various factors might have caused this decline, including an excessively high total allowable catch, unsustainable fishing, ecological problems, juvenile catches, or non-registered catches.
Namibia is the only country that has increased its hake quota, from in 2009 to in 2010. Furthermore, the Namibian Ministry of Fisheries adheres to strict rules regarding the catch of hake. For example, the closed season for hake lasts approximately two months, in September and October, depending on the level of stock. This rule has been applied to ensure the regrowth of the hake population. Supplemental restrictions forbid trawling for hake in waters shallower than a specified depth, to avoid damaging the habitat of non-target species and to minimize by-catch.
Human introduction to non-native areas
Frank Forrester's Fishermens' Guide in 1885 mentions a hake that was transplanted from the coast of Ireland to Cape Cod on the coast of Massachusetts in the United States. It is uncertain which species it was, but the Fishermens' Guide stated:This is an Irish salt water fish, similar in appearance to the tom cod. In Galway bay, and other sea inlets of Ireland, the hake is exceedingly abundant, and is taken in great numbers. It is also found in England and France. Since the Irish immigration to America, the hake has followed in the wake of their masters, as it is now found in New York bay, in the waters around Boston, and off Cape Cod. Here it is called the stock fish, and the Bostonians call them poor Johns. It is a singular fact that until within a few years this fish was never seen in America. It does not grow as large here as in Europe, though here they are from ten to eighteen inches [250 to 460 mm] in length.... The general color of this fish is a reddish brown, with some golden tints—the sides being of a pink silvery luster.
| Biology and health sciences | Acanthomorpha | Animals |
47402 | https://en.wikipedia.org/wiki/Titan%20%28moon%29 | Titan (moon) | Titan is the largest moon of Saturn and the second-largest in the Solar System. It is the only moon known to have an atmosphere denser than the Earth's and is the only known object in space—other than Earth—on which there is clear evidence that stable bodies of liquid exist. Titan is one of seven gravitationally rounded moons of Saturn and the second-most distant among them. Frequently described as a planet-like moon, Titan is 50% larger in diameter than Earth's Moon and 80% more massive. It is the second-largest moon in the Solar System after Jupiter's Ganymede and is larger than Mercury; yet Titan is only 40% as massive as Mercury, because Mercury is mainly iron and rock while much of Titan is ice, which is less dense.
Discovered in 1655 by the Dutch astronomer Christiaan Huygens, Titan was the first known moon of Saturn and the sixth known planetary satellite (after Earth's moon and the four Galilean moons of Jupiter). Titan orbits Saturn at 20 Saturn radii or 1,200,000 km above Saturn's apparent surface. From Titan's surface, Saturn subtends an arc of 5.09 degrees, and if it were visible through the moon's thick atmosphere, it would appear 11.4 times larger in the sky, in diameter, than the Moon from Earth, which subtends 0.48° of arc.
Titan is primarily composed of ice and rocky material, with a rocky core surrounded by various layers of ice, including a crust of ice Ih and a subsurface layer of ammonia-rich liquid water. Much as with Venus before the Space Age, the dense opaque atmosphere prevented understanding of Titan's surface until the Cassini–Huygens mission in 2004 provided new information, including the discovery of liquid hydrocarbon lakes in Titan's polar regions and the discovery of its atmospheric super-rotation. The geologically young surface is generally smooth, with few impact craters, although mountains and several possible cryovolcanoes have been found.
The atmosphere of Titan is mainly nitrogen and methane; minor components lead to the formation of hydrocarbon clouds and heavy organonitrogen haze. Its climate—including wind and rain—creates surface features similar to those of Earth, such as dunes, rivers, lakes, seas (probably of liquid methane and ethane), and deltas, and is dominated by seasonal weather patterns as on Earth. With its liquids (both surface and subsurface) and robust nitrogen atmosphere, Titan's methane cycle closely resembles Earth's water cycle, albeit at a much lower temperature of about . Due to these factors, Titan is called the most Earth-like celestial object in the Solar System.
Discovery and naming
The Dutch astronomer Christiaan Huygens discovered Titan on March 25, 1655. Fascinated by Galileo's 1610 discovery of Jupiter's four largest moons and his advancements in telescope technology, Huygens, with the help of his elder brother Constantijn Huygens Jr., began building telescopes around 1650 and discovered the first observed moon orbiting Saturn with one of the telescopes they built.
Huygens named his discovery Saturni Luna (or Luna Saturni, Latin for "moon of Saturn"), publishing in the 1655 tract De Saturni Luna Observatio Nova (A New Observation of Saturn's Moon). After Giovanni Domenico Cassini published his discoveries of four more moons of Saturn between 1673 and 1686, astronomers began referring to these and Titan as Saturn I through V (with Titan then in fourth position). Other early epithets for Titan include "Saturn's ordinary satellite." The International Astronomical Union officially numbers Titan as "Saturn VI."
The name Titan, and the names of all seven satellites of Saturn then known, came from John Herschel (son of William Herschel, discoverer of two other Saturnian moons, Mimas and Enceladus), in his 1847 publication Results of Astronomical Observations Made during the Years 1834, 5, 6, 7, 8, at the Cape of Good Hope. Numerous small moons have been discovered around Saturn since then. Saturnian moons are named after mythological giants. The name Titan comes from the Titans, a race of immortals in Greek mythology.
Formation
The regular moons of Jupiter and Saturn likely formed via co-accretion, similar to the process believed to have formed the planets in the Solar System. As the young gas giants formed, they were surrounded by discs of material that gradually coalesced into moons. While the four Galilean moons of Jupiter exist in highly regular, planet-like orbits, Titan overwhelmingly dominates Saturn's system and has a high orbital eccentricity not immediately explained by co-accretion alone. A proposed model for the formation of Titan is that Saturn's system began with a group of moons similar to Jupiter's Galilean moons, but that they were disrupted by a series of giant impacts, which would go on to form Titan. Saturn's mid-sized moons, such as Iapetus and Rhea, were formed from the debris of these collisions. Such a violent beginning would also explain Titan's orbital eccentricity. A 2014 analysis of Titan's atmospheric nitrogen suggested that it was possibly sourced from material similar to that found in the Oort cloud and not from sources present during the co-accretion of materials around Saturn.
Orbit and rotation
Titan orbits Saturn once every 15 days and 22 hours. Like Earth's Moon and many of the satellites of the giant planets, its rotational period (its day) is identical to its orbital period; Titan is tidally locked in synchronous rotation with Saturn, and permanently shows one face to the planet. Longitudes on Titan are measured westward, starting from the meridian passing through the sub-Saturnian point (the point on the surface over which Saturn would appear directly overhead). Its orbital eccentricity is 0.0288, and the orbital plane is inclined 0.348 degrees relative to the Saturnian equator.
The small and irregularly shaped satellite Hyperion is locked in a 3:4 orbital resonance with Titan—that is, Hyperion orbits three times for every four times Titan orbits. Hyperion probably formed in a stable orbital island, whereas the massive Titan absorbed or ejected any other bodies that made close approaches.
Bulk characteristics
Titan is 5,149.46 km (3,199.73 mi) in diameter; it is 6% larger than the planet Mercury and 50% larger than Earth's Moon. Titan is the tenth-largest object known in the Solar System, including the Sun. Before the arrival of Voyager 1 in 1980, Titan was thought to be slightly larger than Ganymede, which has a diameter of 5,262 km (3,270 mi), and thus the largest moon in the Solar System. This was an overestimation caused by Titan's dense, opaque atmosphere, with a haze layer 100–200 km above its surface that increases its apparent diameter. Titan's diameter and mass (and thus its density) are similar to those of the Jovian moons Ganymede and Callisto. Based on its bulk density of 1.881 g/cm3, Titan's composition is 40–60% rock, with the rest being water ice and other materials.
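As a back-of-the-envelope consistency check, Titan's mass can be estimated from the diameter and bulk density quoted above; the sketch below uses only those two figures and is not a substitute for the measured value.

```python
import math

# Back-of-the-envelope check: estimate Titan's mass from the quoted
# diameter and bulk density (mass = density * volume of a sphere).
diameter_km = 5149.46
density_kg_m3 = 1881.0            # 1.881 g/cm^3

radius_m = diameter_km * 1000.0 / 2.0
volume_m3 = (4.0 / 3.0) * math.pi * radius_m**3
mass_kg = density_kg_m3 * volume_m3
print(f"{mass_kg:.3e} kg")        # ~1.35e23 kg, close to Titan's measured mass
```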
Titan is probably partially differentiated into distinct layers with a 3,400 km (2,100 mi) rocky center. This rocky center is believed to be surrounded by several layers composed of different crystalline forms of ice, and/or water. The exact structure depends heavily on the heat flux from within Titan itself, which is poorly constrained. The interior may still be hot enough for a liquid layer consisting of a "magma" composed of water and ammonia between the ice Ih crust and deeper ice layers made of high-pressure forms of ice. The heat flow from inside Titan may even be too high for high-pressure ices to form, with the outermost layers instead consisting primarily of liquid water underneath a surface crust. The presence of ammonia allows water to remain liquid even at a temperature as low as (for a eutectic mixture with water).
The Cassini probe discovered evidence for the layered structure in the form of natural extremely-low-frequency radio waves in Titan's atmosphere. Titan's surface is thought to be a poor reflector of extremely-low-frequency radio waves, so they may instead be reflecting off the liquid–ice boundary of a subsurface ocean. Surface features were observed by the Cassini spacecraft to systematically shift by up to 30 km (19 mi) between October 2005 and May 2007, which suggests that the crust is decoupled from the interior, and provides additional evidence for an interior liquid layer. Further supporting evidence for a liquid layer and ice shell decoupled from the solid core comes from the way the gravity field varies as Titan orbits Saturn. Comparison of the gravity field with the RADAR-based topography observations also suggests that the ice shell may be substantially rigid.
Atmosphere
Titan is the only moon in the Solar System with an atmosphere denser than Earth's, with a surface pressure of , and it is one of only two moons whose atmospheres are able to support clouds, hazes, and weather—the other being Neptune's moon Triton. The presence of a significant atmosphere was first suspected by Catalan astronomer Josep Comas i Solà, who observed distinct limb darkening on Titan in 1903. Due to the extensive, hazy atmosphere, Titan was once thought to be the largest moon in the Solar System until the Voyager missions revealed that Ganymede is slightly larger. The haze also shrouded Titan's surface from view, so direct images of its surface could not be taken until the Cassini–Huygens mission in 2004.
The primary constituents of Titan's atmosphere are nitrogen, methane, and hydrogen. The precise atmospheric composition varies depending on altitude and latitude due to methane cycling between a gas and a liquid in Titan's lower atmosphere (the methane cycle). Nitrogen is the most abundant gas, with a concentration of around 98.6% in the stratosphere that decreases to 95.1% in the troposphere. Direct observations by the Huygens probe determined that methane concentrations are highest near the surface, with a concentration of 4.92% that remains relatively constant up to 8 km (5.0 mi) above the surface. Methane concentrations then gradually decrease with increasing altitude, down to a concentration of 1.41% in the stratosphere. Methane also increases in concentration near Titan's winter pole, probably due to evaporation from the surface in high-latitude regions. Hydrogen is the third-most abundant gas, with a concentration of around 0.1%. There are trace amounts of other hydrocarbons, such as ethane, diacetylene, methylacetylene, acetylene, and propane, and other gases, such as cyanoacetylene, hydrogen cyanide, carbon dioxide, carbon monoxide, cyanogen, argon, and helium. The hydrocarbons are thought to form in Titan's upper atmosphere in reactions resulting from the breakup of methane by the Sun's ultraviolet light, producing a thick orange smog.
Energy from the Sun should have converted all traces of methane in Titan's atmosphere into more complex hydrocarbons within 50 million years—a short time compared to the age of the Solar System. This suggests that methane must be replenished by a reservoir on or within Titan itself. The ultimate origin of the methane in its atmosphere may be its interior, released via eruptions from cryovolcanoes.
On April 3, 2013, NASA reported that complex organic chemicals, collectively called tholins, likely arise on Titan, based on studies simulating the atmosphere of Titan.
On June 6, 2013, scientists at the IAA-CSIC reported the detection of polycyclic aromatic hydrocarbons in the upper atmosphere of Titan.
On September 30, 2013, propene was detected in the atmosphere of Titan by NASA's Cassini spacecraft, using its composite infrared spectrometer (CIRS). This is the first time propene has been found on any moon or planet other than Earth and is the first chemical found by the CIRS. The detection of propene fills a mysterious gap in observations that date back to NASA's Voyager 1 spacecraft's first close planetary flyby of Titan in 1980, during which it was discovered that many of the gases that make up Titan's brown haze were hydrocarbons, theoretically formed via the recombination of radicals created by the Sun's ultraviolet photolysis of methane.
Climate
Titan's surface temperature is about . At this temperature, water ice has an extremely low vapor pressure, so the little water vapor present appears limited to the stratosphere. Titan receives about 1% as much sunlight as Earth. Before sunlight reaches the surface, about 90% has been absorbed by the thick atmosphere, leaving only 0.1% of the amount of light Earth receives.
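The sunlight figures in this paragraph combine in a simple way: roughly 1% of Earth's insolation arrives at the top of Titan's atmosphere, of which only about 10% survives absorption. A minimal check, which also relates the 1% figure to an assumed round value of ~9.5 AU for Saturn's distance via the inverse-square law:

```python
# Rough check of the sunlight figures: inverse-square falloff to ~9.5 AU
# (assumed round value for Saturn's distance), then ~90% atmospheric absorption.
top_of_atmosphere_fraction = 1.0 / 9.5**2      # ~0.011, i.e. about 1% of Earth's
surface_fraction = top_of_atmosphere_fraction * (1.0 - 0.90)
print(f"{top_of_atmosphere_fraction:.3%}")     # ~1.1%
print(f"{surface_fraction:.3%}")               # ~0.11%, i.e. roughly 0.1%
```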
Atmospheric methane creates a greenhouse effect on Titan's surface, without which Titan would be much colder. Conversely, haze in Titan's atmosphere contributes to an anti-greenhouse effect by absorbing sunlight, canceling a portion of the greenhouse effect and making its surface significantly colder than its upper atmosphere.
Titan's clouds, probably composed of methane, ethane or other simple organics, are scattered and variable, punctuating the overall haze. The findings of the Huygens probe indicate that Titan's atmosphere periodically rains liquid methane and other organic compounds onto its surface.
Clouds typically cover 1% of Titan's disk, though outburst events have been observed in which the cloud cover rapidly expands to as much as 8%. One hypothesis asserts that the southern clouds are formed when heightened levels of sunlight during the southern summer generate uplift in the atmosphere, resulting in convection. This explanation is complicated by the fact that cloud formation has been observed not only after the southern summer solstice but also during mid-spring. Increased methane humidity at the south pole possibly contributes to the rapid increases in cloud size. It was summer in Titan's southern hemisphere until 2010, when Saturn's orbit, which governs Titan's motion, moved Titan's northern hemisphere into the sunlight. When the seasons switch, it is expected that ethane will begin to condense over the south pole.
Surface features
The surface of Titan has been described as "complex, fluid-processed, [and] geologically young". Titan has been around since the Solar System's formation, but its surface is much younger, between 100 million and 1 billion years old. Geological processes may have reshaped Titan's surface. Titan's atmosphere is four times as thick as Earth's, making it difficult for astronomical instruments to image its surface in the visible light spectrum. The Cassini spacecraft used infrared instruments, radar altimetry and synthetic aperture radar (SAR) imaging to map portions of Titan during its close fly-bys. The first images revealed a diverse geology, with both rough and smooth areas. There are features that may be volcanic in origin, disgorging water mixed with ammonia onto the surface. There is also evidence that Titan's ice shell may be substantially rigid, which would suggest little geologic activity.
There are also streaky features, some of them hundreds of kilometers in length, that appear to be caused by windblown particles. Examination has also shown the surface to be relatively smooth; the few features that seem to be impact craters appeared to have been partially filled in, perhaps by raining hydrocarbons or cryovolcanism. Radar altimetry suggests topographical variation is low, typically no more than 150 meters. Occasional elevation changes of 500 meters have been discovered and Titan has mountains that sometimes reach several hundred meters to more than one kilometer in height.
Titan's surface is marked by broad regions of bright and dark terrain. These include Xanadu, a large, reflective equatorial area about the size of Australia. It was first identified in infrared images from the Hubble Space Telescope in 1994, and later viewed by the Cassini spacecraft. The convoluted region is filled with hills and cut by valleys and chasms. It is criss-crossed in places by dark lineaments—sinuous topographical features resembling ridges or crevices. These may represent tectonic activity, which would indicate that Xanadu is geologically young. Alternatively, the lineaments may be liquid-formed channels, suggesting old terrain that has been cut through by stream systems. There are dark areas of similar size elsewhere on Titan, observed from the ground and by Cassini; at least one of these, Ligeia Mare, Titan's second-largest sea, is almost a pure methane sea.
Lakes and seas
Following the Voyager flybys, Titan was confirmed to have an atmosphere capable of supporting liquid hydrocarbons on its surface. However, the first tentative detection only came in 1995, when data from the Hubble Space Telescope and radar observations suggested expansive hydrocarbon lakes, seas, or oceans. The existence of liquid hydrocarbons on Titan was finally confirmed in situ by the Cassini orbiter, with the Cassini mission team announcing "definitive evidence of the presence of lakes filled with liquid methane on Saturn's moon Titan" in January 2007.
The observed lakes and seas of Titan are largely restricted to its polar regions, where colder temperatures allow the presence of permanent liquid hydrocarbons. Near Titan's north pole are Kraken Mare, the largest sea; Ligeia Mare, the second-largest sea; and Punga Mare, each filling broad depressions and cumulatively representing roughly 80% of Titan's sea and lake coverage, about 691,000 km² (267,000 sq mi) combined. The sea levels of all three maria are similar, suggesting that they may be hydraulically connected. The southern polar region, meanwhile, hosts four broad dry depressions, potentially representing dried-up seabeds. Additional smaller lakes occupy Titan's polar regions, covering a cumulative surface area of 215,000 km² (83,000 sq mi). Lakes in Titan's lower-latitude and equatorial regions have been proposed, though none have been confirmed; seasonal or transient equatorial lakes may pool following large rainstorms. Cassini RADAR data has been used to conduct bathymetry of Titan's seas and lakes. Using detected subsurface reflections, the measured maximum depth of Ligeia Mare is roughly , and that of Ontario Lacus is roughly .
Titan's lakes and seas are dominated by methane (CH4), with smaller amounts of ethane (C2H6) and dissolved nitrogen (N2). The fraction of these components varies across different bodies: observations of Ligeia Mare are consistent with 71% CH4, 12% C2H6, and 17% N2 by volume; whilst Ontario Lacus is consistent with 49% CH4, 41% C2H6, and 10% N2 by volume. As Titan is synchronously locked with Saturn, there exists a permanent tidal bulge of roughly at the sub- and anti-Saturnian points. Titan's orbital eccentricity means that tidal acceleration varies by 9%, though the long orbital period means that these tidal cycles are very gradual. A team of researchers led by Ralph D. Lorenz evaluated that the tidal range of Titan's major seas is around .
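The quoted 9% variation in tidal acceleration follows from the orbital eccentricity given earlier: tidal acceleration scales as the inverse cube of distance, so to first order the variation relative to the mean-distance value is about 3e. A minimal check:

```python
# Tidal acceleration scales as 1/d^3; for a small eccentricity e the
# variation relative to the mean-distance value is roughly 3*e.
e = 0.0288
exact_at_periapsis = (1.0 - e) ** -3 - 1.0   # ~0.092
first_order = 3.0 * e                        # ~0.086
print(f"{exact_at_periapsis:.1%}, {first_order:.1%}")  # both close to 9%
```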
Tectonics and cryovolcanism
Through Cassini RADAR mapping of Titan's surface, numerous landforms have been interpreted as candidate cryovolcanic and tectonic features by multiple authors. A 2016 analysis of mountainous ridges on Titan revealed that ridges are concentrated in Titan's equatorial regions, implying that ridges either form more frequently in or are better preserved in low-latitude regions. The ridges—primarily oriented east to west—are linear to arcuate in shape, with the authors of the analysis comparing them to terrestrial fold belts indicative of horizontal compression or convergence. They note that the global distribution of Titan's ridges could be indicative of global contraction, with a thickened ice shell causing regional uplift.
The identification of cryovolcanic features on Titan remains controversial and inconclusive, primarily due to limitations of Cassini imagery and coverage. Cassini RADAR and VIMS imagery revealed several candidate cryovolcanic features, particularly flow-like terrains in western Xanadu and steep-sided lakes in the northern hemisphere that resemble maar craters on Earth, which are created by explosive subterranean eruptions. The likeliest cryovolcanic feature is a complex of landforms that includes two mountains, Doom Mons and Erebor Mons; a large depression, Sotra Patera; and a system of flow-like features, Mohini Fluctus. Between 2005 and 2006, parts of Sotra Patera and Mohini Fluctus became significantly brighter whilst the surrounding plains remained unchanged, potentially indicative of ongoing cryovolcanic activity. Indirect lines of evidence for cryovolcanism include the presence of argon-40 (40Ar) in Titan's atmosphere. Radiogenic 40Ar is sourced from the decay of potassium-40 (40K), and has likely been produced within Titan over the course of billions of years within its rocky core. 40Ar's presence in Titan's atmosphere is thus supportive of active geology on Titan, with cryovolcanism being one possible method of bringing the isotope up from the interior.
Impact craters
Titan's surface has comparatively few impact craters, with erosion, tectonics, and cryovolcanism possibly working to erase them over time. Compared to the craters of similarly sized and structured Ganymede and Callisto, those of Titan are much shallower. Many have dark floors of sediment; geomorphological analysis of impact craters largely suggests that erosion and burial are the primary mechanisms of crater modification. Titan's craters are also not evenly distributed, as the polar regions are almost devoid of any identified craters whilst the majority are located in the equatorial dune fields. This inequality may be the result of oceans that once occupied Titan's poles, polar sediment deposition by past rainfall, or increased rates of erosion in the polar regions.
Plains and dunes
The majority of Titan's surface is covered by plains. Of the several types of plains observed, the most extensive are the Undifferentiated Plains that encompass vast, radar-dark uniform regions.
These mid-latitude plains—located largely between 20 and 60° north or south—appear younger than all major geological features except dunes and several craters. The Undifferentiated Plains were likely formed by wind-driven processes and are composed of organic-rich sediment.
Another extensive type of terrain on Titan are sand dunes, grouped together into vast dune fields or "sand seas" located within 30° north or south. Titanian dunes are typically 1–2 km (0.62–1.24 mi) wide and spaced 1–4 km (0.62–2.49 mi) apart, with some individual dunes over 100 km (62 mi) in length. Limited radar-derived height data suggests that the dunes are tall, with the dunes appearing dark in Cassini SAR imagery. Interactions between the dunes and obstacle features, such as mountains, indicate that sand is generally transported in a west-to-east direction. The sand that constructs the dunes is dominated by organic material, probably from Titan's atmosphere; possible sources of sand include river channels or the Undifferentiated Plains.
Observation and exploration
Titan is never visible to the naked eye, but can be observed through small telescopes or strong binoculars. Amateur observation is difficult because of the proximity of Titan to Saturn's brilliant globe and ring system; an occulting bar, covering part of the eyepiece and used to block the bright planet, greatly improves viewing. Titan has a maximum apparent magnitude of +8.2, and mean opposition magnitude 8.4. This compares to +4.6 for the similarly sized Ganymede, in the Jovian system.
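The magnitude figures above translate into a brightness ratio via the standard relation in which a difference of 5 magnitudes corresponds to a factor of 100 in flux; a minimal check comparing Titan (+8.2) with Ganymede (+4.6):

```python
# Convert an apparent-magnitude difference into a flux (brightness) ratio:
# ratio = 100 ** (delta_m / 5), equivalently 10 ** (0.4 * delta_m).
delta_m = 8.2 - 4.6
brightness_ratio = 100 ** (delta_m / 5.0)
print(f"Ganymede appears ~{brightness_ratio:.0f}x brighter than Titan")  # ~28x
```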
Observations of Titan prior to the space age were limited. In 1907 Catalan astronomer Josep Comas i Solà observed limb darkening of Titan, the first evidence that the body has an atmosphere. In 1944 Gerard P. Kuiper used a spectroscopic technique to detect an atmosphere of methane.
Pioneer and Voyager
The first probe to visit the Saturnian system was Pioneer 11 in 1979, which revealed that Titan was probably too cold to support life. It took images of Titan, including images of Titan and Saturn together, in mid-to-late 1979. The quality of these images was soon surpassed by those of the two Voyagers.
Titan was examined by both Voyager 1 and 2 in 1980 and 1981, respectively. Voyager 1's trajectory was designed to provide an optimized Titan flyby, during which the spacecraft was able to determine the density, composition, and temperature of the atmosphere, and obtain a precise measurement of Titan's mass. Atmospheric haze prevented direct imaging of the surface, though in 2004 intensive digital processing of images taken through Voyager 1's orange filter did reveal hints of the light and dark features now known as Xanadu and Shangri-la, which had been observed in the infrared by the Hubble Space Telescope. Voyager 2, which would have been diverted to perform the Titan flyby if Voyager 1 had been unable to, did not pass near Titan and continued on to Uranus and Neptune.
Cassini–Huygens
The Cassini–Huygens spacecraft reached Saturn on July 1, 2004, and began the process of mapping Titan's surface by radar. A joint project of the European Space Agency (ESA) and NASA, Cassini–Huygens proved a very successful mission. The Cassini probe flew by Titan on October 26, 2004, and took the highest-resolution images ever of Titan's surface, from a distance of only 1,200 km (750 mi), discerning patches of light and dark that would be invisible to the human eye.
On July 22, 2006, Cassini made its first targeted, close fly-by at 950 km (590 mi) from Titan; the closest flyby was at 880 km (550 mi) on June 21, 2010. Liquid has been found in abundance on the surface in the north polar region, in the form of many lakes and seas discovered by Cassini.
Huygens landing
Huygens was an atmospheric probe that touched down on Titan on January 14, 2005, discovering that many of its surface features seem to have been formed by fluids at some point in the past. Titan is the most distant body from Earth to have a space probe land on its surface.
The Huygens probe landed just off the easternmost tip of a bright region now called Adiri. The probe photographed pale hills with dark "rivers" running down to a dark plain. Current understanding is that the hills (also referred to as highlands) are composed mainly of water ice. Dark organic compounds, created in the upper atmosphere by the ultraviolet radiation of the Sun, may rain from Titan's atmosphere. They are washed down the hills with the methane rain and are deposited on the plains over geological time scales.
After landing, Huygens photographed a dark plain covered in small rocks and pebbles, which are composed of water ice. The two rocks just below the middle of the image on the right are smaller than they may appear: the left-hand one is 15 centimeters across, and the one in the center is 4 centimeters across, at a distance of about 85 centimeters from Huygens. There is evidence of erosion at the base of the rocks, indicating possible fluvial activity. The ground surface is darker than originally expected, consisting of a mixture of water and hydrocarbon ice.
In March 2007, NASA, ESA, and COSPAR decided to name the Huygens landing site the Hubert Curien Memorial Station in memory of the former president of the ESA.
Dragonfly
The Dragonfly mission, developed and operated by the Johns Hopkins Applied Physics Laboratory, is scheduled to launch in July 2028. It consists of a large drone powered by a radioisotope thermoelectric generator (RTG) that will fly in the atmosphere of Titan, as the fourth mission in the New Frontiers program. Its instruments will study how far prebiotic chemistry may have progressed. The mission is planned to arrive at Titan in the mid-2030s.
Proposed or conceptual missions
There have been several conceptual missions proposed in recent years for returning a robotic space probe to Titan. Initial conceptual work has been completed for such missions by NASA (and JPL), and ESA. At present, none of these proposals have become funded missions. The Titan Saturn System Mission (TSSM) was a joint NASA/ESA proposal for exploration of Saturn's moons. It envisions a hot-air balloon floating in Titan's atmosphere for six months. It was competing against the Europa Jupiter System Mission (EJSM) proposal for funding. In February 2009 it was announced that ESA/NASA had given the EJSM mission priority ahead of the TSSM. The proposed Titan Mare Explorer (TiME) was a low-cost lander that would splash down in Ligeia Mare in Titan's northern hemisphere. The probe would float whilst investigating Titan's hydrocarbon cycle, sea chemistry, and Titan's origins. It was selected for a Phase-A design study in 2011 as a candidate mission for the 12th NASA Discovery Program opportunity, but was not selected for flight.
Another mission to Titan proposed in early 2012 by Jason Barnes, a scientist at the University of Idaho, is the Aerial Vehicle for In-situ and Airborne Titan Reconnaissance (AVIATR): an uncrewed plane (or drone) that would fly through Titan's atmosphere and take high-definition images of the surface of Titan. NASA did not approve the requested $715 million, and the future of the project is uncertain.
A conceptual design for another lake lander was proposed in late 2012 by the Spanish-based private engineering firm SENER and the Centro de Astrobiología in Madrid. The concept probe is called Titan Lake In-situ Sampling Propelled Explorer (TALISE). The major difference compared to the TiME probe would be that TALISE is envisioned with its own propulsion system and would therefore not be limited to simply drifting on the lake when it splashes down.
A Discovery Program contestant for its mission #13 is Journey to Enceladus and Titan (JET), an astrobiology Saturn orbiter that would assess the habitability potential of Enceladus and Titan.
In 2015, the NASA Innovative Advanced Concepts program (NIAC) awarded a Phase II grant to a design study of a Titan Submarine to explore the seas of Titan.
Prebiotic conditions and life
Titan is thought to be a prebiotic environment rich in complex organic compounds, but its surface is in a deep freeze, so it is currently understood that life cannot exist on the moon's frigid surface. However, Titan seems to contain a global ocean beneath its ice shell, and within this ocean, conditions are potentially suitable for microbial life.
The Cassini–Huygens mission was not equipped to provide evidence for biosignatures or complex organic compounds; it showed an environment on Titan that is similar, in some ways, to ones hypothesized for the primordial Earth. Scientists surmise that the atmosphere of early Earth was similar in composition to the current atmosphere on Titan, with the important exception of a lack of water vapor on Titan.
Formation of complex molecules
The Miller–Urey experiment and several following experiments have shown that with an atmosphere similar to that of Titan and the addition of UV radiation, complex molecules and polymer substances like tholins can be generated. The reaction starts with dissociation of nitrogen and methane, forming hydrogen cyanide and acetylene. Further reactions have been studied extensively.
It has been reported that when energy was applied to a combination of gases like those in Titan's atmosphere, five nucleotide bases, the building blocks of DNA and RNA, were among the many compounds produced. In addition, amino acids—the building blocks of protein—were found. It was the first time nucleotide bases and amino acids had been found in such an experiment without liquid water being present.
Possible subsurface habitats
Laboratory simulations have led to the suggestion that enough organic material exists on Titan to start a chemical evolution analogous to what is thought to have started life on Earth. The analogy assumes the presence of liquid water for longer periods than is currently observable; several hypotheses postulate that liquid water from an impact could be preserved under a frozen isolation layer. It has also been hypothesized that liquid-ammonia oceans could exist deep below the surface. Another model suggests an ammonia–water solution as much as 200 km (120 mi) deep beneath a water-ice crust with conditions that, although extreme by terrestrial standards, are such that life could survive. Heat transfer between the interior and upper layers would be critical in sustaining any subsurface oceanic life. Detection of microbial life on Titan would depend on its biogenic effects, with atmospheric methane and nitrogen being examined for such effects.
Methane and life at the surface
It has been speculated that life could exist in the lakes of liquid methane on Titan, just as organisms on Earth live in water. Such organisms would inhale H2 in place of O2, metabolize it with acetylene instead of glucose, and exhale methane instead of carbon dioxide. However, such hypothetical organisms would be required to metabolize at a deep freeze temperature of .
All life forms on Earth (including methanogens) use liquid water as a solvent; it is speculated that life on Titan might instead use a liquid hydrocarbon, such as methane or ethane, although water is a stronger solvent than methane. Water is also more chemically reactive, and can break down large organic molecules through hydrolysis. A life form whose solvent was a hydrocarbon would not face the risk of its biomolecules being destroyed in this way.
In 2005, astrobiologist Chris McKay argued that if methanogenic life did exist on the surface of Titan, it would likely have a measurable effect on the mixing ratio in the Titan troposphere: levels of hydrogen and acetylene would be measurably lower than otherwise expected. Assuming metabolic rates similar to those of methanogenic organisms on Earth, the concentration of molecular hydrogen would drop by a factor of 1000 on the Titanian surface solely due to a hypothetical biological sink. McKay noted that, if life is indeed present, the low temperatures on Titan would result in very slow metabolic processes, which could conceivably be hastened by the use of catalysts similar to enzymes. He also noted that the low solubility of organic compounds in methane presents a more significant challenge to any possible form of life. Forms of active transport, and organisms with large surface-to-volume ratios could theoretically lessen the disadvantages posed by this fact.
In 2010, Darrell Strobel, from Johns Hopkins University, identified a greater abundance of molecular hydrogen in the upper atmospheric layers of Titan compared to the lower layers, arguing for a downward flow at a rate of roughly 10^28 molecules per second and disappearance of hydrogen near Titan's surface; as Strobel noted, his findings were in line with the effects McKay had predicted if methanogenic life-forms were present. The same year, another study showed low levels of acetylene on Titan's surface, which were interpreted by McKay as consistent with the hypothesis of organisms consuming hydrocarbons. Although restating the biological hypothesis, he cautioned that other explanations for the hydrogen and acetylene findings are more likely: the possibilities of yet unidentified physical or chemical processes (e.g. a surface catalyst accepting hydrocarbons or hydrogen), or flaws in the current models of material flow, since composition data and transport models still need to be substantiated. Even so, despite saying that a non-biological catalytic explanation would be less startling than a biological one, McKay noted that the discovery of a catalyst effective at such low temperatures would still be significant. With regards to the acetylene findings, Mark Allen, the principal investigator with the NASA Astrobiology Institute Titan team, provided a speculative, non-biological explanation: sunlight or cosmic rays could transform the acetylene in icy aerosols in the atmosphere into more complex molecules that would fall to the ground with no acetylene signature.
As NASA notes in its news article on the June 2010 findings: "To date, methane-based life forms are only hypothetical. Scientists have not yet detected this form of life anywhere." As the NASA statement also says: "some scientists believe these chemical signatures bolster the argument for a primitive, exotic form of life or precursor to life on Titan's surface."
In February 2015, a hypothetical cell membrane capable of functioning in liquid methane at cryogenic temperatures (deep freeze) conditions was modeled. Composed of small molecules containing carbon, hydrogen, and nitrogen, it would have the same stability and flexibility as cell membranes on Earth, which are composed of phospholipids, compounds of carbon, hydrogen, oxygen, and phosphorus. This hypothetical cell membrane was termed an "azotosome", a combination of "azote", French for nitrogen, and "liposome".
Obstacles
Despite these biological possibilities, there are formidable obstacles to life on Titan, and any analogy to Earth is inexact. At a vast distance from the Sun, Titan is frigid, and its atmosphere lacks CO2. At Titan's surface, water exists only in solid form. Because of these difficulties, scientists such as Jonathan Lunine have viewed Titan less as a likely habitat for life than as an experiment for examining hypotheses on the conditions that prevailed prior to the appearance of life on Earth. Although life itself may not exist, the prebiotic conditions on Titan and the associated organic chemistry remain of great interest in understanding the early history of the terrestrial biosphere. Using Titan as a prebiotic experiment involves not only observation through spacecraft, but laboratory experiments, and chemical and photochemical modeling on Earth.
Panspermia hypothesis
It is hypothesized that large asteroid and cometary impacts on Earth's surface may have caused fragments of microbe-laden rock to escape Earth's gravity, suggesting the possibility of panspermia. Calculations indicate that these would encounter many of the bodies in the Solar System, including Titan. On the other hand, Jonathan Lunine has argued that any living things in Titan's cryogenic hydrocarbon lakes would need to be so different chemically from Earth life that it would not be possible for one to be the ancestor of the other.
Future conditions
Conditions on Titan could become far more habitable in the far future. Five billion years from now, as the Sun becomes a sub-red giant, its surface temperature could rise enough for Titan to support liquid water on its surface, making it habitable. As the Sun's ultraviolet output decreases, the haze in Titan's upper atmosphere will be depleted, lessening the anti-greenhouse effect on the surface and enabling the greenhouse created by atmospheric methane to play a far greater role. These conditions together could create a habitable environment, and could persist for several hundred million years. This is proposed to have been sufficient time for simple life to spawn on Earth, though the higher viscosity of ammonia-water solutions coupled with low temperatures would cause chemical reactions to proceed more slowly on Titan.
| Physical sciences | Solar System | null |
47403 | https://en.wikipedia.org/wiki/Instrumentation | Instrumentation | Instrumentation is a collective term for measuring instruments, used for indicating, measuring, and recording physical quantities. It is also a field of study covering the art and science of making measurement instruments, involving the related areas of metrology, automation, and control theory. The term has its origins in the art and science of scientific instrument-making.
Instrumentation can refer to devices as simple as direct-reading thermometers, or as complex as multi-sensor components of industrial control systems. Instruments can be found in laboratories, refineries, factories and vehicles, as well as in everyday household use (e.g., smoke detectors and thermostats).
Measurement parameters
Instrumentation is used to measure many parameters (physical values), including:
Pressure, either differential or static
Flow
Temperature
Levels of liquids, etc.
Moisture or humidity
Density
Viscosity
Ionising radiation
Frequency
Current
Voltage
Inductance
Capacitance
Resistivity
Chemical composition
Chemical properties
Toxic gases
Position
Vibration
Weight
History
The history of instrumentation can be divided into several phases.
Pre-industrial
Elements of industrial instrumentation have long histories. Scales for comparing weights and simple pointers to indicate position are ancient technologies. Some of the earliest measurements were of time. One of the oldest water clocks was found in the tomb of the ancient Egyptian pharaoh Amenhotep I, buried around 1500 BCE. Improvements were incorporated into such clocks over time; by 270 BCE they had the rudiments of an automatic control device.
In 1663 Christopher Wren presented the Royal Society with a design for a "weather clock". A drawing shows meteorological sensors moving pens over paper driven by clockwork. Such devices did not become standard in meteorology for two centuries. The concept has remained virtually unchanged as evidenced by pneumatic chart recorders, where a pressurized bellows displaces a pen. Integrating sensors, displays, recorders, and controls was uncommon until the industrial revolution, limited by both need and practicality.
Early industrial
Early systems used direct process connections to local control panels for control and indication, which from the early 1930s saw the introduction of pneumatic transmitters and automatic 3-term (PID) controllers.
The ranges of pneumatic transmitters were defined by the need to control valves and actuators in the field. Typically, a signal range of 3 to 15 psi (20 to 100 kPa, or 0.2 to 1.0 kg/cm2) was used as the standard, with 6 to 30 psi occasionally being used for larger valves.
Transistor electronics enabled wiring to replace pipes, initially with a range of 20 to 100 mA at up to 90 V for loop-powered devices, reducing to 4 to 20 mA at 12 to 24 V in more modern systems. A transmitter is a device that produces an output signal, often in the form of a 4–20 mA electrical current signal, although many other options using voltage, frequency, pressure, or Ethernet are possible. The transistor was commercialized by the mid-1950s.
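The live-zero current loop described here maps linearly onto the measured quantity: 4 mA represents the bottom of the calibrated range and 20 mA the top, so a reading well below 4 mA can be flagged as a loop fault. A minimal scaling sketch, in which the 0–100 degree range is an arbitrary example rather than any standard:

```python
# Convert a 4-20 mA transmitter signal to an engineering value by linear
# scaling over the instrument's calibrated range. The 0-100 degC range
# in the example call is arbitrary, not a standard.
def scale_4_20ma(current_ma: float, low: float, high: float) -> float:
    if current_ma < 3.8:  # well below the 4 mA "live zero" suggests a fault
        raise ValueError("signal below live zero: broken loop or failed transmitter?")
    return low + (current_ma - 4.0) / 16.0 * (high - low)

print(scale_4_20ma(12.0, 0.0, 100.0))  # 12 mA is mid-scale -> 50.0
```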
Instruments attached to a control system provided signals used to operate solenoids, valves, regulators, circuit breakers, relays and other devices. Such devices could control a desired output variable, and provide either remote monitoring or automated control capabilities.
Each instrument company introduced their own standard instrumentation signal, causing confusion until the 4–20 mA range was used as the standard electronic instrument signal for transmitters and valves. This signal was eventually standardized as ANSI/ISA S50, "Compatibility of Analog Signals for Electronic Industrial Process Instruments", in the 1970s. The transformation of instrumentation from mechanical pneumatic transmitters, controllers, and valves to electronic instruments reduced maintenance costs as electronic instruments were more dependable than mechanical instruments. This also increased efficiency and production due to their increase in accuracy. Pneumatics enjoyed some advantages, being favored in corrosive and explosive atmospheres.
Automatic process control
In the early years of process control, process indicators and control elements such as valves were monitored by an operator who walked around the unit adjusting the valves to obtain the desired temperatures, pressures, and flows. As technology evolved, pneumatic controllers were invented and mounted in the field to monitor the process and control the valves. This reduced the amount of time process operators needed to monitor the process. In later years, the actual controllers were moved to a central room; signals were sent into the control room to monitor the process, and output signals were sent to the final control element, such as a valve, to adjust the process as needed. These controllers and indicators were mounted on a wall called a control board. The operators stood in front of this board, walking back and forth to monitor the process indicators. This again reduced the number of process operators needed and the amount of time they spent walking around the units. The most common standard pneumatic signal level used during these years was 3–15 psig.
Large integrated computer-based systems
Process control of large industrial plants has evolved through many stages. Initially, control would be from panels local to the process plant. However, this required a large manpower resource to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently staffed central control room. Effectively this was the centralization of all the localized panels, with the advantages of lower manning levels and easy overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to plant.
However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware, and continual operator movement within the control room was required to view different parts of the process. With coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around plant, and communicate with the graphic display in the control room or rooms. The distributed control concept was born.
The introduction of DCSs and SCADA allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high level overviews of plant status and production levels.
Application
In some cases, the sensor is a very minor element of the mechanism. Digital cameras and wristwatches might technically meet the loose definition of instrumentation because they record and/or display sensed information. Under most circumstances neither would be called instrumentation, but when used to measure the elapsed time of a race and to document the winner at the finish line, both would be called instrumentation.
Household
A very simple example of an instrumentation system is a mechanical thermostat, used to control a household furnace and thus to control room temperature. A typical unit senses temperature with a bi-metallic strip. It displays temperature by a needle on the free end of the strip. It activates the furnace by a mercury switch. As the switch is rotated by the strip, the mercury makes physical (and thus electrical) contact between electrodes.
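The behaviour of such a thermostat is essentially on/off (bang-bang) control with hysteresis: the furnace switches on below one threshold and off above another, so it does not chatter around the setpoint. A minimal sketch of that logic, with arbitrary example values for the setpoint and deadband:

```python
# Bang-bang thermostat with hysteresis: the furnace turns on below
# (setpoint - deadband) and off above (setpoint + deadband).
# Setpoint and deadband values are arbitrary examples.
def furnace_command(temp_c: float, heating: bool,
                    setpoint_c: float = 20.0, deadband_c: float = 0.5) -> bool:
    if temp_c < setpoint_c - deadband_c:
        return True          # too cold: switch the furnace on
    if temp_c > setpoint_c + deadband_c:
        return False         # warm enough: switch it off
    return heating           # inside the deadband: keep the current state

print(furnace_command(19.0, heating=False))  # True  -> furnace starts
print(furnace_command(20.2, heating=True))   # True  -> keeps heating inside deadband
print(furnace_command(20.6, heating=True))   # False -> furnace stops
```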
Another example of an instrumentation system is a home security system. Such a system consists of sensors (motion detection, switches to detect door openings), simple algorithms to detect intrusion, local control (arm/disarm), and remote monitoring of the system so that the police can be summoned. Communication is an inherent part of the design.
Kitchen appliances use sensors for control.
A refrigerator maintains a constant temperature by actuating the cooling system when the temperature becomes too high.
An automatic ice machine makes ice until a limit switch is thrown.
Pop-up bread toasters allow the time to be set.
Non-electronic gas ovens will regulate the temperature with a thermostat controlling the flow of gas to the gas burner. These may feature a sensor bulb sited within the main chamber of the oven. In addition, there may be a safety cut-off flame supervision device: after ignition, the burner's control knob must be held for a short time in order for a sensor to become hot, and permit the flow of gas to the burner. If the safety sensor becomes cold, this may indicate the flame on the burner has become extinguished, and to prevent a continuous leak of gas the flow is stopped.
Electric ovens use a temperature sensor and will turn on heating elements when the temperature is too low. More advanced ovens will actuate fans in response to temperature sensors, to distribute heat or to cool.
A common toilet refills the water tank until a float closes the valve. The float is acting as a water level sensor.
Automotive
Modern automobiles have complex instrumentation. In addition to displays of engine rotational speed and vehicle linear speed, there are also displays of battery voltage and current, fluid levels, fluid temperatures, distance traveled, and feedback of various controls (turn signals, parking brake, headlights, transmission position). Cautions may be displayed for special problems (fuel low, check engine, tire pressure low, door ajar, seat belt unfastened). Problems are recorded so they can be reported to diagnostic equipment. Navigation systems can provide voice commands to reach a destination. Automotive instrumentation must be cheap and reliable over long periods in harsh environments. There may be independent airbag systems that contain sensors, logic and actuators. Anti-skid braking systems use sensors to control the brakes, while cruise control affects throttle position. A wide variety of services can be provided via communication links on the OnStar system. Autonomous cars (with exotic instrumentation) have been shown.
Aircraft
Early aircraft had a few sensors. "Steam gauges" converted air pressures into needle deflections that could be interpreted as altitude and airspeed. A magnetic compass provided a sense of direction. The displays to the pilot were as critical as the measurements.
A modern aircraft has a far more sophisticated suite of sensors and displays, which are embedded into avionics systems. The aircraft may contain inertial navigation systems, global positioning systems, weather radar, autopilots, and aircraft stabilization systems. Redundant sensors are used for reliability. A subset of the information may be transferred to a crash recorder to aid mishap investigations. Modern pilot displays now include computer displays including head-up displays.
Air traffic control radar is a distributed instrumentation system. The ground part sends an electromagnetic pulse and receives an echo (at least). Aircraft carry transponders that transmit codes on reception of the pulse. The system displays an aircraft map location, an identifier and optionally altitude. The map location is based on sensed antenna direction and sensed time delay. The other information is embedded in the transponder transmission.
Laboratory instrumentation
Among the possible uses of the term is a collection of laboratory test equipment controlled by a computer through an IEEE-488 bus (also known as GPIB, the General Purpose Interface Bus, or HP-IB, the Hewlett-Packard Interface Bus). Laboratory equipment is available to measure many electrical and chemical quantities. Such a collection of equipment might be used to automate the testing of drinking water for pollutants.
Instrumentation engineering
Instrumentation engineering is the engineering specialization focused on the principles and operation of measuring instruments used in the design and configuration of automated systems in electrical, pneumatic, and other domains, and on the control of the quantities being measured.
They typically work for industries with automated processes, such as chemical or manufacturing plants, with the goal of improving system productivity, reliability, safety, optimization and stability.
To control the parameters of a process or of a particular system, devices such as microprocessors, microcontrollers, or PLCs are used.
Instrumentation engineering is loosely defined because the required tasks are very domain dependent. An expert in the biomedical instrumentation of laboratory rats has very different concerns than the expert in rocket instrumentation. Common concerns of both are the selection of appropriate sensors based on size, weight, cost, reliability, accuracy, longevity, environmental robustness, and frequency response. Some sensors are literally fired in artillery shells. Others sense thermonuclear explosions until destroyed. Invariably sensor data must be recorded, transmitted or displayed. Recording rates and capacities vary enormously. Transmission can be trivial or can be clandestine, encrypted and low power in the presence of jamming. Displays can be trivially simple or can require consultation with human factors experts. Control system design varies from trivial to a separate specialty.
Instrumentation engineers are responsible for integrating the sensors with the recorders, transmitters, displays or control systems, and producing the Piping and instrumentation diagram for the process. They may design or specify installation, wiring and signal conditioning. They may be responsible for commissioning, calibration, testing and maintenance of the system.
In a research environment it is common for subject matter experts to have substantial instrumentation system expertise. An astronomer knows the structure of the universe and a great deal about telescopes – optics, pointing and cameras (or other sensing elements). That often includes the hard-won knowledge of the operational procedures that provide the best results. For example, an astronomer is often knowledgeable of techniques to minimize temperature gradients that cause air turbulence within the telescope.
Instrumentation technologists, technicians and mechanics specialize in troubleshooting, repairing and maintaining instruments and instrumentation systems.
Typical industrial transmitter signal types
Pneumatic loop (20–100 kPa / 3–15 psi) – Pneumatic
Current loop (4–20 mA) – Electrical (a scaling sketch follows this list)
HART – Data signalling, often overlaid on a current loop
Foundation Fieldbus – Data signalling
Profibus – Data signalling
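As an illustration of how a 4–20 mA loop signal maps to a process variable, the Python sketch below applies the usual linear scaling; the engineering range, fault tolerance, and function name are assumptions for the example rather than part of any standard.

```python
def current_to_process_value(current_ma, low=0.0, high=100.0):
    """Convert a 4-20 mA transmitter signal to an engineering value.

    4 mA corresponds to the bottom of the calibrated range and 20 mA to the
    top; a current well outside 4-20 mA usually indicates a fault such as a
    broken loop (the "live zero" makes such faults detectable).
    """
    if not 3.8 <= current_ma <= 20.5:   # small tolerance around the nominal range
        raise ValueError(f"current {current_ma} mA outside the expected loop range")
    return low + (current_ma - 4.0) * (high - low) / 16.0


# 12 mA is mid-scale, so a transmitter calibrated 0-100 degC reads 50.0 degC.
print(current_to_process_value(12.0))
```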
Impact of modern development
Ralph Müller (1940) stated, "That the history of physical science is largely the history of instruments and their intelligent use is well known. The broad generalizations and theories which have arisen from time to time have stood or fallen on the basis of accurate measurement, and in several instances new instruments have had to be devised for the purpose. There is little evidence to show that the mind of modern man is superior to that of the ancients. His tools are incomparably better."
Davis Baird has argued that the major change associated with Floris Cohen's identification of a "fourth big scientific revolution" after World War II is the development of scientific instrumentation, not only in chemistry but across the sciences. In chemistry, the introduction of new instrumentation in the 1940s was "nothing less than a scientific and technological revolution" in which classical wet-and-dry methods of structural organic chemistry were discarded, and new areas of research opened up.
As early as 1954, W. A. Wildhack discussed both the productive and destructive potential inherent in process control.
The ability to make precise, verifiable and reproducible measurements of the natural world, at levels that were not previously observable, using scientific instrumentation, has "provided a different texture of the world". This instrumentation revolution fundamentally changes human abilities to monitor and respond, as is illustrated in the examples of DDT monitoring and the use of UV spectrophotometry and gas chromatography to monitor water pollutants.
| Technology | Measuring instruments | null |
47454 | https://en.wikipedia.org/wiki/Stratosphere | Stratosphere | The stratosphere () is the second-lowest layer of the atmosphere of Earth, located above the troposphere and below the mesosphere. The stratosphere is composed of stratified temperature zones, with the warmer layers of air located higher (closer to outer space) and the cooler layers lower (closer to the planetary surface of the Earth). The increase of temperature with altitude is a result of the absorption of the Sun's ultraviolet (UV) radiation by the ozone layer, where ozone is exothermically photolyzed into oxygen in a cyclical fashion. This temperature inversion is in contrast to the troposphere, where temperature decreases with altitude, and between the troposphere and stratosphere is the tropopause border that demarcates the beginning of the temperature inversion.
Near the equator, the lower edge of the stratosphere is as high as , at mid-latitudes around , and at the poles about . Temperatures range from an average of near the tropopause to an average of near the mesosphere. Stratospheric temperatures also vary within the stratosphere as the seasons change, reaching particularly low temperatures in the polar night (winter). Winds in the stratosphere can far exceed those in the troposphere, reaching near in the Southern polar vortex.
Discovery
In 1902, Léon Teisserenc de Bort from France and Richard Assmann from Germany, in separate but coordinated publications and following years of observations, published the discovery of an isothermal layer at around 11–14 km (6.8–8.7 mi), which is the base of the lower stratosphere. This was based on temperature profiles from mostly unmanned and a few manned instrumented balloons.
Ozone layer
The mechanism describing the formation of the ozone layer was described by British mathematician and geophysicist Sydney Chapman in 1930, and is known as the Chapman cycle or ozone–oxygen cycle. Molecular oxygen absorbs high energy sunlight in the UV-C region, at wavelengths shorter than about 240 nm. Radicals produced from the homolytically split oxygen molecules combine with molecular oxygen to form ozone. Ozone in turn is photolysed much more rapidly than molecular oxygen as it has a stronger absorption that occurs at longer wavelengths, where the solar emission is more intense. Ozone (O3) photolysis produces O and O2. The oxygen atom product combines with atmospheric molecular oxygen to reform O3, releasing heat. The rapid photolysis and reformation of ozone heat the stratosphere, resulting in a temperature inversion. This increase of temperature with altitude is characteristic of the stratosphere; its resistance to vertical mixing means that it is stratified. Within the stratosphere temperatures increase with altitude (see temperature inversion); the top of the stratosphere has a temperature of about 270 K (−3°C or 26.6°F).
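The cycle described above can be summarized schematically. The following is the standard textbook form of the Chapman (ozone–oxygen) cycle, given here for reference; M denotes a third body (typically N2 or O2) that carries away excess energy.

```latex
\begin{align*}
\mathrm{O_2} + h\nu \;(\lambda < 240\ \mathrm{nm}) &\longrightarrow 2\,\mathrm{O} \\
\mathrm{O} + \mathrm{O_2} + \mathrm{M} &\longrightarrow \mathrm{O_3} + \mathrm{M} \\
\mathrm{O_3} + h\nu &\longrightarrow \mathrm{O_2} + \mathrm{O} \\
\mathrm{O} + \mathrm{O_3} &\longrightarrow 2\,\mathrm{O_2}
\end{align*}
```

The middle two reactions rapidly interconvert O and O3 and release the heat responsible for the temperature inversion.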
This vertical stratification, with warmer layers above and cooler layers below, makes the stratosphere dynamically stable: there is no regular convection and associated turbulence in this part of the atmosphere. However, exceptionally energetic convection processes, such as volcanic eruption columns and overshooting tops in severe supercell thunderstorms, may carry convection into the stratosphere on a very local and temporary basis. Overall, the attenuation of solar UV at wavelengths that damage DNA by the ozone layer allows life to exist on the surface of the planet outside of the ocean. All air entering the stratosphere must pass through the tropopause, the temperature minimum that divides the troposphere and stratosphere. The rising air is literally freeze dried; the stratosphere is a very dry place. The top of the stratosphere is called the stratopause, above which the temperature decreases with height.
Formation and destruction
Sydney Chapman gave a correct description of the source of stratospheric ozone and its ability to generate heat within the stratosphere; he also wrote that ozone may be destroyed by reacting with atomic oxygen, making two molecules of molecular oxygen. We now know that there are additional ozone loss mechanisms and that these mechanisms are catalytic, meaning that a small amount of the catalyst can destroy a great number of ozone molecules. The first is due to the reaction of hydroxyl radicals (•OH) with ozone. •OH is formed by the reaction of electronically excited oxygen atoms, produced by ozone photolysis, with water vapor. While the stratosphere is dry, additional water vapor is produced in situ by the photochemical oxidation of methane (CH4). The HO2 radical produced by the reaction of OH with O3 is recycled to OH by reaction with oxygen atoms or ozone. In addition, solar proton events can significantly affect ozone levels via radiolysis with the subsequent formation of OH. Nitrous oxide (N2O) is produced by biological activity at the surface and is oxidised to NO in the stratosphere; the so-called NOx radical cycles also deplete stratospheric ozone. Finally, chlorofluorocarbon molecules are photolysed in the stratosphere releasing chlorine atoms that react with ozone giving ClO and O2. The chlorine atoms are recycled when ClO reacts with O in the upper stratosphere, or when ClO reacts with itself in the chemistry of the Antarctic ozone hole.
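As a concrete example of such a catalytic cycle, the chlorine reactions mentioned above can be written schematically (standard textbook form, shown for illustration):

```latex
\begin{align*}
\mathrm{Cl} + \mathrm{O_3} &\longrightarrow \mathrm{ClO} + \mathrm{O_2} \\
\mathrm{ClO} + \mathrm{O} &\longrightarrow \mathrm{Cl} + \mathrm{O_2} \\[2pt]
\text{net:}\qquad \mathrm{O_3} + \mathrm{O} &\longrightarrow 2\,\mathrm{O_2}
\end{align*}
```

Because the chlorine atom is regenerated, a single atom can destroy many ozone molecules before it is sequestered in a reservoir species.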
Paul J. Crutzen, Mario J. Molina and F. Sherwood Rowland were awarded the Nobel Prize in Chemistry in 1995 for their work describing the formation and decomposition of stratospheric ozone.
Aircraft flight
Commercial airliners typically cruise at altitudes of which is in the lower reaches of the stratosphere in temperate latitudes. This optimizes fuel efficiency, mostly due to the low temperatures encountered near the tropopause and low air density, reducing parasitic drag on the airframe. Stated another way, it allows the airliner to fly faster while maintaining lift equal to the weight of the plane. (The fuel consumption depends on the drag, which is related to the lift by the lift-to-drag ratio.) It also allows the airplane to stay above the turbulent weather of the troposphere.
The Concorde aircraft cruised at Mach 2 at about , and the SR-71 cruised at Mach 3 at , all within the stratosphere.
Because the temperature in the tropopause and lower stratosphere is largely constant with increasing altitude, very little convection and its resultant turbulence occurs there. Most turbulence at this altitude is caused by variations in the jet stream and other local wind shears, although areas of significant convective activity (thunderstorms) in the troposphere below may produce turbulence as a result of convective overshoot.
On October 24, 2014, Alan Eustace became the record holder for reaching the altitude record for a manned balloon at . Eustace also broke the world records for vertical speed skydiving, reached with a peak velocity of 1,321 km/h (822 mph) and total freefall distance of – lasting four minutes and 27 seconds.
Circulation and mixing
The stratosphere is a region of intense interactions among radiative, dynamical, and chemical processes, in which the horizontal mixing of gaseous components proceeds much more rapidly than does vertical mixing. The overall circulation of the stratosphere is termed the Brewer–Dobson circulation, a single-celled circulation spanning from the tropics to the poles, consisting of tropical upwelling of air from the tropical troposphere and extra-tropical downwelling. Stratospheric circulation is predominantly wave-driven, in that the tropical upwelling is induced by the wave forcing of westward-propagating Rossby waves, in a phenomenon called Rossby-wave pumping.
An interesting feature of stratospheric circulation is the quasi-biennial oscillation (QBO) in the tropical latitudes, which is driven by gravity waves that are convectively generated in the troposphere. The QBO induces a secondary circulation that is important for the global stratospheric transport of tracers, such as ozone or water vapor.
Another large-scale feature that significantly influences stratospheric circulation is the breaking of planetary waves, resulting in intense quasi-horizontal mixing in the midlatitudes. This breaking is much more pronounced in the winter hemisphere, where this region is called the surf zone. It is caused by a highly non-linear interaction between the vertically propagating planetary waves and the isolated region of high potential vorticity known as the polar vortex. The resultant breaking causes large-scale mixing of air and other trace gases throughout the midlatitude surf zone. The timescale of this rapid mixing is much smaller than the much slower timescales of upwelling in the tropics and downwelling in the extratropics.
During northern hemispheric winters, sudden stratospheric warmings, caused by the absorption of Rossby waves in the stratosphere, can be observed in approximately half of winters when easterly winds develop in the stratosphere. These events often precede unusual winter weather (M. P. Baldwin and T. J. Dunkerton, "Stratospheric Harbingers of Anomalous Weather Regimes", Science) and may even be responsible for the cold European winters of the 1960s.
Stratospheric warming of the polar vortex results in its weakening. When the vortex is strong, it keeps the cold, high-pressure air masses contained in the Arctic; when the vortex weakens, air masses move equatorward, resulting in rapid changes of weather in the mid-latitudes.
Upper-atmospheric lightning
Upper-atmospheric lightning is a family of short-lived electrical-breakdown phenomena that occur well above the altitudes of normal lightning and storm clouds. Upper-atmospheric lightning is believed to be electrically induced forms of luminous plasma. Lightning extending above the troposphere into the stratosphere is referred to as blue jet, and that reaching into the mesosphere as red sprite.
Life
Bacteria
Bacterial life survives in the stratosphere, making it a part of the biosphere. In 2001, dust was collected at a height of 41 kilometres in a high-altitude balloon experiment and was found to contain bacterial material when examined later in the laboratory.
Birds
Some bird species have been reported to fly at the upper levels of the troposphere. On November 29, 1973, a Rüppell's vulture (Gyps rueppelli) was ingested into a jet engine above the Ivory Coast. Bar-headed geese (Anser indicus) sometimes migrate over Mount Everest, whose summit is .
| Physical sciences | Atmosphere: General | Earth science |
47460 | https://en.wikipedia.org/wiki/Mesosphere | Mesosphere | The mesosphere (; ) is the third layer of the atmosphere, directly above the stratosphere and directly below the thermosphere. In the mesosphere, temperature decreases as altitude increases. This characteristic is used to define limits: it begins at the top of the stratosphere (sometimes called the stratopause), and ends at the mesopause, which is the coldest part of Earth's atmosphere, with temperatures below . The exact upper and lower boundaries of the mesosphere vary with latitude and with season (higher in winter and at the tropics, lower in summer and at the poles), but the lower boundary is usually located at altitudes from above sea level, and the upper boundary (the mesopause) is usually from .
The stratosphere and mesosphere are sometimes collectively referred to as the "middle atmosphere", which spans altitudes approximately between above Earth's surface. The mesopause, at an altitude of , separates the mesosphere from the thermosphere—the second-outermost layer of Earth's atmosphere. On Earth, the mesopause nearly coincides with the turbopause, below which different chemical species are well-mixed due to turbulent eddies. Above this level the atmosphere becomes non-uniform because the scale heights of different chemical species differ according to their molecular masses.
The term near space is also sometimes used to refer to altitudes within the mesosphere. This term does not have a technical definition, but typically refers to the region roughly between the Armstrong limit (about 62,000 ft or 19 km, above which humans require a pressure suit in order to survive) and the Kármán line (where astrodynamics must take over from aerodynamics in order to achieve flight); or, by another definition, to the space between the highest altitude commercial airliners fly at (about 40,000 ft (12.2 km)) and the lowest perigee of satellites being able to orbit the Earth (about 45 mi (73 km)). Some sources distinguish between the terms "near space" and "upper atmosphere", so that only the layers closest to the Kármán line are described as "near space".
Temperature
Within the mesosphere, temperature decreases with increasing height. This is a result of decreasing absorption of solar radiation by the rarefied atmosphere having a diminishing relative ozone concentration as altitude increases (ozone being the main absorber in the UV wavelengths that survived absorption by the thermosphere). Additionally, this is also a result of increasing cooling by CO2 radiative emission. The top of the mesosphere, called the mesopause, is the coldest part of Earth's atmosphere. Temperatures in the upper mesosphere fall as low as about , varying according to latitude and season.
Dynamic features
The most important features in this region are strong zonal (east–west) winds, atmospheric tides, internal atmospheric gravity waves (commonly called "gravity waves"), and planetary waves. Most of these tides and waves start in the troposphere and lower stratosphere, and propagate to the mesosphere. In the mesosphere, gravity-wave amplitudes can become so large that the waves become unstable and dissipate. This dissipation deposits momentum into the mesosphere and largely drives global circulation.
Noctilucent clouds are located in the mesosphere. The upper mesosphere is also the region of the ionosphere known as the D layer, which is present only during the day, when some ionization occurs as nitric oxide is ionized by Lyman-alpha hydrogen radiation. The ionization is so weak that when night falls and the source of ionization is removed, the free electrons and ions recombine into neutral molecules.
A deep sodium layer is located between . Made of unbound, non-ionized atoms of sodium, the sodium layer radiates weakly to contribute to the airglow. The sodium has an average concentration of 400,000 atoms per cubic centimetre. This band is regularly replenished by sodium sublimating from incoming meteors. Astronomers have begun utilizing this sodium band to create "guide stars" as part of the adaptive optical correction process used to produce ultra-sharp ground-based observations. Other metal layers, e.g. iron and potassium, exist in the upper mesosphere/lower thermosphere region as well.
Beginning in October 2018, a distinct type of aurora has been identified, originating in the mesosphere. Often referred to as 'dunes' due to their resemblance to sandy ripples on a beach, the green undulating lights extend toward the equator. They have been identified as originating about above the surface. Since auroras are caused by ultra-high-speed solar particles interacting with atmospheric molecules, the green color of these dunes has tentatively been explained by the interaction of those solar particles with oxygen molecules. The dunes therefore occur where mesospheric oxygen is more concentrated.
Millions of meteors enter the Earth's atmosphere, averaging 40,000 tons per year. The ablated material, called meteoric smoke, is thought to serve as condensation nuclei for noctilucent clouds.
Exploration
The mesosphere lies above altitude records for aircraft, while only the lowest few kilometers are accessible to balloons, for which the altitude record is . Meanwhile, the mesosphere is below the minimum altitude for orbital spacecraft due to high atmospheric drag. It has only been accessed through the use of sounding rockets, which are only capable of taking mesospheric measurements for a few minutes per mission. As a result, it is the least-understood part of the atmosphere, resulting in the humorous moniker ignorosphere. The presence of red sprites and blue jets (electrical discharges or lightning within the lower mesosphere), noctilucent clouds, and density shears within this poorly understood layer are of current scientific interest.
On February 1, 2003, the Space Shuttle Columbia broke up on reentry at about altitude, in the lower mesosphere, killing all seven crew members.
Phenomena in mesosphere and near space
Airglow
Atmospheric tides
Ionosphere
Meteors
Noctilucent clouds
Polar aurora
Sprite (lightning)
Upper atmospheric lightning (Transient luminous event)
| Physical sciences | Atmosphere: General | Earth science |
47463 | https://en.wikipedia.org/wiki/Thermosphere | Thermosphere | The thermosphere is the layer in the Earth's atmosphere directly above the mesosphere and below the exosphere. Within this layer of the atmosphere, ultraviolet radiation causes photoionization/photodissociation of molecules, creating ions; the thermosphere thus constitutes the larger part of the ionosphere. Taking its name from the Greek θερμός (pronounced thermos) meaning heat, the thermosphere begins at about 80 km (50 mi) above sea level. At these high altitudes, the residual atmospheric gases sort into strata according to molecular mass (see turbosphere). Thermospheric temperatures increase with altitude due to absorption of highly energetic solar radiation. Temperatures are highly dependent on solar activity, and can rise to or more. Radiation causes the atmospheric particles in this layer to become electrically charged, enabling radio waves to be refracted and thus be received beyond the horizon. In the exosphere, beginning at about 600 km (375 mi) above sea level, the atmosphere turns into space, although, by the judging criteria set for the definition of the Kármán line (100 km), most of the thermosphere is part of space. The border between the thermosphere and exosphere is known as the thermopause.
The highly attenuated gas in this layer can reach . Despite the high temperature, an observer or object will experience low temperatures in the thermosphere, because the extremely low density of the gas (practically a hard vacuum) is insufficient for the molecules to conduct heat. A normal thermometer will read significantly below , at least at night, because the energy lost by thermal radiation would exceed the energy acquired from the atmospheric gas by direct contact. In the anacoustic zone above , the density is so low that molecular interactions are too infrequent to permit the transmission of sound.
The dynamics of the thermosphere are dominated by atmospheric tides, which are driven predominantly by diurnal heating. Atmospheric waves dissipate above this level because of collisions between the neutral gas and the ionospheric plasma.
The thermosphere is uninhabited with the exception of the International Space Station, which orbits the Earth within the middle of the thermosphere between and the Tiangong space station, which orbits between .
Neutral gas constituents
It is convenient to separate the atmospheric regions according to the two temperature minima at an altitude of about (the tropopause) and at about (the mesopause) (Figure 1). The thermosphere (or the upper atmosphere) is the height region above , while the region between the tropopause and the mesopause is the middle atmosphere (stratosphere and mesosphere) where absorption of solar UV radiation generates the temperature maximum near an altitude of and causes the ozone layer.
The density of the Earth's atmosphere decreases nearly exponentially with altitude. The total mass of the atmosphere is M = ρA H ≃ 1 kg/cm2 within a column of one square centimeter above the ground (with ρA = 1.29 kg/m3 the atmospheric density on the ground at z = 0 m altitude, and H ≃ 8 km the average atmospheric scale height). Eighty percent of that mass is concentrated within the troposphere. The mass of the thermosphere above about is only 0.002% of the total mass. Therefore, no significant energetic feedback from the thermosphere to the lower atmospheric regions can be expected.
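A quick check of the column-mass figure quoted above, using the stated surface density and scale height:

```latex
M = \rho_A H \simeq 1.29\ \mathrm{kg\,m^{-3}} \times 8 \times 10^{3}\ \mathrm{m}
  \approx 1.0 \times 10^{4}\ \mathrm{kg\,m^{-2}} \approx 1\ \mathrm{kg\,cm^{-2}}
```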
Turbulence causes the air within the lower atmospheric regions below the turbopause at about to be a mixture of gases that does not change its composition. Its mean molecular weight is 29 g/mol with molecular oxygen (O2) and nitrogen (N2) as the two dominant constituents. Above the turbopause, however, diffusive separation of the various constituents is significant, so that each constituent follows its barometric height structure with a scale height inversely proportional to its molecular weight. The lighter constituents atomic oxygen (O), helium (He), and hydrogen (H) successively dominate above an altitude of about and vary with geographic location, time, and solar activity. The ratio N2/O, which is a measure of the electron density at the ionospheric F region, is highly affected by these variations. These changes follow from the diffusion of the minor constituents through the major gas component during dynamic processes.
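The diffusive separation described here follows from each constituent having its own scale height; for an isothermal layer this is given by the standard barometric relation (not quoted from this article)

```latex
H_i = \frac{k_{\mathrm{B}} T}{m_i\, g}
```

so lighter constituents (smaller molecular mass m_i) have larger scale heights and their relative abundance grows with altitude.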
The thermosphere contains an appreciable concentration of elemental sodium located in a thick band that occurs at the edge of the mesosphere, above Earth's surface. The sodium has an average concentration of 400,000 atoms per cubic centimeter. This band is regularly replenished by sodium sublimating from incoming meteors. Astronomers have begun using this sodium band to create "guide stars" as part of the optical correction process in producing ultra-sharp ground-based observations.
Energy input
Energy budget
The thermospheric temperature can be determined from density observations as well as from direct satellite measurements. The temperature vs. altitude z in Fig. 1 can be simulated by the so-called Bates profile:
T(z) = T∞ − (T∞ − To)·exp[−s·(z − zo)]    (1)
with T∞ the exospheric temperature above about 400 km altitude,
To = 355 K, and zo = 120 km reference temperature and height, and s an empirical parameter depending on T∞ and decreasing with T∞. That formula is derived from a simple equation of heat conduction. One estimates a total heat input of qo≃ 0.8 to 1.6 mW/m2 above zo = 120 km altitude. In order to obtain equilibrium conditions, that heat input qo above zo is lost to the lower atmospheric regions by heat conduction.
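To make the profile concrete, the Python sketch below evaluates the Bates form of eq. (1) at a few altitudes. The shape parameter s is an assumed illustrative value (the text above states only that it depends empirically on T∞), so the numbers are indicative rather than measured.

```python
import math

def bates_temperature(z_km, t_inf=1000.0, t0=355.0, z0=120.0, s=0.02):
    """Bates-type profile T(z) = T_inf - (T_inf - T0) * exp(-s * (z - z0)).

    t_inf : exospheric temperature in K (roughly 740-1350 K over a solar cycle)
    t0, z0: reference temperature (K) and height (km) given in the text
    s     : empirical shape parameter in 1/km (assumed value for illustration)
    """
    return t_inf - (t_inf - t0) * math.exp(-s * (z_km - z0))

# Temperature rises steeply above 120 km and approaches t_inf asymptotically.
for z in (120, 150, 200, 300, 400):
    print(z, round(bates_temperature(z)))
```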
The exospheric temperature T∞ is a fair measurement of the solar XUV radiation. Since solar radio emission F at 10.7 cm wavelength is a good indicator of solar activity, one can apply the empirical formula for quiet magnetospheric conditions.
T∞ = 500 + 3.4·Fo    (2)
with T∞ in K and Fo the Covington index, a value of F averaged over several solar cycles, measured in solar flux units of 10−22 W m−2 Hz−1. The Covington index typically varies between 70 and 250 during a solar cycle and never drops below about 50. Thus, T∞ varies between about 740 and 1350 K. During very quiet magnetospheric conditions, the still continuously flowing magnetospheric energy input contributes about 250 K to the residual temperature of 500 K in eq. (2). The remaining 250 K in eq. (2) can be attributed to atmospheric waves generated within the troposphere and dissipated within the lower thermosphere.
Solar XUV radiation
The solar X-ray and extreme ultraviolet radiation (XUV) at wavelengths < 170 nm is almost completely absorbed within the thermosphere. This radiation causes the various ionospheric layers as well as a temperature increase at these heights (Figure 1).
While the solar visible light (380 to 780 nm) is nearly constant, with variability of not more than about 0.1% of the solar constant, the solar XUV radiation is highly variable in time and space. For instance, X-ray bursts associated with solar flares can dramatically increase their intensity over preflare levels by many orders of magnitude over a time span of tens of minutes. In the extreme ultraviolet, the Lyman α line at 121.6 nm represents an important source of ionization and dissociation at ionospheric D layer heights. During quiet periods of solar activity, it alone contains more energy than the rest of the XUV spectrum. Quasi-periodic changes of the order of 100% or greater, with periods of 27 days and 11 years, belong to the prominent variations of solar XUV radiation. However, irregular fluctuations over all time scales are present all the time. During low solar activity, about half of the total energy input into the thermosphere is thought to be solar XUV radiation. That solar XUV energy input occurs only during daytime conditions, maximizing at the equator during equinox.
Solar wind
The second source of energy input into the thermosphere is solar wind energy, which is transferred to the magnetosphere by mechanisms that are not well understood. One possible way to transfer energy is via a hydrodynamic dynamo process. Solar wind particles penetrate the polar regions of the magnetosphere where the geomagnetic field lines are essentially vertically directed. An electric field is generated, directed from dawn to dusk. Along the last closed geomagnetic field lines with their footpoints within the auroral zones, field-aligned electric currents can flow into the ionospheric dynamo region where they are closed by electric Pedersen and Hall currents. Ohmic losses of the Pedersen currents heat the lower thermosphere (see e.g., Magnetospheric electric convection field). Also, penetration of highly energetic particles from the magnetosphere into the auroral regions drastically enhances the electrical conductivity, further increasing the electric currents and thus Joule heating. During quiet magnetospheric activity, the magnetosphere contributes perhaps a quarter of the thermosphere's energy budget. This is about 250 K of the exospheric temperature in eq. (2). During very strong activity, however, this heat input can increase substantially, by a factor of four or more. That solar wind input occurs mainly in the auroral regions during both day and night.
Atmospheric waves
Two kinds of large-scale atmospheric waves within the lower atmosphere exist: internal waves with finite vertical wavelengths which can transport wave energy upward, and external waves with infinitely large wavelengths that cannot transport wave energy. Atmospheric gravity waves and most of the atmospheric tides generated within the troposphere belong to the internal waves. Their density amplitudes increase exponentially with height so that at the mesopause these waves become turbulent and their energy is dissipated (similar to breaking of ocean waves at the coast), thus contributing to the heating of the thermosphere by about 250 K in eq.(2). On the other hand, the fundamental diurnal tide labeled (1, −2) which is most efficiently excited by solar irradiance is an external wave and plays only a marginal role within the lower and middle atmosphere. However, at thermospheric altitudes, it becomes the predominant wave. It drives the electric Sq-current within the ionospheric dynamo region between about 100 and 200 km height.
Heating, predominantly by tidal waves, occurs mainly at lower and middle latitudes. The variability of this heating depends on the meteorological conditions within the troposphere and middle atmosphere, and may not exceed about 50%.
Dynamics
Within the thermosphere above an altitude of about , all atmospheric waves successively become external waves, and no significant vertical wave structure is visible. The atmospheric wave modes degenerate to the spherical functions Pn^m, with m a zonal wave number and n a meridional wave number (m = 0: zonal mean flow; m = 1: diurnal tides; m = 2: semidiurnal tides; etc.). The thermosphere becomes a damped oscillator system with low-pass filter characteristics. This means that smaller-scale waves (greater numbers of (n,m)) and higher frequencies are suppressed in favor of large-scale waves and lower frequencies. If one considers very quiet magnetospheric disturbances and a constant mean exospheric temperature (averaged over the sphere), the observed temporal and spatial distribution of the exospheric temperature can be described by a sum of spherical functions:
T∞ ≃ T̄∞·{1 + ΔT2^0·P2^0(φ) + ΔT1^0·P1^0(φ)·cos[ωa(t − ta)] + ΔT1^1·P1^1(φ)·cos(τ − τd) + ...}    (3)
Here, φ is latitude, λ longitude, and t time; ωa is the angular frequency of one year, ωd the angular frequency of one solar day, and τ = ωd·t + λ the local time. ta = June 21 is the date of the northern summer solstice, and τd = 15:00 is the local time of maximum diurnal temperature.
The first term in (3) on the right is the global mean of the exospheric temperature (of the order of 1000 K). The second term [with P2^0 = 0.5(3 sin²φ − 1)] represents the heat surplus at lower latitudes and a corresponding heat deficit at higher latitudes (Fig. 2a). A thermal wind system develops with the wind toward the poles in the upper level and winds away from the poles in the lower level. The coefficient ΔT2^0 ≈ 0.004 is small because Joule heating in the auroral regions compensates for that heat surplus even during quiet magnetospheric conditions. During disturbed conditions, however, that term becomes dominant, changing sign so that heat surplus is transported from the poles to the equator. The third term (with P1^0 = sin φ) represents the heat surplus in the summer hemisphere and is responsible for the transport of excess heat from the summer into the winter hemisphere (Fig. 2b). Its relative amplitude is of the order ΔT1^0 ≃ 0.13. The fourth term (with P1^1(φ) = cos φ) is the dominant diurnal wave (the tidal mode (1,−2)). It is responsible for the transport of excess heat from the daytime hemisphere into the nighttime hemisphere (Fig. 2d). Its relative amplitude is ΔT1^1 ≃ 0.15, thus on the order of 150 K. Additional terms (e.g., semiannual, semidiurnal, and higher-order terms) must be added to eq. (3); however, they are of minor importance. Corresponding sums can be developed for density, pressure, and the various gas constituents.
Thermospheric storms
In contrast to solar XUV radiation, magnetospheric disturbances, indicated on the ground by geomagnetic variations, show an unpredictable impulsive character, from short periodic disturbances of the order of hours to long-standing giant storms of several days' duration. The reaction of the thermosphere to a large magnetospheric storm is called a thermospheric storm. Since the heat input into the thermosphere occurs at high latitudes (mainly into the auroral regions), the heat transport represented by the term P2^0 in eq. (3) is reversed. Also, due to the impulsive form of the disturbance, higher-order terms are generated which, however, possess short decay times and thus quickly disappear. The sum of these modes determines the "travel time" of the disturbance to the lower latitudes, and thus the response time of the thermosphere with respect to the magnetospheric disturbance. Important for the development of an ionospheric storm is the increase of the ratio N2/O during a thermospheric storm at middle and higher latitudes. An increase of N2 increases the loss process of the ionospheric plasma and therefore causes a decrease of the electron density within the ionospheric F-layer (negative ionospheric storm).
Climate change
A contraction of the thermosphere has been observed, possibly as a result in part of increased carbon dioxide concentrations, with the strongest cooling and contraction occurring in that layer during solar minimum. The most recent contraction, in 2008–2009, was the largest since at least 1967.
| Physical sciences | Atmosphere: General | Earth science |
47474 | https://en.wikipedia.org/wiki/Aperture | Aperture | In optics, the aperture of an optical system (including a system consisting of a single lens) is a hole or an opening that primarily limits the light propagated through the system. More specifically, the entrance pupil (the front-side image of the aperture) and the focal length of an optical system determine the cone angle of the bundle of rays that comes to a focus in the image plane.
An optical system typically has many openings or structures that limit ray bundles (ray bundles are also known as pencils of light). These structures may be the edge of a lens or mirror, or a ring or other fixture that holds an optical element in place or may be a special element such as a diaphragm placed in the optical path to limit the light admitted by the system. In general, these structures are called stops, and the aperture stop is the stop that primarily determines the cone of rays that an optical system accepts (see entrance pupil). As a result, it also determines the ray cone angle and brightness at the image point (see exit pupil). The aperture stop generally depends on the object point location; on-axis object points at different object planes may have different aperture stops, and even object points at different lateral locations at the same object plane may have different aperture stops (vignetted). In practice, many object systems are designed to have a single aperture stop at designed working distance and field of view.
In some contexts, especially in photography and astronomy, aperture refers to the opening diameter of the aperture stop through which light can pass. For example, in a telescope, the aperture stop is typically the edges of the objective lens or mirror (or of the mount that holds it). One then speaks of a telescope as having, for example, a aperture. The aperture stop is not necessarily the smallest stop in the system. Magnification and demagnification by lenses and other elements can cause a relatively large stop to be the aperture stop for the system. In astrophotography, the aperture may be given as a linear measure (for example, in inches or millimetres) or as the dimensionless ratio between that measure and the focal length. In other photography, it is usually given as a ratio.
The term aperture is usually expected to refer to the opening of the aperture stop, but in practice the terms aperture and aperture stop are used interchangeably. Sometimes even stops that are not the aperture stop of an optical system are called apertures. The context must make clear which meaning is intended.
The word aperture is also used in other contexts to indicate a system which blocks off light outside a certain region. In astronomy, for example, a photometric aperture around a star usually corresponds to a circular window around the image of a star within which the light intensity is measured.
Application
The aperture stop is an important element in most optical designs. Its most obvious feature is that it limits the amount of light that can reach the image/film plane. This can be either unavoidable due to the practical limit of the aperture stop size, or deliberate to prevent saturation of a detector or overexposure of film. In both cases, the size of the aperture stop determines the amount of light admitted by an optical system. The aperture stop also affects other optical system properties:
The opening size of the stop is one factor that affects depth of field (DOF). A smaller stop (larger f-number) produces a greater DOF because it allows only a narrower cone of light to reach the image plane, so the spread of the image of an object point is reduced. A greater DOF allows objects at a wide range of distances from the viewer to all be in focus at the same time.
The stop limits the effect of optical aberrations by blocking light that would otherwise reach the edges of the optics, where aberrations are usually stronger than at their centers. If the opening of the stop (called the aperture) is too large, the image will be degraded by stronger aberrations. More sophisticated optical designs can mitigate the effect of aberrations, allowing a larger aperture and therefore greater light-collecting ability.
The stop determines whether the image will be vignetted. Larger stops can cause the light intensity reaching the film or detector to fall off toward the edges of the picture, especially when, for off-axis points, a different stop becomes the aperture stop by virtue of cutting off more light than did the stop that was the aperture stop on the optic axis.
The stop location determines telecentricity. If the aperture stop of a lens is located at the front focal plane of the lens, the system is image-space telecentric, i.e., the lateral size of the image is insensitive to the image plane location. If the stop is at the back focal plane of the lens, the system is object-space telecentric, and the image size is insensitive to the object plane location. Telecentricity aids precise two-dimensional measurement because telecentric measurement systems are insensitive to axial position errors of the sample or the sensor.
In addition to an aperture stop, a photographic lens may have one or more field stops, which limit the system's field of view. When the field of view is limited by a field stop in the lens (rather than at the film or sensor) vignetting results; this is only a problem if the resulting field of view is less than was desired.
In astronomy, the opening diameter of the aperture stop (called the aperture) is a critical parameter in the design of a telescope. Generally, one would want the aperture to be as large as possible, to collect the maximum amount of light from the distant objects being imaged. The size of the aperture is limited, however, in practice by considerations of its manufacturing cost and time and its weight, as well as prevention of aberrations (as mentioned above).
Apertures are also used in laser energy control, the closed-aperture z-scan technique, diffraction experiments, and beam cleaning. Laser applications include spatial filters, Q-switching, and high-intensity X-ray control.
In light microscopy, the word aperture may be used with reference to either the condenser (that changes the angle of light onto the specimen field), field iris (that changes the area of illumination on specimens) or possibly objective lens (forms primary images). See Optical microscope.
In photography
The aperture stop of a photographic lens can be adjusted to control the amount of light reaching the film or image sensor. In combination with variation of shutter speed, the aperture size will regulate the film's or image sensor's degree of exposure to light. Typically, a fast shutter will require a larger aperture to ensure sufficient light exposure, and a slow shutter will require a smaller aperture to avoid excessive exposure.
A device called a diaphragm usually serves as the aperture stop and controls the aperture (the opening of the aperture stop). The diaphragm functions much like the iris of the eye – it controls the effective diameter of the lens opening (called the pupil in the eye). Reducing the aperture size (increasing the f-number) provides less light to the sensor and also increases the depth of field (by limiting the angle of the cone of image light reaching the sensor), which describes the extent to which subject matter lying closer than or farther from the actual plane of focus appears to be in focus. In general, the smaller the aperture (the larger the f-number), the greater the distance from the plane of focus the subject matter may be while still appearing in focus.
The lens aperture is usually specified as an f-number, the ratio of focal length to effective aperture diameter (the diameter of the entrance pupil). A lens typically has a set of marked "f-stops" that the f-number can be set to. A lower f-number denotes a greater aperture which allows more light to reach the film or image sensor. The photography term "one f-stop" refers to a factor of (approx. 1.41) change in f-number which corresponds to a change in aperture diameter, which in turn corresponds to a factor of 2 change in light intensity (by a factor 2 change in the aperture area).
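A short sketch of the arithmetic behind f-stops: the light gathered is proportional to the aperture area and hence to 1/N², so each stop is a factor of √2 in N and a factor of 2 in light. The function names below are only for the example.

```python
import math

def light_ratio(n_wide, n_narrow):
    """Relative light admitted by two apertures; area (and light) scales as 1/N^2."""
    return (n_narrow / n_wide) ** 2

def stops_between(n_wide, n_narrow):
    """Number of f-stops between two f-numbers (each stop doubles the light)."""
    return math.log2(light_ratio(n_wide, n_narrow))

print(light_ratio(2.0, 2.8))    # ~2.0 -> f/2 admits about twice the light of f/2.8
print(light_ratio(2.0, 4.0))    # 4.0  -> f/2 admits four times the light of f/4
print(stops_between(2.0, 4.0))  # 2.0 stops
```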
Aperture priority is a semi-automatic shooting mode used in cameras. It permits the photographer to select an aperture setting and let the camera decide the shutter speed and sometimes also ISO sensitivity for the correct exposure. This is also referred to as Aperture Priority Auto Exposure, A mode, AV mode (aperture-value mode), or semi-auto mode.
Typical ranges of apertures used in photography are about – or – , covering six stops, which may be divided into wide, middle, and narrow of two stops each, roughly (using round numbers) – , – , and – or (for a slower lens) – , – , and – . These are not sharp divisions, and ranges for specific lenses vary.
Maximum and minimum apertures
The specifications for a given lens typically include the maximum and minimum aperture (opening) sizes, for example, – . In this case, is currently the maximum aperture (the widest opening on a full-frame format for practical use), and is the minimum aperture (the smallest opening). The maximum aperture tends to be of most interest and is always included when describing a lens. This value is also known as the lens "speed", as it affects the exposure time. As the aperture area is proportional to the light admitted by a lens or an optical system, the aperture diameter is proportional to the square root of the light admitted, and thus inversely proportional to the square root of required exposure time, such that an aperture of allows for exposure times one quarter that of . ( is 4 times larger than in the aperture area.)
Lenses with apertures opening or wider are referred to as "fast" lenses, although the specific point has changed over time (for example, in the early 20th century aperture openings wider than were considered fast). The fastest lenses for the common 35 mm film format in general production have apertures of or , with more at and , and many at or slower; is unusual, though sees some use. When comparing "fast" lenses, the image format used must be considered. Lenses designed for a small format such as half frame or APS-C need to project a much smaller image circle than a lens used for large format photography. Thus the optical elements built into the lens can be far smaller and cheaper.
In exceptional circumstances lenses can have even wider apertures with f-numbers smaller than 1.0; see lens speed: fast lenses for a detailed list. For instance, both the current Leica Noctilux-M 50mm ASPH and a 1960s-era Canon 50mm rangefinder lens have a maximum aperture of . Cheaper alternatives began appearing in the early 2010s, such as the Cosina Voigtländer Nokton (several in the range) and () Super Nokton manual focus lenses in the for the Micro Four-Thirds System, and the Venus Optics (Laowa) Argus .
Professional lenses for some movie cameras have f-numbers as small as . Stanley Kubrick's film Barry Lyndon has scenes shot by candlelight with a NASA/Zeiss 50mm f/0.7, the fastest lens in film history. Beyond the expense, these lenses have limited application due to the correspondingly shallower depth of field (DOF) – the scene must either be shallow, shot from a distance, or will be significantly defocused, though this may be the desired effect.
Zoom lenses typically have a maximum relative aperture (minimum f-number) of to through their range. High-end lenses will have a constant aperture, such as or , which means that the relative aperture will stay the same throughout the zoom range. A more typical consumer zoom will have a variable maximum relative aperture since it is harder and more expensive to keep the maximum relative aperture proportional to the focal length at long focal lengths; to is an example of a common variable aperture range in a consumer zoom lens.
By contrast, the minimum aperture does not depend on the focal length – it is limited by how narrowly the aperture closes, not the lens design – and is instead generally chosen based on practicality: very small apertures have lower sharpness due to diffraction at aperture edges, while the added depth of field is not generally useful, and thus there is generally little benefit in using such apertures. Accordingly, DSLR lenses typically have a minimum aperture of , , or , while large format may go down to , as reflected in the name of Group f/64. Depth of field is a significant concern in macro photography, however, and there one sees smaller apertures. For example, the Canon MP-E 65mm can have effective aperture (due to magnification) as small as . The pinhole optic for Lensbaby creative lenses has an aperture of just .
Aperture area
The amount of light captured by an optical system is proportional to the area of the entrance pupil, the object-space image of the aperture stop of the system, equal to:
Area = π·(D/2)² = (π/4)·(f/N)²
where the two equivalent forms are related via the f-number N = f / D, with focal length f and entrance pupil diameter D.
The focal length value is not required when comparing two lenses of the same focal length; a value of 1 can be used instead, and the other factors can be dropped as well, leaving area proportion to the reciprocal square of the f-number N.
If two cameras of different format sizes and focal lengths have the same angle of view, and the same aperture area, they gather the same amount of light from the scene. In that case, the relative focal-plane illuminance, however, would depend only on the f-number N, so it is less in the camera with the larger format, longer focal length, and higher f-number. This assumes both lenses have identical transmissivity.
Aperture control
Though as early as 1933 Torkel Korling had invented and patented for the Graflex large format reflex camera an automatic aperture control, not all early 35mm single lens reflex cameras had the feature. With a small aperture, this darkened the viewfinder, making viewing, focusing, and composition difficult. Korling's design enabled full-aperture viewing for accurate focus, closing to the pre-selected aperture opening when the shutter was fired and simultaneously synchronising the firing of a flash unit. From 1956 SLR camera manufacturers separately developed automatic aperture control (the Miranda T 'Pressure Automatic Diaphragm', and other solutions on the Exakta Varex IIa and Praktica FX2) allowing viewing at the lens's maximum aperture, stopping the lens down to the working aperture at the moment of exposure, and returning the lens to maximum aperture afterward. The first SLR cameras with internal ("through-the-lens" or "TTL") meters (e.g., the Pentax Spotmatic) required that the lens be stopped down to the working aperture when taking a meter reading. Subsequent models soon incorporated mechanical coupling between the lens and the camera body, indicating the working aperture to the camera for exposure while allowing the lens to be at its maximum aperture for composition and focusing; this feature became known as open-aperture metering.
For some lenses, including a few long telephotos, lenses mounted on bellows, and perspective-control and tilt/shift lenses, the mechanical linkage was impractical, and automatic aperture control was not provided. Many such lenses incorporated a feature known as a "preset" aperture, which allows the lens to be set to working aperture and then quickly switched between working aperture and full aperture without looking at the aperture control. A typical operation might be to establish rough composition, set the working aperture for metering, return to full aperture for a final check of focus and composition, and focusing, and finally, return to working aperture just before exposure. Although slightly easier than stopped-down metering, operation is less convenient than automatic operation. Preset aperture controls have taken several forms; the most common has been the use of essentially two lens aperture rings, with one ring setting the aperture and the other serving as a limit stop when switching to working aperture. Examples of lenses with this type of preset aperture control are the Nikon PC Nikkor 28 mm and the SMC Pentax Shift 6×7 75 mm . The Nikon PC Micro-Nikkor 85 mm lens incorporates a mechanical pushbutton that sets working aperture when pressed and restores full aperture when pressed a second time.
Canon EF lenses, introduced in 1987, have electromagnetic diaphragms, eliminating the need for a mechanical linkage between the camera and the lens, and allowing automatic aperture control with the Canon TS-E tilt/shift lenses. Nikon PC-E perspective-control lenses, introduced in 2008, also have electromagnetic diaphragms, a feature extended to their E-type range in 2013.
Optimal aperture
Optimal aperture depends both on optics (the depth of the scene versus diffraction), and on the performance of the lens.
Optically, as a lens is stopped down, the defocus blur at the Depth of Field (DOF) limits decreases but diffraction blur increases. The presence of these two opposing factors implies a point at which the combined blur spot is minimized (Gibson 1975, 64); at that point, the f-number is optimal for image sharpness, for this given depth of field – a wider aperture (lower f-number) causes more defocus, while a narrower aperture (higher f-number) causes more diffraction.
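The trade-off can be illustrated with a deliberately simplified model: take the defocus blur at the depth-of-field limit as proportional to 1/N and the diffraction blur as the Airy-disk diameter of roughly 2.44·λ·N, and combine the two in quadrature. The constant K_DEFOCUS_MM, the combination rule, and the resulting "optimal" f-number are assumptions of this toy model, not values from the text.

```python
import math

WAVELENGTH_MM = 0.00055    # ~550 nm green light, expressed in millimetres
K_DEFOCUS_MM = 0.15        # assumed scene-dependent defocus constant (mm times N)

def blur_spot_mm(n):
    """Root-sum-square of geometric defocus blur (~K/N) and diffraction blur (~2.44*lambda*N)."""
    defocus = K_DEFOCUS_MM / n
    diffraction = 2.44 * WAVELENGTH_MM * n
    return math.hypot(defocus, diffraction)

# Scan f-numbers from f/2 to f/32 and report the one minimizing the combined blur.
best_n = min((n / 10 for n in range(20, 321)), key=blur_spot_mm)
print(round(best_n, 1))
```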
As a matter of performance, lenses often do not perform optimally when fully open, and thus generally give better sharpness when stopped down somewhat – this is sharpness in the plane of critical focus, setting aside issues of depth of field. Beyond a certain point, there is no further sharpness benefit to stopping down, and diffraction at the edges of the aperture begins to degrade imaging quality. There is accordingly a sweet spot, generally in the – range, depending on the lens, where sharpness is optimal, though some lenses are designed to perform optimally when wide open. How significant this is varies between lenses, and opinions differ on how much practical impact it has.
While optimal aperture can be determined mechanically, how much sharpness is required depends on how the image will be used – if the final image is viewed under normal conditions (e.g., an 8″×10″ image viewed at 10″), it may suffice to determine the f-number using criteria for minimum required sharpness, and there may be no practical benefit from further reducing the size of the blur spot. But this may not be true if the final image is viewed under more demanding conditions, e.g., a very large final image viewed at normal distance, or a portion of an image enlarged to normal size (Hansma 1996). Hansma also suggests that the final-image size may not be known when a photograph is taken, and obtaining the maximum practicable sharpness allows the decision to make a large final image to be made at a later time; see also critical sharpness.
In biology
In many living optical systems, the eye includes an iris which adjusts the size of the pupil, the opening through which light enters. The iris is analogous to the diaphragm, and the pupil (the adjustable opening in the iris) to the aperture. Refraction in the cornea causes the effective aperture (the entrance pupil in optics parlance) to differ slightly from the physical pupil diameter. The entrance pupil is typically about 4 mm in diameter, although it can range from as narrow as 2 mm in a brightly lit place to 8 mm in the dark as part of adaptation. In rare cases, some individuals are able to dilate their pupils even beyond 8 mm in scotopic lighting, close to the physical limit of the iris. In humans, the average iris diameter is about 11.5 mm, which naturally limits the maximum size of the pupil; eyes with larger irises can typically dilate to a wider extreme than those with smaller irises. Maximum dilated pupil size also decreases with age.
The iris controls the size of the pupil via two complementary sets of muscles, the sphincter and dilator muscles, which are innervated by the parasympathetic and sympathetic nervous systems and act to induce pupillary constriction and dilation, respectively. The state of the pupil is closely influenced by various factors, primarily light (or its absence), but also by emotional state, interest in the subject of attention, arousal, sexual stimulation, physical activity, accommodation state, and cognitive load. The field of view is not affected by the size of the pupil.
Some individuals are also able to exert direct, conscious control over their iris muscles and can therefore voluntarily constrict and dilate their pupils on command. However, this ability is rare, and its potential uses or advantages are unclear.
Equivalent aperture range
In digital photography, the 35mm-equivalent aperture range is sometimes considered to be more important than the actual f-number. The equivalent aperture is the f-number adjusted to correspond to the f-number of a lens with the same absolute aperture diameter and a 35mm-equivalent focal length. Smaller equivalent f-numbers are expected to lead to higher image quality, based on more total light gathered from the subject, as well as to reduced depth of field. For example, the Sony Cyber-shot DSC-RX10 uses a 1" sensor and a 24 – 200 mm lens with a constant maximum aperture along the zoom range; its equivalent aperture range corresponds to a lower equivalent f-number than that of some other cameras with smaller sensors.
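A minimal sketch of the underlying arithmetic, assuming a crop factor of roughly 2.7 for a 1" sensor (an illustrative assumption; the article's own elided figures are not reproduced here): the equivalent focal length and equivalent f-number are simply the marked values multiplied by the crop factor.

```python
def equivalent_focal_length(focal_mm, crop_factor):
    """35 mm-equivalent focal length: marked focal length scaled by the crop factor."""
    return focal_mm * crop_factor

def equivalent_f_number(f_number, crop_factor):
    """35 mm-equivalent f-number: same absolute aperture diameter on a full-frame lens."""
    return f_number * crop_factor

crop = 2.7  # commonly quoted crop factor for a 1" sensor (assumption)
print(equivalent_focal_length(24, crop), "-", equivalent_focal_length(200, crop), "mm")
print(round(equivalent_f_number(2.8, crop), 1))  # a marked f/2.8 behaves like roughly f/7.6 on 35 mm
```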
However, modern optical research concludes that sensor size does not itself determine the depth of field in an image. An aperture's f-number is not modified by the camera's sensor size because it is a ratio that pertains only to the attributes of the lens. Instead, the higher crop factor that results from a smaller sensor means that, in order to get equal framing of the subject, the photo must be taken from further away, which produces a less blurry background and changes the perceived depth of field. Similarly, a smaller sensor with an equivalent aperture will produce a darker image because of the pixel density of smaller sensors with equivalent megapixels. Every photosite on a camera's sensor requires a certain amount of surface area that is not sensitive to light, a factor that results in differences in pixel pitch and changes in the signal-to-noise ratio. However, neither the changed depth of field nor the perceived change in light sensitivity is a result of the aperture. Instead, equivalent aperture can be seen as a rule of thumb for judging how changes in sensor size might affect an image, even though qualities like pixel density and distance from the subject are the actual causes of the changes.
In scanning or sampling
The terms scanning aperture and sampling aperture are often used to refer to the opening through which an image is sampled, or scanned, for example in a drum scanner, an image sensor, or a television pickup apparatus. The sampling aperture can be a literal optical aperture, that is, a small opening in space, or it can be a time-domain aperture for sampling a signal waveform.
For example, film grain is quantified as graininess via a measurement of film density fluctuations as seen through a 0.048 mm sampling aperture.
In popular culture
Aperture Science, a fictional company in the Portal universe, is named after the optical aperture. The company's logo prominently features an aperture, which has come to symbolize the series, the fictional company, and the Aperture Science Laboratories Computer-Aided Enrichment Center in which the game series takes place.
| Physical sciences | Optics | Physics |
47481 | https://en.wikipedia.org/wiki/Aquifer | Aquifer | An aquifer is an underground layer of water-bearing material, consisting of permeable or fractured rock, or of unconsolidated materials (gravel, sand, or silt). Aquifers vary greatly in their characteristics. The study of water flow in aquifers and the characterization of aquifers is called hydrogeology. Related terms include aquitard, which is a bed of low permeability along an aquifer, and aquiclude (or aquifuge), which is a solid, impermeable layer underlying or overlying an aquifer; if such a layer overlies the aquifer, the resulting pressure can lead to the formation of a confined aquifer. Aquifers may be classified as saturated versus unsaturated; aquifers versus aquitards; confined versus unconfined; isotropic versus anisotropic; porous, karst, or fractured; and transboundary.
Groundwater from aquifers can be sustainably harvested by humans through the use of qanats leading to a well. This groundwater is a major source of fresh water for many regions; however, its use can present a number of challenges, such as overdrafting (extracting groundwater beyond the equilibrium yield of the aquifer), groundwater-related subsidence of land, and the salinization or pollution of the groundwater.
Properties
Depth
Aquifers occur from near-surface to deeper than . Those closer to the surface are not only more likely to be used for water supply and irrigation, but are also more likely to be replenished by local rainfall. Although aquifers are sometimes characterized as "underground rivers or lakes," they are actually porous rock saturated with water.
Many desert areas have limestone hills or mountains within them or close to them that can be exploited as groundwater resources. Part of the Atlas Mountains in North Africa, the Lebanon and Anti-Lebanon ranges between Syria and Lebanon, the Jebel Akhdar in Oman, and parts of the Sierra Nevada and neighboring ranges in the United States' Southwest have shallow aquifers that are exploited for their water. Overexploitation can lead to the exceeding of the practical sustained yield; i.e., more water is taken out than can be replenished.
Along the coastlines of certain countries, such as Libya and Israel, increased water usage associated with population growth has caused a lowering of the water table and the subsequent contamination of the groundwater with saltwater from the sea.
In 2013 large freshwater aquifers were discovered under continental shelves off Australia, China, North America and South Africa. They contain an estimated half a million cubic kilometers of "low salinity" water that could be economically processed into potable water. The reserves formed when ocean levels were lower and rainwater made its way into the ground in land areas that were not submerged until the ice age ended 20,000 years ago. The volume is estimated to be 100 times the amount of water extracted from other aquifers since 1900.
Groundwater recharge
Classification
An aquitard is a zone within the Earth that restricts the flow of groundwater from one aquifer to another. An aquitard can sometimes, if completely impermeable, be called an aquiclude or aquifuge. Aquitards are composed of layers of either clay or non-porous rock with low hydraulic conductivity.
Saturated versus unsaturated
Groundwater can be found at nearly every point in the Earth's shallow subsurface to some degree, although aquifers do not necessarily contain fresh water. The Earth's crust can be divided into two regions: the saturated zone or phreatic zone (e.g., aquifers, aquitards, etc.), where all available spaces are filled with water, and the unsaturated zone (also called the vadose zone), where there are still pockets of air that contain some water, but can be filled with more water.
Saturated means the pressure head of the water is greater than atmospheric pressure (it has a gauge pressure > 0). The definition of the water table is the surface where the pressure head is equal to atmospheric pressure (where gauge pressure = 0).
Unsaturated conditions occur above the water table where the pressure head is negative (absolute pressure can never be negative, but gauge pressure can) and the water that incompletely fills the pores of the aquifer material is under suction. The water content in the unsaturated zone is held in place by surface adhesive forces and it rises above the water table (the zero-gauge-pressure isobar) by capillary action to saturate a small zone above the phreatic surface (the capillary fringe) at less than atmospheric pressure. This is termed tension saturation and is not the same as saturation on a water-content basis. Water content in a capillary fringe decreases with increasing distance from the phreatic surface. The capillary head depends on soil pore size. In sandy soils with larger pores, the head will be less than in clay soils with very small pores. The normal capillary rise in a clayey soil is less than but can range between .
The capillary rise of water in a small-diameter tube involves the same physical process. The water table is the level to which water will rise in a large-diameter pipe (e.g., a well) that goes down into the aquifer and is open to the atmosphere.
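The same physical process can be sketched with Jurin's law for an idealized capillary tube; the effective pore radii below are illustrative assumptions, chosen only to show why fine-grained (clayey) material supports a much larger capillary fringe than coarse sand.

```python
# Capillary rise of water in a tube of radius r (Jurin's law, zero contact angle assumed):
#   h = 2 * gamma / (rho * g * r)
GAMMA = 0.0728   # surface tension of water, N/m (near 20 degrees C)
RHO = 1000.0     # density of water, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def capillary_rise_m(pore_radius_m):
    return 2 * GAMMA / (RHO * G * pore_radius_m)

# Illustrative effective pore radii (assumptions): coarse sand vs a clay-like material.
for name, radius in [("coarse sand, ~0.1 mm pores", 1e-4), ("clayey soil, ~1 um pores", 1e-6)]:
    print(f"{name}: capillary rise ~ {capillary_rise_m(radius):.2f} m")
```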
Aquifers versus aquitards
Aquifers are typically saturated regions of the subsurface that produce an economically feasible quantity of water to a well or spring (e.g., sand and gravel or fractured bedrock often make good aquifer materials).
An aquitard is a zone within the Earth that restricts the flow of groundwater from one aquifer to another. A completely impermeable aquitard is called an aquiclude or aquifuge. Aquitards contain layers of either clay or non-porous rock with low hydraulic conductivity.
In mountainous areas (or near rivers in mountainous areas), the main aquifers are typically unconsolidated alluvium, composed of mostly horizontal layers of materials deposited by water processes (rivers and streams), which in cross-section (looking at a two-dimensional slice of the aquifer) appear to be layers of alternating coarse and fine materials. Coarse materials, because of the high energy needed to move them, tend to be found nearer the source (mountain fronts or rivers), whereas the fine-grained material is carried farther from the source (to the flatter parts of the basin or overbank areas—sometimes called the pressure area). Since there are fewer fine-grained deposits near the source, this is a place where aquifers are often unconfined (sometimes called the forebay area), or in hydraulic communication with the land surface.
Confined versus unconfined
An unconfined aquifer has no impermeable barrier immediately above it, such that the water level can rise in response to recharge. A confined aquifer has an overlying impermeable barrier that prevents the water level in the aquifer from rising any higher. An aquifer in the same geologic unit may be confined in one area and unconfined in another. Unconfined aquifers are sometimes also called water table or phreatic aquifers, because their upper boundary is the water table or phreatic surface (see Biscayne Aquifer). Typically (but not always) the shallowest aquifer at a given location is unconfined, meaning it does not have a confining layer (an aquitard or aquiclude) between it and the surface. The term "perched" refers to ground water accumulating above a low-permeability unit or strata, such as a clay layer. This term is generally used to refer to a small local area of ground water that occurs at an elevation higher than a regionally extensive aquifer. The difference between perched and unconfined aquifers is their size (perched is smaller). Confined aquifers are aquifers that are overlain by a confining layer, often made up of clay. The confining layer might offer some protection from surface contamination.
If the distinction between confined and unconfined is not clear geologically (i.e., if it is not known if a clear confining layer exists, or if the geology is more complex, e.g., a fractured bedrock aquifer), the value of storativity returned from an aquifer test can be used to determine it (although aquifer tests in unconfined aquifers should be interpreted differently than confined ones). Confined aquifers have very low storativity values (much less than 0.01, and as little as ), which means that the aquifer is storing water using the mechanisms of aquifer matrix expansion and the compressibility of water, which typically are both quite small quantities. Unconfined aquifers have storativities (typically called specific yield) greater than 0.01 (1% of bulk volume); they release water from storage by the mechanism of actually draining the pores of the aquifer, releasing relatively large amounts of water (up to the drainable porosity of the aquifer material, or the minimum volumetric water content).
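A minimal sketch of the rule of thumb just described (the 0.01 threshold comes from the text; the function name and test values are illustrative assumptions):

```python
def interpret_storativity(storativity):
    """Rough interpretation of a storativity value returned by an aquifer test."""
    if storativity < 0.01:
        return "likely confined: elastic storage from matrix expansion and water compressibility"
    return "likely unconfined: specific yield, the pores actually drain"

for s in (1e-5, 5e-3, 0.02, 0.15):
    print(s, "->", interpret_storativity(s))
```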
Isotropic versus anisotropic
In isotropic aquifers or aquifer layers the hydraulic conductivity (K) is equal for flow in all directions, while in anisotropic conditions it differs, notably in horizontal (Kh) and vertical (Kv) sense.
Semi-confined aquifers with one or more aquitards work as an anisotropic system, even when the separate layers are isotropic, because the compound Kh and Kv values are different (see hydraulic transmissivity and hydraulic resistance).
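The compound behaviour of such a layered sequence can be sketched with the standard layer-averaging formulas: flow parallel to the layers sees the thickness-weighted arithmetic mean of K, while flow across the layers sees the harmonic mean. The layer thicknesses and conductivities below are illustrative assumptions.

```python
# Equivalent horizontal and vertical hydraulic conductivity of a layered sequence.
layers = [
    # (thickness in m, hydraulic conductivity K in m/day) -- illustrative values
    (10.0, 5.0),    # sandy aquifer layer
    (2.0, 0.001),   # clayey aquitard
    (8.0, 3.0),     # sandy aquifer layer
]

total_thickness = sum(d for d, _ in layers)
Kh = sum(d * K for d, K in layers) / total_thickness   # arithmetic mean: flow along the layers
Kv = total_thickness / sum(d / K for d, K in layers)   # harmonic mean: flow across the layers

print(f"equivalent Kh ~ {Kh:.2f} m/day, equivalent Kv ~ {Kv:.5f} m/day")
# Even though each layer is isotropic, the sequence as a whole is strongly anisotropic (Kh >> Kv).
```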
When calculating flow to drains or flow to wells in an aquifer, the anisotropy must be taken into account; otherwise the resulting design of the drainage system may be faulty.
Porous, karst, or fractured
To properly manage an aquifer its properties must be understood. Many properties must be known to predict how an aquifer will respond to rainfall, drought, pumping, and contamination. Considerations include where and how much water enters the groundwater from rainfall and snowmelt, how fast and in what direction the groundwater travels, and how much water leaves the ground as springs. Computer models can be used to test how accurately the understanding of the aquifer properties matches the actual aquifer performance. Environmental regulations require sites with potential sources of contamination to demonstrate that the hydrology has been characterized.
Porous
Porous aquifers typically occur in sand and sandstone. Porous aquifer properties depend on the depositional sedimentary environment and later natural cementation of the sand grains. The environment where a sand body was deposited controls the orientation of the sand grains, the horizontal and vertical variations, and the distribution of shale layers. Even thin shale layers are important barriers to groundwater flow. All these factors affect the porosity and permeability of sandy aquifers.
Sandy deposits formed in shallow marine environments and in windblown sand dune environments have moderate to high permeability while sandy deposits formed in river environments have low to moderate permeability. Rainfall and snowmelt enter the groundwater where the aquifer is near the surface. Groundwater flow directions can be determined from potentiometric surface maps of water levels in wells and springs. Aquifer tests and well tests can be used with Darcy's law flow equations to determine the ability of a porous aquifer to convey water.
Analyzing this type of information over an area gives an indication of how much water can be pumped without overdrafting and how contamination will travel. In porous aquifers, groundwater flows as slow seepage in the pores between sand grains. A groundwater flow rate of 1 foot per day (0.3 m/d) is considered a high rate for porous aquifers.
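A minimal Darcy's-law sketch, with illustrative values (assumptions, not data from the text), shows how such slow seepage rates arise:

```python
# Darcy's law: Q = K * A * i, with average linear (seepage) velocity v = K * i / n_e
K = 1.0      # hydraulic conductivity, m/day (assumed, typical of a sand)
i = 0.005    # hydraulic gradient, dimensionless (head drop per unit distance)
A = 100.0    # cross-sectional flow area, m^2
n_e = 0.25   # effective porosity, dimensionless

Q = K * A * i            # volumetric discharge through the section, m^3/day
v_seepage = K * i / n_e  # average velocity of water moving through the pores, m/day

print(f"discharge Q ~ {Q:.2f} m^3/day, seepage velocity ~ {v_seepage:.3f} m/day")
```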
Porosity is important, but, alone, it does not determine a rock's ability to act as an aquifer. Areas of the Deccan Traps (a basaltic lava) in west central India are good examples of rock formations with high porosity but low permeability, which makes them poor aquifers. Similarly, the micro-porous (Upper Cretaceous) Chalk Group of south east England, although having a reasonably high porosity, has a low grain-to-grain permeability, with its good water-yielding characteristics mostly due to micro-fracturing and fissuring.
Karst
Karst aquifers typically develop in limestone. Surface water containing natural carbonic acid moves down into small fissures in limestone. This carbonic acid gradually dissolves limestone thereby enlarging the fissures. The enlarged fissures allow a larger quantity of water to enter which leads to a progressive enlargement of openings. Abundant small openings store a large quantity of water. The larger openings form a conduit system that drains the aquifer to springs.
Characterization of karst aquifers requires field exploration to locate sinkholes, swallets, sinking streams, and springs in addition to studying geologic maps. Conventional hydrogeologic methods such as aquifer tests and potentiometric mapping are insufficient to characterize the complexity of karst aquifers. These conventional investigation methods need to be supplemented with dye traces, measurement of spring discharges, and analysis of water chemistry. U.S. Geological Survey dye tracing has determined that conventional groundwater models that assume a uniform distribution of porosity are not applicable for karst aquifers.
Linear alignments of surface features such as straight stream segments and sinkholes develop along fracture traces. Locating a well on a fracture trace or at the intersection of fracture traces increases the likelihood of encountering good water production. Voids in karst aquifers can be large enough to cause destructive collapse or subsidence of the ground surface, which can initiate a catastrophic release of contaminants. Groundwater flow rates in karst aquifers are much more rapid than in porous aquifers. For example, in the Barton Springs Edwards aquifer, dye traces measured karst groundwater flow rates from 0.5 to 7 miles per day (0.8 to 11.3 km/d). These rapid flow rates make karst aquifers much more sensitive to groundwater contamination than porous aquifers.
In the extreme case, groundwater may exist in underground rivers (e.g., caves underlying karst topography).
Fractured
If a rock unit of low porosity is highly fractured, it can also make a good aquifer (via fissure flow), provided the rock has a hydraulic conductivity sufficient to facilitate movement of water.
Human use of groundwater
Challenges for using groundwater include: overdrafting (extracting groundwater beyond the equilibrium yield of the aquifer), groundwater-related subsidence of land, groundwater becoming saline, groundwater pollution.
By country or continent
Africa
Aquifer depletion is a problem in some areas, especially in northern Africa, where one example is the Great Manmade River project of Libya. However, new methods of groundwater management, such as artificial recharge and injection of surface waters during seasonal wet periods, have extended the life of many freshwater aquifers, especially in the United States.
Australia
The Great Artesian Basin situated in Australia is arguably the largest groundwater aquifer in the world (over ). It plays a large part in water supplies for Queensland, and some remote parts of South Australia.
Canada
Discontinuous sand bodies at the base of the McMurray Formation in the Athabasca Oil Sands region of northeastern Alberta, Canada, are commonly referred to as the Basal Water Sand (BWS) aquifers. Saturated with water, they are confined beneath impermeable bitumen-saturated sands that are exploited to recover bitumen for synthetic crude oil production. Where they are deep-lying and recharge occurs from underlying Devonian formations they are saline, and where they are shallow and recharged by surface water they are non-saline. The BWS typically pose problems for the recovery of bitumen, whether by open-pit mining or by in situ methods such as steam-assisted gravity drainage (SAGD), and in some areas they are targets for waste-water injection.
South America
The Guarani Aquifer, located beneath the surface of Argentina, Brazil, Paraguay, and Uruguay, is one of the world's largest aquifer systems and is an important source of fresh water. Named after the Guarani people, it covers , with a volume of about , a thickness of between and a maximum depth of about .
United States
The Ogallala Aquifer of the central United States is one of the world's great aquifers, but in places it is being rapidly depleted by growing municipal use, and continuing agricultural use. This huge aquifer, which underlies portions of eight states, contains primarily fossil water from the time of the last glaciation. Annual recharge, in the more arid parts of the aquifer, is estimated to total only about 10 percent of annual withdrawals. According to a 2013 report by the United States Geological Survey (USGS), the depletion between 2001 and 2008, inclusive, is about 32 percent of the cumulative depletion during the entire 20th century.
In the United States, the biggest users of water from aquifers include agricultural irrigation and oil and coal extraction. "Cumulative total groundwater depletion in the United States accelerated in the late 1940s and continued at an almost steady linear rate through the end of the century. In addition to widely recognized environmental consequences, groundwater depletion also adversely impacts the long-term sustainability of groundwater supplies to help meet the Nation’s water needs."
An example of a significant and sustainable carbonate aquifer is the Edwards Aquifer in central Texas. This carbonate aquifer has historically provided high-quality water for nearly 2 million people and remains full even today because of substantial recharge from a number of area streams, rivers, and lakes. The primary risk to this resource is human development over the recharge areas.
| Physical sciences | Hydrology | Earth science |
47484 | https://en.wikipedia.org/wiki/Atmospheric%20pressure | Atmospheric pressure | Atmospheric pressure, also known as air pressure or barometric pressure (after the barometer), is the pressure within the atmosphere of Earth. The standard atmosphere (symbol: atm) is a unit of pressure defined as , which is equivalent to 1,013.25 millibars, 760 mm Hg, 29.9212 inches Hg, or 14.696 psi. The atm unit is roughly equivalent to the mean sea-level atmospheric pressure on Earth; that is, the Earth's atmospheric pressure at sea level is approximately 1 atm.
In most circumstances, atmospheric pressure is closely approximated by the hydrostatic pressure caused by the weight of air above the measurement point. As elevation increases, there is less overlying atmospheric mass, so atmospheric pressure decreases with increasing elevation. Because the atmosphere is thin relative to the Earth's radius—especially the dense atmospheric layer at low altitudes—the Earth's gravitational acceleration as a function of altitude can be approximated as constant and contributes little to this fall-off. Pressure measures force per unit area, with SI units of pascals (1 pascal = 1 newton per square metre, 1 N/m2). On average, a column of air with a cross-sectional area of 1 square centimetre (cm2), measured from the mean (average) sea level to the top of Earth's atmosphere, has a mass of about 1.03 kilograms and exerts a force or "weight" of about 10.1 newtons, resulting in a pressure of 10.1 N/cm2 or 101 kN/m2 (101 kilopascals, kPa). A column of air with a cross-sectional area of 1 in2 would have a weight of about 14.7 lbf, resulting in a pressure of 14.7 lbf/in2.
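A quick check of this arithmetic, using the figures quoted above and standard gravity:

```python
# Pressure exerted by the mass of an air column over 1 cm^2: P = m * g / A
m = 1.03      # kg of air above one square centimetre (figure quoted above)
g = 9.81      # gravitational acceleration, m/s^2
A = 1.0e-4    # 1 cm^2 expressed in m^2

force = m * g         # ~10.1 N
pressure = force / A  # ~101,000 Pa, i.e. ~101 kPa
print(f"force ~ {force:.1f} N, pressure ~ {pressure / 1000:.0f} kPa")
```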
Mechanism
Atmospheric pressure is caused by the gravitational attraction of the planet on the atmospheric gases above the surface and is a function of the mass of the planet, the radius of the surface, and the amount and composition of the gases and their vertical distribution in the atmosphere. It is modified by the planetary rotation and local effects such as wind velocity, density variations due to temperature and variations in composition.
Mean sea-level pressure
The mean sea-level pressure (MSLP) is the atmospheric pressure at mean sea level. This is the atmospheric pressure normally given in weather reports on radio, television, and newspapers or on the Internet.
The altimeter setting in aviation is an atmospheric pressure adjustment.
Average sea-level pressure is . In aviation weather reports (METAR), QNH is transmitted around the world in hectopascals or millibars (1 hectopascal = 1 millibar), except in the United States, Canada, and Japan where it is reported in inches of mercury (to two decimal places). The United States and Canada also report sea-level pressure SLP, which is adjusted to sea level by a different method, in the remarks section, not in the internationally transmitted part of the code, in hectopascals or millibars. However, in Canada's public weather reports, sea level pressure is instead reported in kilopascals.
In the US weather code remarks, three digits are all that are transmitted; decimal points and the one or two most significant digits are omitted: is transmitted as 132; is transmitted as 000; 998.7 hPa is transmitted as 987; etc. The highest sea-level pressure on Earth occurs in Siberia, where the Siberian High often attains a sea-level pressure above , with record highs close to . The lowest measurable sea-level pressure is found at the centres of tropical cyclones and tornadoes, with a record low of . A system transmitting the last three digits transmits the same code (800) for 1080.0 hPa as for 980.0 hPa.
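A sketch of this three-digit convention: the encoder follows the description above, while the decoder's rule (picking whichever candidate, 9xx.x or 10xx.x hPa, lies closer to a reference pressure) is a common disambiguation heuristic assumed here rather than something stated in the text.

```python
def encode_slp(pressure_hpa):
    """Keep only the tens, units and tenths digits, e.g. 998.7 -> '987', 1013.2 -> '132'."""
    return f"{round(pressure_hpa * 10) % 1000:03d}"

def decode_slp(code, reference_hpa=1000.0):
    """Expand a three-digit code to the candidate pressure nearest the reference value."""
    candidates = [900 + int(code) / 10, 1000 + int(code) / 10]
    return min(candidates, key=lambda p: abs(p - reference_hpa))

print(encode_slp(998.7))   # '987'
print(decode_slp("132"))   # 1013.2
print(decode_slp("800"))   # 980.0 -- the ambiguous case mentioned above
```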
Surface pressure
Surface pressure is the atmospheric pressure at a location on Earth's surface (terrain and oceans). It is directly proportional to the mass of air over that location.
For numerical reasons, atmospheric models such as general circulation models (GCMs) usually predict the nondimensional logarithm of surface pressure.
The average value of surface pressure on Earth is 985 hPa. This is in contrast to mean sea-level pressure, which involves the extrapolation of pressure to sea level for locations above or below sea level. The average pressure at mean sea level (MSL) in the International Standard Atmosphere (ISA) is 1,013.25 hPa, or 1 atmosphere (atm), or 29.92 inches of mercury.
Pressure (P), mass (m), and acceleration due to gravity (g) are related by P = F/A = (m*g)/A, where A is the surface area. Atmospheric pressure is thus proportional to the weight per unit area of the atmospheric mass above that location.
Altitude variation
Pressure on Earth varies with the altitude of the surface, so air pressure on mountains is usually lower than air pressure at sea level. Pressure varies smoothly from the Earth's surface to the top of the mesosphere. Although the pressure changes with the weather, NASA has averaged the conditions for all parts of the earth year-round. As altitude increases, atmospheric pressure decreases. One can calculate the atmospheric pressure at a given altitude. Temperature and humidity also affect the atmospheric pressure. Pressure is proportional to temperature and inversely related to humidity, and both of these are necessary to compute an accurate figure. The graph was developed for a temperature of 15 °C and a relative humidity of 0%.
At low altitudes above sea level, the pressure decreases by about for every 100 metres. For higher altitudes within the troposphere, the following equation (the barometric formula) relates atmospheric pressure p to altitude h:
The values in these equations are:
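One commonly used form of the barometric formula for the troposphere, assuming a constant lapse rate, is sketched below with standard-atmosphere constants; these values are assumptions for illustration and may differ from the constants used elsewhere.

```python
# Barometric formula for the troposphere (constant lapse rate assumed):
#   p(h) = p0 * (1 - L*h / T0) ** (g*M / (R*L))
p0 = 101325.0   # sea-level standard pressure, Pa
T0 = 288.15     # sea-level standard temperature, K
L  = 0.0065     # temperature lapse rate, K/m
g  = 9.80665    # gravitational acceleration, m/s^2
M  = 0.0289644  # molar mass of dry air, kg/mol
R  = 8.31446    # universal gas constant, J/(mol*K)

def pressure_at_altitude(h_m):
    return p0 * (1 - L * h_m / T0) ** (g * M / (R * L))

for h in (0, 1000, 5000, 8848):
    print(f"{h:5d} m: ~{pressure_at_altitude(h) / 100:.0f} hPa")
```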
Local variation
Atmospheric pressure varies widely on Earth, and these changes are important in studying weather and climate. Atmospheric pressure shows a diurnal or semidiurnal (twice-daily) cycle caused by global atmospheric tides. This effect is strongest in tropical zones, with an amplitude of a few hectopascals, and almost zero in polar areas. These variations have two superimposed cycles, a circadian (24 h) cycle, and a semi-circadian (12 h) cycle.
Records
The highest adjusted-to-sea-level barometric pressure ever recorded on Earth (above 750 meters) was measured in Tosontsengel, Mongolia, on 19 December 2001. The highest adjusted-to-sea-level barometric pressure ever recorded (below 750 meters) was at Agata in Evenk Autonomous Okrug, Russia (66°53'N, 93°28'E, elevation: ) on 31 December 1968. The distinction is made because of the problematic assumptions (a standard lapse rate is assumed) involved in reducing pressures measured at high elevations to sea level.
The Dead Sea, the lowest place on Earth at below sea level, has a correspondingly high typical atmospheric pressure of 1,065 hPa. A below-sea-level surface pressure record of was set on 21 February 1961.
The lowest non-tornadic atmospheric pressure ever measured was 870 hPa (0.858 atm; 25.69 inHg), set on 12 October 1979, during Typhoon Tip in the western Pacific Ocean. The measurement was based on an instrumental observation made from a reconnaissance aircraft.
Measurement based on the depth of water
One atmosphere () is also the pressure caused by the weight of a column of freshwater of approximately . Thus, a diver 10.3 m under water experiences a pressure of about 2 atmospheres (1 atm of air plus 1 atm of water). Conversely, 10.3 m is the maximum height to which water can be raised using suction under standard atmospheric conditions.
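A sketch of the hydrostatic arithmetic behind these figures, assuming fresh water at 1,000 kg/m^3 and standard gravity:

```python
# Hydrostatic pressure of a water column: P = rho * g * h
rho = 1000.0     # freshwater density, kg/m^3
g = 9.81         # gravitational acceleration, m/s^2
atm = 101325.0   # one standard atmosphere, Pa

h_one_atm = atm / (rho * g)             # column height exerting 1 atm, ~10.3 m
depth = 10.3                            # diver depth in metres
total_atm = 1 + rho * g * depth / atm   # 1 atm of air plus the water overhead
print(f"1 atm of fresh water ~ {h_one_atm:.1f} m; pressure at {depth} m depth ~ {total_atm:.2f} atm")
```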
Low pressures, such as those in natural gas lines, are sometimes specified in inches of water, typically written as w.c. (water column) gauge or w.g. (inches water) gauge. A typical gas-using residential appliance in the US is rated for a maximum of , which is approximately 14 w.g. Similar metric units, with a wide variety of names and notation based on millimetres, centimetres or metres, are now less commonly used.
Boiling point of liquids
Pure water boils at at earth's standard atmospheric pressure. The boiling point is the temperature at which the vapour pressure is equal to the atmospheric pressure around the liquid. Because of this, the boiling point of liquids is lower at lower pressure and higher at higher pressure. Cooking at high elevations, therefore, requires adjustments to recipes or pressure cooking. A rough approximation of elevation can be obtained by measuring the temperature at which water boils; in the mid-19th century, this method was used by explorers. Conversely, if one wishes to evaporate a liquid at a lower temperature, for example in distillation, the atmospheric pressure may be lowered by using a vacuum pump, as in a rotary evaporator.
Measurement and maps
An important application of the knowledge that atmospheric pressure varies directly with altitude was in determining the height of hills and mountains, thanks to reliable pressure measurement devices. In 1774, Maskelyne was confirming Newton's theory of gravitation at and on Schiehallion mountain in Scotland, and he needed to measure elevations on the mountain's sides accurately. William Roy, using barometric pressure, was able to confirm Maskelyne's height determinations; the agreement was within one meter (3.28 feet). This method became and continues to be useful for survey work and map making.
| Physical sciences | Atmosphere | null |
47486 | https://en.wikipedia.org/wiki/Atoll | Atoll | An atoll () is a ring-shaped island, including a coral rim that encircles a lagoon. There may be coral islands or cays on the rim. Atolls are located in warm tropical or subtropical parts of the oceans and seas where corals can develop. Most of the approximately 440 atolls in the world are in the Pacific Ocean.
Two different, well-cited models, the subsidence model and the antecedent karst model, have been used to explain the development of atolls. According to Charles Darwin's subsidence model, the formation of an atoll is explained by the sinking of a volcanic island around which a coral fringing reef has formed. Over geologic time, the volcanic island becomes extinct and eroded as it subsides completely beneath the surface of the ocean. As the volcanic island subsides, the coral fringing reef becomes a barrier reef that is detached from the island. Eventually, the reef and the small coral islets on top of it are all that is left of the original island, and a lagoon has taken the place of the former volcano. The lagoon is not the former volcanic crater. For the atoll to persist, the coral reef must be maintained at the sea surface, with coral growth matching any relative change in sea level (sinking of the island or rising oceans).
An alternative model for the origin of atolls is the antecedent karst model. In this model, the first step in the formation of an atoll is the development of a flat-topped, mound-like coral reef during the subsidence of an oceanic island of either volcanic or nonvolcanic origin below sea level. Then, when relative sea level drops below the flat surface of the coral reef, it is exposed to the atmosphere as a flat-topped island that is dissolved by rainfall to form limestone karst. Because of the hydrologic properties of this karst, the rate of dissolution of the exposed coral is lowest along its rim and increases inward to a maximum at the center of the island. As a result, a saucer-shaped island with a raised rim forms. When relative sea level submerges the island again, the rim provides a rocky core on which corals grow again to form the islands of an atoll, and the flooded bottom of the saucer forms the lagoon within them.
Usage
The word atoll comes from a Dhivehi word; Dhivehi is an Indo-Aryan language spoken in the Maldives. The word's first recorded English use was in 1625 as atollon. Charles Darwin, in his monograph The Structure and Distribution of Coral Reefs, recognized the word's indigenous origin and defined an atoll as a "circular group of coral islets", synonymous with "lagoon-island".
More modern definitions of atoll describe them as "annular reefs enclosing a lagoon in which there are no promontories other than reefs and islets composed of reef detritus" or "in an exclusively morphological sense, [as] a ring-shaped ribbon reef enclosing a lagoon".
Distribution and size
There are approximately 440 atolls in the world. Most of the world's atolls are in the Pacific Ocean (with concentrations in the Caroline Islands, the Coral Sea Islands, the Marshall Islands, the Tuamotu Islands, Kiribati, Tokelau, and Tuvalu) and the Indian Ocean (the Chagos Archipelago, Lakshadweep, the atolls of the Maldives, and the Outer Islands of Seychelles). In addition, Indonesia also has several atolls spread across the archipelago, such as in the Thousand Islands, Taka Bonerate Islands, and atolls in the Raja Ampat Islands. The Atlantic Ocean has no large groups of atolls, other than eight atolls east of Nicaragua that belong to the Colombian department of San Andres and Providencia in the Caribbean.
Reef-building corals will thrive only in warm tropical and subtropical waters of oceans and seas, and therefore atolls are found only in the tropics and subtropics. The northernmost atoll in the world is Kure Atoll at 28°25′ N, along with other atolls of the Northwestern Hawaiian Islands. The southernmost atolls in the world are Elizabeth Reef at 29°57′ S, and nearby Middleton Reef at 29°27′ S, in the Tasman Sea, both of which are part of the Coral Sea Islands Territory. The next southerly atoll is Ducie Island in the Pitcairn Islands Group, at 24°41′ S.
The atoll closest to the Equator is Aranuka of Kiribati. Its southern tip is just north of the Equator.
Bermuda is sometimes claimed as the "northernmost atoll" at a latitude of 32°18′ N. At this latitude, coral reefs would not develop without the warming waters of the Gulf Stream. However, Bermuda is termed a pseudo-atoll because its general form, while resembling that of an atoll, has a very different origin of formation.
In most cases, the land area of an atoll is very small in comparison to the total area. Atoll islands are low lying, with their elevations less than . Measured by total area, Lifou () is the largest raised coral atoll of the world, followed by Rennell Island (). More sources, however, list Kiritimati as the largest atoll in the world in terms of land area. It is also a raised coral atoll ( land area; according to other sources even ), main lagoon, other lagoons (according to other sources total lagoon size).
The geological formation known as a reef knoll refers to the elevated remains of an ancient atoll within a limestone region, appearing as a hill. The second largest atoll by dry land area is Aldabra, with . Huvadhu Atoll, situated in the southern region of the Maldives, holds the distinction of being the largest atoll based on the sheer number of islands it comprises, with a total of 255 individual islands.
List of atolls
Gallery
Formation
In 1842, Charles Darwin explained the creation of coral atolls in the southern Pacific Ocean based upon observations made during a five-year voyage aboard HMS Beagle from 1831 to 1836. Darwin's explanation suggests that several tropical island types, from high volcanic island, through barrier reef island, to atoll, represent a sequence of gradual subsidence of what started as an oceanic volcano. He reasoned that a fringing coral reef surrounding a volcanic island in the tropical sea will grow upward as the island subsides (sinks), becoming an "almost atoll", or barrier reef island, as typified by islands such as Aitutaki in the Cook Islands and Bora Bora and others in the Society Islands. The fringing reef becomes a barrier reef because the outer part of the reef maintains itself near sea level through biotic growth, while the inner part of the reef falls behind, becoming a lagoon where conditions are less favorable for the coral and calcareous algae responsible for most reef growth. In time, subsidence carries the old volcano below the ocean surface and the barrier reef remains. At this point, the island has become an atoll.
As formulated by J. E. Hoffmeister, F. S. McNeil, E. G. Prudy, and others, the antecedent karst model argues that atolls are Pleistocene features that are the direct result of the interaction between subsidence and preferential karst dissolution that occurred in the interior of flat topped coral reefs during exposure during glacial lowstands of sea level. The elevated rims along an island created by this preferential karst dissolution become the sites of coral growth and islands of atolls when flooded during interglacial highstands.
The research of A. W. Droxler and others supports the antecedent karst model: they found that the morphology of modern atolls is independent of any influence of an underlying submerged and buried island and is not rooted in an initial fringing reef or barrier reef attached to a slowly subsiding volcanic edifice. The Neogene reefs underlying the studied modern atolls, which overlie and completely bury the subsided islands, are all non-atoll, flat-topped reefs. They also found that atolls did not form during the subsidence of an island until MIS-11 (the Mid-Brunhes), long after many of the former islands had been completely submerged and buried by flat-topped reefs during the Neogene.
Atolls are the product of the growth of tropical marine organisms, and so these islands are found only in warm tropical waters. Volcanic islands located beyond the warm water temperature requirements of hermatypic (reef-building) organisms become seamounts as they subside, and are eroded away at the surface. An island that is located where the ocean water temperatures are just sufficiently warm for upward reef growth to keep pace with the rate of subsidence is said to be at the Darwin Point. Islands in colder, more polar regions evolve toward seamounts or guyots; warmer, more equatorial islands evolve toward atolls, for example Kure Atoll. However, ancient atolls during the Mesozoic appear to exhibit different growth and evolution patterns.
Coral atolls are important as sites where dolomitization of calcite occurs. Several models have been proposed for the dolomitization of calcite and aragonite within them. They are the evaporative, seepage-reflux, mixing-zone, burial, and seawater models. Although the origin of replacement dolomites remains problematic and controversial, it is generally accepted that seawater was the source of magnesium for dolomitization and the fluid in which calcite was dolomitized to form the dolomites found within atolls. Various processes have been invoked to drive large amounts of seawater through an atoll in order for dolomitization to occur.
Investigation by the Royal Society of London
In 1896, 1897 and 1898, the Royal Society of London carried out drilling on Funafuti atoll in Tuvalu for the purpose of investigating the formation of coral reefs. They wanted to determine whether traces of shallow water organisms could be found at depth in the coral of Pacific atolls. This investigation followed the work on the structure and distribution of coral reefs conducted by Charles Darwin in the Pacific.
The first expedition in 1896 was led by Professor William Johnson Sollas of the University of Oxford. Geologists included Walter George Woolnough and Edgeworth David of the University of Sydney. Professor Edgeworth David led the expedition in 1897. The third expedition in 1898 was led by Alfred Edmund Finckh.
| Physical sciences | Oceanic and coastal landforms | null |
47488 | https://en.wikipedia.org/wiki/Barometer | Barometer | A barometer is a scientific instrument that is used to measure air pressure in a certain environment. Pressure tendency can forecast short term changes in the weather. Many measurements of air pressure are used within surface weather analysis to help find surface troughs, pressure systems and frontal boundaries.
Barometers and pressure altimeters (the most basic and common type of altimeter) are essentially the same instrument, but used for different purposes. An altimeter is intended to be used at different levels matching the corresponding atmospheric pressure to the altitude, while a barometer is kept at the same level and measures subtle pressure changes caused by weather and elements of weather. The average atmospheric pressure on the Earth's surface varies between 940 and 1040 hPa (mbar). The average atmospheric pressure at sea level is 1013 hPa (mbar).
Etymology
The word barometer is derived from the Ancient Greek (), meaning "weight", and (), meaning "measure".
History
Evangelista Torricelli is usually credited with inventing the barometer in 1643, although the historian W. E. Knowles Middleton suggests the more likely date is 1644 (when Torricelli first reported his experiments; the 1643 date was only suggested after his death). Gasparo Berti, an Italian mathematician and astronomer, also built a rudimentary water barometer sometime between 1640 and 1644, but it was not a true barometer as it was not intended to move and record variable air pressure. French scientist and philosopher René Descartes described the design of an experiment to determine atmospheric pressure as early as 1631, but there is no evidence that he built a working barometer at that time.
Baliani's siphon experiment
On 27 July 1630, Giovanni Battista Baliani wrote a letter to Galileo Galilei explaining an experiment he had made in which a siphon, led over a hill about 21 m high, failed to work. When the end of the siphon was opened in a reservoir, the water level in that limb would sink to about 10 m above the reservoir. Galileo responded with an explanation of the phenomenon: he proposed that it was the power of a vacuum that held the water up, and at a certain height the amount of water simply became too much and the force could not hold any more, like a cord that can support only so much weight. This was a restatement of the theory of horror vacui ("nature abhors a vacuum"), which dates to Aristotle, and which Galileo restated as resistenza del vacuo.
Berti's vacuum experiment
Galileo's ideas, presented in his Discorsi (Two New Sciences), reached Rome in December 1638. Physicists Gasparo Berti and father Raffaello Magiotti were excited by these ideas, and decided to seek a better way to attempt to produce a vacuum other than with a siphon. Magiotti devised such an experiment. Four accounts of the experiment exist, all written some years later. No exact date was given, but since Two New Sciences reached Rome in December 1638, and Berti died before January 2, 1644, science historian W. E. Knowles Middleton places the event to sometime between 1639 and 1643. Present were Berti, Magiotti, Jesuit polymath Athanasius Kircher, and Jesuit physicist Niccolò Zucchi.
In brief, Berti's experiment consisted of filling with water a long tube that had both ends plugged, then standing the tube in a basin of water. The bottom end of the tube was opened, and water that had been inside of it poured out into the basin. However, only part of the water in the tube flowed out, and the level of the water inside the tube stayed at an exact level, which happened to be , the same height limit Baliani had observed in the siphon. What was most important about this experiment was that the lowering water had left a space above it in the tube which had no intermediate contact with air to fill it up. This seemed to suggest the possibility of a vacuum existing in the space above the water.
Evangelista Torricelli
Evangelista Torricelli, a friend and student of Galileo, interpreted the results of the experiments in a novel way. He proposed that the weight of the atmosphere, not an attracting force of the vacuum, held the water in the tube. In a letter to Michelangelo Ricci in 1644 concerning the experiments, he wrote:
Many have said that a vacuum does not exist, others that it does exist in spite of the repugnance of nature and with difficulty; I know of no one who has said that it exists without difficulty and without a resistance from nature. I argued thus: If there can be found a manifest cause from which the resistance can be derived which is felt if we try to make a vacuum, it seems to me foolish to try to attribute to vacuum those operations which follow evidently from some other cause; and so by making some very easy calculations, I found that the cause assigned by me (that is, the weight of the atmosphere) ought by itself alone to offer a greater resistance than it does when we try to produce a vacuum.
It was traditionally thought, especially by the Aristotelians, that the air did not have weight; that is, that the kilometers of air above the surface of the Earth did not exert any weight on the bodies below it. Even Galileo had accepted the weightlessness of air as a simple truth. Torricelli proposed that rather than an attractive force of the vacuum sucking up water, air did indeed have weight, which pushed on the water, holding up a column of it. He argued that the level that the water stayed at—c. 10.3 m above the water surface below—was reflective of the force of the air's weight pushing on the water in the basin, setting a limit for how far down the water level could sink in a tall, closed, water-filled tube. He viewed the barometer as a balance—an instrument for measurement—as opposed to merely an instrument for creating a vacuum, and since he was the first to view it this way, he is traditionally considered the inventor of the barometer, in the sense in which we now use the term.
Torricelli's mercury barometer
Because of rumors circulating in Torricelli's gossipy Italian neighbourhood, including that he was engaged in some form of sorcery or witchcraft, Torricelli realized he had to keep his experiment secret to avoid the risk of being arrested. He needed a liquid heavier than water, and from his previous association with Galileo and Galileo's suggestions, he deduced that using mercury would allow a shorter tube. With mercury, which is about 14 times denser than water, a tube only 80 cm long was needed, rather than 10.5 m.
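The arithmetic behind the shorter tube can be sketched directly; the densities below are approximate assumptions (mercury at roughly 13,600 kg/m^3, about 13.6 times that of water, which the text rounds to "about 14 times").

```python
# Column height of a fluid that balances one atmosphere: h = P / (rho * g)
P = 101325.0   # one standard atmosphere, Pa
g = 9.81       # gravitational acceleration, m/s^2
densities = {"water": 1000.0, "mercury": 13600.0}   # kg/m^3, approximate

for fluid, rho in densities.items():
    print(f"{fluid}: ~{P / (rho * g):.2f} m")
# water ~10.3 m, mercury ~0.76 m -- hence a tube well under a metre long suffices for mercury.
```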
Blaise Pascal
In 1646, Blaise Pascal along with Pierre Petit, had repeated and perfected Torricelli's experiment after hearing about it from Marin Mersenne, who himself had been shown the experiment by Torricelli toward the end of 1644. Pascal further devised an experiment to test the Aristotelian proposition that it was vapours from the liquid that filled the space in a barometer. His experiment compared water with wine, and since the latter was considered more "spiritous", the Aristotelians expected the wine to stand lower (since more vapours would mean more pushing down on the liquid column). Pascal performed the experiment publicly, inviting the Aristotelians to predict the outcome beforehand. The Aristotelians predicted the wine would stand lower. It did not.
First atmospheric pressure vs. altitude experiment
However, Pascal went even further to test the mechanical theory. If, as suspected by mechanical philosophers like Torricelli and Pascal, air had weight, the pressure would be less at higher altitudes. Therefore, Pascal wrote to his brother-in-law, Florin Perier, who lived near a mountain called the Puy de Dôme, asking him to perform a crucial experiment. Perier was to take a barometer up the Puy de Dôme and make measurements along the way of the height of the column of mercury. He was then to compare it to measurements taken at the foot of the mountain to see if those measurements taken higher up were in fact smaller. In September 1648, Perier carefully and meticulously carried out the experiment, and found that Pascal's predictions had been correct. The column of mercury stood lower as the barometer was carried to a higher altitude.
Types
Water barometers
The concept that decreasing atmospheric pressure predicts stormy weather, postulated by Lucien Vidi, provides the theoretical basis for a weather prediction device called a "weather glass" or a "Goethe barometer" (named for Johann Wolfgang von Goethe, the renowned German writer and polymath who developed a simple but effective weather ball barometer using the principles developed by Torricelli). The French name, le baromètre Liègeois, is used by some English speakers. This name reflects the origins of many early weather glasses – the glass blowers of Liège, Belgium.
The weather ball barometer consists of a glass container with a sealed body, half filled with water. A narrow spout connects to the body below the water level and rises above the water level. The narrow spout is open to the atmosphere. When the air pressure is lower than it was at the time the body was sealed, the water level in the spout will rise above the water level in the body; when the air pressure is higher, the water level in the spout will drop below the water level in the body. A variation of this type of barometer can be easily made at home.
Mercury barometers
A mercury barometer is an instrument used to measure atmospheric pressure in a certain location and has a vertical glass tube closed at the top sitting in an open mercury-filled basin at the bottom. Mercury in the tube adjusts until the weight of it balances the atmospheric force exerted on the reservoir. High atmospheric pressure places more force on the reservoir, forcing mercury higher in the column. Low pressure allows the mercury to drop to a lower level in the column by lowering the force placed on the reservoir. Since higher temperature levels around the instrument will reduce the density of the mercury, the scale for reading the height of the mercury is adjusted to compensate for this effect. The tube has to be at least as long as the length dipping into the mercury, plus the head space, plus the maximum length of the column.
Torricelli documented that the height of the mercury in a barometer changed slightly each day and concluded that this was due to the changing pressure in the atmosphere. He wrote: "We live submerged at the bottom of an ocean of elementary air, which is known by incontestable experiments to have weight". Inspired by Torricelli, Otto von Guericke on 5 December 1660 found that air pressure was unusually low and predicted a storm, which occurred the next day.
The mercury barometer's design gives rise to the expression of atmospheric pressure in inches or millimeters of mercury (mmHg). A torr was originally defined as 1 mmHg. The pressure is quoted as the level of the mercury's height in the vertical column. Typically, atmospheric pressure is measured between and of Hg. One atmosphere (1 atm) is equivalent to of mercury.
Design changes to make the instrument more sensitive, simpler to read, and easier to transport resulted in variations such as the basin, siphon, wheel, cistern, Fortin, multiple folded, stereometric, and balance barometers.
In 2007, a European Union directive was enacted to restrict the use of mercury in new measuring instruments intended for the general public, effectively ending the production of new mercury barometers in Europe. The repair and trade of antiques (produced before late 1957) remained unrestricted.
Fitzroy barometer
Fitzroy barometers combine the standard mercury barometer with a thermometer, as well as a guide of how to interpret pressure changes.
Fortin barometer
Fortin barometers use a variable displacement mercury cistern, usually constructed with a thumbscrew pressing on a leather diaphragm bottom (V in the diagram). This compensates for displacement of mercury in the column with varying pressure. To use a Fortin barometer, the level of mercury is set to zero by using the thumbscrew to make an ivory pointer (O in the diagram) just touch the surface of the mercury. The pressure is then read on the column by adjusting the vernier scale so that the mercury just touches the sightline at Z. Some models also employ a valve for closing the cistern, enabling the mercury column to be forced to the top of the column for transport. This prevents water-hammer damage to the column in transit.
Sympiesometer
A sympiesometer is a compact and lightweight barometer that was widely used on ships in the early 19th century. The sensitivity of this barometer was also used to measure altitude.
Sympiesometers have two parts. One is a traditional mercury thermometer that is needed to calculate the expansion or contraction of the fluid in the barometer. The other is the barometer, consisting of a J-shaped tube open at the lower end and closed at the top, with small reservoirs at both ends of the tube.
Wheel barometers
A wheel barometer uses a "J" tube sealed at the top of the longer limb. The shorter limb is open to the atmosphere, and floating on top of the mercury there is a small glass float. A fine silken thread attached to the float passes up over a wheel and then back down to a counterweight (usually protected in another tube). The wheel turns the pointer on the front of the barometer. As atmospheric pressure increases, mercury moves from the short to the long limb, the float falls, and the pointer moves. When pressure falls, the mercury moves back, lifting the float and turning the dial the other way.
Around 1810 the wheel barometer, which could be read from a great distance, became the first practical and commercial instrument favoured by farmers and the educated classes in the UK. The face of the barometer was circular with a simple dial pointing to an easily readable scale: "Rain - Change - Dry" with the "Change" at the top centre of the dial. Later models added a barometric scale with finer graduations: "Stormy (28 inches of mercury), Much Rain (28.5), Rain (29), Change (29.5), Fair (30), Set fair (30.5), Very dry (31)".
Natalo Aiano is recognised as one of the finest makers of wheel barometers, an early pioneer in a wave of artisanal Italian instrument and barometer makers who were encouraged to emigrate to the UK. He is listed as working in Holborn, London, until 1805. From 1770 onwards, a large number of Italians came to England because they were accomplished glass blowers or instrument makers. By 1840 it was fair to say that the Italians dominated the industry in England.
Vacuum pump oil barometer
Using vacuum pump oil as the working fluid in a barometer has led to the creation of the new "World's Tallest Barometer" in February 2013. The barometer at Portland State University (PSU) uses doubly distilled vacuum pump oil and has a nominal height of about 12.4 m for the oil column height; expected excursions are in the range of ±0.4 m over the course of a year. Vacuum pump oil has very low vapour pressure and is available in a range of densities; the lowest density vacuum oil was chosen for the PSU barometer to maximize the oil column height.
Aneroid barometers
An aneroid barometer is an instrument used for measuring air pressure via a method that does not involve liquid. Invented in 1844 by French scientist Lucien Vidi, the aneroid barometer uses a small, flexible metal box called an aneroid cell (capsule), which is made from an alloy of beryllium and copper. The evacuated capsule (or usually several capsules, stacked to add up their movements) is prevented from collapsing by a strong spring. Small changes in external air pressure cause the cell to expand or contract. This expansion and contraction drives mechanical levers such that the tiny movements of the capsule are amplified and displayed on the face of the aneroid barometer. Many models include a manually set needle which is used to mark the current measurement so that a relative change can be seen. This type of barometer is common in homes and in recreational boats. It is also used in meteorology, mostly in barographs, and as a pressure instrument in radiosondes.
Barographs
A barograph is a recording aneroid barometer where the changes in atmospheric pressure are recorded on a paper chart.
The principle of the barograph is the same as that of the aneroid barometer. Whereas the barometer displays the pressure on a dial, the barograph uses the small movements of the box, transmitted by a system of levers to a recording arm that has at its extreme end either a scribe or a pen. A scribe records on smoked foil while a pen records on paper using ink, held in a nib. The recording material is mounted on a cylindrical drum which is rotated slowly by a clock. Commonly, the drum makes one revolution per day, per week, or per month, and the rotation rate can often be selected by the user.
MEMS barometers
Microelectromechanical systems (or MEMS) barometers are extremely small devices between 1 and 100 micrometres in size (0.001 to 0.1 mm). They are created via photolithography or photochemical machining. Typical applications include miniaturized weather stations, electronic barometers and altimeters.
A barometer can also be found in smartphones such as the Samsung Galaxy Nexus, Samsung Galaxy S3-S6, Motorola Xoom, Apple iPhone 6 and newer iPhones, and Timex Expedition WS4 smartwatch, based on MEMS and piezoresistive pressure-sensing technologies. Inclusion of barometers on smartphones was originally intended to provide a faster GPS lock. However, third party researchers were unable to confirm additional GPS accuracy or lock speed due to barometric readings. The researchers suggest that the inclusion of barometers in smartphones may provide a solution for determining a user's elevation, but also suggest that several pitfalls must first be overcome.
More unusual barometers
There are many other more unusual types of barometer, ranging from variations on the storm barometer, such as the Collins Patent Table Barometer, to more traditional-looking designs such as Hooke's Otheometer and the Ross Sympiesometer. Some, such as the Shark Oil barometer, work only in a certain temperature range, achieved in warmer climates.
Applications
Barometric pressure and the pressure tendency (the change of pressure over time) have been used in weather forecasting since the late 19th century. When used in combination with wind observations, reasonably accurate short-term forecasts can be made. Simultaneous barometric readings from across a network of weather stations allow maps of air pressure to be produced, which were the first form of the modern weather map when created in the 19th century. Isobars, lines of equal pressure, when drawn on such a map, give a contour map showing areas of high and low pressure. Localized high atmospheric pressure acts as a barrier to approaching weather systems, diverting their course. Atmospheric lift caused by low-level wind convergence into a surface low brings clouds and sometimes precipitation. The larger the change in pressure, especially if more than 3.5 hPa (0.1 inHg), the greater the change in weather that can be expected. If the pressure drop is rapid, a low pressure system is approaching, and there is a greater chance of rain. Rapid pressure rises, such as in the wake of a cold front, are associated with improving weather conditions, such as clearing skies.
With falling air pressure, gases trapped within the coal in deep mines can escape more freely. Thus low pressure increases the risk of firedamp accumulating. Collieries therefore keep track of the pressure. In the case of the Trimdon Grange colliery disaster of 1882 the mines inspector drew attention to the records and in the report stated "the conditions of atmosphere and temperature may be taken to have reached a dangerous point".
Aneroid barometers are used in scuba diving. A submersible pressure gauge is used to keep track of the contents of the diver's air tank. Another gauge is used to measure the hydrostatic pressure, usually expressed as a depth of sea water. Either or both gauges may be replaced with electronic variants or a dive computer.
Compensations
Temperature
The density of mercury will change with increase or decrease in temperature, so a reading must be adjusted for the temperature of the instrument. For this purpose a mercury thermometer is usually mounted on the instrument. Temperature compensation of an aneroid barometer is accomplished by including a bi-metal element in the mechanical linkages. Aneroid barometers sold for domestic use typically have no compensation under the assumption that they will be used within a controlled room temperature range.
Altitude
As the air pressure decreases at altitudes above sea level (and increases below sea level) the uncorrected reading of the barometer will depend on its location. The reading is then adjusted to an equivalent sea-level pressure for purposes of reporting. For example, if a barometer located at sea level and under fair weather conditions is moved to an altitude of 1,000 feet (305 m), about 1 inch of mercury (~35 hPa) must be added on to the reading. The barometer readings at the two locations should be the same if there are negligible changes in time, horizontal distance, and temperature. If this were not done, there would be a false indication of an approaching storm at the higher elevation.
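As a rough numerical check of this rule of thumb, the sketch below uses the isothermal barometric formula with an assumed 15 °C air temperature and standard constants; it is an illustrative approximation, not the full meteorological reduction procedure.

```python
import math

G = 9.80665        # gravitational acceleration, m/s^2
M_AIR = 0.0289644  # molar mass of dry air, kg/mol
R = 8.31446        # universal gas constant, J/(mol*K)
T = 288.15         # assumed air temperature, K (15 degrees C)
P0 = 101325.0      # sea-level pressure, Pa

def pressure_at_altitude(h_m, p0=P0, t=T):
    """Approximate station pressure (Pa) at a height h_m above sea level."""
    return p0 * math.exp(-M_AIR * G * h_m / (R * t))

correction = P0 - pressure_at_altitude(305)   # pressure drop over 1,000 ft (305 m)
print(round(correction / 100))                # ~36 hPa
print(round(correction / 3386.39, 2))         # ~1.06 inches of mercury
```

The result, roughly 36 hPa or about one inch of mercury, is consistent with the correction quoted above.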
Aneroid barometers have a mechanical adjustment that allows the equivalent sea level pressure to be read directly and without further adjustment if the instrument is not moved to a different altitude. Setting an aneroid barometer is similar to resetting an analog clock that is not at the correct time. Its dial is rotated so that the current atmospheric pressure from a known accurate and nearby barometer (such as the local weather station) is displayed. No calculation is needed, as the source barometer reading has already been converted to equivalent sea-level pressure, and this is transferred to the barometer being set—regardless of its altitude. Though somewhat rare, a few aneroid barometers intended for monitoring the weather are calibrated to manually adjust for altitude. In this case, knowing either the altitude or the current atmospheric pressure would be sufficient for future accurate readings.
The table below shows examples for three locations in the city of San Francisco, California. Note the corrected barometer readings are identical, and based on equivalent sea-level pressure. (Assume a temperature of 15 °C.)
In 1787, during a scientific expedition on Mont Blanc, De Saussure undertook research and executed physical experiments on the boiling point of water at different heights. He calculated the height at each of his experiments by measuring how long it took an alcohol burner to boil an amount of water, and by these means he determined the height of the mountain to be 4775 metres. (This later turned out to be 32 metres less than the actual height of 4807 metres). For these experiments De Saussure brought specific scientific equipment, such as a barometer and thermometer. His calculated boiling temperature of water at the top of the mountain was fairly accurate, only off by 0.1 kelvin.
Based on his findings, the altimeter could be developed as a specific application of the barometer. In the mid-19th century, this method was used by explorers.
Equation
When atmospheric pressure is measured by a barometer, the pressure is also referred to as the "barometric pressure". Assume a barometer with a cross-sectional area A, a height h, filled with mercury from the bottom at Point B to the top at Point C. The pressure at the bottom of the barometer, Point B, is equal to the atmospheric pressure. The pressure at the very top, Point C, can be taken as zero because there is only mercury vapour above this point and its pressure is very low relative to the atmospheric pressure. Therefore, one can find the atmospheric pressure using the barometer and this equation:
Patm = ρgh
where ρ is the density of mercury, g is the gravitational acceleration, and h is the height of the mercury column above the free surface area. The physical dimensions (length of tube and cross-sectional area of the tube) of the barometer itself have no effect on the height of the fluid column in the tube.
In thermodynamic calculations, a commonly used pressure unit is the "standard atmosphere". This is the pressure resulting from a column of mercury of 760 mm in height at 0 °C. For the density of mercury, use ρHg = 13,595 kg/m³ and for gravitational acceleration use g = 9.807 m/s².
If water were used (instead of mercury) to meet the standard atmospheric pressure, a water column of roughly 10.3 m (33.8 ft) would be needed.
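A minimal Python sketch of this relation, using the values quoted above, reproduces both the standard atmosphere from a 760 mm mercury column and the roughly 10.3 m water column; the function names are illustrative.

```python
G = 9.807          # gravitational acceleration, m/s^2
RHO_HG = 13595.0   # density of mercury at 0 degrees C, kg/m^3
RHO_WATER = 1000.0 # approximate density of water, kg/m^3

def column_pressure(rho, h):
    """Pressure (Pa) exerted by a fluid column of density rho and height h."""
    return rho * G * h

def column_height(rho, p):
    """Column height (m) needed for a fluid of density rho to balance pressure p."""
    return p / (rho * G)

p_std = column_pressure(RHO_HG, 0.760)            # ~101,300 Pa, about 1 atm
print(round(p_std))
print(round(column_height(RHO_WATER, p_std), 1))  # ~10.3 m of water
```

The same relation accounts for the much taller column of a low-density working fluid such as the vacuum pump oil barometer described earlier.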
Standard atmospheric pressure as a function of elevation:
Note: 1 torr = 133.3 Pa = 0.03937 inHg
| Technology | Measuring instruments | null |
47490 | https://en.wikipedia.org/wiki/Biodegradation | Biodegradation | Biodegradation is the breakdown of organic matter by microorganisms, such as bacteria and fungi. It is generally assumed to be a natural process, which differentiates it from composting. Composting is a human-driven process in which biodegradation occurs under a specific set of circumstances.
The process of biodegradation is threefold: first an object undergoes biodeterioration, which is the mechanical weakening of its structure; then follows biofragmentation, which is the breakdown of materials by microorganisms; and finally assimilation, which is the incorporation of the old material into new cells.
In practice, almost all chemical compounds and materials are subject to biodegradation, the key element being time. Things like vegetables may degrade within days, while glass and some plastics take many millennia to decompose. A standard for biodegradability used by the European Union is that greater than 90% of the original material must be converted into CO2, water and minerals by biological processes within 6 months.
Mechanisms
The process of biodegradation can be divided into three stages: biodeterioration, biofragmentation, and assimilation. Biodeterioration is sometimes described as a surface-level degradation that modifies the mechanical, physical and chemical properties of the material. This stage occurs when the material is exposed to abiotic factors in the outdoor environment and allows for further degradation by weakening the material's structure. Some abiotic factors that influence these initial changes are compression (mechanical), light, temperature and chemicals in the environment. While biodeterioration typically occurs as the first stage of biodegradation, it can in some cases be parallel to biofragmentation. Hueck, however, defined Biodeterioration as the undesirable action of living organisms on Man's materials, involving such things as breakdown of stone facades of buildings, corrosion of metals by microorganisms or merely the esthetic changes induced on man-made structures by the growth of living organisms.
Biofragmentation of a polymer is the lytic process in which bonds within a polymer are cleaved, generating oligomers and monomers in its place. The steps taken to fragment these materials also differ based on the presence of oxygen in the system. The breakdown of materials by microorganisms when oxygen is present is aerobic digestion, and the breakdown of materials when oxygen is not present is anaerobic digestion. The main difference between these processes is that anaerobic reactions produce methane, while aerobic reactions do not (however, both reactions produce carbon dioxide, water, some type of residue, and a new biomass). In addition, aerobic digestion typically occurs more rapidly than anaerobic digestion, while anaerobic digestion does a better job reducing the volume and mass of the material. Due to anaerobic digestion's ability to reduce the volume and mass of waste materials and produce a natural gas, anaerobic digestion technology is widely used for waste management systems and as a source of local, renewable energy.
In the assimilation stage, the resulting products from biofragmentation are then integrated into microbial cells. Some of the products from fragmentation are easily transported within the cell by membrane carriers. However, others still have to undergo biotransformation reactions to yield products that can then be transported inside the cell. Once inside the cell, the products enter catabolic pathways that either lead to the production of adenosine triphosphate (ATP) or elements of the cells structure.
Aerobic biodegradation equation
Cpolymer + O2 → Cresidue + Cbiomass + CO2 + H2O
Anaerobic biodegradation equation
Cpolymer → Cresidue + Cbiomass + CO2 + CH4 + H2O
Factors affecting biodegradation rate
In practice, almost all chemical compounds and materials are subject to biodegradation processes. The significance, however, is in the relative rates of such processes, such as days, weeks, years or centuries. A number of factors determine the rate at which this degradation of organic compounds occurs. Factors include light, water, oxygen and temperature. The degradation rate of many organic compounds is limited by their bioavailability, which is the rate at which a substance is absorbed into a system or made available at the site of physiological activity, as compounds must be released into solution before organisms can degrade them. The rate of biodegradation can be measured in a number of ways. Respirometry tests can be used for aerobic microbes. First one places a solid waste sample in a container with microorganisms and soil, and then aerates the mixture. Over the course of several days, microorganisms digest the sample bit by bit and produce carbon dioxide – the resulting amount of CO2 serves as an indicator of degradation. Biodegradability can also be measured using anaerobic microbes and the amount of methane that they are able to produce.
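In outline, such a respirometry result is reported as the measured CO2 expressed as a percentage of the theoretical maximum computed from the sample's carbon content. The following Python sketch shows that arithmetic with purely hypothetical numbers; it illustrates the idea rather than any particular standard test method.

```python
M_C = 12.011    # molar mass of carbon, g/mol
M_CO2 = 44.009  # molar mass of carbon dioxide, g/mol

def theoretical_co2(sample_mass_g, carbon_fraction):
    """Maximum CO2 (g) the sample could release if all of its carbon were mineralised."""
    return sample_mass_g * carbon_fraction * (M_CO2 / M_C)

def percent_biodegradation(co2_measured_g, sample_mass_g, carbon_fraction):
    """Measured CO2 as a percentage of the theoretical maximum."""
    return 100.0 * co2_measured_g / theoretical_co2(sample_mass_g, carbon_fraction)

# Hypothetical example: a 10 g sample that is 45% carbon by mass releases 12 g of CO2.
print(round(percent_biodegradation(12.0, 10.0, 0.45), 1))  # ~72.8 percent degraded
```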
It is important to note the factors that affect biodegradation rates during product testing to ensure that the results produced are accurate and reliable. Several materials will test as being biodegradable under optimal conditions in a lab for approval, but these results may not reflect real-world outcomes where factors are more variable. For example, a material that tested as biodegrading at a high rate in the lab may not degrade at a high rate in a landfill, because landfills often lack the light, water, and microbial activity necessary for degradation to occur. Thus, it is very important that there are standards for biodegradable plastic products, which have a large impact on the environment. The development and use of accurate standard test methods can help ensure that all plastics that are being produced and commercialized will actually biodegrade in natural environments. One test that has been developed for this purpose is DIN V 54900.
Plastics
The term Biodegradable Plastics refers to materials that maintain their mechanical strength during practical use but break down into low-weight compounds and non-toxic byproducts after their use. This breakdown is made possible through an attack of microorganisms on the material, which is typically a non-water-soluble polymer. Such materials can be obtained through chemical synthesis, fermentation by microorganisms, and from chemically modified natural products.
Plastics biodegrade at highly variable rates. PVC-based plumbing is selected for handling sewage because PVC resists biodegradation. Some packaging materials on the other hand are being developed that would degrade readily upon exposure to the environment. Examples of synthetic polymers that biodegrade quickly include polycaprolactone, other polyesters and aromatic-aliphatic esters, due to their ester bonds being susceptible to attack by water. Prominent examples are poly-3-hydroxybutyrate and the renewably derived polylactic acid. Others are the cellulose-based cellulose acetate and celluloid (cellulose nitrate).
Under low oxygen conditions plastics break down more slowly. The breakdown process can be accelerated in a specially designed compost heap. Starch-based plastics will degrade within two to four months in a home compost bin, while polylactic acid is largely undecomposed, requiring higher temperatures. Polycaprolactone and polycaprolactone-starch composites decompose more slowly, but the starch content accelerates decomposition by leaving behind a porous, high surface area polycaprolactone. Nevertheless, it takes many months.
In 2016, a bacterium named Ideonella sakaiensis was found to biodegrade PET. In 2020, the PET degrading enzyme of the bacterium, PETase, has been genetically modified and combined with MHETase to break down PET faster, and also degrade PEF. In 2021, researchers reported that a mix of microorganisms from cow stomachs could break down three types of plastics.
Many plastic producers have even gone so far as to say that their plastics are compostable, typically listing corn starch as an ingredient. However, these claims are questionable because the plastics industry operates under its own definition of compostable:
"that which is capable of undergoing biological decomposition in a compost site such that the material is not visually distinguishable and breaks down into carbon dioxide, water, inorganic compounds and biomass at a rate consistent with known compostable materials." (Ref: ASTM D 6002)
The term "composting" is often used informally to describe the biodegradation of packaging materials. Legal definitions exist for compostability, the process that leads to compost. Four criteria are offered by the European Union:
Chemical composition: volatile matter and heavy metals as well as fluorine should be limited.
Biodegradability: the conversion of >90% of the original material into CO2, water and minerals by biological processes within 6 months.
Disintegrability: at least 90% of the original mass should be decomposed into particles that are able to pass through a 2x2 mm sieve.
Quality: absence of toxic substances and other substances that impede composting.
Biodegradable technology
Biodegradable technology is established technology with some applications in product packaging, production, and medicine. The chief barrier to widespread implementation is the trade-off between biodegradability and performance. For example, lactide-based plastics have inferior packaging properties in comparison to traditional materials.
Oxo-biodegradation is defined by CEN (the European Standards Organisation) as "degradation resulting from oxidative and cell-mediated phenomena, either simultaneously or successively." While sometimes described as "oxo-fragmentable," and "oxo-degradable" these terms describe only the first or oxidative phase and should not be used for material which degrades by the process of oxo-biodegradation defined by CEN: the correct description is "oxo-biodegradable." Oxo-biodegradable formulations accelerate the biodegradation process but it takes considerable skill and experience to balance the ingredients within the formulations so as to provide the product with a useful life for a set period, followed by degradation and biodegradation.
Biodegradable technology is especially utilized by the bio-medical community. Biodegradable polymers are classified into three groups:
medical, ecological, and dual application, while in terms of origin they are divided into two groups: natural and synthetic. The Clean Technology Group is exploiting the use of supercritical carbon dioxide, which under high pressure at room temperature is a solvent that can use biodegradable plastics to make polymer drug coatings. The polymer (meaning a material composed of molecules with repeating structural units that form a long chain) is used to encapsulate a drug prior to injection in the body and is based on lactic acid, a compound normally produced in the body, and is thus able to be excreted naturally. The coating is designed for controlled release over a period of time, reducing the number of injections required and maximizing the therapeutic benefit. Professor Steve Howdle states that biodegradable polymers are particularly attractive for use in drug delivery, as once introduced into the body they require no retrieval or further manipulation and are degraded into soluble, non-toxic by-products. Different polymers degrade at different rates within the body and therefore polymer selection can be tailored to achieve desired release rates.
Other biomedical applications include the use of biodegradable, elastic shape-memory polymers. Biodegradable implant materials can now be used for minimally invasive surgical procedures through degradable thermoplastic polymers. These polymers are now able to change their shape with an increase in temperature, providing shape-memory capabilities as well as easily degradable sutures. As a result, implants can now fit through small incisions, doctors can easily perform complex deformations, and sutures and other material aids can naturally biodegrade after a completed surgery.
Biodegradation vs. composting
There is no universal definition for biodegradation and there are various definitions of composting, which has led to much confusion between the terms. They are often lumped together; however, they do not have the same meaning. Biodegradation is the naturally-occurring breakdown of materials by microorganisms such as bacteria and fungi or other biological activity. Composting is a human-driven process in which biodegradation occurs under a specific set of circumstances. The predominant difference between the two is that one process is naturally-occurring and one is human-driven.
Biodegradable material is capable of decomposing without an oxygen source (anaerobically) into carbon dioxide, water, and biomass, but the timeline is not very specifically defined. Similarly, compostable material breaks down into carbon dioxide, water, and biomass; however, compostable material also breaks down into inorganic compounds. The process for composting is more specifically defined, as it is controlled by humans. Essentially, composting is an accelerated biodegradation process due to optimized circumstances. Additionally, the end product of composting not only returns to its previous state but also generates nutrient-rich organic matter, called humus, and adds beneficial microorganisms to the soil. This organic matter can be used in gardens and on farms to help grow healthier plants in the future. Composting more consistently occurs within a shorter time frame since it is a more defined process and is expedited by human intervention. Biodegradation can occur in different time frames under different circumstances, but is meant to occur naturally without human intervention.
Even within composting, there are different circumstances under which this can occur. The two main types of composting are at-home versus commercial. Both produce healthy soil to be reused – the main difference lies in what materials are able to go into the process. At-home composting is mostly used for food scraps and excess garden materials, such as weeds. Commercial composting is capable of breaking down more complex plant-based products, such as corn-based plastics and larger pieces of material, like tree branches. Commercial composting begins with a manual breakdown of the materials using a grinder or other machine to initiate the process. Because at-home composting usually occurs on a smaller scale and does not involve large machinery, these materials would not fully decompose in at-home composting. Furthermore, one study has compared and contrasted home and industrial composting, concluding that there are advantages and disadvantages to both.
The following studies provide examples in which composting has been defined as a subset of biodegradation in a scientific context. The first study, "Assessment of Biodegradability of Plastics Under Simulated Composting Conditions in a Laboratory Test Setting," clearly examines composting as a set of circumstances that falls under the category of degradation. Additionally, a second study looked at the biodegradation and composting effects of chemically and physically crosslinked polylactic acid, notably discussing composting and biodegradation as two distinct terms. The third and final study reviews European standardization of biodegradable and compostable material in the packaging industry, again using the terms separately.
The distinction between these terms is crucial because waste management confusion leads to improper disposal of materials by people on a daily basis. Biodegradation technology has led to massive improvements in how we dispose of waste; there now exist trash, recycling, and compost bins in order to optimize the disposal process. However, if these waste streams are commonly and frequently confused, then the disposal process is not at all optimized. Biodegradable and compostable materials have been developed to ensure more of human waste is able to break down and return to its previous state, or in the case of composting even add nutrients to the ground. When a compostable product is thrown out and sent to a landfill instead of being composted, these inventions and efforts are wasted. Therefore, it is important for citizens to understand the difference between these terms so that materials can be disposed of properly and efficiently.
Environmental and social effects
Plastic pollution from illegal dumping poses health risks to wildlife. Animals often mistake plastics for food, resulting in intestinal entanglement. Slow-degrading chemicals, like polychlorinated biphenyls (PCBs), nonylphenol (NP), and pesticides also found in plastics, can release into environments and subsequently also be ingested by wildlife.
These chemicals also play a role in human health, as consumption of tainted food (in processes called biomagnification and bioaccumulation) has been linked to issues such as cancers, neurological dysfunction, and hormonal changes. A well-known example of biomagnification impacting health in recent times is the increased exposure to dangerously high levels of mercury in fish, which can affect sex hormones in humans.
In efforts to remediate the damages done by slow-degrading plastics, detergents, metals, and other pollutants created by humans, economic costs have become a concern. Marine litter in particular is notably difficult to quantify and review. Researchers at the World Trade Institute estimate that cleanup initiatives' cost (specifically in ocean ecosystems) has hit close to thirteen billion dollars a year. The main concern stems from marine environments, with the biggest cleanup efforts centering around garbage patches in the ocean. The Great Pacific Garbage Patch, a garbage patch the size of Mexico, is located in the Pacific Ocean. It is estimated to be upwards of a million square miles in size. While the patch contains more obvious examples of litter (plastic bottles, cans, and bags), tiny microplastics are nearly impossible to clean up. National Geographic reports that even more non-biodegradable materials are finding their way into vulnerable environments – nearly thirty-eight million pieces a year.
Materials that have not degraded can also serve as shelter for invasive species, such as tube worms and barnacles. When the ecosystem changes in response to the invasive species, resident species and the natural balance of resources, genetic diversity, and species richness are altered. These factors may support local economies in the way of hunting and aquaculture, which suffer in response to the change. Similarly, coastal communities which rely heavily on ecotourism lose revenue due to a buildup of pollution, as their beaches or shores are no longer desirable to travelers. The World Trade Institute also notes that the communities who often feel most of the effects of poor biodegradation are poorer countries without the means to pay for their cleanup. In a positive feedback loop effect, they in turn have trouble controlling their own pollution sources.
Etymology of "biodegradable"
The first known use of biodegradable in a biological context was in 1959 when it was employed to describe the breakdown of material into innocuous components by microorganisms. Now biodegradable is commonly associated with environmentally friendly products that are part of the earth's innate cycles like the carbon cycle and capable of decomposing back into natural elements.
| Biology and health sciences | Ecology | Biology |
47501 | https://en.wikipedia.org/wiki/Brightness%20temperature | Brightness temperature | Brightness temperature or radiance temperature is a measure of the intensity of electromagnetic energy coming from a source. In particular, it is the temperature at which a black body would have to be in order to duplicate the observed intensity of a grey body object at a frequency ν.
This concept is used in radio astronomy, planetary science, materials science and climatology.
The brightness temperature provides "a more physically recognizable way to describe intensity".
When the electromagnetic radiation observed is thermal radiation emitted by an object simply by virtue of its temperature, then the actual temperature of the object will always be equal to or higher than the brightness temperature. Since the emissivity is limited by 1, the brightness temperature is a lower bound of the object’s actual temperature.
For radiation emitted by a non-thermal source such as a pulsar, synchrotron, maser, or a laser, the brightness temperature may be far higher than the actual temperature of the source. In this case, the brightness temperature is simply a measure of the intensity of the radiation as it would be measured at the origin of that radiation.
In some applications, the brightness temperature of a surface is determined by an optical measurement, for example using a pyrometer, with the intention of determining the real temperature. As detailed below, the real temperature of a surface can in some cases be calculated by dividing the brightness temperature by the emissivity of the surface. Since the emissivity is a value between 0 and 1, the real temperature will be greater than or equal to the brightness temperature. At high frequencies (short wavelengths) and low temperatures, the conversion must proceed through Planck's law.
The brightness temperature is not a temperature as ordinarily understood. It characterizes radiation, and depending on the mechanism of radiation can differ considerably from the physical temperature of a radiating body (though it is theoretically possible to construct a device that a source of radiation with a given brightness temperature would heat up to an actual temperature equal to that brightness temperature).
Nonthermal sources can have very high brightness temperatures. In pulsars the brightness temperature can reach 10³⁰ K. For the radiation of a helium–neon laser with a power of 1 mW, a frequency spread Δf = 1 GHz, an output aperture of 1 mm, and a beam dispersion half-angle of 0.56 mrad, the brightness temperature would be .
For a black body, Planck's law gives:
I_ν = (2hν³/c²) / (e^(hν/kT) − 1)
where I_ν (the intensity or brightness) is the amount of energy emitted per unit surface area per unit time per unit solid angle in the frequency range between ν and ν + dν; T is the temperature of the black body; h is the Planck constant; ν is the frequency; c is the speed of light; and k is the Boltzmann constant.
For a grey body the spectral radiance is a portion of the black body radiance, determined by the emissivity ε:
I_ν = ε · (2hν³/c²) / (e^(hν/kT) − 1)
That makes the reciprocal of the brightness temperature:
1/T_b = (k/hν) · ln[1 + (e^(hν/kT) − 1)/ε]
At low frequency and high temperatures, when hν ≪ kT, we can use the Rayleigh–Jeans law:
I_ν = 2ν²kT/c²
so that the brightness temperature can be simply written as:
T_b = εT
In general, the brightness temperature is a function of ν, and only in the case of blackbody radiation is it the same at all frequencies. The brightness temperature can be used to calculate the spectral index of a body, in the case of non-thermal radiation.
Calculating by frequency
The brightness temperature of a source with known spectral radiance I_ν can be expressed as:
T_b = (hν/k) / ln(1 + 2hν³/(I_ν c²))
When hν ≪ kT_b we can use the Rayleigh–Jeans law:
T_b = I_ν c²/(2kν²)
For narrowband radiation with very low relative spectral linewidth Δν and known radiance L we can calculate the brightness temperature as:
T_b = c²L/(2kν²Δν)
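As a small numerical check of these frequency-domain expressions, the Python sketch below inverts Planck's law for the brightness temperature and compares the result with the Rayleigh–Jeans form; the example frequency and radiance are illustrative values chosen to lie well inside the Rayleigh–Jeans regime.

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
K = 1.380649e-23     # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def t_b_planck(i_nu, nu):
    """Brightness temperature from the full Planck expression."""
    return (H * nu / K) / math.log(1.0 + 2.0 * H * nu**3 / (i_nu * C**2))

def t_b_rayleigh_jeans(i_nu, nu):
    """Low-frequency (h*nu << k*T_b) approximation."""
    return i_nu * C**2 / (2.0 * K * nu**2)

# Illustrative values: spectral radiance of 1e-20 W m^-2 Hz^-1 sr^-1 at 1.4 GHz.
nu, i_nu = 1.4e9, 1.0e-20
print(t_b_planck(i_nu, nu))          # ~16.6 K
print(t_b_rayleigh_jeans(i_nu, nu))  # ~16.6 K, nearly identical in this regime
```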
Calculating by wavelength
Spectral radiance of black-body radiation is expressed by wavelength as:
I_λ = (2hc²/λ⁵) / (e^(hc/λkT) − 1)
So, the brightness temperature can be calculated as:
T_b = (hc/kλ) / ln(1 + 2hc²/(I_λ λ⁵))
For long-wave radiation the brightness temperature is:
T_b = I_λ λ⁴/(2kc)
For almost monochromatic radiation, the brightness temperature can be expressed by the radiance and the coherence length :
In oceanography
In oceanography, the microwave brightness temperature, as measured by satellites looking at the ocean surface, depends on salinity as well as on the temperature and roughness (e.g. from wind-driven waves) of the water.
| Physical sciences | Radio astronomy | Astronomy |
47503 | https://en.wikipedia.org/wiki/Carbon%20cycle | Carbon cycle | The carbon cycle is that part of the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of Earth. Other major biogeochemical cycles include the nitrogen cycle and the water cycle. Carbon is the main component of biological compounds as well as a major component of many rocks such as limestone. The carbon cycle comprises a sequence of events that are key to making Earth capable of sustaining life. It describes the movement of carbon as it is recycled and reused throughout the biosphere, as well as long-term processes of carbon sequestration (storage) to and release from carbon sinks.
To describe the dynamics of the carbon cycle, a distinction can be made between the fast and slow carbon cycle. The fast cycle is also referred to as the biological carbon cycle. Fast cycles can complete within years, moving substances from atmosphere to biosphere, then back to the atmosphere. Slow or geological cycles (also called deep carbon cycle) can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere.
Humans have disturbed the carbon cycle for many centuries. They have done so by modifying land use and by mining and burning carbon from ancient organic remains (coal, petroleum and gas). Carbon dioxide in the atmosphere has increased nearly 52% over pre-industrial levels by 2020, resulting in global warming. The increased carbon dioxide has also caused a reduction in the ocean's pH value and is fundamentally altering marine chemistry. Carbon dioxide is critical for photosynthesis.
Main compartments of the Carbon Cycle
The carbon cycle was first described by Antoine Lavoisier and Joseph Priestley, and popularised by Humphry Davy. The global carbon cycle is now usually divided into the following major reservoirs of carbon (also called carbon pools) interconnected by pathways of exchange:
Atmosphere
Terrestrial biosphere
Ocean, including dissolved inorganic carbon and living and non-living marine biota
Sediments, including fossil fuels, freshwater systems, and non-living organic material.
Earth's interior (mantle and crust). These carbon stores interact with the other components through geological processes.
The carbon exchanges between reservoirs occur as the result of various chemical, physical, geological, and biological processes. The ocean contains the largest active pool of carbon near the surface of the Earth.
The natural flows of carbon between the atmosphere, ocean, terrestrial ecosystems, and sediments are fairly balanced; so carbon levels would be roughly stable without human influence.
Atmosphere
Carbon in the Earth's atmosphere exists in two main forms: carbon dioxide and methane. Both of these gases absorb and retain heat in the atmosphere and are partially responsible for the greenhouse effect. Methane produces a larger greenhouse effect per volume as compared to carbon dioxide, but it exists in much lower concentrations and is more short-lived than carbon dioxide. Thus, carbon dioxide contributes more to the global greenhouse effect than methane.
Carbon dioxide is removed from the atmosphere primarily through photosynthesis and enters the terrestrial and oceanic biospheres. Carbon dioxide also dissolves directly from the atmosphere into bodies of water (ocean, lakes, etc.), as well as dissolving in precipitation as raindrops fall through the atmosphere. When dissolved in water, carbon dioxide reacts with water molecules and forms carbonic acid, which contributes to ocean acidity. It can then be absorbed by rocks through weathering. It also can acidify other surfaces it touches or be washed into the ocean.
Human activities over the past two centuries have increased the amount of carbon in the atmosphere by nearly 50% as of year 2020, mainly in the form of carbon dioxide, both by modifying ecosystems' ability to extract carbon dioxide from the atmosphere and by emitting it directly, e.g., by burning fossil fuels and manufacturing concrete.
In the far future (2 to 3 billion years), the rate at which carbon dioxide is absorbed into the soil via the carbonate–silicate cycle will likely increase due to expected changes in the sun as it ages. The expected increased luminosity of the Sun will likely speed up the rate of surface weathering. This will eventually cause most of the carbon dioxide in the atmosphere to be locked up in the Earth's crust as carbonate. Once the concentration of carbon dioxide in the atmosphere falls below approximately 50 parts per million (tolerances vary among species), C3 photosynthesis will no longer be possible. This has been predicted to occur 600 million years from the present, though models vary.
Once the oceans on the Earth evaporate in about 1.1 billion years from now, plate tectonics will very likely stop due to the lack of water to lubricate them. The lack of volcanoes pumping out carbon dioxide will cause the carbon cycle to end between 1 billion and 2 billion years into the future.
Terrestrial biosphere
The terrestrial biosphere includes the organic carbon in all land-living organisms, both alive and dead, as well as carbon stored in soils. About 500 gigatons of carbon are stored above ground in plants and other living organisms, while soil holds approximately 1,500 gigatons of carbon. Most carbon in the terrestrial biosphere is organic carbon, while about a third of soil carbon is stored in inorganic forms, such as calcium carbonate. Organic carbon is a major component of all organisms living on Earth. Autotrophs extract it from the air in the form of carbon dioxide, converting it to organic carbon, while heterotrophs receive carbon by consuming other organisms.
Because carbon uptake in the terrestrial biosphere is dependent on biotic factors, it follows a diurnal and seasonal cycle. In CO2 measurements, this feature is apparent in the Keeling curve. It is strongest in the northern hemisphere because this hemisphere has more land mass than the southern hemisphere and thus more room for ecosystems to absorb and emit carbon.
Carbon leaves the terrestrial biosphere in several ways and on different time scales. The combustion or respiration of organic carbon releases it rapidly into the atmosphere. It can also be exported into the ocean through rivers or remain sequestered in soils in the form of inert carbon. Carbon stored in soil can remain there for up to thousands of years before being washed into rivers by erosion or released into the atmosphere through soil respiration. Between 1989 and 2008 soil respiration increased by about 0.1% per year. In 2008, the global total of CO2 released by soil respiration was roughly 98 billion tonnes, about 3 times more carbon than humans are now putting into the atmosphere each year by burning fossil fuel (this does not represent a net transfer of carbon from soil to atmosphere, as the respiration is largely offset by inputs to soil carbon). There are a few plausible explanations for this trend, but the most likely explanation is that increasing temperatures have increased rates of decomposition of soil organic matter, which has increased the flow of CO2. The length of carbon sequestering in soil is dependent on local climatic conditions and thus changes in the course of climate change.
Ocean
The ocean can be conceptually divided into a surface layer within which water makes frequent (daily to annual) contact with the atmosphere, and a deep layer below the typical mixed layer depth of a few hundred meters or less, within which the time between consecutive contacts may be centuries. The dissolved inorganic carbon (DIC) in the surface layer is exchanged rapidly with the atmosphere, maintaining equilibrium. Partly because its concentration of DIC is about 15% higher but mainly due to its larger volume, the deep ocean contains far more carbon—it is the largest pool of actively cycled carbon in the world, containing 50 times more than the atmosphere—but the timescale to reach equilibrium with the atmosphere is hundreds of years: the exchange of carbon between the two layers, driven by thermohaline circulation, is slow.
Carbon enters the ocean mainly through the dissolution of atmospheric carbon dioxide, a small fraction of which is converted into carbonate. It can also enter the ocean through rivers as dissolved organic carbon. It is converted by organisms into organic carbon through photosynthesis and can either be exchanged throughout the food chain or precipitated into the oceans' deeper, more carbon-rich layers as dead soft tissue or in shells as calcium carbonate. It circulates in this layer for long periods of time before either being deposited as sediment or, eventually, returned to the surface waters through thermohaline circulation.
Oceans are basic (with a current pH value of 8.1 to 8.2). The increase in atmospheric CO2 shifts the pH of the ocean towards neutral in a process called ocean acidification. Oceanic absorption of CO2 is one of the most important forms of carbon sequestering. The projected rate of pH reduction could slow the biological precipitation of calcium carbonates, thus decreasing the ocean's capacity to absorb CO2.
Geosphere
The geologic component of the carbon cycle operates slowly in comparison to the other parts of the global carbon cycle. It is one of the most important determinants of the amount of carbon in the atmosphere, and thus of global temperatures.
Most of the Earth's carbon is stored inertly in the Earth's lithosphere. Much of the carbon stored in the Earth's mantle was stored there when the Earth formed. Some of it was deposited in the form of organic carbon from the biosphere. Of the carbon stored in the geosphere, about 80% is limestone and its derivatives, which form from the sedimentation of calcium carbonate stored in the shells of marine organisms. The remaining 20% is stored as kerogens formed through the sedimentation and burial of terrestrial organisms under high heat and pressure. Organic carbon stored in the geosphere can remain there for millions of years.
Carbon can leave the geosphere in several ways. Carbon dioxide is released during the metamorphism of carbonate rocks when they are subducted into the Earth's mantle. This carbon dioxide can be released into the atmosphere and ocean through volcanoes and hotspots. It can also be removed by humans through the direct extraction of kerogens in the form of fossil fuels. After extraction, fossil fuels are burned to release energy and emit the carbon they store into the atmosphere.
Types of dynamic
There is a fast and a slow carbon cycle. The fast cycle operates in the biosphere and the slow cycle operates in rocks. The fast or biological cycle can complete within years, moving carbon from atmosphere to biosphere, then back to the atmosphere. The slow or geological cycle may extend deep into the mantle and can take millions of years to complete, moving carbon through the Earth's crust between rocks, soil, ocean and atmosphere.
The fast carbon cycle involves relatively short-term biogeochemical processes between the environment and living organisms in the biosphere (see diagram at start of article). It includes movements of carbon between the atmosphere and terrestrial and marine ecosystems, as well as soils and seafloor sediments. The fast cycle includes annual cycles involving photosynthesis and decadal cycles involving vegetative growth and decomposition. The reactions of the fast carbon cycle to human activities will determine many of the more immediate impacts of climate change.
The slow (or deep) carbon cycle involves medium to long-term geochemical processes belonging to the rock cycle (see diagram on the right). The exchange between the ocean and atmosphere can take centuries, and the weathering of rocks can take millions of years. Carbon in the ocean precipitates to the ocean floor where it can form sedimentary rock and be subducted into the Earth's mantle. Mountain building processes result in the return of this geologic carbon to the Earth's surface. There the rocks are weathered and carbon is returned to the atmosphere by degassing and to the ocean by rivers. Other geologic carbon returns to the ocean through the hydrothermal emission of calcium ions. In a given year between 10 and 100 million tonnes of carbon moves around this slow cycle. This includes volcanoes returning geologic carbon directly to the atmosphere in the form of carbon dioxide. However, this is less than one percent of the carbon dioxide put into the atmosphere by burning fossil fuels.
Processes within fast carbon cycle
Terrestrial carbon in the water cycle
The movement of terrestrial carbon in the water cycle is shown in the diagram on the right and explained below:
Atmospheric particles act as cloud condensation nuclei, promoting cloud formation.
Raindrops absorb organic and inorganic carbon through particle scavenging and adsorption of organic vapors while falling toward Earth.
Burning and volcanic eruptions produce highly condensed polycyclic aromatic molecules (i.e. black carbon) that are returned to the atmosphere along with greenhouse gases such as CO2.
Terrestrial plants fix atmospheric CO2 through photosynthesis, returning a fraction back to the atmosphere through respiration. Lignin and celluloses represent as much as 80% of the organic carbon in forests and 60% in pastures.
Litterfall and root organic carbon mix with sedimentary material to form organic soils where plant-derived and petrogenic organic carbon is both stored and transformed by microbial and fungal activity.
Water absorbs plant and settled aerosol-derived dissolved organic carbon (DOC) and dissolved inorganic carbon (DIC) as it passes over forest canopies (i.e. throughfall) and along plant trunks/stems (i.e. stemflow). Biogeochemical transformations take place as water soaks into the soil solution and groundwater reservoirs; overland flow occurs when soils are completely saturated or when rainfall arrives more rapidly than it can soak into the soil.
Organic carbon derived from the terrestrial biosphere and in situ primary production is decomposed by microbial communities in rivers and streams along with physical decomposition (i.e. photo-oxidation), resulting in a flux of CO2 from rivers to the atmosphere that is of the same order of magnitude as the amount of carbon sequestered annually by the terrestrial biosphere. Terrestrially-derived macromolecules such as lignin and black carbon are decomposed into smaller components and monomers, ultimately being converted to CO2, metabolic intermediates, or biomass.
Lakes, reservoirs, and floodplains typically store large amounts of organic carbon and sediments, but also experience net heterotrophy in the water column, resulting in a net flux of CO2 to the atmosphere that is roughly one order of magnitude less than rivers. Methane production is also typically high in the anoxic sediments of floodplains, lakes, and reservoirs.
Primary production is typically enhanced in river plumes due to the export of fluvial nutrients. Nevertheless, estuarine waters are a source of CO2 to the atmosphere, globally.
Coastal marshes both store and export blue carbon. Marshes and wetlands are suggested to have an equivalent flux of CO2 to the atmosphere as rivers, globally.
Continental shelves and the open ocean typically absorb CO2 from the atmosphere.
The marine biological pump sequesters a small but significant fraction of the absorbed CO2 as organic carbon in marine sediments (see below).
Terrestrial runoff to the ocean
Terrestrial and marine ecosystems are chiefly connected through riverine transport, which acts as the main channel through which erosive terrestrially derived substances enter into oceanic systems. Material and energy exchanges between the terrestrial biosphere and the lithosphere as well as organic carbon fixation and oxidation processes together regulate ecosystem carbon and dioxygen (O2) pools.
Riverine transport, being the main connective channel of these pools, will act to transport net primary productivity (primarily in the form of dissolved organic carbon (DOC) and particulate organic carbon (POC)) from terrestrial to oceanic systems. During transport, part of DOC will rapidly return to the atmosphere through redox reactions, causing "carbon degassing" to occur between land-atmosphere storage layers. The remaining DOC and dissolved inorganic carbon (DIC) are also exported to the ocean. In 2015, inorganic and organic carbon export fluxes from global rivers were assessed as 0.50–0.70 Pg C yr⁻¹ and 0.15–0.35 Pg C yr⁻¹ respectively. On the other hand, POC can remain buried in sediment over an extensive period, and the annual global terrestrial to oceanic POC flux has been estimated at 0.20 (+0.13, -0.07) Gt C yr⁻¹.
Biological pump in the ocean
The ocean biological pump is the ocean's biologically driven sequestration of carbon from the atmosphere and land runoff to the deep ocean interior and seafloor sediments. The biological pump is not so much the result of a single process, but rather the sum of a number of processes each of which can influence biological pumping. The pump transfers about 11 billion tonnes of carbon every year into the ocean's interior. An ocean without the biological pump would result in atmospheric CO2 levels about 400 ppm higher than the present day.
Most carbon incorporated in organic and inorganic biological matter is formed at the sea surface where it can then start sinking to the ocean floor. The deep ocean gets most of its nutrients from the higher water column when they sink down in the form of marine snow. This is made up of dead or dying animals and microbes, fecal matter, sand and other inorganic material.
The biological pump is responsible for transforming dissolved inorganic carbon (DIC) into organic biomass and pumping it in particulate or dissolved form into the deep ocean. Inorganic nutrients and carbon dioxide are fixed during photosynthesis by phytoplankton, which both release dissolved organic matter (DOM) and are consumed by herbivorous zooplankton. Larger zooplankton, such as copepods, egest fecal pellets which can be reingested, and which sink or collect with other organic detritus into larger, more rapidly sinking aggregates. DOM is partially consumed by bacteria and respired; the remaining refractory DOM is advected and mixed into the deep sea. DOM and aggregates exported into the deep water are consumed and respired, thus returning organic carbon into the enormous deep ocean reservoir of DIC.
A single phytoplankton cell has a sinking rate around one metre per day. Given that the average depth of the ocean is about four kilometres, it can take over ten years for these cells to reach the ocean floor. However, through processes such as coagulation and expulsion in predator fecal pellets, these cells form aggregates. These aggregates have sinking rates orders of magnitude greater than individual cells and complete their journey to the deep in a matter of days.
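A back-of-the-envelope sketch of these sinking times is shown below; the single-cell rate and average ocean depth come from the text, while the aggregate rate is an assumed illustrative figure, since the text only states that it is orders of magnitude larger.

```python
OCEAN_DEPTH_M = 4000.0            # approximate average ocean depth, ~4 km
CELL_RATE_M_PER_DAY = 1.0         # sinking rate of a single phytoplankton cell
AGGREGATE_RATE_M_PER_DAY = 200.0  # assumed illustrative rate for a sinking aggregate

def sinking_time_days(rate_m_per_day, depth_m=OCEAN_DEPTH_M):
    """Days needed to sink from the surface to the seafloor at a constant rate."""
    return depth_m / rate_m_per_day

print(round(sinking_time_days(CELL_RATE_M_PER_DAY) / 365, 1))  # ~11 years for a single cell
print(round(sinking_time_days(AGGREGATE_RATE_M_PER_DAY)))      # ~20 days for an aggregate
```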
About 1% of the particles leaving the surface ocean reach the seabed and are consumed, respired, or buried in the sediments. The net effect of these processes is to remove carbon in organic form from the surface and return it to DIC at greater depths, maintaining a surface-to-deep ocean gradient of DIC. Thermohaline circulation returns deep-ocean DIC to the atmosphere on millennial timescales. The carbon buried in the sediments can be subducted into the earth's mantle and stored for millions of years as part of the slow carbon cycle (see next section).
Viruses as regulators
Viruses act as "regulators" of the fast carbon cycle because they impact the material cycles and energy flows of food webs and the microbial loop. The average contribution of viruses to the Earth ecosystem carbon cycle is 8.6%, of which its contribution to marine ecosystems (1.4%) is less than its contribution to terrestrial (6.7%) and freshwater (17.8%) ecosystems. Over the past 2,000 years, anthropogenic activities and climate change have gradually altered the regulatory role of viruses in ecosystem carbon cycling processes. This has been particularly conspicuous over the past 200 years due to rapid industrialization and the attendant population growth.
Processes within slow carbon cycle
Slow or deep carbon cycling is an important process, though it is not as well-understood as the relatively fast carbon movement through the atmosphere, terrestrial biosphere, ocean, and geosphere. The deep carbon cycle is intimately connected to the movement of carbon in the Earth's surface and atmosphere. If the process did not exist, carbon would remain in the atmosphere, where it would accumulate to extremely high levels over long periods of time. Therefore, by allowing carbon to return to the Earth, the deep carbon cycle plays a critical role in maintaining the terrestrial conditions necessary for life to exist.
Furthermore, the process is also significant simply due to the massive quantities of carbon it transports through the planet. In fact, studying the composition of basaltic magma and measuring carbon dioxide flux out of volcanoes reveals that the amount of carbon in the mantle is actually greater than that on the Earth's surface by a factor of one thousand. Drilling down and physically observing deep-Earth carbon processes is evidently extremely difficult, as the lower mantle and core extend from 660 to 2,891 km and 2,891 to 6,371 km deep into the Earth respectively. Accordingly, not much is conclusively known regarding the role of carbon in the deep Earth. Nonetheless, several pieces of evidence—many of which come from laboratory simulations of deep Earth conditions—have indicated mechanisms for the element's movement down into the lower mantle, as well as the forms that carbon takes at the extreme temperatures and pressures of said layer. Furthermore, techniques like seismology have led to a greater understanding of the potential presence of carbon in the Earth's core.
Carbon in the lower mantle
Carbon principally enters the mantle in the form of carbonate-rich sediments on tectonic plates of ocean crust, which pull the carbon into the mantle upon undergoing subduction. Not much is known about carbon circulation in the mantle, especially in the deep Earth, but many studies have attempted to augment our understanding of the element's movement and forms within the region. For instance, a 2011 study demonstrated that carbon cycling extends all the way to the lower mantle. The study analyzed rare, super-deep diamonds at a site in Juina, Brazil, determining that the bulk composition of some of the diamonds' inclusions matched the expected result of basalt melting and crystallisation under lower mantle temperatures and pressures. Thus, the investigation's findings indicate that pieces of basaltic oceanic lithosphere act as the principal transport mechanism for carbon to Earth's deep interior. These subducted carbonates can interact with lower mantle silicates, eventually forming super-deep diamonds like the one found.
However, carbonates descending to the lower mantle encounter other fates in addition to forming diamonds. In 2011, carbonates were subjected to an environment similar to that of 1800 km deep into the Earth, well within the lower mantle. Doing so resulted in the formation of magnesite, siderite, and numerous varieties of graphite. Other experiments—as well as petrologic observations—support this claim, indicating that magnesite is actually the most stable carbonate phase throughout most of the mantle. This is largely a result of its higher melting temperature. Consequently, scientists have concluded that carbonates undergo reduction as they descend into the mantle before being stabilised at depth by low oxygen fugacity environments. Magnesium, iron, and other metallic compounds act as buffers throughout the process. The presence of reduced, elemental forms of carbon like graphite would indicate that carbon compounds are reduced as they descend into the mantle.
Polymorphism alters carbonate compounds' stability at different depths within the Earth. To illustrate, laboratory simulations and density functional theory calculations suggest that tetrahedrally coordinated carbonates are most stable at depths approaching the core–mantle boundary. A 2015 study indicates that the lower mantle's high pressure causes carbon bonds to transition from sp2 to sp3 hybridised orbitals, resulting in carbon tetrahedrally bonding to oxygen. CO3 trigonal groups cannot form polymerisable networks, while tetrahedral CO4 can, signifying an increase in carbon's coordination number, and therefore drastic changes in carbonate compounds' properties in the lower mantle. As an example, preliminary theoretical studies suggest that high pressure causes carbonate melt viscosity to increase; the melts' lower mobility as a result of their increased viscosity leads to large deposits of carbon deep in the mantle.
Accordingly, carbon can remain in the lower mantle for long periods of time, but large concentrations of carbon frequently find their way back to the lithosphere. This process, called carbon outgassing, is the result of carbonated mantle undergoing decompression melting, as well as mantle plumes carrying carbon compounds up towards the crust. Carbon is oxidised upon its ascent towards volcanic hotspots, where it is then released as CO2. This occurs so that the carbon atom matches the oxidation state of the basalts erupting in such areas.
Carbon in the core
Although the presence of carbon in the Earth's core is not well-constrained, recent studies suggest large inventories of carbon could be stored in this region. Shear (S) waves moving through the inner core travel at about fifty percent of the velocity expected for most iron-rich alloys. Because the core's composition is believed to be an alloy of crystalline iron and a small amount of nickel, this seismic anomaly indicates the presence of light elements, including carbon, in the core. In fact, studies using diamond anvil cells to replicate the conditions in the Earth's core indicate that iron carbide (Fe7C3) matches the inner core's wave speed and density. Therefore, the iron carbide model could serve as evidence that the core holds as much as 67% of the Earth's carbon. Furthermore, another study found that in the pressure and temperature condition of the Earth's inner core, carbon dissolved in iron and formed a stable phase with the same Fe7C3 composition—albeit with a different structure from the one previously mentioned. In summary, although the amount of carbon potentially stored in the Earth's core is not known, recent studies indicate that the presence of iron carbides can explain some of the geophysical observations.
Human influence on fast carbon cycle
Since the Industrial Revolution, and especially since the end of WWII, human activity has substantially disturbed the global carbon cycle by redistributing massive amounts of carbon from the geosphere. Humans have also continued to shift the natural component functions of the terrestrial biosphere with changes to vegetation and other land use. Man-made (synthetic) carbon compounds have been designed and mass-manufactured that will persist for decades to millennia in air, water, and sediments as pollutants. Climate change is amplifying and forcing further indirect human changes to the carbon cycle as a consequence of various positive and negative feedbacks.
Climate change
Current trends in climate change lead to higher ocean temperatures and acidity, thus modifying marine ecosystems. Also, acid rain and polluted runoff from agriculture and industry change the ocean's chemical composition. Such changes can have dramatic effects on highly sensitive ecosystems such as coral reefs, thus limiting the ocean's ability to absorb carbon from the atmosphere on a regional scale and reducing oceanic biodiversity globally.
The exchanges of carbon between the atmosphere and other components of the Earth system, collectively known as the carbon cycle, currently constitute important negative (dampening) feedbacks on the effect of anthropogenic carbon emissions on climate change. Carbon sinks in the land and the ocean each currently take up about one-quarter of anthropogenic carbon emissions each year.
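A minimal sketch of the implied airborne fraction, assuming the land and ocean sinks each take up roughly one quarter of annual emissions as stated above:

```python
# Back-of-the-envelope partitioning of annual anthropogenic CO2 emissions,
# assuming the land and ocean sinks each absorb roughly one quarter.
land_sink_fraction = 0.25
ocean_sink_fraction = 0.25
airborne_fraction = 1.0 - land_sink_fraction - ocean_sink_fraction
print(f"Fraction of emissions remaining in the atmosphere each year: ~{airborne_fraction:.0%}")
```

Under this simple accounting, roughly half of each year's emissions remain in the atmosphere.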
These feedbacks are expected to weaken in the future, amplifying the effect of anthropogenic carbon emissions on climate change. The degree to which they will weaken, however, is highly uncertain, with Earth system models predicting a wide range of land and ocean carbon uptakes even under identical atmospheric concentration or emission scenarios. Arctic methane emissions indirectly caused by anthropogenic global warming also affect the carbon cycle and contribute to further warming.
Fossil carbon extraction and burning
The largest and one of the fastest growing human impacts on the carbon cycle and biosphere is the extraction and burning of fossil fuels, which directly transfer carbon from the geosphere into the atmosphere. Carbon dioxide is also produced and released during the calcination of limestone for clinker production. Clinker is an industrial precursor of cement.
In total, about 450 gigatons of fossil carbon have been extracted, an amount approaching the carbon contained in all of Earth's living terrestrial biomass. Recent rates of global emissions directly into the atmosphere have exceeded the uptake by vegetation and the oceans. These sinks have been expected and observed to remove about half of the added atmospheric carbon within about a century. Nevertheless, sinks like the ocean have evolving saturation properties, and a substantial fraction (20–35%, based on coupled models) of the added carbon is projected to remain in the atmosphere for centuries to millennia.
Halocarbons
Halocarbons are less prolific compounds developed for diverse uses throughout industry; for example as solvents and refrigerants. Nevertheless, the buildup of relatively small concentrations (parts per trillion) of chlorofluorocarbon, hydrofluorocarbon, and perfluorocarbon gases in the atmosphere is responsible for about 10% of the total direct radiative forcing from all long-lived greenhouse gases (year 2019), which includes forcing from the much larger concentrations of carbon dioxide and methane. Chlorofluorocarbons also cause stratospheric ozone depletion. International efforts are ongoing under the Montreal Protocol and Kyoto Protocol to control rapid growth in the industrial manufacturing and use of these environmentally potent gases. For some applications more benign alternatives such as hydrofluoroolefins have been developed and are being gradually introduced.
Land use changes
Since the invention of agriculture, humans have directly and gradually influenced the carbon cycle over century-long timescales by modifying the mixture of vegetation in the terrestrial biosphere. Over the past several centuries, direct and indirect human-caused land use and land cover change (LUCC) has led to the loss of biodiversity, which lowers ecosystems' resilience to environmental stresses and decreases their ability to remove carbon from the atmosphere. More directly, it often leads to the release of carbon from terrestrial ecosystems into the atmosphere.
Deforestation for agricultural purposes removes forests, which hold large amounts of carbon, and replaces them, generally with agricultural or urban areas. Both of these replacement land cover types store comparatively small amounts of carbon so that the net result of the transition is that more carbon stays in the atmosphere. However, the effects on the atmosphere and overall carbon cycle can be intentionally and/or naturally reversed with reforestation.
| Physical sciences | Earth science basics: General | Earth science |
47510 | https://en.wikipedia.org/wiki/Cirrus%20cloud | Cirrus cloud | Cirrus (cloud classification symbol: Ci) is a genus of high cloud made of ice crystals. Cirrus clouds typically appear delicate and wispy with white strands. In the Earth's atmosphere, cirrus are usually formed when warm, dry air rises, causing water vapor deposition onto mineral dust and metallic particles at high altitudes. Globally, they form anywhere between above sea level, with the higher elevations usually in the tropics and the lower elevations in more polar regions.
Cirrus clouds can form from the tops of thunderstorms and tropical cyclones and sometimes predict the arrival of rain or storms. Although they are a sign that rain and maybe storms are on the way, cirrus themselves drop no more than falling streaks of ice crystals. These crystals dissipate, melt, and evaporate as they fall through warmer and drier air and never reach ground. Cirrus clouds warm the earth, potentially contributing to climate change. A warming earth will likely produce more cirrus clouds, potentially resulting in a self-reinforcing loop.
Optical phenomena, such as sun dogs and halos, can be produced by light interacting with ice crystals in cirrus clouds. There are two other high-level cirrus-like clouds called cirrostratus and cirrocumulus. Cirrostratus looks like a sheet of cloud, whereas cirrocumulus looks like a pattern of small cloud tufts. Unlike cirrus and cirrostratus, cirrocumulus clouds contain droplets of supercooled (below freezing point) water.
Cirrus clouds form in the atmospheres of Mars, Jupiter, Saturn, Uranus, and Neptune; and on Titan, one of Saturn's larger moons. Some of these extraterrestrial cirrus clouds are made of ammonia or methane ice, much as cirrus clouds on Earth are made of water ice. Some interstellar clouds, made of grains of dust smaller than a thousandth of a millimeter, are also called cirrus.
Description
Cirrus are wispy clouds made of long strands of ice crystals that are described as feathery, hair-like, or layered in appearance. First defined scientifically by Luke Howard in an 1803 paper, their name is derived from the Latin word cirrus, meaning 'curl' or 'fringe'. They are transparent, meaning that the sun can be seen through them. Ice crystals in the clouds cause them to usually appear white, but the rising or setting sun can color them various shades of yellow or red. At dusk, they can appear gray.
Cirrus comes in five visually-distinct species: castellanus, fibratus, floccus, spissatus, and uncinus:
Cirrus castellanus has cumuliform tops caused by high-altitude convection rising up from the main cloud body.
Cirrus fibratus looks striated and is the most common cirrus species.
Cirrus floccus species looks like a series of tufts.
Cirrus spissatus is a particularly dense form of cirrus that often forms from thunderstorms.
Cirrus uncinus clouds are hooked and are the form that is usually called mare's tails.
Each species is divided into up to four varieties: intortus, vertebratus, radiatus, and duplicatus:
Intortus variety has an extremely contorted shape, with Kelvin–Helmholtz waves being a form of cirrus intortus that has been twisted into loops by layers of wind blowing at different speeds, called wind shear.
Radiatus variety has large, radial bands of cirrus clouds that stretch across the sky.
Vertebratus variety occurs when cirrus clouds are arranged side-by-side like ribs.
Duplicatus variety occurs when cirrus clouds are arranged above one another in layers.
Cirrus clouds often produce hair-like filaments called fall streaks, made of heavier ice crystals that fall from the cloud. These are similar to the virga produced in liquid–water clouds. The sizes and shapes of fall streaks are determined by the wind shear.
Cirrus cloud cover varies diurnally. During the day, cirrus cloud cover drops, and during the night, it increases. Based on CALIPSO satellite data, cirrus covers an average of 31% to 32% of the Earth's surface. Cirrus cloud cover varies widely by location, with some parts of the tropics reaching up to 70% cirrus cloud cover. Polar regions, on the other hand, have significantly less cirrus cloud cover, with some areas having a yearly average of only around 10% coverage. These percentages treat clear days and nights, as well as days and nights with other cloud types, as lack of cirrus cloud cover.
Formation
Cirrus clouds are usually formed as warm, dry air rises, causing water vapor to undergo deposition onto particles, including mostly mineral dust and metallic particles at high altitudes. Particles gathered by research aircraft from cirrus clouds over several locations above North America and Central America included mineral dust (containing aluminum, potassium, calcium, iron, and silicon), metallic particles in elemental, sulfate and oxide forms (containing sodium, potassium, iron, nickel, copper, zinc, tin, silver, molybdenum and lead), possible biological particles (containing oxygen, carbon, nitrogen and phosphorus) and elemental carbon. The authors concluded that mineral dust contributed the largest number of ice nuclei to cirrus cloud formation.
The average cirrus cloud altitude increases as latitude decreases, but the altitude is always capped by the tropopause. These conditions commonly occur at the leading edge of a warm front. Because absolute humidity is low at such high altitudes, this genus tends to be fairly transparent. Cirrus clouds can also form inside fallstreak holes (also called "cavum").
At latitudes of 65° N or S, close to polar regions, cirrus clouds form, on average, only above sea level. In temperate regions, at roughly 45° N or S, their average altitude increases to above sea level. In tropical regions, at roughly 5° N or S, cirrus clouds form above sea level on average. Across the globe, cirrus clouds can form anywhere from above sea level. Cirrus clouds form with a vast range of thicknesses. They can be as little as from top to bottom to as thick as . Cirrus cloud thickness is usually somewhere between those two extremes, with an average thickness of .
The jet stream, a high-level wind band, can stretch cirrus clouds long enough to cross continents. Jet streaks, bands of faster-moving air in the jet stream, can create arcs of cirrus cloud hundreds of kilometers long.
Cirrus cloud formation may be affected by organic aerosols (particles produced by plants) acting as additional nucleation points for ice crystal formation. However, research suggests that cirrus clouds more commonly form on mineral dust or metallic particles rather than on organic ones.
Tropical cyclones
Sheets of cirrus clouds commonly fan out from the eye walls of tropical cyclones. (The eye wall is the ring of storm clouds surrounding the eye of a tropical cyclone.) A large shield of cirrus and cirrostratus typically accompanies the high altitude outflowing winds of tropical cyclones, and these can make the underlying bands of rain—and sometimes even the eye—difficult to detect in satellite photographs.
Thunderstorms
Thunderstorms can form dense cirrus at their tops. As the cumulonimbus cloud in a thunderstorm grows vertically, the liquid water droplets freeze when the air temperature reaches the freezing point. The anvil cloud takes its shape because the temperature inversion at the tropopause prevents the warm, moist air forming the thunderstorm from rising any higher, thus creating the flat top. In the tropics, these thunderstorms occasionally produce copious amounts of cirrus from their anvils. High-altitude winds commonly push this dense mat out into an anvil shape that stretches downwind as much as several kilometers.
Individual cirrus cloud formations can be the remnants of anvil clouds formed by thunderstorms. In the dissipating stage of a cumulonimbus cloud, when the normal column rising up to the anvil has evaporated or dissipated, the mat of cirrus in the anvil is all that is left.
Contrails
Contrails are an artificial type of cirrus cloud formed when water vapor from the exhaust of a jet engine condenses on particles, which come from either the surrounding air or the exhaust itself, and freezes, leaving behind a visible trail. The exhaust can trigger the formation of cirrus by providing ice nuclei when there is an insufficient naturally-occurring supply in the atmosphere. One of the environmental impacts of aviation is that persistent contrails can form into large mats of cirrus, and increased air traffic has been implicated as one possible cause of the increasing frequency and amount of cirrus in Earth's atmosphere.
Use in forecasting
Random, isolated cirrus do not have any particular significance. A large number of cirrus clouds can be a sign of an approaching frontal system or upper air disturbance. The appearance of cirrus signals a change in weather—usually more stormy—in the near future. If the cloud is a cirrus castellanus, there might be instability at the high altitude level. When the clouds deepen and spread, especially when they are of the cirrus radiatus variety or cirrus fibratus species, this usually indicates an approaching weather front. If it is a warm front, the cirrus clouds spread out into cirrostratus, which then thicken and lower into altocumulus and altostratus. The next set of clouds are the rain-bearing nimbostratus clouds. When cirrus clouds precede a cold front, squall line or multicellular thunderstorm, it is because they are blown off the anvil, and the next clouds to arrive are the cumulonimbus clouds. Kelvin-Helmholtz waves indicate extreme wind shear at high levels. When a jet streak creates a large arc of cirrus, weather conditions may be right for the development of winter storms.
Within the tropics, 36 hours prior to the center passage of a tropical cyclone, a veil of white cirrus clouds approaches from the direction of the cyclone. In the mid- to late-19th century, forecasters used these cirrus veils to predict the arrival of hurricanes. In the early 1870s the president of Belén College in Havana, Father Benito Viñes, developed the first hurricane forecasting system; he mainly used the motion of these clouds in formulating his predictions. He would observe the clouds hourly from 4:00 am to 10:00 pm. After accumulating enough information, Viñes began accurately predicting the paths of hurricanes; he summarized his observations in his book Apuntes Relativos a los Huracanes de las Antilles, published in English as Practical Hints in Regard to West Indian Hurricanes.
Effects on climate
Cirrus clouds cover up to 25% of the Earth (up to 70% in the tropics at night) and have a net heating effect. When they are thin and translucent, the clouds efficiently absorb outgoing infrared radiation while only marginally reflecting the incoming sunlight. When cirrus clouds are thick, they reflect only around 9% of the incoming sunlight, but they prevent almost 50% of the outgoing infrared radiation from escaping, thus raising the temperature of the atmosphere beneath the clouds by an average of 10 °C (18 °F)—a process known as the greenhouse effect. Averaged worldwide, cloud formation results in a temperature loss of 5 °C (9 °F) at the Earth's surface, mainly the result of stratocumulus clouds.
Cirrus clouds are likely becoming more common due to climate change. As their greenhouse effect is stronger than their reflection of sunlight, this would act as a self-reinforcing feedback. Metallic particles from human sources act as additional nucleation seeds, potentially increasing cirrus cloud cover and thus contributing further to climate change. Aircraft in the upper troposphere can create contrail cirrus clouds if local weather conditions are right. These contrails contribute to climate change.
Cirrus cloud thinning has been proposed as a possible geoengineering approach to reduce climate damage due to carbon dioxide. Cirrus cloud thinning would involve injecting particles into the upper troposphere to reduce the amount of cirrus clouds. The 2021 IPCC Assessment Report expressed low confidence in the cooling effect of cirrus cloud thinning, due to limited understanding.
Cloud properties
Scientists have studied the properties of cirrus using several different methods. Lidar (laser-based radar) gives highly accurate information on the cloud's altitude, length, and width. Balloon-carried hygrometers measure the humidity of the cirrus cloud but are not accurate enough to measure the depth of the cloud. Radar units give information on the altitudes and thicknesses of cirrus clouds. Another data source is satellite measurements from the Stratospheric Aerosol and Gas Experiment program. These satellites measure where infrared radiation is absorbed in the atmosphere, and if it is absorbed at cirrus altitudes, then it is assumed that there are cirrus clouds in that location. NASA's Moderate-Resolution Imaging Spectroradiometer gives information on the cirrus cloud cover by measuring reflected infrared radiation of various specific frequencies during the day. During the night, it determines cirrus cover by detecting the Earth's infrared emissions. The cloud reflects this radiation back to the ground, thus enabling satellites to see the "shadow" it casts into space. Visual observations from aircraft or the ground provide additional information about cirrus clouds. Particle Analysis by Laser Mass Spectrometry (PALMS) is used to identify the type of nucleation seeds that spawned the ice crystals in a cirrus cloud.
Cirrus clouds have an average ice crystal concentration of 300,000 ice crystals per 10 cubic meters (270,000 ice crystals per 10 cubic yards). The concentration ranges from as low as 1 ice crystal per 10 cubic meters to as high as 100 million ice crystals per 10 cubic meters (just under 1 ice crystal per 10 cubic yards to 77 million ice crystals per 10 cubic yards), a difference of eight orders of magnitude. The size of each ice crystal is typically 0.25 millimeters, but they range from as short as 0.01 millimeters up to several millimeters. The ice crystals in contrails can be much smaller than those in naturally-occurring cirrus cloud, being around 0.001 millimeters to 0.1 millimeters in length.
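A quick consistency check of the quoted concentration range, converted to per-cubic-metre values:

```python
import math

# Concentrations quoted above are given per 10 cubic metres; convert to per m^3
# and express the span of the range in orders of magnitude.
low, typical, high = 1, 300_000, 100_000_000  # ice crystals per 10 m^3
for label, n in [("low", low), ("typical", typical), ("high", high)]:
    print(f"{label:>7}: {n / 10:g} crystals per cubic metre")
print("range spans", math.log10(high / low), "orders of magnitude")
```

The span from 1 to 100 million per 10 cubic metres is indeed a factor of 10^8, i.e. eight orders of magnitude.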
In addition to forming in different sizes, the ice crystals in cirrus clouds can crystallize in different shapes: solid columns, hollow columns, plates, rosettes, and conglomerations of the various other types. The shape of the ice crystals is determined by the air temperature, atmospheric pressure, and ice supersaturation (the amount by which the relative humidity exceeds 100%). Cirrus in temperate regions typically have the various ice crystal shapes separated by type. The columns and plates concentrate near the top of the cloud, whereas the rosettes and conglomerations concentrate near the base. In the northern Arctic region, cirrus clouds tend to be composed of only the columns, plates, and conglomerations, and these crystals tend to be at least four times larger than the minimum size. In Antarctica, cirrus are usually composed of only columns which are much longer than normal.
Cirrus clouds are usually colder than . At temperatures above , most cirrus clouds have relative humidities of roughly 100% (that is they are saturated). Cirrus can supersaturate, with relative humidities over ice that can exceed 200%. Below there are more of both undersaturated and supersaturated cirrus clouds. The more supersaturated clouds are probably young cirrus.
Optical phenomena
Cirrus clouds can produce several optical effects like halos around the Sun and Moon. Halos are caused by interaction of the light with hexagonal ice crystals present in the clouds which, depending on their shape and orientation, can result in a wide variety of white and colored rings, arcs and spots in the sky, including sun dogs, the 46° halo, the 22° halo, and circumhorizontal arcs. Circumhorizontal arcs are only visible when the Sun rises higher than 58° above the horizon, preventing observers at higher latitudes from ever being able to see them.
More rarely, cirrus clouds are capable of producing glories, more commonly associated with liquid water-based clouds such as stratus. A glory is a set of concentric, faintly-colored glowing rings that appear around the shadow of the observer, and are best observed from a high viewpoint or from a plane. Cirrus clouds only form glories when the constituent ice crystals are aspherical; researchers suggest that the ice crystals must be between 0.009 millimeters and 0.015 millimeters in length for a glory to appear.
Relation to other clouds
Cirrus clouds are one of three different genera of high-level clouds, all of which are given the prefix "cirro-". The other two genera are cirrocumulus and cirrostratus. High-level clouds usually form above . Cirrocumulus and cirrostratus are sometimes informally referred to as cirriform clouds because of their frequent association with cirrus.
In the intermediate range, from , are the mid-level clouds, which are given the prefix "alto-". They comprise two genera, altostratus and altocumulus. These clouds are formed from ice crystals, supercooled water droplets, or liquid water droplets.
Low-level clouds usually form below and do not have a prefix. The two genera that are strictly low-level are stratus and stratocumulus. These clouds are composed of water droplets, except during winter when they are formed of supercooled water droplets or ice crystals if the temperature at cloud level is below freezing. Three additional genera usually form in the low-altitude range, but may be based at higher levels under conditions of very low humidity. They are the genera cumulus, cumulonimbus, and nimbostratus. These are sometimes classified separately as clouds of vertical development, especially when their tops are high enough to be composed of supercooled water droplets or ice crystals.
Cirrocumulus
Cirrocumulus clouds form in sheets or patches and do not cast shadows. They commonly appear in regular, rippling patterns or in rows of clouds with clear areas between. Cirrocumulus are, like other members of the cumuliform category, formed via convective processes. Significant growth of these patches indicates high-altitude instability and can signal the approach of poorer weather. The ice crystals in the bottoms of cirrocumulus clouds tend to be in the form of hexagonal cylinders. They are not solid, but instead tend to have stepped funnels coming in from the ends. Towards the top of the cloud, these crystals have a tendency to clump together. These clouds do not last long, and they tend to change into cirrus because as the water vapor continues to deposit on the ice crystals, they eventually begin to fall, destroying the upward convection. The cloud then dissipates into cirrus. Cirrocumulus clouds come in four species: stratiformis, lenticularis, castellanus, and floccus. They are iridescent when the constituent supercooled water droplets are all about the same size.
Cirrostratus
Cirrostratus clouds can appear as a milky sheen in the sky or as a striated sheet. They are sometimes similar to altostratus and are distinguishable from the latter because the Sun or Moon is always clearly visible through transparent cirrostratus, in contrast to altostratus which tends to be opaque or translucent. Cirrostratus come in two species, fibratus and nebulosus. The ice crystals in these clouds vary depending upon the height in the cloud. Towards the bottom, at temperatures of around , the crystals tend to be long, solid, hexagonal columns. Towards the top of the cloud, at temperatures of around , the predominant crystal types are thick, hexagonal plates and short, solid, hexagonal columns. These clouds commonly produce halos, and sometimes the halo is the only indication that such clouds are present. They are formed by warm, moist air being lifted slowly to a very high altitude. When a warm front approaches, cirrostratus clouds become thicker and descend forming altostratus clouds, and rain usually begins 12 to 24 hours later.
Other planets
Cirrus clouds have been observed on several other planets. In 2008, the Martian Lander Phoenix took a time-lapse photograph of a group of cirrus clouds moving across the Martian sky using lidar. Near the end of its mission, the Phoenix Lander detected more thin clouds close to the north pole of Mars. Over the course of several days, they thickened, lowered, and eventually began snowing. The total precipitation was only a few thousandths of a millimeter. James Whiteway from York University concluded that "precipitation is a component of the [Martian] hydrologic cycle". These clouds formed during the Martian night in two layers, one around above ground and the other at surface level. They lasted through early morning before being burned away by the Sun. The crystals in these clouds were formed at a temperature of , and they were shaped roughly like ellipsoids 0.127 millimeters long and 0.042 millimeters wide.
On Jupiter, cirrus clouds are composed of ammonia. When Jupiter's South Equatorial Belt disappeared, one hypothesis put forward by Glenn Orton was that a large quantity of ammonia cirrus clouds had formed above it, hiding it from view. NASA's Cassini probe detected these clouds on Saturn and thin water-ice cirrus on Saturn's moon Titan. Cirrus clouds composed of methane ice exist on Uranus. On Neptune, thin wispy clouds which could possibly be cirrus have been detected over the Great Dark Spot. As on Uranus, these are probably methane crystals.
Interstellar cirrus clouds are composed of tiny dust grains smaller than a micrometer and are therefore not true cirrus clouds, which are composed of frozen crystals. They range from a few light years to dozens of light years across. While they are not technically cirrus clouds, the dust clouds are referred to as "cirrus" because of their similarity to the clouds on Earth. They emit infrared radiation, similar to the way cirrus clouds on Earth reflect heat being radiated out into space.
| Physical sciences | Clouds | null |
47512 | https://en.wikipedia.org/wiki/Climate%20variability%20and%20change | Climate variability and change | Climate variability includes all the variations in the climate that last longer than individual weather events, whereas the term climate change only refers to those variations that persist for a longer period of time, typically decades or more. Climate change may refer to any time in Earth's history, but the term is now commonly used to describe contemporary climate change, often popularly referred to as global warming. Since the Industrial Revolution, the climate has increasingly been affected by human activities.
The climate system receives nearly all of its energy from the sun and radiates energy to outer space. The balance of incoming and outgoing energy and the passage of the energy through the climate system is Earth's energy budget. When the incoming energy is greater than the outgoing energy, Earth's energy budget is positive and the climate system is warming. If more energy goes out, the energy budget is negative and Earth experiences cooling.
The energy moving through Earth's climate system finds expression in weather, varying on geographic scales and time. Long-term averages and variability of weather in a region constitute the region's climate. Such changes can be the result of "internal variability", when natural processes inherent to the various parts of the climate system alter the distribution of energy. Examples include variability in ocean basins such as the Pacific decadal oscillation and Atlantic multidecadal oscillation. Climate variability can also result from external forcing, when events outside of the climate system's components produce changes within the system. Examples include changes in solar output and volcanism.
Climate variability has consequences for sea level changes, plant life, and mass extinctions; it also affects human societies.
Terminology
Climate variability is the term to describe variations in the mean state and other characteristics of climate (such as chances or possibility of extreme weather, etc.) "on all spatial and temporal scales beyond that of individual weather events." Some of the variability does not appear to be caused by known systems and occurs at seemingly random times. Such variability is called random variability or noise. On the other hand, periodic variability occurs relatively regularly and in distinct modes of variability or climate patterns.
The term climate change is often used to refer specifically to anthropogenic climate change. Anthropogenic climate change is caused by human activity, as opposed to changes in climate that may have resulted as part of Earth's natural processes. Global warming became the dominant popular term in 1988, but within scientific journals global warming refers to surface temperature increases while climate change includes global warming and everything else that increasing greenhouse gas levels affect.
A related term, climatic change, was proposed by the World Meteorological Organization (WMO) in 1966 to encompass all forms of climatic variability on time-scales longer than 10 years, but regardless of cause. During the 1970s, the term climate change replaced climatic change to focus on anthropogenic causes, as it became clear that human activities had a potential to drastically alter the climate. Climate change was incorporated in the title of the Intergovernmental Panel on Climate Change (IPCC) and the UN Framework Convention on Climate Change (UNFCCC). Climate change is now used as both a technical description of the process, as well as a noun used to describe the problem.
Causes
On the broadest scale, the rate at which energy is received from the Sun and the rate at which it is lost to space determine the equilibrium temperature and climate of Earth. This energy is distributed around the globe by winds, ocean currents, and other mechanisms to affect the climates of different regions.
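This balance can be illustrated with the standard zero-dimensional energy-balance relation; the solar constant and planetary albedo used below are conventional round values assumed for illustration, not figures taken from this article:

```python
# Zero-dimensional planetary energy balance: absorbed solar power equals
# emitted thermal power, i.e. sigma * T_eff**4 = S * (1 - albedo) / 4.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # present-day solar constant, W m^-2 (assumed conventional value)
ALBEDO = 0.30      # approximate planetary albedo (assumed conventional value)

def effective_temperature(solar_constant, albedo):
    """Equilibrium emission temperature of a planet, ignoring the greenhouse effect."""
    return (solar_constant * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(f"Effective temperature: {effective_temperature(S, ALBEDO):.0f} K")  # about 255 K
```

The resulting emission temperature of roughly 255 K is well below observed surface temperatures; the difference reflects the greenhouse effect, which this minimal balance deliberately omits.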
Factors that can shape climate are called climate forcings or "forcing mechanisms". These include processes such as variations in solar radiation, variations in the Earth's orbit, variations in the albedo or reflectivity of the continents, atmosphere, and oceans, mountain-building and continental drift and changes in greenhouse gas concentrations. External forcing can be either anthropogenic (e.g. increased emissions of greenhouse gases and dust) or natural (e.g., changes in solar output, the Earth's orbit, volcano eruptions). There are a variety of climate change feedbacks that can either amplify or diminish the initial forcing. There are also key thresholds which when exceeded can produce rapid or irreversible change.
Some parts of the climate system, such as the oceans and ice caps, respond more slowly in reaction to climate forcings, while others respond more quickly. An example of fast change is the atmospheric cooling after a volcanic eruption, when volcanic ash reflects sunlight. Thermal expansion of ocean water after atmospheric warming is slow, and can take thousands of years. A combination is also possible, e.g., sudden loss of albedo in the Arctic Ocean as sea ice melts, followed by more gradual thermal expansion of the water.
Climate variability can also occur due to internal processes. Internal unforced processes often involve changes in the distribution of energy in the ocean and atmosphere, for instance, changes in the thermohaline circulation.
Internal variability
Climatic changes due to internal variability sometimes occur in cycles or oscillations. For other types of natural climatic change, we cannot predict when they happen; such change is called random or stochastic. From a climate perspective, the weather can be considered random. If there are few clouds in a particular year, there is an energy imbalance and extra heat can be absorbed by the oceans. Due to climate inertia, this signal can be 'stored' in the ocean and be expressed as variability on longer time scales than the original weather disturbances. If the weather disturbances are completely random, occurring as white noise, the inertia of glaciers or oceans can transform this into climate changes where longer-duration oscillations are also larger oscillations, a phenomenon called red noise. Many climate changes have a random aspect and a cyclical aspect. This behavior is dubbed stochastic resonance. Half of the 2021 Nobel Prize in Physics was awarded for this work to Klaus Hasselmann, jointly with Syukuro Manabe for related work on climate modelling; Giorgio Parisi, who with collaborators introduced the concept of stochastic resonance, was awarded the other half, mainly for work on theoretical physics.
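A minimal numerical sketch of this mechanism, in which white-noise "weather" forcing integrated by slow ocean inertia yields red-noise "climate" variability; the damping timescale and unit noise amplitude are arbitrary illustrative assumptions:

```python
import numpy as np

# Hasselmann-type stochastic climate sketch: fast, random "weather" forcing
# (white noise) is integrated by the slow thermal inertia of the ocean,
# yielding slow, large "red noise" variability.
rng = np.random.default_rng(0)
n_steps = 365 * 100            # a century of daily steps
damping_timescale = 300.0      # assumed ocean "memory" in days (arbitrary)

weather = rng.normal(0.0, 1.0, n_steps)   # white-noise forcing
anomaly = np.zeros(n_steps)               # integrated "climate" response
for t in range(1, n_steps):
    anomaly[t] = (1.0 - 1.0 / damping_timescale) * anomaly[t - 1] + weather[t]

# The integrated series fluctuates far more strongly on long timescales than the forcing:
print("std of forcing:  ", weather.std())
print("std of response: ", anomaly.std())
```

The response standard deviation is roughly an order of magnitude larger than that of the forcing and drifts on multi-year timescales, illustrating how slow components can amplify low-frequency variability.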
Ocean-atmosphere variability
The ocean and atmosphere can work together to spontaneously generate internal climate variability that can persist for years to decades at a time. These variations can affect global average surface temperature by redistributing heat between the deep ocean and the atmosphere and/or by altering the cloud/water vapor/sea ice distribution which can affect the total energy budget of the Earth.
Oscillations and cycles
A climate oscillation or climate cycle is any recurring cyclical oscillation within global or regional climate. They are quasiperiodic (not perfectly periodic), so a Fourier analysis of the data does not have sharp peaks in the spectrum. Many oscillations on different time-scales have been found or hypothesized:
the El Niño–Southern Oscillation (ENSO) – A large scale pattern of warmer (El Niño) and colder (La Niña) tropical sea surface temperatures in the Pacific Ocean with worldwide effects. It is a self-sustaining oscillation, whose mechanisms are well-studied. ENSO is the most prominent known source of inter-annual variability in weather and climate around the world. The cycle occurs every two to seven years, with El Niño lasting nine months to two years within the longer term cycle. The cold tongue of the equatorial Pacific Ocean is not warming as fast as the rest of the ocean, due to increased upwelling of cold waters off the west coast of South America.
the Madden–Julian oscillation (MJO) – An eastward moving pattern of increased rainfall over the tropics with a period of 30 to 60 days, observed mainly over the Indian and Pacific Oceans.
the North Atlantic oscillation (NAO) – Indices of the NAO are based on the difference of normalized sea-level pressure (SLP) between Ponta Delgada, Azores and Stykkishólmur/Reykjavík, Iceland. Positive values of the index indicate stronger-than-average westerlies over the middle latitudes (a minimal sketch of such an index calculation is given after this list).
the Quasi-biennial oscillation – a well-understood oscillation in wind patterns in the stratosphere around the equator. Over a period of 28 months the dominant wind changes from easterly to westerly and back.
Pacific Centennial Oscillation - a climate oscillation predicted by some climate models
the Pacific decadal oscillation – The dominant pattern of sea surface variability in the North Pacific on a decadal scale. During a "warm", or "positive", phase, the west Pacific becomes cool and part of the eastern ocean warms; during a "cool" or "negative" phase, the opposite pattern occurs. It is thought to be not a single phenomenon, but instead a combination of different physical processes.
the Interdecadal Pacific oscillation (IPO) – Basin wide variability in the Pacific Ocean with a period between 20 and 30 years.
the Atlantic multidecadal oscillation – A pattern of variability in the North Atlantic of about 55 to 70 years, with effects on rainfall, droughts and hurricane frequency and intensity.
North African climate cycles – climate variation driven by the North African Monsoon, with a period of tens of thousands of years.
the Arctic oscillation (AO) and Antarctic oscillation (AAO) – The annular modes are naturally occurring, hemispheric-wide patterns of climate variability. On timescales of weeks to months they explain 20–30% of the variability in their respective hemispheres. The Northern Annular Mode or Arctic oscillation (AO) in the Northern Hemisphere, and the Southern Annular Mode or Antarctic oscillation (AAO) in the southern hemisphere. The annular modes have a strong influence on the temperature and precipitation of mid-to-high latitude land masses, such as Europe and Australia, by altering the average paths of storms. The NAO can be considered a regional index of the AO/NAM. They are defined as the first EOF of sea level pressure or geopotential height from 20°N to 90°N (NAM) or 20°S to 90°S (SAM).
Dansgaard–Oeschger cycles – occurring on roughly 1,500-year cycles during the Last Glacial Maximum
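As referenced in the NAO entry above, a station-based index of this kind can be sketched as the difference of standardized sea-level pressure series; the monthly data below are synthetic placeholders, not observations:

```python
import numpy as np

# Minimal sketch of a station-based NAO-like index: the difference of normalized
# sea-level pressure between a southern station (e.g. Ponta Delgada) and a
# northern station (e.g. Stykkisholmur/Reykjavik). Series are synthetic.
rng = np.random.default_rng(1)
slp_azores = 1020.0 + rng.normal(0.0, 3.0, 120)    # monthly SLP, hPa (synthetic)
slp_iceland = 1005.0 + rng.normal(0.0, 6.0, 120)   # monthly SLP, hPa (synthetic)

def standardize(series):
    """Express a series as anomalies in units of its own standard deviation."""
    return (series - series.mean()) / series.std()

nao_index = standardize(slp_azores) - standardize(slp_iceland)
# Positive values indicate a stronger-than-average Azores-Iceland pressure
# gradient, i.e. stronger mid-latitude westerlies.
print(nao_index[:5])
```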
Ocean current changes
The oceanic aspects of climate variability can generate variability on centennial timescales due to the ocean having hundreds of times more mass than the atmosphere, and thus very high thermal inertia. For example, alterations to ocean processes such as thermohaline circulation play a key role in redistributing heat in the world's oceans.
Ocean currents transport a lot of energy from the warm tropical regions to the colder polar regions. Changes occurring around the last ice age (in technical terms, the last glacial period) show that the circulation in the North Atlantic can change suddenly and substantially, leading to global climate changes, even though the total amount of energy coming into the climate system did not change much. These large changes may have come from so-called Heinrich events, where internal instability of ice sheets caused huge icebergs to be released into the ocean. When the ice sheet melts, the resulting water is very low in salt and cold, driving changes in circulation.
Life
Life affects climate through its role in the carbon and water cycles and through such mechanisms as albedo, evapotranspiration, cloud formation, and weathering. Examples of how life may have affected past climate include:
glaciation 2.3 billion years ago triggered by the evolution of oxygenic photosynthesis, which depleted the atmosphere of the greenhouse gas carbon dioxide and introduced free oxygen
another glaciation 300 million years ago ushered in by long-term burial of decomposition-resistant detritus of vascular land-plants (creating a carbon sink and forming coal)
termination of the Paleocene–Eocene Thermal Maximum 55 million years ago by flourishing marine phytoplankton
reversal of global warming 49 million years ago by 800,000 years of arctic azolla blooms
global cooling over the past 40 million years driven by the expansion of grass-grazer ecosystems
External climate forcing
Greenhouse gases
Whereas greenhouse gases released by the biosphere are often seen as a feedback or internal climate process, greenhouse gases emitted from volcanoes are typically classified as external by climatologists. Greenhouse gases, such as carbon dioxide, methane and nitrous oxide, heat the climate system by trapping infrared light. Volcanoes are also part of the extended carbon cycle. Over very long (geological) time periods, they release carbon dioxide from the Earth's crust and mantle, counteracting the uptake by sedimentary rocks and other geological carbon dioxide sinks.
Since the Industrial Revolution, humanity has been adding to greenhouse gases by emitting CO2 from fossil fuel combustion, changing land use through deforestation, and has further altered the climate with aerosols (particulate matter in the atmosphere), release of trace gases (e.g. nitrogen oxides, carbon monoxide, or methane). Other factors, including land use, ozone depletion, animal husbandry (ruminant animals such as cattle produce methane), and deforestation, also play a role.
The US Geological Survey estimates that volcanic emissions are at a much lower level than the effects of current human activities, which generate 100–300 times the amount of carbon dioxide emitted by volcanoes. The annual amount put out by human activities may be greater than the amount released by supereruptions, the most recent of which was the Toba eruption in Indonesia 74,000 years ago.
Orbital variations
Slight variations in Earth's motion lead to changes in the seasonal distribution of sunlight reaching the Earth's surface and how it is distributed across the globe. There is very little change to the area-averaged annually averaged sunshine; but there can be strong changes in the geographical and seasonal distribution. The three types of kinematic change are variations in Earth's eccentricity, changes in the tilt angle of Earth's axis of rotation, and precession of Earth's axis. Combined, these produce Milankovitch cycles which affect climate and are notable for their correlation to glacial and interglacial periods, their correlation with the advance and retreat of the Sahara, and for their appearance in the stratigraphic record.
During the glacial cycles, there was a high correlation between CO2 concentrations and temperatures. Early studies indicated that CO2 concentrations lagged temperatures, but it has become clear that this is not always the case. When ocean temperatures increase, the solubility of CO2 decreases so that it is released from the ocean. The exchange of CO2 between the air and the ocean can also be impacted by further aspects of climatic change. These and other self-reinforcing processes allow small changes in Earth's motion to have a large effect on climate.
Solar output
The Sun is the predominant source of energy input to the Earth's climate system. Other sources include geothermal energy from the Earth's core, tidal energy from the Moon and heat from the decay of radioactive compounds. Both long-term and short-term variations in solar intensity are known to affect global climate. Solar output varies on shorter time scales, including the 11-year solar cycle and longer-term modulations. Correlation between sunspots and climate is tenuous at best.
Three to four billion years ago, the Sun emitted only 75% as much power as it does today. If the atmospheric composition had been the same as today, liquid water should not have existed on the Earth's surface. However, there is evidence for the presence of water on the early Earth, in the Hadean and Archean eons, leading to what is known as the faint young Sun paradox. Hypothesized solutions to this paradox include a vastly different atmosphere, with much higher concentrations of greenhouse gases than currently exist. Over the following approximately 4 billion years, the energy output of the Sun increased. Over the next five billion years, the Sun's ultimate death as it becomes a red giant and then a white dwarf will have large effects on climate, with the red giant phase possibly ending any life on Earth that survives until that time.
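The paradox can be illustrated with the same zero-dimensional energy balance sketched earlier, assuming (for simplicity) today's albedo and no greenhouse effect; the constants are the same conventional assumed values:

```python
# Faint young Sun illustration: with ~75% of today's solar output and today's
# albedo (assumed unchanged), the no-greenhouse equilibrium temperature sits
# far below the freezing point of water.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S_TODAY = 1361.0   # present-day solar constant, W m^-2 (assumed)
ALBEDO = 0.30      # approximate planetary albedo (assumed)

def effective_temperature(solar_constant, albedo=ALBEDO):
    return (solar_constant * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(f"Today:       {effective_temperature(S_TODAY):.0f} K")         # ~255 K
print(f"Early Earth: {effective_temperature(0.75 * S_TODAY):.0f} K")  # ~237 K, well below 273 K
```

Since even the present-day no-greenhouse value is below freezing, the early Earth would have required a much stronger greenhouse effect (or other changes) to keep surface water liquid, which is the essence of the paradox.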
Volcanism
The volcanic eruptions considered to be large enough to affect the Earth's climate on a scale of more than 1 year are the ones that inject over 100,000 tons of SO2 into the stratosphere. This is due to the optical properties of SO2 and sulfate aerosols, which strongly absorb or scatter solar radiation, creating a global layer of sulfuric acid haze. On average, such eruptions occur several times per century, and cause cooling (by partially blocking the transmission of solar radiation to the Earth's surface) for a period of several years. Although volcanoes are technically part of the lithosphere, which itself is part of the climate system, the IPCC explicitly defines volcanism as an external forcing agent.
Notable eruptions in the historical records are the 1991 eruption of Mount Pinatubo which lowered global temperatures by about 0.5 °C (0.9 °F) for up to three years, and the 1815 eruption of Mount Tambora causing the Year Without a Summer.
At a larger scale—a few times every 50 million to 100 million years—the eruption of large igneous provinces brings large quantities of igneous rock from the mantle and lithosphere to the Earth's surface. Carbon dioxide in the rock is then released into the atmosphere.
Small eruptions, with injections of less than 0.1 Mt of sulfur dioxide into the stratosphere, affect the atmosphere only subtly, as temperature changes are comparable with natural variability. However, because smaller eruptions occur at a much higher frequency, they too significantly affect Earth's atmosphere.
Plate tectonics
Over the course of millions of years, the motion of tectonic plates reconfigures global land and ocean areas and generates topography. This can affect both global and local patterns of climate and atmosphere-ocean circulation.
The position of the continents determines the geometry of the oceans and therefore influences patterns of ocean circulation. The locations of the seas are important in controlling the transfer of heat and moisture across the globe, and therefore, in determining global climate. A recent example of tectonic control on ocean circulation is the formation of the Isthmus of Panama about 5 million years ago, which shut off direct mixing between the Atlantic and Pacific Oceans. This strongly affected the ocean dynamics of what is now the Gulf Stream and may have led to Northern Hemisphere ice cover. During the Carboniferous period, about 300 to 360 million years ago, plate tectonics may have triggered large-scale storage of carbon and increased glaciation. Geologic evidence points to a "megamonsoonal" circulation pattern during the time of the supercontinent Pangaea, and climate modeling suggests that the existence of the supercontinent was conducive to the establishment of monsoons.
The size of continents is also important. Because of the stabilizing effect of the oceans on temperature, yearly temperature variations are generally lower in coastal areas than they are inland. A larger supercontinent will therefore have more area in which climate is strongly seasonal than will several smaller continents or islands.
Other mechanisms
It has been postulated that ionized particles known as cosmic rays could impact cloud cover and thereby the climate. As the sun shields the Earth from these particles, changes in solar activity were hypothesized to influence climate indirectly as well. To test the hypothesis, CERN designed the CLOUD experiment, which showed the effect of cosmic rays is too weak to influence climate noticeably.
Evidence exists that the Chicxulub asteroid impact some 66 million years ago had severely affected the Earth's climate. Large quantities of sulfate aerosols were kicked up into the atmosphere, decreasing global temperatures by up to 26 °C and producing sub-freezing temperatures for a period of 3–16 years. The recovery time for this event took more than 30 years. The large-scale use of nuclear weapons has also been investigated for its impact on the climate. The hypothesis is that soot released by large-scale fires blocks a significant fraction of sunlight for as much as a year, leading to a sharp drop in temperatures for a few years. This possible event is described as nuclear winter.
Humans' use of land impacts how much sunlight the surface reflects and the concentration of dust. Cloud formation is influenced not only by how much water is in the air and the temperature, but also by the amount of aerosols in the air such as dust. Globally, more dust is available if there are many regions with dry soils, little vegetation and strong winds.
Evidence and measurement of climate changes
Paleoclimatology is the study of changes in climate through the entire history of Earth. It uses a variety of proxy methods from the Earth and life sciences to obtain data preserved within things such as rocks, sediments, ice sheets, tree rings, corals, shells, and microfossils. It then uses the records to determine the past states of the Earth's various climate regions and its atmospheric system. Direct measurements give a more complete overview of climate variability.
Direct measurements
Climate changes that occurred after the widespread deployment of measuring devices can be observed directly. Reasonably complete global records of surface temperature are available beginning from the mid-late 19th century. Further observations are derived indirectly from historical documents. Satellite cloud and precipitation data has been available since the 1970s.
Historical climatology is the study of historical changes in climate and their effect on human history and development. The primary sources include written records such as sagas, chronicles, maps and local history literature as well as pictorial representations such as paintings, drawings and even rock art. Climate variability in the recent past may be derived from changes in settlement and agricultural patterns. Archaeological evidence, oral history and historical documents can offer insights into past changes in the climate. Changes in climate have been linked to the rise and the collapse of various civilizations.
Proxy measurements
Various archives of past climate are present in rocks, trees and fossils. From these archives, indirect measures of climate, so-called proxies, can be derived. Quantification of climatological variation of precipitation in prior centuries and epochs is less complete but approximated using proxies such as marine sediments, ice cores, cave stalagmites, and tree rings. Stress, such as too little precipitation or unsuitable temperatures, can alter the growth rate of trees, which allows scientists to infer climate trends by analyzing the growth rate of tree rings. The branch of science that studies climate using tree rings is called dendroclimatology. Glaciers leave behind moraines that contain a wealth of material—including organic matter, quartz, and potassium that may be dated—recording the periods in which a glacier advanced and retreated.
Analysis of ice in cores drilled from an ice sheet such as the Antarctic ice sheet, can be used to show a link between temperature and global sea level variations. The air trapped in bubbles in the ice can also reveal the CO2 variations of the atmosphere from the distant past, well before modern environmental influences. The study of these ice cores has been a significant indicator of the changes in CO2 over many millennia, and continues to provide valuable information about the differences between ancient and modern atmospheric conditions. The 18O/16O ratio in calcite and ice core samples used to deduce ocean temperature in the distant past is an example of a temperature proxy method.
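A minimal sketch of the delta notation in which this oxygen-isotope proxy is conventionally reported; the sample ratio used below is an invented value purely for illustration:

```python
# Oxygen-isotope proxy in standard delta notation: the 18O/16O ratio of a
# sample expressed as a per-mil deviation from a reference standard
# (VSMOW for waters and ice, VPDB for carbonates).
R_STANDARD_VSMOW = 2005.20e-6   # 18O/16O of the VSMOW reference

def delta_18O(r_sample, r_standard=R_STANDARD_VSMOW):
    """delta-18O in per mil relative to the chosen standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

print(f"{delta_18O(1.960e-3):+.1f} per mil")  # strongly depleted, glacial-ice-like value (invented input)
```

More negative values indicate water depleted in the heavy isotope, which in ice cores is generally associated with colder conditions.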
The remnants of plants, and specifically pollen, are also used to study climatic change. Plant distributions vary under different climate conditions. Different groups of plants have pollen with distinctive shapes and surface textures, and since the outer surface of pollen is composed of a very resilient material, they resist decay. Changes in the type of pollen found in different layers of sediment indicate changes in plant communities. These changes are often a sign of a changing climate. As an example, pollen studies have been used to track changing vegetation patterns throughout the Quaternary glaciations and especially since the last glacial maximum. Remains of beetles are common in freshwater and land sediments. Different species of beetles tend to be found under different climatic conditions. Given the extensive lineage of beetles whose genetic makeup has not altered significantly over the millennia, knowledge of the present climatic range of the different species, and the age of the sediments in which remains are found, past climatic conditions may be inferred.
Analysis and uncertainties
One difficulty in detecting climate cycles is that the Earth's climate has been changing in non-cyclic ways over most paleoclimatological timescales. Currently we are in a period of anthropogenic global warming. In a larger timeframe, the Earth is emerging from the latest ice age, cooling from the Holocene climatic optimum and warming from the "Little Ice Age", which means that climate has been constantly changing over the last 15,000 years or so. During warm periods, temperature fluctuations are often of a lesser amplitude. The Pleistocene period, dominated by repeated glaciations, developed out of more stable conditions in the Miocene and Pliocene climate. Holocene climate has been relatively stable. All of these changes complicate the task of looking for cyclical behavior in the climate.
Positive feedback, negative feedback, and ecological inertia from the land-ocean-atmosphere system often attenuate or reverse smaller effects, whether from orbital forcings, solar variations or changes in concentrations of greenhouse gases. Certain feedbacks involving processes such as clouds are also uncertain; for contrails, natural cirrus clouds, oceanic dimethyl sulfide and a land-based equivalent, competing theories exist concerning effects on climatic temperatures, for example contrasting the Iris hypothesis and CLAW hypothesis.
Impacts
Life
Vegetation
A change in the type, distribution and coverage of vegetation may occur given a change in the climate. Some changes in climate may result in increased precipitation and warmth, resulting in improved plant growth and the subsequent sequestration of airborne CO2. Though an increase in CO2 may benefit plants, some factors can diminish this effect. If there is an environmental change such as drought, elevated CO2 concentrations will not benefit the plant, so even when atmospheric CO2 rises, plants will often be unable to exploit the increase because other environmental stresses put pressure on them. Nevertheless, sequestration of CO2 is expected to affect the rate of many natural cycles, such as plant litter decomposition rates. A gradual increase in warmth in a region will lead to earlier flowering and fruiting times, driving a change in the timing of life cycles of dependent organisms. Conversely, cold will cause plant bio-cycles to lag.
Larger, faster or more radical changes, however, may result in vegetation stress, rapid plant loss and desertification in certain circumstances. An example of this occurred during the Carboniferous Rainforest Collapse (CRC), an extinction event 300 million years ago. At this time vast rainforests covered the equatorial region of Europe and America. Climate change devastated these tropical rainforests, abruptly fragmenting the habitat into isolated 'islands' and causing the extinction of many plant and animal species.
Wildlife
One of the most important ways animals can deal with climatic change is migration to warmer or colder regions. On a longer timescale, evolution makes ecosystems including animals better adapted to a new climate. Rapid or large climate change can cause mass extinctions when creatures are stretched too far to be able to adapt.
Humanity
Collapses of past civilizations such as the Maya may be related to cycles of precipitation, especially drought, that in this example also correlates to the Western Hemisphere Warm Pool. Around 70 000 years ago the Toba supervolcano eruption created an especially cold period during the ice age, leading to a possible genetic bottleneck in human populations.
Changes in the cryosphere
Glaciers and ice sheets
Glaciers are considered among the most sensitive indicators of a changing climate. Their size is determined by a mass balance between snow input and melt output. As temperatures increase, glaciers retreat unless snow precipitation increases to make up for the additional melt. Glaciers grow and shrink due both to natural variability and external forcings. Variability in temperature, precipitation and hydrology can strongly determine the evolution of a glacier in a particular season.
The most significant climate processes since the middle to late Pliocene (approximately 3 million years ago) are the glacial and interglacial cycles. The present interglacial period (the Holocene) has lasted about 11,700 years. Shaped by orbital variations, responses such as the rise and fall of continental ice sheets and significant sea-level changes helped create the climate. Other changes, including Heinrich events, Dansgaard–Oeschger events and the Younger Dryas, however, illustrate how glacial variations may also influence climate without the orbital forcing.
Sea level change
During the Last Glacial Maximum, some 25,000 years ago, sea levels were roughly 130 m lower than today. The deglaciation afterwards was characterized by rapid sea level change. In the early Pliocene, global temperatures were 1–2 °C warmer than the present temperature, yet sea level was 15–25 meters higher than today.
Sea ice
Sea ice plays an important role in Earth's climate as it affects the total amount of sunlight that is reflected away from the Earth. In the past, the Earth's oceans have been almost entirely covered by sea ice on a number of occasions, when the Earth was in a so-called Snowball Earth state, and completely ice-free in periods of warm climate. When there is a lot of sea ice present globally, especially in the tropics and subtropics, the climate is more sensitive to forcings as the ice–albedo feedback is very strong.
Climate history
Various climate forcings are typically in flux throughout geologic time, and some processes of the Earth's temperature may be self-regulating. For example, during the Snowball Earth period, large glacial ice sheets extended to Earth's equator, covering nearly its entire surface, and very high albedo created extremely low temperatures, while the accumulation of snow and ice likely removed carbon dioxide through atmospheric deposition. However, the absence of plant cover to absorb atmospheric CO2 emitted by volcanoes meant that the greenhouse gas could accumulate in the atmosphere. There was also an absence of exposed silicate rocks, which use CO2 when they undergo weathering. This created a warming that later melted the ice and brought Earth's temperature back up.
Paleocene–Eocene Thermal Maximum
The Paleocene–Eocene Thermal Maximum (PETM) was a period during which the global average temperature rose by 5–8 °C across the event. This climate event occurred at the time boundary of the Paleocene and Eocene geological epochs. During the event, large amounts of methane, a potent greenhouse gas, were released. The PETM represents a "case study" for modern climate change, as the greenhouse gases were released over a geologically short amount of time. During the PETM, a mass extinction of organisms in the deep ocean took place.
The Cenozoic
Throughout the Cenozoic, multiple climate forcings led to warming and cooling of the atmosphere, which led to the early formation of the Antarctic ice sheet, subsequent melting, and its later reglaciation. The temperature changes occurred somewhat suddenly, at carbon dioxide concentrations of about 600–760 ppm and temperatures approximately 4 °C warmer than today. During the Pleistocene, cycles of glaciations and interglacials occurred on cycles of roughly 100,000 years, but may stay longer within an interglacial when orbital eccentricity approaches zero, as during the current interglacial. Previous interglacials such as the Eemian phase created temperatures higher than today, higher sea levels, and some partial melting of the West Antarctic ice sheet.
Climatological temperatures substantially affect cloud cover and precipitation. At lower temperatures, air can hold less water vapour, which can lead to decreased precipitation. During the Last Glacial Maximum of 18,000 years ago, thermal-driven evaporation from the oceans onto continental landmasses was low, causing large areas of extreme desert, including polar deserts (cold but with low rates of cloud cover and precipitation). In contrast, the world's climate was cloudier and wetter than today near the start of the warm Atlantic Period of 8000 years ago.
The Holocene
The Holocene is characterized by a long-term cooling starting after the Holocene Optimum, when temperatures were probably only just below current temperatures (second decade of the 21st century), and a strong African Monsoon created grassland conditions in the Sahara during the Neolithic Subpluvial. Since that time, several cooling events have occurred, including:
the Piora Oscillation
the Middle Bronze Age Cold Epoch
the Iron Age Cold Epoch
the Little Ice Age
the phase of cooling c. 1940–1970, which led to the global cooling hypothesis
In contrast, several warm periods have also taken place, and they include but are not limited to:
a warm period during the apex of the Minoan civilization
the Roman Warm Period
the Medieval Warm Period
Modern warming during the 20th century
Certain effects have occurred during these cycles. For example, during the Medieval Warm Period, the American Midwest was in drought, including the Sand Hills of Nebraska which were active sand dunes. The black death plague of Yersinia pestis also occurred during Medieval temperature fluctuations, and may be related to changing climates.
Solar activity may have contributed to part of the modern warming that peaked in the 1930s. However, solar cycles fail to account for warming observed from the 1980s to the present day. Events such as the opening of the Northwest Passage and recent record low ice minima of the modern Arctic shrinkage have not taken place for at least several centuries, as early explorers were all unable to make an Arctic crossing, even in summer. Shifts in biomes and habitat ranges are also unprecedented, occurring at rates that do not coincide with known climate oscillations.
Modern climate change and global warming
As a consequence of humans emitting greenhouse gases, global surface temperatures have started rising. Global warming is an aspect of modern climate change, a term that also includes the observed changes in precipitation, storm tracks and cloudiness. As a consequence, glaciers worldwide have been found to be shrinking significantly. Land ice sheets in both Antarctica and Greenland have been losing mass since 2002 and have seen an acceleration of ice mass loss since 2009. Global sea levels have been rising as a consequence of thermal expansion and ice melt. The decline in Arctic sea ice, both in extent and thickness, over the last several decades is further evidence for rapid climate change.
Variability between regions
In addition to global climate variability and global climate change over time, numerous climatic variations occur contemporaneously across different physical regions.
The oceans' absorption of about 90% of excess heat has helped to cause land surface temperatures to grow more rapidly than sea surface temperatures. The Northern Hemisphere, having a larger landmass-to-ocean ratio than the Southern Hemisphere, shows greater average temperature increases. Variations across different latitude bands also reflect this divergence in average temperature increase, with the temperature increase of northern extratropics exceeding that of the tropics, which in turn exceeds that of the southern extratropics.
Upper regions of the atmosphere have been cooling contemporaneously with a warming in the lower atmosphere, confirming the action of the greenhouse effect and ozone depletion.
Observed regional climatic variations confirm predictions concerning ongoing changes, for example, by contrasting (smoother) year-to-year global variations with (more volatile) year-to-year variations in localized regions. Conversely, comparing different regions' warming patterns to their respective historical variabilities, allows the raw magnitudes of temperature changes to be placed in the perspective of what is normal variability for each region.
Regional variability observations permit study of regionalized climate tipping points such as rainforest loss, ice sheet and sea ice melt, and permafrost thawing. Such distinctions underlie research into a possible global cascade of tipping points.
| Physical sciences | Climatology | null |
47513 | https://en.wikipedia.org/wiki/Climate%20model | Climate model | Numerical climate models (or climate system models) are mathematical models that can simulate the interactions of important drivers of climate. These drivers are the atmosphere, oceans, land surface and ice. Scientists use climate models to study the dynamics of the climate system and to make projections of future climate and of climate change. Climate models can also be qualitative (i.e. not numerical) models and contain narratives, largely descriptive, of possible futures.
Climate models take account of incoming energy from the Sun as well as outgoing energy from Earth. An imbalance results in a change in temperature. The incoming energy from the Sun is in the form of short wave electromagnetic radiation, chiefly visible and short-wave (near) infrared. The outgoing energy is in the form of long wave (far) infrared electromagnetic energy. These processes are part of the greenhouse effect.
Climate models vary in complexity. For example, a simple radiant heat transfer model treats the Earth as a single point and averages outgoing energy. This can be expanded vertically (radiative-convective models) and horizontally. More complex models are the coupled atmosphere–ocean–sea ice global climate models. These types of models solve the full equations for mass transfer, energy transfer and radiant exchange. In addition, other types of models can be interlinked. For example, Earth system models also include land use and land-use changes, which allows researchers to predict the interactions between climate and ecosystems.
Climate models are systems of differential equations based on the basic laws of physics, fluid motion, and chemistry. Scientists divide the planet into a 3-dimensional grid and apply the basic equations to those grids. Atmospheric models calculate winds, heat transfer, radiation, relative humidity, and surface hydrology within each grid and evaluate interactions with neighboring points. These are coupled with oceanic models to simulate climate variability and change that occurs on different timescales due to shifting ocean currents and the much larger heat storage capacity of the global ocean. External drivers of change may also be applied. Including an ice-sheet model better accounts for long term effects such as sea level rise.
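To make the idea of gridded time stepping concrete, the sketch below is a deliberately minimal illustration rather than a description of any real model's numerics: it relaxes a temperature-like field on a coarse latitude–longitude grid, with grid size, diffusivity, and step count chosen arbitrarily.

```python
import numpy as np

# Minimal illustration of grid-based time stepping: relax a temperature-like
# field on a coarse latitude-longitude grid. All numbers are arbitrary.
n_lat, n_lon = 18, 36                              # 10-degree cells
rng = np.random.default_rng(0)
field = rng.normal(288.0, 5.0, (n_lat, n_lon))     # kelvin
kappa = 0.1                                        # diffusivity per step

for _ in range(100):
    east = np.roll(field, -1, axis=1)              # periodic in longitude
    west = np.roll(field, 1, axis=1)
    north = np.vstack([field[:1], field[:-1]])     # clamped at the poles
    south = np.vstack([field[1:], field[-1:]])
    # Each cell relaxes toward the mean of its four neighbours.
    field += kappa * (east + west + north + south - 4.0 * field)

print(f"global mean after smoothing: {field.mean():.2f} K")
```

Real models replace this single diffusive tracer with the full coupled equations for winds, heat, moisture and radiation in every grid cell, but the pattern of repeated local updates over a discretized globe is the same.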
Uses
There are three major types of institution where climate models are developed, implemented and used:
National meteorological services: Most national weather services have a climatology section.
Universities: Relevant departments include atmospheric sciences, meteorology, climatology, and geography.
National and international research laboratories: Examples include the National Center for Atmospheric Research (NCAR, in Boulder, Colorado, US), the Geophysical Fluid Dynamics Laboratory (GFDL, in Princeton, New Jersey, US), Los Alamos National Laboratory, the Hadley Centre for Climate Prediction and Research (in Exeter, UK), the Max Planck Institute for Meteorology in Hamburg, Germany, or the Laboratoire des Sciences du Climat et de l'Environnement (LSCE), France.
Big climate models are essential but they are not perfect. Attention still needs to be given to the real world (what is happening and why). The global models are essential to assimilate all the observations, especially from space (satellites) and produce comprehensive analyses of what is happening, and then they can be used to make predictions/projections. Simple models also have a role to play, but they are widely misused when the simplifications they make, such as omitting the water cycle, go unrecognized.
General circulation models (GCMs)
Energy balance models (EBMs)
Simulation of the climate system in full 3-D space and time was impractical prior to the establishment of large computational facilities starting in the 1960s. In order to begin to understand which factors may have changed Earth's paleoclimate states, the constituent and dimensional complexities of the system needed to be reduced. A simple quantitative model that balanced incoming/outgoing energy was first developed for the atmosphere in the late 19th century. Other EBMs similarly seek an economical description of surface temperatures by applying the conservation of energy constraint to individual columns of the Earth-atmosphere system.
Essential features of EBMs include their relative conceptual simplicity and their ability to sometimes produce analytical solutions. Some models account for effects of ocean, land, or ice features on the surface budget. Others include interactions with parts of the water cycle or carbon cycle. A variety of these and other reduced system models can be useful for specialized tasks that supplement GCMs, particularly to bridge gaps between simulation and understanding.
Zero-dimensional models
Zero-dimensional models consider Earth as a point in space, analogous to the pale blue dot viewed by Voyager 1 or an astronomer's view of very distant objects. This dimensionless view, while highly limited, is still useful in that the laws of physics are applicable in a bulk fashion to unknown objects, or in an appropriate lumped manner if some major properties of the object are known. For example, astronomers know that most planets in our own solar system feature some kind of solid/liquid surface surrounded by a gaseous atmosphere.
Model with combined surface and atmosphere
A very simple model of the radiative equilibrium of the Earth is

(1 − a) S π r² = 4 π r² ε σ T⁴

where
the left hand side represents the incoming shortwave power (in watts) from the Sun that is absorbed by the Earth, the fraction a being reflected back to space
the right hand side represents the total outgoing longwave power (in watts) from Earth, calculated from the Stefan–Boltzmann law.
The constant parameters include
S is the solar constant – the incoming solar radiation per unit area—about 1367 W·m⁻²
r is Earth's radius—approximately 6.371×10⁶ m
π is the mathematical constant (3.141...)
σ is the Stefan–Boltzmann constant—approximately 5.67×10⁻⁸ J·K⁻⁴·m⁻²·s⁻¹
The constant π r² can be factored out and both sides divided by 4, giving a nildimensional equation for the equilibrium

(1 − a) S / 4 = ε σ T⁴

where
the left hand side represents the absorbed incoming shortwave energy flux from the Sun, averaged over Earth's surface, in W·m⁻²
the right hand side represents the outgoing longwave energy flux from Earth in W·m⁻².
The remaining variable parameters which are specific to the planet include
a is Earth's average albedo, measured to be 0.3.
T is Earth's average surface temperature, measured as about 288 K as of the year 2020
ε is the effective emissivity of Earth's combined surface and atmosphere (including clouds). It is a quantity between 0 and 1 that is calculated from the equilibrium to be about 0.61. For the zero-dimensional treatment it is equivalent to an average value over all viewing angles.
This very simple model is quite instructive. For example, it shows the temperature sensitivity to changes in the solar constant, Earth albedo, or effective Earth emissivity. The effective emissivity also gauges the strength of the atmospheric greenhouse effect, since it is the ratio of the thermal emissions escaping to space versus those emanating from the surface.
The calculated emissivity can be compared to available data. Terrestrial surface emissivities are all in the range of 0.96 to 0.99 (except for some small desert areas which may be as low as 0.7). Clouds, however, which cover about half of the planet's surface, have an average emissivity of about 0.5 (which must be reduced by the fourth power of the ratio of cloud absolute temperature to average surface absolute temperature) and an average cloud temperature of about . Taking all this properly into account results in an effective earth emissivity of about 0.64 (earth average temperature ).
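As a minimal numerical sketch, assuming only the constant values quoted above, the balance can be solved both ways: for the equilibrium temperature given an emissivity, and for the effective emissivity implied by an observed mean surface temperature.

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1367.0           # solar constant, W m^-2
ALBEDO = 0.3         # Earth's average albedo

def equilibrium_temperature(emissivity):
    """Solve (1 - a) S / 4 = eps * sigma * T^4 for T in kelvin."""
    return ((1.0 - ALBEDO) * S / (4.0 * emissivity * SIGMA)) ** 0.25

def effective_emissivity(surface_temp_k):
    """Invert the same balance for the effective emissivity."""
    return (1.0 - ALBEDO) * S / (4.0 * SIGMA * surface_temp_k ** 4)

print(f"T for eps = 1.0 (no greenhouse effect): {equilibrium_temperature(1.0):.1f} K")
print(f"T for eps = 0.61:                       {equilibrium_temperature(0.61):.1f} K")
print(f"eps implied by T = 288 K:               {effective_emissivity(288.0):.2f}")
```

Running this reproduces the roughly 255 K "bare" equilibrium temperature, the observed 288 K surface temperature when the effective emissivity is about 0.61, and that same emissivity when the balance is inverted.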
Models with separated surface and atmospheric layers
Dimensionless models have also been constructed with atmospheric layers functionally separated from the surface. The simplest of these is the zero-dimensional, one-layer model, which may be readily extended to an arbitrary number of atmospheric layers. The surface and atmospheric layer(s) are each characterized by a corresponding temperature and emissivity value, but no thickness. Applying radiative equilibrium (i.e. conservation of energy) at the interfaces between layers produces a set of coupled equations which are solvable.
Layered models produce temperatures that better estimate those observed for Earth's surface and atmospheric levels. They likewise further illustrate the radiative heat transfer processes which underlie the greenhouse effect. Quantification of this phenomenon using a version of the one-layer model was first published by Svante Arrhenius in 1896.
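The sketch below illustrates a one-layer version under common textbook assumptions that are not spelled out in the text above: the atmosphere is transparent to sunlight, absorbs a chosen fraction of the surface's longwave emission, and re-emits equally upward and downward, while the surface radiates as a black body. The absorptivity values are illustrative only.

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1367.0           # solar constant, W m^-2
ALBEDO = 0.3

def one_layer_temperatures(atm_absorptivity):
    """Zero-dimensional, one-layer greenhouse sketch.

    Energy balance at the layer gives T_atm^4 = T_surf^4 / 2, and at the
    surface (1 - a) S / 4 = sigma * T_surf^4 * (1 - atm_absorptivity / 2).
    """
    absorbed_solar = (1.0 - ALBEDO) * S / 4.0
    t_surf = (absorbed_solar / (SIGMA * (1.0 - atm_absorptivity / 2.0))) ** 0.25
    t_atm = t_surf / 2.0 ** 0.25
    return t_surf, t_atm

for absorptivity in (0.78, 1.0):
    t_surf, t_atm = one_layer_temperatures(absorptivity)
    print(f"absorptivity {absorptivity:.2f}: surface {t_surf:.1f} K, layer {t_atm:.1f} K")
```

With an absorptivity near 0.78 this toy model already yields a surface temperature close to 288 K, colder air aloft, and a surface warmer than the single-slab model of the previous subsection, which is the qualitative signature of the greenhouse effect.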
Radiative-convective models
Water vapor is a main determinant of the emissivity of Earth's atmosphere. It both influences the flows of radiation and is influenced by convective flows of heat in a manner that is consistent with its equilibrium concentration and temperature as a function of elevation (i.e. relative humidity distribution). This has been shown by refining the zero dimension model in the vertical to a one-dimensional radiative-convective model which considers two processes of energy transport:
upwelling and downwelling radiative transfer through atmospheric layers that both absorb and emit infrared radiation
upward transport of heat by air and vapor convection, which is especially important in the lower troposphere.
Radiative-convective models have advantages over simpler models and also lay a foundation for more complex models. They can estimate both surface temperature and the temperature variation with elevation in a more realistic manner. They also simulate the observed decline in upper atmospheric temperature and rise in surface temperature when trace amounts of other non-condensible greenhouse gases such as carbon dioxide are included.
Other parameters are sometimes included to simulate localized effects in other dimensions and to address the factors that move energy about Earth. For example, the effect of ice-albedo feedback on global climate sensitivity has been investigated using a one-dimensional radiative-convective climate model.
Higher-dimension models
The zero-dimensional model may be expanded to consider the energy transported horizontally in the atmosphere. This kind of model may well be zonally averaged. This model has the advantage of allowing a rational dependence of local albedo and emissivity on temperature – the poles can be allowed to be icy and the equator warm – but the lack of true dynamics means that horizontal transports have to be specified.
Early examples include research of Mikhail Budyko and William D. Sellers who worked on the Budyko-Sellers model. This work also showed the role of positive feedback in the climate system and has been considered foundational for the energy balance models since its publication in 1969.
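A rough sketch of a Budyko-style zonally averaged model is given below. Each latitude band balances absorbed sunlight against a linearised outgoing-longwave term and a simple transport term that relaxes the band toward the global mean, with a higher albedo where the band is below a chosen ice threshold. All coefficients are commonly quoted textbook-style values picked for illustration; they are not taken from the original 1969 publications.

```python
import numpy as np

# Illustrative Budyko-style zonally averaged energy balance model.
Q = 340.0                        # global mean insolation, W m^-2
A, B = 203.3, 2.09               # linearised outgoing longwave: A + B*T (T in deg C)
GAMMA = 3.8                      # transport coefficient toward the global mean, W m^-2 K^-1
ALBEDO_WARM, ALBEDO_ICE = 0.30, 0.60
T_ICE = -10.0                    # bands colder than this count as ice covered, deg C

lat = np.deg2rad(np.arange(-85.0, 86.0, 10.0))     # centres of 10-degree bands
weights = np.cos(lat) / np.cos(lat).sum()          # area weights
x = np.sin(lat)
s = 1.0 - 0.482 * (3.0 * x**2 - 1.0) / 2.0         # annual-mean insolation distribution

T = np.zeros_like(lat)                             # initial guess, deg C
for _ in range(200):
    albedo = np.where(T < T_ICE, ALBEDO_ICE, ALBEDO_WARM)
    T_mean = (weights * T).sum()
    # Local balance: absorbed solar = outgoing longwave + export toward the global mean.
    T = (Q * s * (1.0 - albedo) - A + GAMMA * T_mean) / (B + GAMMA)

print(f"near equator {T[9]:.1f} C, poles {T[0]:.1f} and {T[-1]:.1f} C, "
      f"global mean {(weights * T).sum():.1f} C")
```

Even this crude iteration produces a warm equator, sub-freezing ice-covered poles, and the ice–albedo feedback that the Budyko-Sellers work highlighted: lowering Q pushes the ice line equatorward.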
Earth systems models of intermediate complexity (EMICs)
Depending on the nature of questions asked and the pertinent time scales, there are, on the one extreme, conceptual, more inductive models, and, on the other extreme, general circulation models operating at the highest spatial and temporal resolution currently feasible. Models of intermediate complexity bridge the gap. One example is the Climber-3 model. Its atmosphere is a 2.5-dimensional statistical-dynamical model with 7.5° × 22.5° resolution and time step of half a day; the ocean is MOM-3 (Modular Ocean Model) with a 3.75° × 3.75° grid and 24 vertical levels.
Box models
Box models are simplified versions of complex systems, reducing them to boxes (or reservoirs) linked by fluxes. The boxes are assumed to be mixed homogeneously. Within a given box, the concentration of any chemical species is therefore uniform. However, the abundance of a species within a given box may vary as a function of time due to the input to (or loss from) the box or due to the production, consumption or decay of this species within the box.
Simple box models, i.e. box model with a small number of boxes whose properties (e.g. their volume) do not change with time, are often useful to derive analytical formulas describing the dynamics and steady-state abundance of a species. More complex box models are usually solved using numerical techniques.
Box models are used extensively to model environmental systems or ecosystems and in studies of ocean circulation and the carbon cycle.
They are instances of a multi-compartment model.
In 1961 Henry Stommel was the first to use a simple 2-box model to study factors that influence ocean circulation.
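In the same spirit, a minimal two-box sketch is shown below: a surface reservoir receives a tracer input and exchanges with a larger deep reservoir, and both lose tracer by first-order decay. The volumes, rates, and forward-Euler time stepping are arbitrary illustrative choices and are not intended to reproduce Stommel's 1961 model.

```python
# Two well-mixed boxes (e.g. surface and deep ocean) exchanging a tracer.
# All volumes, fluxes, and rates are arbitrary illustrative numbers.
V_SURFACE, V_DEEP = 1.0, 10.0   # relative box volumes
EXCHANGE = 0.05                 # volume exchanged per unit time
SOURCE = 0.2                    # tracer input to the surface box per unit time
DECAY = 0.01                    # first-order loss rate in both boxes
DT, N_STEPS = 0.1, 5000

c_surface, c_deep = 0.0, 0.0    # tracer concentrations
for _ in range(N_STEPS):
    flux = EXCHANGE * (c_surface - c_deep)            # net transfer, surface to deep
    dc_surface = (SOURCE - flux) / V_SURFACE - DECAY * c_surface
    dc_deep = flux / V_DEEP - DECAY * c_deep
    c_surface += DT * dc_surface                      # forward-Euler step
    c_deep += DT * dc_deep

print(f"near steady state: surface {c_surface:.2f}, deep {c_deep:.2f}")
```

For simple cases like this the steady state can also be written down analytically by setting both time derivatives to zero, which is precisely why small box models are valued for deriving closed-form results.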
History
Increase of forecasts confidence over time
The Coupled Model Intercomparison Project (CMIP) has been a leading effort to foster improvements in GCMs and climate change understanding since 1995.
The IPCC stated in 2010 that it had increased confidence in forecasts coming from climate models: "There is considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above. This confidence comes from the foundation of the models in accepted physical principles and from their ability to reproduce observed features of current climate and past climate changes. Confidence in model estimates is higher for some climate variables (e.g., temperature) than for others (e.g., precipitation). Over several decades of development, models have consistently provided a robust and unambiguous picture of significant climate warming in response to increasing greenhouse gases."
Coordination of research
The World Climate Research Programme (WCRP), hosted by the World Meteorological Organization (WMO), coordinates research activities on climate modelling worldwide.
A 2012 U.S. National Research Council report discussed how the large and diverse U.S. climate modeling enterprise could evolve to become more unified. Efficiencies could be gained by developing a common software infrastructure shared by all U.S. climate researchers, and holding an annual climate modeling forum, the report found.
Issues
Electricity consumption
Cloud-resolving climate models are nowadays run on high-intensity supercomputers, which have a high power consumption and thus cause CO2 emissions. They require exascale computing (billion billion – i.e., a quintillion – calculations per second). For example, the Frontier exascale supercomputer consumes 29 MW. It can simulate a year's worth of climate at cloud-resolving scales in a day.
Techniques that could lead to energy savings, include for example: "reducing floating point precision computation; developing machine learning algorithms to avoid unnecessary computations; and creating a new generation of scalable numerical algorithms that would enable higher throughput in terms of simulated years per wall clock day."
Parametrization
| Physical sciences | Climate change | Earth science |
47515 | https://en.wikipedia.org/wiki/Cloud | Cloud | In meteorology, a cloud is an aerosol consisting of a visible mass of miniature liquid droplets, frozen crystals, or other particles, suspended in the atmosphere of a planetary body or similar space. Water or various other chemicals may compose the droplets and crystals. On Earth, clouds are formed as a result of saturation of the air when it is cooled to its dew point, or when it gains sufficient moisture (usually in the form of water vapor) from an adjacent source to raise the dew point to the ambient temperature.
Clouds are seen in the Earth's homosphere, which includes the troposphere, stratosphere, and mesosphere.
Nephology is the science of clouds, which is undertaken in the cloud physics branch of meteorology. There are two methods of naming clouds in their respective layers of the homosphere, Latin and common name.
Genus types in the troposphere, the atmospheric layer closest to Earth's surface, have Latin names because of the universal adoption of Luke Howard's nomenclature that was formally proposed in 1802. It became the basis of a modern international system that divides clouds into five physical forms which can be further divided or classified into altitude levels to derive ten basic genera. The main representative cloud types for each of these forms are stratiform, cumuliform, stratocumuliform, cumulonimbiform, and cirriform. Low-level clouds do not have any altitude-related prefixes. However, mid-level stratiform and stratocumuliform types are given the prefix alto-, while high-level variants of these same two forms carry the prefix cirro-. In the case of stratocumuliform clouds, the prefix strato- is applied to the low-level genus type but is dropped from the mid- and high-level variants to avoid double-prefixing with alto- and cirro-. Genus types with sufficient vertical extent to occupy more than one level do not carry any altitude-related prefixes. They are classified formally as low- or mid-level depending on the altitude at which each initially forms, and are also more informally characterized as multi-level or vertical. Most of the ten genera derived by this method of classification can be subdivided into species and further subdivided into varieties. Very low stratiform clouds that extend down to the Earth's surface are given the common names fog and mist but have no Latin names.
In the stratosphere and mesosphere, clouds have common names for their main types. They may have the appearance of stratiform veils or sheets, cirriform wisps, or stratocumuliform bands or ripples. They are seen infrequently, mostly in the polar regions of Earth. Clouds have been observed in the atmospheres of other planets and moons in the Solar System and beyond. However, due to their different temperature characteristics, they are often composed of other substances such as methane, ammonia, and sulfuric acid, as well as water.
Tropospheric clouds can have a direct effect on climate change on Earth. They may reflect incoming rays from the Sun which can contribute to a cooling effect where and when these clouds occur, or trap longer wave radiation that reflects up from the Earth's surface which can cause a warming effect. The altitude, form, and thickness of the clouds are the main factors that affect the local heating or cooling of the Earth and the atmosphere. Clouds that form above the troposphere are too scarce and too thin to have any influence on climate change. Clouds are the main uncertainty in climate sensitivity.
Etymology
The origin of the term "cloud" can be found in the Old English words or , meaning a hill or a mass of stone. Around the beginning of the 13th century, the word came to be used as a metaphor for rain clouds, because of the similarity in appearance between a mass of rock and cumulus heap cloud. Over time, the metaphoric usage of the word supplanted the Old English , which had been the literal term for clouds in general.
Homospheric nomenclatures and cross-classification
The table that follows is very broad in scope like the cloud genera template upon which it is partly based. There are some variations in styles of nomenclature between the classification scheme used for the troposphere (strict Latin except for surface-based aerosols) and the higher levels of the homosphere (common terms, some informally derived from Latin). However, the schemes presented here share a cross-classification of physical forms and altitude levels to derive the 10 tropospheric genera, the fog and mist that forms at surface level, and several additional major types above the troposphere. The cumulus genus includes four species that indicate vertical size which can affect the altitude levels.
History of cloud science
Ancient cloud studies were not made in isolation, but were observed in combination with other weather elements and even other natural sciences. Around 340 BC, Greek philosopher Aristotle wrote Meteorologica, a work which represented the sum of knowledge of the time about natural science, including weather and climate. For the first time, precipitation and the clouds from which precipitation fell were called meteors, which originate from the Greek word meteoros, meaning 'high in the sky'. From that word came the modern term meteorology, the study of clouds and weather. Meteorologica was based on intuition and simple observation, but not on what is now considered the scientific method. Nevertheless, it was the first known work that attempted to treat a broad range of meteorological topics in a systematic way, especially the hydrological cycle.
After centuries of speculative theories about the formation and behavior of clouds, the first truly scientific studies were undertaken by Luke Howard in England and Jean-Baptiste Lamarck in France. Howard was a methodical observer with a strong grounding in the Latin language, and used his background to formally classify the various tropospheric cloud types during 1802. He believed that scientific observations of the changing cloud forms in the sky could unlock the key to weather forecasting.
Lamarck had worked independently on cloud classification the same year and had come up with a different naming scheme that failed to make an impression even in his home country of France because it used unusually descriptive and informal French names and phrases for cloud types. His system of nomenclature included 12 categories of clouds, with such names as (translated from French) hazy clouds, dappled clouds, and broom-like clouds. By contrast, Howard used universally accepted Latin, which caught on quickly after it was published in 1803. As a sign of the popularity of the naming scheme, German dramatist and poet Johann Wolfgang von Goethe composed four poems about clouds, dedicating them to Howard.
An elaboration of Howard's system was eventually formally adopted by the International Meteorological Conference in 1891. This system covered only the tropospheric cloud types. However, the discovery of clouds above the troposphere during the late 19th century eventually led to the creation of separate classification schemes that reverted to the use of descriptive common names and phrases that somewhat recalled Lamarck's methods of classification. These very high clouds, although classified by these different methods, are nevertheless broadly similar to some cloud forms identified in the troposphere with Latin names.
Formation
Terrestrial clouds can be found throughout most of the homosphere, which includes the troposphere, stratosphere, and mesosphere. Within these layers of the atmosphere, air can become saturated as a result of being cooled to its dew point or by having moisture added from an adjacent source. In the latter case, saturation occurs when the dew point is raised to the ambient air temperature.
Adiabatic cooling
Adiabatic cooling occurs when one or more of three possible lifting agents – convective, cyclonic/frontal, or orographic – cause a parcel of air containing invisible water vapor to rise and cool to its dew point, the temperature at which the air becomes saturated. The main mechanism behind this process is adiabatic cooling. As the air is cooled to its dew point and becomes saturated, water vapor normally condenses to form cloud drops. This condensation normally occurs on cloud condensation nuclei such as salt or dust particles that are small enough to be held aloft by normal circulation of the air.
One agent is the convective upward motion of air caused by daytime solar heating at surface level. Low level airmass instability allows for the formation of cumuliform clouds in the troposphere that can produce showers if the air is sufficiently moist. On moderately rare occasions, convective lift can be powerful enough to penetrate the tropopause and push the cloud top into the stratosphere.
Frontal and cyclonic lift occur in the troposphere when stable air is forced aloft at weather fronts and around centers of low pressure by a process called convergence. Warm fronts associated with extratropical cyclones tend to generate mostly cirriform and stratiform clouds over a wide area unless the approaching warm airmass is unstable, in which case cumulus congestus or cumulonimbus clouds are usually embedded in the main precipitating cloud layer. Cold fronts are usually faster moving and generate a narrower line of clouds, which are mostly stratocumuliform, cumuliform, or cumulonimbiform depending on the stability of the warm airmass just ahead of the front.
A third source of lift is wind circulation forcing air over a physical barrier such as a mountain (orographic lift). If the air is generally stable, nothing more than lenticular cap clouds form. However, if the air becomes sufficiently moist and unstable, orographic showers or thunderstorms may appear.
Clouds formed by any of these lifting agents are initially seen in the troposphere where these agents are most active. However, water vapor that has been lifted to the top of troposphere can be carried even higher by gravity waves where further condensation can result in the formation of clouds in the stratosphere and mesosphere.
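As a small worked illustration of the dew-point idea introduced above, the sketch below estimates the dew point from temperature and relative humidity with the Magnus approximation, and then the approximate lifting condensation level using the common rule of thumb of roughly 125 m of lift per degree of temperature–dew point spread. Both the Magnus coefficients and the rule of thumb are standard approximations, not values drawn from this article.

```python
import math

def dew_point_c(temp_c, rel_humidity):
    """Magnus approximation for the dew point in deg C.

    The coefficients 17.625 and 243.04 are a commonly used empirical fit.
    """
    a, b = 17.625, 243.04
    gamma = math.log(rel_humidity) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def lifting_condensation_level_m(temp_c, dew_point):
    """Approximate cloud-base height from the ~125 m per deg C spread rule."""
    return 125.0 * (temp_c - dew_point)

temp, rh = 25.0, 0.60                       # surface parcel: 25 deg C, 60% humidity
td = dew_point_c(temp, rh)
lcl = lifting_condensation_level_m(temp, td)
print(f"dew point ~{td:.1f} deg C, estimated cloud base ~{lcl:.0f} m above the surface")
```

For this example parcel the dew point comes out near 17 deg C and the estimated cloud base near 1 km, which is the altitude at which a rising, adiabatically cooling parcel would be expected to start condensing into cumulus.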
Non-adiabatic cooling
Along with adiabatic cooling that requires a lifting agent, three major nonadiabatic mechanisms exist for lowering the temperature of the air to its dew point. Conductive, radiational, and evaporative cooling require no lifting mechanism and can cause condensation at surface level resulting in the formation of fog.
Adding moisture to the air
Water vapor can be added to the air from several main sources as a way of achieving saturation without any cooling process: evaporation from surface water or moist ground, precipitation or virga, and transpiration from plants.
Tropospheric classification
Classification in the troposphere is based on a hierarchy of categories with physical forms and altitude levels at the top. These are cross-classified into a total of ten genus types, most of which can be divided into species and further subdivided into varieties which are at the bottom of the hierarchy.
Clouds in the troposphere assume five physical forms based on structure and process of formation. These forms are commonly used for the purpose of satellite analysis. They are given below in approximate ascending order of instability or convective activity.
Nonconvective stratiform clouds appear in stable airmass conditions and, in general, have flat, sheet-like structures that can form at any altitude in the troposphere. The stratiform group is divided by altitude range into the genera cirrostratus (high-level), altostratus (mid-level), stratus (low-level), and nimbostratus (multi-level). Fog is commonly considered a surface-based cloud layer. The fog may form at surface level in clear air or it may be the result of a very low stratus cloud subsiding to ground or sea level. Conversely, low stratiform clouds result when advection fog is lifted above surface level during breezy conditions.
Cirriform clouds in the troposphere are of the genus cirrus and have the appearance of detached or semi-merged filaments. They form at high tropospheric altitudes in air that is mostly stable with little or no convective activity, although denser patches may occasionally show buildups caused by limited high-level convection where the air is partly unstable. Clouds resembling cirrus, cirrostratus, and cirrocumulus can be found above the troposphere but are classified separately using common names.
Stratocumuliform clouds show both cumuliform and stratiform characteristics in the form of rolls, ripples, or elements. They generally form as a result of limited convection in an otherwise mostly stable airmass topped by an inversion layer. If the inversion layer is absent or higher in the troposphere, increased airmass instability may cause the cloud layers to develop tops in the form of turrets consisting of embedded cumuliform buildups. The stratocumuliform group is divided into cirrocumulus (high-level, strato- prefix dropped), altocumulus (mid-level, strato- prefix dropped), and stratocumulus (low-level).
Cumuliform clouds generally appear in isolated heaps or tufts. They are the product of localized but generally free-convective lift where no inversion layers are in the troposphere to limit vertical growth. In general, small cumuliform clouds tend to indicate comparatively weak instability. Larger cumuliform types are a sign of greater atmospheric instability and convective activity. Depending on their vertical size, clouds of the cumulus genus type may be low-level or multi-level with moderate to towering vertical extent.
Cumulonimbus clouds are the largest free-convective clouds and have towering vertical extent. They occur in highly unstable air and often have fuzzy outlines at the upper parts of the clouds that sometimes include anvil tops. These clouds are the product of very strong convection that can penetrate the lower stratosphere.
Levels and genera
Tropospheric clouds form in any of three levels (formerly called étages) based on altitude range above the Earth's surface. The grouping of clouds into levels is commonly done for the purposes of cloud atlases, surface weather observations, and weather maps. The base-height range for each level varies depending on the latitudinal geographical zone. Each altitude level comprises two or three genus-types differentiated mainly by physical form.
The standard levels and genus-types are summarised below in approximate descending order of the altitude at which each is normally based. Multi-level clouds with significant vertical extent are separately listed and summarized in approximate ascending order of instability or convective activity.
High-level
High clouds form at altitudes of in the polar regions, in the temperate regions, and in the tropics. All cirriform clouds are classified as high, thus constitute a single genus cirrus (Ci). Stratocumuliform and stratiform clouds in the high altitude range carry the prefix cirro-, yielding the respective genus names cirrocumulus (Cc) and cirrostratus (Cs). If limited-resolution satellite images of high clouds are analyzed without supporting data from direct human observations, distinguishing between individual forms or genus types becomes impossible, and they are collectively identified as high-type (or informally as cirrus-type, though not all high clouds are of the cirrus form or genus).
Genus cirrus (Ci) – these are mostly fibrous wisps of delicate, white, cirriform, ice crystal clouds that show up clearly against the blue sky. Cirrus are generally non-convective except castellanus and floccus subtypes which show limited convection. They often form along a high altitude jetstream and at the very leading edge of a frontal or low-pressure disturbance where they may merge into cirrostratus. This high-level cloud genus does not produce precipitation.
Genus cirrocumulus (Cc) – this is a pure white high stratocumuliform layer of limited convection. It is composed of ice crystals or supercooled water droplets appearing as small unshaded round masses or flakes in groups or lines with ripples like sand on a beach. Cirrocumulus occasionally forms alongside cirrus and may be accompanied or replaced by cirrostratus clouds near the leading edge of an active weather system. This genus-type occasionally produces virga, precipitation that evaporates below the base of the cloud.
Genus cirrostratus (Cs) – cirrostratus is a thin nonconvective stratiform ice crystal veil that typically gives rise to halos caused by refraction of the Sun's rays. The Sun and Moon are visible in clear outline. Cirrostratus does not produce precipitation, but often thickens into altostratus ahead of a warm front or low-pressure area, which sometimes does.
Mid-level
Nonvertical clouds in the middle level are prefixed by alto-, yielding the genus names altocumulus (Ac) for stratocumuliform types and altostratus (As) for stratiform types. These clouds can form as low as above surface at any latitude, but may be based as high as near the poles, at midlatitudes, and in the tropics. As with high clouds, the main genus types are easily identified by the human eye, but distinguishing between them using satellite photography alone is not possible. When the supporting data of human observations are not available, these clouds are usually collectively identified as middle-type on satellite images.
Genus altocumulus (Ac) – This is a midlevel cloud layer of limited convection that usually appears in the form of irregular patches or more extensive sheets arranged in groups, lines, or waves. Altocumulus may occasionally resemble cirrocumulus, but is usually thicker and composed of a mix of water droplets and ice crystals, so the bases show at least some light-gray shading. Altocumulus can produce virga, very light precipitation that evaporates before reaching the ground.
Genus altostratus (As) – Altostratus is a midlevel opaque or translucent nonconvective veil of gray/blue-gray cloud that often forms along warm fronts and around low-pressure areas. Altostratus is usually composed of water droplets, but may be mixed with ice crystals at higher altitudes. Widespread opaque altostratus can produce light continuous or intermittent precipitation.
Low-level
Low clouds are found from near the surface up to . Genus types in this level either have no prefix or carry one that refers to a characteristic other than altitude. Clouds that form in the low level of the troposphere are generally of larger structure than those that form in the middle and high levels, so they can usually be identified by their forms and genus types using satellite photography alone.
Genus stratocumulus (Sc) – This genus type is a stratocumuliform cloud layer of limited convection, usually in the form of irregular patches or more extensive sheets similar to altocumulus but having larger elements with deeper-gray shading. Stratocumulus is often present during wet weather originating from other rain clouds, but can only produce very light precipitation on its own.
Species cumulus humilis – These are small detached fair-weather cumuliform clouds that have nearly horizontal bases and flattened tops, and do not produce rain showers.
Genus stratus (St) – This is a flat or sometimes ragged nonconvective stratiform type that sometimes resembles elevated fog. Only very weak precipitation can fall from this cloud, usually drizzle or snow grains. When a very low stratus cloud subsides to surface level, it loses its Latin terminology and is given the common name fog if the prevailing surface visibility is less than 1 km. If the visibility is 1 km or higher, the visible condensation is termed mist.
Multi-level or moderate vertical
These clouds have low- to mid-level bases that form anywhere from near the surface to about and tops that can extend into the mid-altitude range and sometimes higher in the case of nimbostratus.
Genus nimbostratus (Ns) – This is a diffuse, dark gray, multi-level stratiform layer with great horizontal extent and usually moderate to deep vertical development that looks feebly illuminated from the inside. Nimbostratus normally forms from mid-level altostratus, and develops at least moderate vertical extent when the base subsides into the low level during precipitation that can reach moderate to heavy intensity. It achieves even greater vertical development when it simultaneously grows upward into the high level due to large-scale frontal or cyclonic lift. The nimbo- prefix refers to its ability to produce continuous rain or snow over a wide area, especially ahead of a warm front. This thick cloud layer lacks any towering structure of its own, but may be accompanied by embedded towering cumuliform or cumulonimbiform types. Meteorologists affiliated with the World Meteorological Organization (WMO) officially classify nimbostratus as mid-level for synoptic purposes while informally characterizing it as multi-level. Independent meteorologists and educators appear split between those who largely follow the WMO model and those who classify nimbostratus as low-level, despite its considerable vertical extent and its usual initial formation in the middle altitude range.
Species cumulus mediocris – These cumuliform clouds of free convection have clear-cut, medium-gray, flat bases and white, domed tops in the form of small sproutings and generally do not produce precipitation. They usually form in the low level of the troposphere except during conditions of very low relative humidity, when the cloud bases can rise into the middle-altitude range. Cumulus mediocris is officially classified as low-level and more informally characterized as having moderate vertical extent that can involve more than one altitude level.
Towering vertical
These very large cumuliform and cumulonimbiform types have cloud bases in the same low- to mid-level range as the multi-level and moderate vertical types, but the tops nearly always extend into the high levels. Unlike less vertically developed clouds, they are required to be identified by their standard names or abbreviations in all aviation observations (METARS) and forecasts (TAFS) to warn pilots of possible severe weather and turbulence.
Species cumulus congestus – Increasing airmass instability can cause free-convective cumulus to grow very tall to the extent that the vertical height from base to top is greater than the base-width of the cloud. The cloud base takes on a darker gray coloration and the top commonly resembles a cauliflower. This cloud type can produce moderate to heavy showers and is designated Towering cumulus (Tcu) by the International Civil Aviation Organization (ICAO).
Genus cumulonimbus (Cb) – This genus type is a heavy, towering, cumulonimbiform mass of free-convective cloud with a dark-gray to nearly black base and a very high top in the form of a mountain or huge tower. Cumulonimbus can produce thunderstorms, local very heavy downpours of rain that may cause flash floods, and a variety of types of lightning including cloud-to-ground that can cause wildfires. Other convective severe weather may or may not be associated with thunderstorms and include heavy snow showers, hail, strong wind shear, downbursts, and tornadoes. Of all these possible cumulonimbus-related events, lightning is the only one that requires a thunderstorm to be taking place, since it is the lightning that creates the thunder. Cumulonimbus clouds can form in unstable airmass conditions, but tend to be more concentrated and intense when they are associated with unstable cold fronts.
Species
Genus types are commonly divided into subtypes called species that indicate specific structural details which can vary according to the stability and windshear characteristics of the atmosphere at any given time and location. Despite this hierarchy, a particular species may be a subtype of more than one genus, especially if the genera are of the same physical form and are differentiated from each other mainly by altitude or level. There are a few species, each of which can be associated with genera of more than one physical form. The species types are grouped below according to the physical forms and genera with which each is normally associated. The forms, genera, and species are listed from left to right in approximate ascending order of instability or convective activity.
Stable or mostly stable
Of the non-convective stratiform group, high-level cirrostratus comprises two species. Cirrostratus nebulosus has a rather diffuse appearance lacking in structural detail. Cirrostratus fibratus is a species made of semi-merged filaments that are transitional to or from cirrus. Mid-level altostratus and multi-level nimbostratus always have a flat or diffuse appearance and are therefore not subdivided into species. Low stratus is of the species nebulosus except when broken up into ragged sheets of stratus fractus (see below).
Cirriform clouds have three non-convective species that can form in stable airmass conditions. Cirrus fibratus comprise filaments that may be straight, wavy, or occasionally twisted by wind shear. The species uncinus is similar but has upturned hooks at the ends. Cirrus spissatus appear as opaque patches that can show light gray shading.
Stratocumuliform genus-types (cirrocumulus, altocumulus, and stratocumulus) that appear in mostly stable air with limited convection have two species each. The stratiformis species normally occur in extensive sheets or in smaller patches where there is only minimal convective activity. Clouds of the lenticularis species tend to have lens-like shapes tapered at the ends. They are most commonly seen as orographic mountain-wave clouds, but can occur anywhere in the troposphere where there is strong wind shear combined with sufficient airmass stability to maintain a generally flat cloud structure. These two species can be found in the high, middle, or low levels of the troposphere depending on the stratocumuliform genus or genera present at any given time.
Ragged
The species fractus shows variable instability because it can be a subdivision of genus-types of different physical forms that have different stability characteristics. This subtype can be in the form of ragged but mostly stable stratiform sheets (stratus fractus) or small ragged cumuliform heaps with somewhat greater instability (cumulus fractus). When clouds of this species are associated with precipitating cloud systems of considerable vertical and sometimes horizontal extent, they are also classified as accessory clouds under the name pannus (see section on supplementary features).
Partly unstable
These species are subdivisions of genus types that can occur in partly unstable air with limited convection. The species castellanus appears when a mostly stable stratocumuliform or cirriform layer becomes disturbed by localized areas of airmass instability, usually in the morning or afternoon. This results in the formation of embedded cumuliform buildups arising from a common stratiform base. Castellanus resembles the turrets of a castle when viewed from the side, and can be found with stratocumuliform genera at any tropospheric altitude level and with limited-convective patches of high-level cirrus. Tufted clouds of the more detached floccus species are subdivisions of genus-types which may be cirriform or stratocumuliform in overall structure. They are sometimes seen with cirrus, cirrocumulus, altocumulus, and stratocumulus.
A newly recognized species of stratocumulus or altocumulus has been given the name volutus, a roll cloud that can occur ahead of a cumulonimbus formation. There are some volutus clouds that form as a consequence of interactions with specific geographical features rather than with a parent cloud. Perhaps the strangest geographically specific cloud of this type is the Morning Glory, a rolling cylindrical cloud that appears unpredictably over the Gulf of Carpentaria in Northern Australia. Associated with a powerful "ripple" in the atmosphere, the cloud may be "surfed" in glider aircraft.
Unstable or mostly unstable
More general airmass instability in the troposphere tends to produce clouds of the more freely convective cumulus genus type, whose species are mainly indicators of degrees of atmospheric instability and resultant vertical development of the clouds. A cumulus cloud initially forms in the low level of the troposphere as a cloudlet of the species humilis that shows only slight vertical development. If the air becomes more unstable, the cloud tends to grow vertically into the species mediocris, then strongly convective congestus, the tallest cumulus species which is the same type that the International Civil Aviation Organization refers to as 'towering cumulus'.
With highly unstable atmospheric conditions, large cumulus may continue to grow into even more strongly convective cumulonimbus calvus (essentially a very tall congestus cloud that produces thunder), then ultimately into the species capillatus when supercooled water droplets at the top of the cloud turn into ice crystals giving it a cirriform appearance.
Varieties
Genus and species types are further subdivided into varieties whose names can appear after the species name to provide a fuller description of a cloud. Some cloud varieties are not restricted to a specific altitude level or form, and can therefore be common to more than one genus or species.
Opacity-based
All cloud varieties fall into one of two main groups. One group identifies the opacities of particular low and mid-level cloud structures and comprises the varieties translucidus (thin translucent), perlucidus (thick opaque with translucent or very small clear breaks), and opacus (thick opaque). These varieties are always identifiable for cloud genera and species with variable opacity. All three are associated with the stratiformis species of altocumulus and stratocumulus. However, only two varieties are seen with altostratus and stratus nebulosus whose uniform structures prevent the formation of a perlucidus variety. Opacity-based varieties are not applied to high clouds because they are always translucent, or in the case of cirrus spissatus, always opaque.
Pattern-based
A second group describes the occasional arrangements of cloud structures into particular patterns that are discernible by a surface-based observer (cloud fields usually being visible only from a significant altitude above the formations). These varieties are not always present with the genera and species with which they are otherwise associated, but only appear when atmospheric conditions favor their formation. Intortus and vertebratus varieties occur on occasion with cirrus fibratus. They are respectively filaments twisted into irregular shapes, and those that are arranged in fishbone patterns, usually by uneven wind currents that favor the formation of these varieties. The variety radiatus is associated with cloud rows of a particular type that appear to converge at the horizon. It is sometimes seen with the fibratus and uncinus species of cirrus, the stratiformis species of altocumulus and stratocumulus, the mediocris and sometimes humilis species of cumulus, and with the genus altostratus.
Another variety, duplicatus (closely spaced layers of the same type, one above the other), is sometimes found with cirrus of both the fibratus and uncinus species, and with altocumulus and stratocumulus of the species stratiformis and lenticularis. The variety undulatus (having a wavy undulating base) can occur with any clouds of the species stratiformis or lenticularis, and with altostratus. It is only rarely observed with stratus nebulosus. The variety lacunosus is caused by localized downdrafts that create circular holes in the form of a honeycomb or net. It is occasionally seen with cirrocumulus and altocumulus of the species stratiformis, castellanus, and floccus, and with stratocumulus of the species stratiformis and castellanus.
Combinations
It is possible for some species to show combined varieties at one time, especially if one variety is opacity-based and the other is pattern-based. An example of this would be a layer of altocumulus stratiformis arranged in seemingly converging rows separated by small breaks. The full technical name of a cloud in this configuration would be altocumulus stratiformis radiatus perlucidus, which would identify respectively its genus, species, and two combined varieties.
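As an illustration of how such names are assembled, the following sketch shows a hypothetical helper that concatenates a genus, a species, and any varieties in the order described above; it is not part of any meteorological standard or software library.

```python
# Illustrative sketch only: a hypothetical helper that assembles a full cloud name
# from genus, species, and varieties in the order described in the text.

def cloud_name(genus, species, varieties=()):
    """Return the full technical name: genus, then species, then any varieties."""
    return " ".join([genus, species, *varieties])

# The example from the text: altocumulus stratiformis arranged in seemingly
# converging rows separated by small breaks.
print(cloud_name("altocumulus", "stratiformis", ["radiatus", "perlucidus"]))
# -> altocumulus stratiformis radiatus perlucidus
```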
Other types
Supplementary features and accessory clouds are not further subdivisions of cloud types below the species and variety level. Rather, they are either hydrometeors or special cloud types with their own Latin names that form in association with certain cloud genera, species, and varieties. Supplementary features, whether in the form of clouds or precipitation, are directly attached to the main genus-cloud. Accessory clouds, by contrast, are generally detached from the main cloud.
Precipitation-based supplementary features
One group of supplementary features are not actual cloud formations, but precipitation that falls when water droplets or ice crystals that make up visible clouds have grown too heavy to remain aloft. Virga is a feature seen with clouds producing precipitation that evaporates before reaching the ground, these being of the genera cirrocumulus, altocumulus, altostratus, nimbostratus, stratocumulus, cumulus, and cumulonimbus.
When the precipitation reaches the ground without completely evaporating, it is designated as the feature praecipitatio. This normally occurs with altostratus opacus, which can produce widespread but usually light precipitation, and with thicker clouds that show significant vertical development. Of the latter, upward-growing cumulus mediocris produces only isolated light showers, while downward growing nimbostratus is capable of heavier, more extensive precipitation. Towering vertical clouds have the greatest ability to produce intense precipitation events, but these tend to be localized unless organized along fast-moving cold fronts. Showers of moderate to heavy intensity can fall from cumulus congestus clouds. Cumulonimbus, the largest of all cloud genera, has the capacity to produce very heavy showers. Low stratus clouds usually produce only light precipitation, but this always occurs as the feature praecipitatio due to the fact this cloud genus lies too close to the ground to allow for the formation of virga.
Cloud-based supplementary features
Incus is the most type-specific supplementary feature, seen only with cumulonimbus of the species capillatus. A cumulonimbus incus cloud top is one that has spread out into a clear anvil shape as a result of rising air currents hitting the stability layer at the tropopause where the air no longer continues to get colder with increasing altitude.
The mamma feature forms on the bases of clouds as downward-facing bubble-like protuberances caused by localized downdrafts within the cloud. It is also sometimes called mammatus, an earlier version of the term used before a standardization of Latin nomenclature brought about by the World Meteorological Organization during the 20th century. The best-known is cumulonimbus with mammatus, but the mamma feature is also seen occasionally with cirrus, cirrocumulus, altocumulus, altostratus, and stratocumulus.
A tuba feature is a cloud column that may hang from the bottom of a cumulus or cumulonimbus. A newly formed or poorly organized column might be comparatively benign, but can quickly intensify into a funnel cloud or tornado.
An arcus feature is a roll cloud with ragged edges attached to the lower front part of cumulus congestus or cumulonimbus that forms along the leading edge of a squall line or thunderstorm outflow. A large arcus formation can have the appearance of a dark menacing arch.
Several new supplementary features have been formally recognized by the World Meteorological Organization (WMO). The feature fluctus can form under conditions of strong atmospheric wind shear when a stratocumulus, altocumulus, or cirrus cloud breaks into regularly spaced crests. This variant is sometimes known informally as a Kelvin–Helmholtz (wave) cloud. This phenomenon has also been observed in cloud formations over other planets and even in the Sun's atmosphere. Another highly disturbed but more chaotic wave-like cloud feature associated with stratocumulus or altocumulus cloud has been given the Latin name asperitas. The supplementary feature cavum is a circular fall-streak hole that occasionally forms in a thin layer of supercooled altocumulus or cirrocumulus. Fall streaks consisting of virga or wisps of cirrus are usually seen beneath the hole as ice crystals fall out to a lower altitude. This type of hole is usually larger than typical lacunosus holes. A murus feature is a cumulonimbus wall cloud with a lowering, rotating cloud base that can lead to the development of tornadoes. A cauda feature is a tail cloud that extends horizontally away from the murus cloud and is the result of air feeding into the storm.
Accessory clouds
Supplementary cloud formations detached from the main cloud are known as accessory clouds. The heavier precipitating clouds, nimbostratus, towering cumulus (cumulus congestus), and cumulonimbus typically see the formation in precipitation of the pannus feature, low ragged clouds of the genera and species cumulus fractus or stratus fractus.
A group of accessory clouds comprises formations that are associated mainly with upward-growing cumuliform and cumulonimbiform clouds of free convection. Pileus is a cap cloud that can form over a cumulonimbus or large cumulus cloud, whereas a velum feature is a thin horizontal sheet that sometimes forms like an apron around the middle or in front of the parent cloud. An accessory cloud recently officially recognized by the World Meteorological Organization is the flumen, also known more informally as the beaver's tail. It is formed by the warm, humid inflow of a supercell thunderstorm, and can be mistaken for a tornado. Although the flumen can indicate a tornado risk, it is similar in appearance to pannus or scud clouds and does not rotate.
Mother clouds
Clouds initially form in clear air or become clouds when fog rises above surface level. The genus of a newly formed cloud is determined mainly by air mass characteristics such as stability and moisture content. If these characteristics change over time, the genus tends to change accordingly. When this happens, the original genus is called a mother cloud. If the mother cloud retains much of its original form after the appearance of the new genus, it is termed a genitus cloud. One example of this is stratocumulus cumulogenitus, a stratocumulus cloud formed by the partial spreading of a cumulus type when there is a loss of convective lift. If the mother cloud undergoes a complete change in genus, it is considered to be a mutatus cloud.
Other genitus and mutatus clouds
The genitus and mutatus categories have been expanded to include certain types that do not originate from pre-existing clouds. The term flammagenitus (Latin for 'fire-made') applies to cumulus congestus or cumulonimbus that are formed by large-scale fires or volcanic eruptions. Smaller low-level "pyrocumulus" or "fumulus" clouds formed by contained industrial activity are now classified as cumulus homogenitus (Latin for 'man-made'). Contrails formed from the exhaust of aircraft flying in the upper level of the troposphere can persist and spread into formations resembling cirrus which are designated cirrus homogenitus. If a cirrus homogenitus cloud changes fully to any of the high-level genera, it is termed cirrus, cirrostratus, or cirrocumulus homomutatus. Stratus cataractagenitus (Latin for 'cataract-made') are generated by the spray from waterfalls. Silvagenitus (Latin for 'forest-made') is a stratus cloud that forms as water vapor is added to the air above a forest canopy.
Large scale patterns
Sometimes certain atmospheric processes cause clouds to become organized into patterns that can cover large areas. These patterns are usually difficult to identify from surface level and are best seen from an aircraft or spacecraft.
Stratocumulus fields
Stratocumulus clouds can be organized into "fields" that take on certain specially classified shapes and characteristics. In general, these fields are more discernible from high altitudes than from ground level. They can often be found in the following forms:
Actinoform, which resembles a leaf or a spoked wheel.
Closed cell, which is cloudy in the center and clear on the edges, similar to a filled honeycomb.
Open cell, which resembles an empty honeycomb, with clouds around the edges and clear, open space in the middle.
Vortex streets
These patterns are formed from a phenomenon known as a Kármán vortex street, which is named after the engineer and fluid dynamicist Theodore von Kármán. Wind-driven clouds, usually mid-level altocumulus or high-level cirrus, can form into parallel rows that follow the wind direction. When the wind and clouds encounter high-elevation land features such as vertically prominent islands, they can form eddies around the high land masses that give the clouds a twisted appearance.
Distribution
Convergence along low-pressure zones
Although the local distribution of clouds can be significantly influenced by topography, the global prevalence of cloud cover in the troposphere tends to vary more by latitude. It is most prevalent in and along low pressure zones of surface tropospheric convergence which encircle the Earth close to the equator and near the 50th parallels of latitude in the northern and southern hemispheres. The adiabatic cooling processes that lead to the creation of clouds by way of lifting agents are all associated with convergence; a process that involves the horizontal inflow and accumulation of air at a given location, as well as the rate at which this happens. Near the equator, increased cloudiness is due to the presence of the low-pressure Intertropical Convergence Zone (ITCZ) where very warm and unstable air promotes mostly cumuliform and cumulonimbiform clouds. Clouds of virtually any type can form along the mid-latitude convergence zones depending on the stability and moisture content of the air. These extratropical convergence zones are occupied by the polar fronts where air masses of polar origin meet and clash with those of tropical or subtropical origin. This leads to the formation of weather-making extratropical cyclones composed of cloud systems that may be stable or unstable to varying degrees according to the stability characteristics of the various airmasses that are in conflict.
Divergence along high pressure zones
Divergence is the opposite of convergence. In the Earth's troposphere, it involves the horizontal outflow of air from the upper part of a rising column of air, or from the lower part of a subsiding column often associated with an area or ridge of high pressure. Cloudiness tends to be least prevalent near the poles and in the subtropics close to the 30th parallels, north and south. The latter are sometimes referred to as the horse latitudes. The presence of a large-scale high-pressure subtropical ridge on each side of the equator reduces cloudiness at these low latitudes. Similar patterns also occur at higher latitudes in both hemispheres.
Luminance, reflectivity, and coloration
The luminance or brightness of a cloud is determined by how light is reflected, scattered, and transmitted by the cloud's particles. Its brightness may also be affected by the presence of haze or photometeors such as halos and rainbows. In the troposphere, dense, deep clouds exhibit a high reflectance (70–95%) throughout the visible spectrum. Tiny particles of water are densely packed and sunlight cannot penetrate far into the cloud before it is reflected out, giving a cloud its characteristic white color, especially when viewed from the top. Cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth into the gases. As a result, the cloud base can vary from a very light to very-dark-gray depending on the cloud's thickness and how much light is being reflected or transmitted back to the observer. High thin tropospheric clouds reflect less light because of the comparatively low concentration of constituent ice crystals or supercooled water droplets which results in a slightly off-white appearance. However, a thick dense ice-crystal cloud appears brilliant white with pronounced gray shading because of its greater reflectivity.
As a tropospheric cloud matures, the dense water droplets may combine to produce larger droplets. If the droplets become too large and heavy to be kept aloft by the air circulation, they will fall from the cloud as rain. By this process of accumulation, the space between droplets becomes increasingly larger, permitting light to penetrate farther into the cloud. If the cloud is sufficiently large and the droplets within are spaced far enough apart, a percentage of the light that enters the cloud is not reflected back out but is absorbed giving the cloud a darker look. A simple example of this is one's being able to see farther in heavy rain than in heavy fog. This process of reflection/absorption is what causes the range of cloud color from white to black.
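The effect of droplet spacing on how far light penetrates can be pictured with a simple exponential-attenuation estimate. This is a minimal sketch assuming Beer–Lambert-style attenuation with made-up coefficients; it is not a radiative-transfer model of real clouds.

```python
import math

# Minimal sketch: fraction of light remaining after travelling a depth d into a
# cloud, assuming simple exponential attenuation I(d) = I0 * exp(-k * d).
# Both attenuation coefficients below are assumed, illustrative values.

def transmitted_fraction(depth_m, k_per_m):
    return math.exp(-k_per_m * depth_m)

for label, k in [("densely packed small droplets", 0.05),
                 ("large, widely spaced raindrops", 0.005)]:
    print(f"{label}: {transmitted_fraction(200.0, k):.3f} of light remains at 200 m")
# The widely spaced drops attenuate far less per metre, which is why one can see
# farther in heavy rain than in heavy fog.
```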
Striking cloud colorations can be seen at any altitude, with the color of a cloud usually being the same as the incident light. During daytime when the sun is relatively high in the sky, tropospheric clouds generally appear bright white on top with varying shades of gray underneath. Thin clouds may look white or appear to have acquired the color of their environment or background. Red, orange, and pink clouds occur almost entirely at sunrise/sunset and are the result of the scattering of sunlight by the atmosphere. When the Sun is just below the horizon, low-level clouds are gray, middle clouds appear rose-colored, and high clouds are white or off-white. Clouds at night are black or dark gray in a moonless sky, or whitish when illuminated by the Moon. They may also reflect the colors of large fires, city lights, or auroras that might be present.
A cumulonimbus cloud that appears to have a greenish or bluish tint contains extremely high amounts of water; the suspended hail or rain scatters light in a way that gives the cloud a blue color. A green colorization occurs mostly late in the day when the sun is comparatively low in the sky and the incident sunlight has a reddish tinge that appears green when illuminating a very tall bluish cloud. Supercell storms are more likely to show this coloration, but any storm can appear this way. Such coloration does not directly indicate a severe thunderstorm; it only confirms the storm's potential. Because a green or blue tint signifies copious amounts of water, it implies a strong updraft to support that water, high winds from the storm raining out, and wet hail, all elements that improve the chance of the storm becoming severe. In addition, the stronger the updraft is, the more likely the storm is to undergo tornadogenesis and to produce large hail and high winds.
Yellowish clouds may be seen in the troposphere in the late spring through early fall months during forest fire season. The yellow color is due to the presence of pollutants in the smoke. Yellowish clouds are caused by the presence of nitrogen dioxide and are sometimes seen in urban areas with high air pollution levels.
Effects
Tropospheric clouds exert numerous influences on Earth's troposphere and climate. First and foremost, they are the source of precipitation, thereby greatly influencing the distribution and amount of precipitation. Because of their differential buoyancy relative to surrounding cloud-free air, clouds can be associated with vertical motions of the air that may be convective, frontal, or cyclonic. The motion is upward if the clouds are less dense than the surrounding air, because condensation of water vapor releases heat, warming the air and thereby decreasing its density. Conversely, the motion can be downward where lifted air has cooled, since cooling increases its density. All of these effects depend subtly on the vertical temperature and moisture structure of the atmosphere and result in a major redistribution of heat that affects the Earth's climate.
The complexity and diversity of clouds in the troposphere is a major reason for difficulty in quantifying the effects of clouds on climate and climate change. On the one hand, white cloud tops promote cooling of Earth's surface by reflecting shortwave radiation (visible and near infrared) from the Sun, diminishing the amount of solar radiation that is absorbed at the surface, enhancing the Earth's albedo. Most of the sunlight that reaches the ground is absorbed, warming the surface, which emits radiation upward at longer, infrared, wavelengths. At these wavelengths, however, water in the clouds acts as an efficient absorber. The water reacts by radiating, also in the infrared, both upward and downward, and the downward longwave radiation results in increased warming at the surface. This is analogous to the greenhouse effect of greenhouse gases and water vapor.
High-level genus-types particularly show this duality with both short-wave albedo cooling and long-wave greenhouse warming effects. On the whole, ice-crystal clouds in the upper troposphere (cirrus) tend to favor net warming. However, the cooling effect is dominant with mid-level and low clouds, especially when they form in extensive sheets. Measurements by NASA indicate that on the whole, the effects of low and mid-level clouds that tend to promote cooling outweigh the warming effects of high layers and the variable outcomes associated with vertically developed clouds.
As difficult as it is to evaluate the influences of current clouds on current climate, it is even more problematic to predict changes in cloud patterns and properties in a future, warmer climate, and the resultant cloud influences on future climate. In a warmer climate more water would enter the atmosphere by evaporation at the surface; as clouds are formed from water vapor, cloudiness would be expected to increase. But in a warmer climate, higher temperatures would tend to evaporate clouds. Both of these statements are considered accurate, and both phenomena, known as cloud feedbacks, are found in climate model calculations. Broadly speaking, if clouds, especially low clouds, increase in a warmer climate, the resultant cooling effect leads to a negative feedback in climate response to increased greenhouse gases. But if low clouds decrease, or if high clouds increase, the feedback is positive. Differing amounts of these feedbacks are the principal reason for differences in climate sensitivities of current global climate models. As a consequence, much research has focused on the response of low and vertical clouds to a changing climate. Leading global models produce quite different results, however, with some showing increasing low clouds and others showing decreases. For these reasons the role of tropospheric clouds in regulating weather and climate remains a leading source of uncertainty in global warming projections.
Stratospheric classification and distribution
Polar stratospheric clouds (PSCs) are found in the lowest part of the stratosphere. Moisture is scarce above the troposphere, so nacreous and non-nacreous clouds at this altitude range are restricted to polar regions in the winter where and when the air is coldest.
PSCs show some variation in structure according to their chemical makeup and atmospheric conditions, but are limited to a single very high altitude range. Accordingly, they are classified as a single type with no differentiated altitude levels, genus types, species, or varieties. There is no Latin nomenclature in the manner of tropospheric clouds, but rather descriptive names of several general forms using common English.
Supercooled nitric acid and water PSCs, sometimes known as type 1, typically have a stratiform appearance resembling cirrostratus or haze, but because they are not frozen into crystals, do not show the pastel colors of the nacreous types. This type of PSC has been identified as a cause of ozone depletion in the stratosphere. The frozen nacreous types are typically very thin with mother-of-pearl colorations and an undulating cirriform or lenticular (stratocumuliform) appearance. These are sometimes known as type 2.
Mesospheric classification and distribution
Noctilucent clouds are the highest in the atmosphere and are found near the top of the mesosphere, at roughly ten times the altitude of tropospheric high clouds. They are given this Latin-derived name because of their illumination well after sunset and before sunrise. They typically have a bluish or silvery white coloration that can resemble brightly illuminated cirrus. Noctilucent clouds may occasionally take on more of a red or orange hue. They are not common or widespread enough to have a significant effect on climate. However, an increasing frequency of occurrence of noctilucent clouds since the 19th century may be the result of climate change.
Ongoing research indicates that convective lift in the mesosphere is strong enough during the polar summer to cause adiabatic cooling of small amounts of water vapor to the point of saturation. This tends to produce the coldest temperatures in the entire atmosphere just below the mesopause. There is evidence that smoke particles from burnt-up meteors provide much of the condensation nuclei required for the formation of noctilucent clouds.
Noctilucent clouds have four major types based on physical structure and appearance. Type I veils are very tenuous and lack well-defined structure, somewhat like cirrostratus fibratus or poorly defined cirrus. Type II bands are long streaks that often occur in groups arranged roughly parallel to each other. They are usually more widely spaced than the bands or elements seen with cirrocumulus clouds. Type III billows are arrangements of closely spaced, roughly parallel short streaks that mostly resemble cirrus. Type IV whirls are partial or, more rarely, complete rings of cloud with dark centers.
Distribution in the mesosphere is similar to the stratosphere except at much higher altitudes. Because of the need for maximum cooling of the water vapor to produce noctilucent clouds, their distribution tends to be restricted to polar regions of Earth. Sightings are rare more than 45 degrees south of the north pole or north of the south pole.
Extraterrestrial
Cloud cover has been seen on most other planets in the Solar System. Venus's thick clouds are composed of sulfur dioxide (due to volcanic activity) and appear to be almost entirely stratiform. They are arranged in three main layers at altitudes of 45 to 65 km that obscure the planet's surface and can produce virga. No embedded cumuliform types have been identified, but broken stratocumuliform wave formations are sometimes seen in the top layer that reveal more continuous layer clouds underneath. On Mars, noctilucent, cirrus, cirrocumulus and stratocumulus composed of water-ice have been detected mostly near the poles. Water-ice fogs have also been detected on Mars.
Both Jupiter and Saturn have an outer cirriform cloud deck composed of ammonia, an intermediate stratiform haze-cloud layer made of ammonium hydrosulfide, and an inner deck of cumulus water clouds. Embedded cumulonimbus are known to exist near the Great Red Spot on Jupiter. The same category-types can be found covering Uranus and Neptune, but are all composed of methane. Saturn's moon Titan has cirrus clouds believed to be composed largely of methane. The Cassini–Huygens Saturn mission uncovered evidence of polar stratospheric clouds and a methane cycle on Titan, including lakes near the poles and fluvial channels on the surface of the moon.
Some planets outside the Solar System are known to have atmospheric clouds. In October 2013, the detection of high altitude optically thick clouds in the atmosphere of exoplanet Kepler-7b was announced, and, in December 2013, in the atmospheres of GJ 436 b and GJ 1214 b.
In culture and religion
Clouds play an important mythical or non-scientific role in various cultures and religious traditions. The ancient Akkadians believed that the clouds (in meteorology, probably the supplementary feature mamma) were the breasts of the sky goddess Antu and that rain was milk from her breasts. In the Book of Exodus, Yahweh is described as guiding the Israelites through the desert in the form of a "pillar of cloud" by day and a "pillar of fire" by night. In Mandaeism, uthras (celestial beings) are also occasionally mentioned as being in anana ("clouds"; e.g., in Right Ginza Book 17, Chapter 1), which can also be interpreted as female consorts.
The Cloud of Unknowing is a 14th-century work of Christian mysticism that advises a contemplative practice focused on experiencing God through love and "unknowing."
In the ancient Greek comedy The Clouds, written by Aristophanes and first performed at the City Dionysia in 423 BC, the philosopher Socrates declares that the Clouds are the only true deities and tells the main character Strepsiades not to worship any deities other than the Clouds, but to pay homage to them alone. In the play, the Clouds change shape to reveal the true nature of whoever is looking at them, turning into centaurs at the sight of a long-haired politician, wolves at the sight of the embezzler Simon, deer at the sight of the coward Cleonymus, and mortal women at the sight of the effeminate informer Cleisthenes. They are hailed the source of inspiration to comic poets and philosophers; they are masters of rhetoric, regarding eloquence and sophistry alike as their "friends".
In China, clouds are symbols of luck and happiness. Overlapping clouds (in meteorology, probably duplicatus clouds) are thought to imply eternal happiness and clouds of different colors are said to indicate "multiplied blessings".
Informal cloud watching or cloud gazing is a popular activity involving watching the clouds and looking for shapes in them, a form of pareidolia.
| Physical sciences | Earth science | null |
47521 | https://en.wikipedia.org/wiki/Condensation | Condensation | Condensation is the change of the state of matter from the gas phase into the liquid phase, and is the reverse of vaporization. The word most often refers to the water cycle. It can also be defined as the change in the state of water vapor to liquid water when in contact with a liquid or solid surface or cloud condensation nuclei within the atmosphere. When the transition happens from the gaseous phase into the solid phase directly, the change is called deposition.
Initiation
Condensation is initiated by the formation of atomic/molecular clusters of that species within its gaseous volume—like rain drop or snow flake formation within clouds—or at the contact between such gaseous phase and a liquid or solid surface. In clouds, this can be catalyzed by water-nucleating proteins, produced by atmospheric microbes, which are capable of binding gaseous or liquid water molecules.
Reversibility scenarios
A few distinct reversibility scenarios emerge here with respect to the nature of the surface.
absorption into the surface of a liquid (either of the same substance or one of its solvents)—is reversible as evaporation.
adsorption (as dew droplets) onto solid surface at pressures and temperatures higher than the species' triple point—also reversible as evaporation.
adsorption onto solid surface (as supplemental layers of solid) at pressures and temperatures lower than the species' triple point—is reversible as sublimation.
Most common scenarios
Condensation commonly occurs when a vapor is cooled and/or compressed to its saturation limit, the point at which the molecular density in the gas phase reaches its maximum. Vapor cooling and compressing equipment that collects condensed liquids is called a "condenser".
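The notion of cooling a vapor to its saturation limit can be illustrated with a dew-point estimate. The sketch below uses the Magnus approximation for the saturation vapor pressure of water; the coefficient pair is one commonly used fit, and the input conditions are assumed example values.

```python
import math

# Minimal sketch: estimate the dew point, i.e. the temperature at which cooling
# moist air reaches saturation and condensation begins, using the Magnus
# approximation. The coefficients (17.62, 243.12 degC) are one common fit;
# the temperature and humidity inputs are assumed example values.

def dew_point_c(temp_c, rel_humidity):
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Room air at 25 degC and 60% relative humidity will condense on any surface
# colder than roughly this temperature:
print(f"Dew point: {dew_point_c(25.0, 0.60):.1f} degC")  # about 16.7 degC
```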
Measurement
Psychrometry measures the rates of condensation from, and evaporation into, moist air at various atmospheric pressures and temperatures. Liquid water is the product of water vapor condensation; condensation is the process of that phase conversion.
Applications of condensation
Condensation is a crucial component of distillation, an important laboratory and industrial chemistry application.
Because condensation is a naturally occurring phenomenon, it can often be used to generate water in large quantities for human use. Many structures are made solely for the purpose of collecting water from condensation, such as air wells and fog fences. Such systems can often be used to retain soil moisture in areas where active desertification is occurring—so much so that some organizations educate people living in affected areas about water condensers to help them deal effectively with the situation.
It is also a crucial process in forming particle tracks in a cloud chamber. In this case, ions produced by an incident particle act as nucleation centers for the condensation of the vapor producing the visible "cloud" trails.
Commercial applications of condensation, by consumers as well as industry, include power generation, water desalination, thermal management, refrigeration, and air conditioning.
Biological adaptation
Numerous living beings use water made accessible by condensation. A few examples of these are the Australian thorny devil, the darkling beetles of the Namibian coast, and the coast redwoods of the West Coast of the United States.
Condensation in building construction
Condensation in building construction is an unwanted phenomenon as it may cause dampness, mold health issues, wood rot, corrosion, weakening of mortar and masonry walls, and energy penalties due to increased heat transfer. To alleviate these issues, the indoor air humidity needs to be lowered, or air ventilation in the building needs to be improved. This can be done in a number of ways, for example, opening windows, turning on extractor fans, using dehumidifiers, drying clothes outside, and covering pots and pans whilst cooking. Air conditioning or ventilation systems can be installed that help remove moisture from the air and move air throughout a building. The amount of water vapor that can be stored in the air can be increased simply by increasing the temperature. However, this can be a double-edged sword, as most condensation in the home occurs when warm, moisture-heavy air comes into contact with a cool surface. As the air is cooled, it can no longer hold as much water vapor. This leads to condensation of water on the cool surface, which is very apparent when central heating is used in combination with single-glazed windows in winter.
Interstructure condensation may be caused by thermal bridges, insufficient or lacking insulation, damp proofing or insulated glazing.
Table
| Physical sciences | Phase transitions | null |
47526 | https://en.wikipedia.org/wiki/Convection | Convection | Convection is single or multiphase fluid flow that occurs spontaneously through the combined effects of material property heterogeneity and body forces on a fluid, most commonly density and gravity (see buoyancy). When the cause of the convection is unspecified, convection due to the effects of thermal expansion and buoyancy can be assumed. Convection may also take place in soft solids or mixtures where particles can flow.
Convective flow may be transient (such as when a multiphase mixture of oil and water separates) or steady state (see convection cell). The convection may be due to gravitational, electromagnetic or fictitious body forces. Heat transfer by natural convection plays a role in the structure of Earth's atmosphere, its oceans, and its mantle. Discrete convective cells in the atmosphere can be identified by clouds, with stronger convection resulting in thunderstorms. Natural convection also plays a role in stellar physics. Convection is often categorised or described by the main effect causing the convective flow; for example, thermal convection.
Convection cannot take place in most solids because neither bulk current flows nor significant diffusion of matter can take place.
Granular convection is a similar phenomenon in granular material instead of fluids.
Advection is fluid motion created by velocity instead of thermal gradients.
Convective heat transfer is the intentional use of convection as a method for heat transfer. Convection is a process in which heat is carried from place to place by the bulk movement of a fluid and gases.
History
In the 1830s, in The Bridgewater Treatises, the term convection is attested in a scientific sense. In treatise VIII by William Prout, in the book on chemistry, it says:
[...] This motion of heat takes place in three ways, which a common fire-place very well illustrates. If, for instance, we place a thermometer directly before a fire, it soon begins to rise, indicating an increase of temperature. In this case the heat has made its way through the space between the fire and the thermometer, by the process termed radiation. If we place a second thermometer in contact with any part of the grate, and away from the direct influence of the fire, we shall find that this thermometer also denotes an increase of temperature; but here the heat must have travelled through the metal of the grate, by what is termed conduction. Lastly, a third thermometer placed in the chimney, away from the direct influence of the fire, will also indicate a considerable increase of temperature; in this case a portion of the air, passing through and near the fire, has become heated, and has carried up the chimney the temperature acquired from the fire. There is at present no single term in our language employed to denote this third mode of the propagation of heat; but we venture to propose for that purpose, the term convection, [in footnote: [Latin] Convectio, a carrying or conveying] which not only expresses the leading fact, but also accords very well with the two other terms.
Later, in the same treatise VIII, in the book on meteorology, the concept of convection is also applied to "the process by which heat is communicated through water".
Terminology
Today, the word convection has different but related usages in different scientific or engineering contexts or applications.
In fluid mechanics, convection has a broader sense: it refers to the motion of fluid driven by density (or other property) difference.
In thermodynamics, convection often refers to heat transfer by convection, where the prefixed variant Natural Convection is used to distinguish the fluid mechanics concept of Convection (covered in this article) from convective heat transfer.
Some phenomena which result in an effect superficially similar to that of a convective cell may also be (inaccurately) referred to as a form of convection; for example, thermo-capillary convection and granular convection.
Mechanisms
Convection may happen in fluids at all scales larger than a few atoms. There are a variety of circumstances in which the forces required for convection arise, leading to different types of convection, described below. In broad terms, convection arises because of body forces acting within the fluid, such as gravity.
Natural convection
Natural convection is a flow whose motion is caused by some parts of a fluid being heavier than other parts. In most cases this leads to natural circulation: the ability of a fluid in a system to circulate continuously under gravity, with transfer of heat energy.
The driving force for natural convection is gravity. In a column of fluid, pressure increases with depth from the weight of the overlying fluid. The pressure at the bottom of a submerged object then exceeds that at the top, resulting in a net upward buoyancy force equal to the weight of the displaced fluid. Objects of higher density than that of the displaced fluid then sink. For example, regions of warmer low-density air rise, while those of colder high-density air sink. This creates a circulating flow: convection.
Gravity drives natural convection. Without gravity, convection does not occur, so there is no convection in free-fall (inertial) environments, such as that of the orbiting International Space Station. Natural convection can occur when there are hot and cold regions of either air or water, because both water and air become less dense as they are heated. But, for example, in the world's oceans it also occurs due to salt water being heavier than fresh water, so a layer of salt water on top of a layer of fresher water will also cause convection.
Natural convection has attracted a great deal of attention from researchers because of its presence both in nature and in engineering applications. In nature, convection cells formed from air rising above sunlight-warmed land or water are a major feature of all weather systems. Convection is also seen in the rising plume of hot air from fire, plate tectonics, oceanic currents (thermohaline circulation) and sea-wind formation (where upward convection is also modified by Coriolis forces). In engineering applications, convection is commonly visualized in the formation of microstructures during the cooling of molten metals, fluid flows around shrouded heat-dissipation fins, and solar ponds. A very common industrial application of natural convection is free air cooling without the aid of fans: this can happen on scales from small (computer chips) to large-scale process equipment.
Natural convection will be more likely and more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection or a larger distance through the convecting medium. Natural convection will be less likely and less rapid with more rapid diffusion (thereby diffusing away the thermal gradient that is causing the convection) or a more viscous (sticky) fluid.
The onset of natural convection can be determined by the Rayleigh number (Ra).
Differences in buoyancy within a fluid can arise for reasons other than temperature variations, in which case the fluid motion is called gravitational convection (see below). However, no type of buoyant convection, including natural convection, occurs in microgravity environments; all require the presence of an environment which experiences g-force (proper acceleration).
The difference of density in the fluid is the key driving mechanism. If the differences of density are caused by heat, this force is called the "thermal head" or "thermal driving head." A fluid system designed for natural circulation will have a heat source and a heat sink. Each of these is in contact with some of the fluid in the system, but not all of it. The heat source is positioned lower than the heat sink.
Most fluids expand when heated, becoming less dense, and contract when cooled, becoming denser. At the heat source of a system of natural circulation, the heated fluid becomes lighter than the fluid surrounding it, and thus rises. At the heat sink, the nearby fluid becomes denser as it cools, and is drawn downward by gravity. Together, these effects create a flow of fluid from the heat source to the heat sink and back again.
Gravitational or buoyant convection
Gravitational convection is a type of natural convection induced by buoyancy variations resulting from material properties other than temperature. Typically this is caused by a variable composition of the fluid. If the varying property is a concentration gradient, it is known as solutal convection. For example, gravitational convection can be seen in the diffusion of a source of dry salt downward into wet soil due to the buoyancy of fresh water in saline.
Variable salinity in water and variable water content in air masses are frequent causes of convection in the oceans and atmosphere which do not involve heat, or else involve additional compositional density factors other than the density changes from thermal expansion (see thermohaline circulation). Similarly, variable composition within the Earth's interior which has not yet achieved maximal stability and minimal energy (in other words, with densest parts deepest) continues to cause a fraction of the convection of fluid rock and molten metal within the Earth's interior (see below).
Gravitational convection, like natural thermal convection, also requires a g-force environment in order to occur.
Solid-state convection in ice
Ice convection on Pluto is believed to occur in a soft mixture of nitrogen ice and carbon monoxide ice. It has also been proposed for Europa, and other bodies in the outer Solar System.
Thermomagnetic convection
Thermomagnetic convection can occur when an external magnetic field is imposed on a ferrofluid with varying magnetic susceptibility. In the presence of a temperature gradient this results in a nonuniform magnetic body force, which leads to fluid movement. A ferrofluid is a liquid which becomes strongly magnetized in the presence of a magnetic field.
Combustion
In a zero-gravity environment, there can be no buoyancy forces, and thus no convection possible, so flames in many circumstances without gravity smother in their own waste gases. Thermal expansion and chemical reactions resulting in expansion and contraction of gases allow for some ventilation of the flame, as waste gases are displaced by cool, fresh, oxygen-rich gas that moves in to take up the low-pressure zones created when flame-exhaust water condenses.
Examples and applications
Systems of natural circulation include tornadoes and other weather systems, ocean currents, and household ventilation. Some solar water heaters use natural circulation. The Gulf Stream circulates as a result of the evaporation of water. In this process, the water increases in salinity and density. In the North Atlantic Ocean, the water becomes so dense that it begins to sink down.
Convection occurs on a large scale in atmospheres, oceans, planetary mantles, and it provides the mechanism of heat transfer for a large fraction of the outermost interiors of the Sun and all stars. Fluid movement during convection may be invisibly slow, or it may be obvious and rapid, as in a hurricane. On astronomical scales, convection of gas and dust is thought to occur in the accretion disks of black holes, at speeds which may closely approach that of light.
Demonstration experiments
Thermal convection in liquids can be demonstrated by placing a heat source (for example, a Bunsen burner) at the side of a container with a liquid. Adding a dye to the water (such as food colouring) will enable visualisation of the flow.
Another common experiment to demonstrate thermal convection in liquids involves submerging open containers of hot and cold liquid coloured with dye into a large container of the same liquid without dye at an intermediate temperature (for example, a jar of hot tap water coloured red, a jar of water chilled in a fridge coloured blue, lowered into a clear tank of water at room temperature).
A third approach is to use two identical jars, one filled with hot water dyed one colour, and cold water of another colour. One jar is then temporarily sealed (for example, with a piece of card), inverted and placed on top of the other. When the card is removed, if the jar containing the warmer liquid is placed on top no convection will occur. If the jar containing colder liquid is placed on top, a convection current will form spontaneously.
Convection in gases can be demonstrated using a candle in a sealed space with an inlet and exhaust port. The heat from the candle will cause a strong convection current which can be demonstrated with a flow indicator, such as smoke from another candle, being released near the inlet and exhaust areas respectively.
Double diffusive convection
Convection cells
A convection cell, also known as a Bénard cell, is a characteristic fluid flow pattern in many convection systems. A rising body of fluid typically loses heat because it encounters a colder surface. In liquid, this occurs because it exchanges heat with colder liquid through direct exchange. In the example of the Earth's atmosphere, this occurs because it radiates heat. Because of this heat loss the fluid becomes denser than the fluid underneath it, which is still rising. Since it cannot descend through the rising fluid, it moves to one side. At some distance, its downward force overcomes the rising force beneath it, and the fluid begins to descend. As it descends, it warms again and the cycle repeats itself. Additionally, convection cells can arise due to density variations resulting from differences in the composition of electrolytes.
Atmospheric convection
Atmospheric circulation
Atmospheric circulation is the large-scale movement of air, and is a means by which thermal energy is distributed on the surface of the Earth, together with the much slower (lagged) ocean circulation system. The large-scale structure of the atmospheric circulation varies from year to year, but the basic climatological structure remains fairly constant.
Latitudinal circulation occurs because incident solar radiation per unit area is highest at the heat equator, and decreases as the latitude increases, reaching minima at the poles. It consists of two primary convection cells, the Hadley cell and the polar vortex, with the Hadley cell experiencing stronger convection due to the release of latent heat energy by condensation of water vapor at higher altitudes during cloud formation.
Longitudinal circulation, on the other hand, comes about because the ocean has a higher specific heat capacity than land (and also a higher thermal conductivity, allowing heat to penetrate further beneath the surface) and thereby absorbs and releases more heat, while its temperature changes less than that of land. This brings the sea breeze, air cooled by the water, ashore during the day, and carries the land breeze, air cooled by contact with the ground, out to sea during the night. Longitudinal circulation consists of two cells, the Walker circulation and El Niño / Southern Oscillation.
Weather
Some more localized phenomena than global atmospheric movement are also due to convection, including wind and some of the hydrologic cycle. For example, a foehn wind is a down-slope wind which occurs on the downwind side of a mountain range. It results from the adiabatic warming of air which has dropped most of its moisture on windward slopes. Because of the different adiabatic lapse rates of moist and dry air, the air on the leeward slopes becomes warmer than at the same height on the windward slopes.
A thermal column (or thermal) is a vertical section of rising air in the lower altitudes of the Earth's atmosphere. Thermals are created by the uneven heating of the Earth's surface from solar radiation. The Sun warms the ground, which in turn warms the air directly above it. The warmer air expands, becoming less dense than the surrounding air mass, and creating a thermal low. The mass of lighter air rises, and as it does, it cools by expansion at lower air pressures. It stops rising when it has cooled to the same temperature as the surrounding air. Associated with a thermal is a downward flow surrounding the thermal column. The downward moving exterior is caused by colder air being displaced at the top of the thermal. Another convection-driven weather effect is the sea breeze.
Warm air has a lower density than cool air, so warm air rises within cooler air, similar to hot air balloons. Clouds form as relatively warmer air carrying moisture rises within cooler air. As the moist air rises, it cools, causing some of the water vapor in the rising packet of air to condense. When the moisture condenses, it releases energy known as latent heat of condensation which allows the rising packet of air to cool less than its surrounding air, continuing the cloud's ascension. If enough instability is present in the atmosphere, this process will continue long enough for cumulonimbus clouds to form, which support lightning and thunder. Generally, thunderstorms require three conditions to form: moisture, an unstable airmass, and a lifting force (heat).
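A common rule of thumb connects this rising-and-cooling process to the height of the resulting cloud base: the lifting condensation level climbs roughly 125 m for every degree Celsius of spread between surface temperature and dew point. Below is a minimal sketch with assumed surface values.

```python
# Minimal sketch: estimate cloud-base height with the rule of thumb that the
# lifting condensation level (LCL) rises about 125 m per degC of spread between
# surface temperature and dew point. The factor is approximate and the surface
# values below are assumed examples.

def lcl_height_m(surface_temp_c, dew_point_c):
    return 125.0 * (surface_temp_c - dew_point_c)

# A warm, humid afternoon: surface air at 30 degC with an 18 degC dew point.
print(f"Approximate cloud base: {lcl_height_m(30.0, 18.0):.0f} m")  # about 1500 m
```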
All thunderstorms, regardless of type, go through three stages: the developing stage, the mature stage, and the dissipation stage. The average thunderstorm has a diameter. Depending on the conditions present in the atmosphere, these three stages take an average of 30 minutes to go through.
Oceanic circulation
Solar radiation affects the oceans: warm water from the Equator tends to circulate toward the poles, while cold polar water heads towards the Equator. The surface currents are initially dictated by surface wind conditions. The trade winds blow westward in the tropics, and the westerlies blow eastward at mid-latitudes. This wind pattern applies a stress to the subtropical ocean surface with negative curl across the Northern Hemisphere, and the reverse across the Southern Hemisphere. The resulting Sverdrup transport is equatorward. Because of conservation of potential vorticity caused by the poleward-moving winds on the subtropical ridge's western periphery and the increased relative vorticity of poleward moving water, transport is balanced by a narrow, accelerating poleward current, which flows along the western boundary of the ocean basin, outweighing the effects of friction with the cold western boundary current which originates from high latitudes. The overall process, known as western intensification, causes currents on the western boundary of an ocean basin to be stronger than those on the eastern boundary.
As it travels poleward, warm water transported by strong warm water current undergoes evaporative cooling. The cooling is wind driven: wind moving over water cools the water and also causes evaporation, leaving a saltier brine. In this process, the water becomes saltier and denser and decreases in temperature. Once sea ice forms, salts are left out of the ice, a process known as brine exclusion. These two processes produce water that is denser and colder. The water across the northern Atlantic Ocean becomes so dense that it begins to sink down through less salty and less dense water. (This open ocean convection is not unlike that of a lava lamp.) This downdraft of heavy, cold and dense water becomes a part of the North Atlantic Deep Water, a south-going stream.
Mantle convection
Mantle convection is the slow creeping motion of Earth's rocky mantle caused by convection currents carrying heat from the interior of the Earth to the surface. It is one of three driving forces that cause tectonic plates to move around the Earth's surface.
The Earth's surface is divided into a number of tectonic plates that are continuously being created and consumed at their opposite plate boundaries. Creation (accretion) occurs as mantle is added to the growing edges of a plate. This hot added material cools down by conduction and convection of heat. At the consumption edges of the plate, the material has thermally contracted to become dense, and it sinks under its own weight in the process of subduction at an ocean trench. This subducted material sinks to some depth in the Earth's interior where it is prohibited from sinking further. The subducted oceanic crust triggers volcanism.
Convection within Earth's mantle is the driving force for plate tectonics. Mantle convection is the result of a thermal gradient: the lower mantle is hotter than the upper mantle, and is therefore less dense. This sets up two primary types of instabilities. In the first type, plumes rise from the lower mantle, and corresponding unstable regions of lithosphere drip back into the mantle. In the second type, subducting oceanic plates (which largely constitute the upper thermal boundary layer of the mantle) plunge back into the mantle and move downwards towards the core-mantle boundary. Mantle convection occurs at rates of centimeters per year, and it takes on the order of hundreds of millions of years to complete a cycle of convection.
Neutrino flux measurements from the Earth's core (see KamLAND) show that the source of about two-thirds of the heat in the inner core is the radioactive decay of 40K, uranium, and thorium. This has allowed plate tectonics on Earth to continue far longer than it would have if it were simply driven by heat left over from Earth's formation, or by heat produced from gravitational potential energy as a result of the physical rearrangement of denser portions of the Earth's interior toward the center of the planet (that is, a type of prolonged falling and settling).
Stack effect
The Stack effect or chimney effect is the movement of air into and out of buildings, chimneys, flue gas stacks, or other containers due to buoyancy. Buoyancy occurs due to a difference in indoor-to-outdoor air density resulting from temperature and moisture differences. The greater the thermal difference and the height of the structure, the greater the buoyancy force, and thus the stack effect. The stack effect helps drive natural ventilation and infiltration. Some cooling towers operate on this principle; similarly the solar updraft tower is a proposed device to generate electricity based on the stack effect.
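The buoyancy force described here can be estimated from the density difference between the indoor and outdoor air columns. The sketch below is a simplified ideal-gas estimate under assumed temperatures and building height, not a ventilation-design calculation.

```python
# Minimal sketch: estimate stack-effect driving pressure as the weight difference
# of indoor and outdoor air columns, delta_p ~= g * h * (rho_out - rho_in),
# with densities from the ideal gas law. Temperatures and height are assumptions.

G = 9.81          # m/s^2
R_AIR = 287.05    # J/(kg K), specific gas constant of dry air
P0 = 101_325.0    # Pa, ambient pressure

def air_density(temp_c, pressure_pa=P0):
    return pressure_pa / (R_AIR * (temp_c + 273.15))

def stack_pressure_pa(height_m, t_in_c, t_out_c):
    return G * height_m * (air_density(t_out_c) - air_density(t_in_c))

# A 20 m tall heated building, 21 degC inside, -5 degC outside:
print(f"{stack_pressure_pa(20.0, 21.0, -5.0):.1f} Pa")  # on the order of 20 Pa
```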
Stellar physics
The convection zone of a star is the range of radii in which energy is transported outward from the core region primarily by convection rather than radiation. This occurs at radii which are sufficiently opaque that convection is more efficient than radiation at transporting energy.
Granules on the photosphere of the Sun are the visible tops of convection cells in the photosphere, caused by convection of plasma in the photosphere. The rising part of the granules is located in the center where the plasma is hotter. The outer edge of the granules is darker due to the cooler descending plasma. A typical granule has a diameter on the order of 1,000 kilometers and each lasts 8 to 20 minutes before dissipating. Below the photosphere is a layer of much larger "supergranules" up to 30,000 kilometers in diameter, with lifespans of up to 24 hours.
Water convection at freezing temperatures
Water is a fluid that does not obey the Boussinesq approximation. This is because its density varies nonlinearly with temperature, which causes its thermal expansion coefficient to be inconsistent near freezing temperatures. The density of water reaches a maximum at 4 °C and decreases as the temperature deviates. This phenomenon is investigated by experiment and numerical methods. Water is initially stagnant at 10 °C within a square cavity. It is differentially heated between the two vertical walls, where the left and right walls are held at 10 °C and 0 °C, respectively. The density anomaly manifests in its flow pattern. As the water is cooled at the right wall, the density increases, which accelerates the flow downward. As the flow develops and the water cools further, the decrease in density causes a recirculation current at the bottom right corner of the cavity.
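The density maximum near 4 °C can be captured with a simple quadratic fit. The sketch below is illustrative only; the coefficient is an assumed approximate value, not data taken from the study described above.

```python
# Minimal sketch: approximate water density near its 4 degC maximum with a
# quadratic fit, rho(T) ~= rho_max * (1 - beta * (T - 4)^2). The coefficient
# beta ~ 8e-6 degC^-2 is an assumed illustrative value for this sketch.

RHO_MAX = 999.97   # kg/m^3, near 4 degC
BETA = 8.0e-6      # degC^-2

def water_density(temp_c):
    return RHO_MAX * (1.0 - BETA * (temp_c - 4.0) ** 2)

# Water at 0 degC and 8 degC is about equally dense, and both are lighter than
# water at 4 degC -- the nonlinearity that breaks the Boussinesq approximation.
for t in (0.0, 4.0, 8.0, 10.0):
    print(f"{t:4.1f} degC: {water_density(t):.3f} kg/m^3")
```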
Another case of this phenomenon is the event of super-cooling, where the water is cooled to below freezing temperatures but does not immediately begin to freeze. Under the same conditions as before, the flow is developed. Afterward, the temperature of the right wall is decreased to −10 °C. This causes the water at that wall to become supercooled, create a counter-clockwise flow, and initially overpower the warm current. This plume is caused by a delay in the nucleation of the ice. Once ice begins to form, the flow returns to a similar pattern as before and the solidification propagates gradually until the flow is redeveloped.
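A short sketch of water's nonlinear density near its maximum, using a Gebhart–Mollendorf-style relation ρ = ρm (1 − w |T − Tm|^q); the constants below are approximate literature values and are an assumption here, not something stated in the article:

# Nonlinear density of cold water: maximum near ~4 degC, lower on either side.
rho_m = 999.972        # kg/m^3, maximum density (approximate)
T_m   = 4.03           # degC, temperature of maximum density (approximate)
w     = 9.30e-6        # (degC)^-q, empirical coefficient (approximate)
q     = 1.895          # empirical exponent (approximate)

def rho_water(T_celsius):
    return rho_m * (1.0 - w * abs(T_celsius - T_m) ** q)

for T in (0.0, 2.0, 4.0, 6.0, 8.0, 10.0):
    print(f"T = {T:4.1f} degC  ->  rho = {rho_water(T):.4f} kg/m^3")

The output rises toward the maximum near 4 °C and falls on either side, which is why the linear Boussinesq approximation fails for water near freezing and why the cooled wall can drive the anomalous recirculation described above.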
Nuclear reactors
In a nuclear reactor, natural circulation can be a design criterion. It is achieved by reducing turbulence and friction in the fluid flow (that is, minimizing head loss), and by providing a way to remove any inoperative pumps from the fluid path. Also, the reactor (as the heat source) must be physically lower than the steam generators or turbines (the heat sink). In this way, natural circulation will ensure that the fluid will continue to flow as long as the reactor is hotter than the heat sink, even when power cannot be supplied to the pumps. Notable examples are the S5G and S8G United States Naval reactors, which were designed to operate at a significant fraction of full power under natural circulation, quieting those propulsion plants. The S6G reactor cannot operate at power under natural circulation, but can use it to maintain emergency cooling while shut down.
By the nature of natural circulation, fluids do not typically move very fast, but this is not necessarily a drawback, as high flow rates are not essential to safe and effective reactor operation. In modern nuclear reactor designs, flow reversal is almost impossible. All nuclear reactors, even ones designed to primarily use natural circulation as the main method of fluid circulation, have pumps that can circulate the fluid in the case that natural circulation is not sufficient.
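A minimal sketch of where the natural-circulation driving force comes from: the density contrast between the hot riser and the cold downcomer acting over the elevation difference between the heat source and the heat sink. All numbers are illustrative assumptions:

# Natural-circulation driving head in a simple reactor loop.
g = 9.81                 # m/s^2
elevation_diff = 10.0    # m, heat sink above heat source (assumed)
rho_cold = 760.0         # kg/m^3, cooler water in the downcomer (assumed)
rho_hot  = 720.0         # kg/m^3, heated water in the riser (assumed)

driving_pressure = g * elevation_diff * (rho_cold - rho_hot)   # Pa
print(f"Natural-circulation driving head: {driving_pressure:.0f} Pa "
      f"({driving_pressure/1000:.2f} kPa)")

# Flow establishes itself when this head balances the loop's friction and
# form losses, which is why designs minimize head loss in the flow path.

This is why the heat sink must sit above the heat source and why head loss is minimized: the available driving pressure is modest, and all of it must go into overcoming flow resistance.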
Mathematical models of convection
A number of dimensionless terms have been derived to describe and predict convection, including the Archimedes number, Grashof number, Richardson number, and the Rayleigh number.
In cases of mixed convection (natural and forced occurring together) one would often like to know how much of the convection is due to external constraints, such as the fluid velocity in the pump, and how much is due to natural convection occurring in the system.
The relative magnitudes of the Grashof number and the square of the Reynolds number determine which form of convection dominates. If Gr/Re² ≫ 1, forced convection may be neglected, whereas if Gr/Re² ≪ 1, natural convection may be neglected. If the ratio Gr/Re², known as the Richardson number, is approximately one, then both forced and natural convection need to be taken into account.
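A minimal sketch of this regime check for a heated surface in an external air stream; the fluid properties, dimensions and the cut-off thresholds of 0.1 and 10 are illustrative assumptions:

# Mixed-convection regime check using the Richardson number Ri = Gr / Re^2.
g     = 9.81        # m/s^2
beta  = 1/300.0     # 1/K, thermal expansion coefficient of air (~1/T, assumed)
dT    = 20.0        # K, surface-to-fluid temperature difference (assumed)
L     = 0.5         # m, characteristic length (assumed)
nu    = 1.5e-5      # m^2/s, kinematic viscosity of air (assumed)
U     = 0.5         # m/s, externally imposed (forced) velocity (assumed)

Gr = g * beta * dT * L**3 / nu**2
Re = U * L / nu
Ri = Gr / Re**2

print(f"Gr = {Gr:.3g}, Re = {Re:.3g}, Ri = Gr/Re^2 = {Ri:.2f}")
if Ri > 10:
    print("Natural convection dominates; forced convection may be neglected.")
elif Ri < 0.1:
    print("Forced convection dominates; natural convection may be neglected.")
else:
    print("Mixed convection: both effects need to be taken into account.")

With the assumed values the Richardson number is of order one, so both mechanisms matter.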
Onset
The onset of natural convection is determined by the Rayleigh number (Ra). This dimensionless number is given by
Ra = Δρ g L³ / (D μ)
where
Δρ is the difference in density between the two parcels of material that are mixing,
g is the local gravitational acceleration,
L is the characteristic length-scale of convection (the depth of the boiling pot, for example),
D is the diffusivity of the characteristic that is causing the convection, and
μ is the dynamic viscosity.
Natural convection will be more likely and/or more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection, and/or a larger distance through the convecting medium. Convection will be less likely and/or less rapid with more rapid diffusion (thereby diffusing away the gradient that is causing the convection) and/or a more viscous (sticky) fluid.
For thermal convection due to heating from below, as described in the boiling pot above, the equation is modified for thermal expansion and thermal diffusivity. Density variations due to thermal expansion are given by:
Δρ = ρ0 β ΔT
where
ρ0 is the reference density, typically chosen to be the average density of the medium,
β is the coefficient of thermal expansion, and
ΔT is the temperature difference across the medium.
The general diffusivity, D, is redefined as a thermal diffusivity, α.
Inserting these substitutions produces a thermal Rayleigh number, Ra = ρ0 g β ΔT L³ / (α μ), equivalently Ra = g β ΔT L³ / (ν α) with ν = μ/ρ0 the kinematic viscosity, that can be used to predict thermal convection.
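A worked sketch evaluating this thermal Rayleigh number for a shallow layer of water heated from below, as in the boiling-pot example; the property values are typical assumptions for water near room temperature, not data from the article:

# Thermal Rayleigh number Ra = rho0*g*beta*dT*L^3 / (alpha*mu) for a water layer.
g     = 9.81        # m/s^2
rho0  = 998.0       # kg/m^3, reference density (assumed)
beta  = 2.1e-4      # 1/K, thermal expansion coefficient (assumed)
dT    = 10.0        # K, temperature difference across the layer (assumed)
L     = 0.05        # m, depth of the layer, the "pot" (assumed)
alpha = 1.4e-7      # m^2/s, thermal diffusivity (assumed)
mu    = 1.0e-3      # Pa*s, dynamic viscosity (assumed)

Ra = rho0 * g * beta * dT * L**3 / (alpha * mu)
print(f"Ra = {Ra:.3g}")
# A value many orders of magnitude above the critical Rayleigh number (~10^3)
# indicates vigorous thermal convection rather than pure conduction.

With these assumed values Ra is of order 10^7, far above the onset threshold, so convection rather than conduction dominates.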
Turbulence
The tendency of a particular naturally convective system towards turbulence relies on the Grashof number (Gr), Gr = g β ΔT L³ / ν².
In very sticky, viscous fluids (large ν), fluid motion is restricted, and natural convection will be non-turbulent.
Following the treatment of the previous subsection, the typical fluid velocity is of the order of √(g β ΔT L), up to a numerical factor depending on the geometry of the system. Therefore, the Grashof number can be thought of as a Reynolds number with the velocity of natural convection replacing the velocity in the Reynolds number's formula. However, in practice, when referring to the Reynolds number, it is understood that one is considering forced convection, and the velocity is taken as the velocity dictated by external constraints (see below).
Behavior
The Grashof number can be formulated for natural convection occurring due to a concentration gradient, sometimes termed thermo-solutal convection. In this case, a concentration of hot fluid diffuses into a cold fluid, in much the same way that ink poured into a container of water diffuses to dye the entire space. Then the Grashof number takes the form Gr = g β' Δc L³ / ν², where β' is the volumetric expansion coefficient with respect to concentration and Δc is the concentration difference driving the convection.
Natural convection is highly dependent on the geometry of the hot surface; various correlations exist to determine the heat transfer coefficient.
A general correlation that applies for a variety of geometries is
The value of f4(Pr) is calculated using the following formula
Nu is the Nusselt number; the values of Nu0 and the characteristic length used to calculate Ra depend on the geometry (see, for example, the vertical-plate case below).
Natural convection from a vertical plate
One example of natural convection is heat transfer from an isothermal vertical plate immersed in a fluid, causing the fluid to move parallel to the plate. This will occur in any system wherein the density of the moving fluid varies with position. These phenomena will only be of significance when the moving fluid is minimally affected by forced convection.
When the flow of fluid results from heating, the following correlations can be used, assuming the fluid is an ideal diatomic gas, is adjacent to a vertical plate at constant temperature, and the flow is completely laminar (a worked numerical sketch follows at the end of this subsection).
Num = 0.478 Gr^0.25
Mean Nusselt number: Num = hm L / k
where
hm = mean heat transfer coefficient applicable between the lower edge of the plate and any point at a distance L (W/(m²·K))
L = height of the vertical surface (m)
k = thermal conductivity (W/(m·K))
Grashof number: Gr = g L³ (ts − t∞) / (ν² T)
where
g = gravitational acceleration (m/s²)
L = distance above the lower edge (m)
ts = temperature of the wall (K)
t∞ = fluid temperature outside the thermal boundary layer (K)
ν = kinematic viscosity of the fluid (m²/s)
T = absolute temperature (K)
When the flow is turbulent, different correlations involving the Rayleigh number (the product of the Grashof and Prandtl numbers) must be used.
Note that the above equation differs from the usual expression for the Grashof number because the thermal expansion coefficient β has been replaced by its approximation 1/T, which applies for ideal gases only (a reasonable approximation for air at ambient pressure).
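A minimal sketch applying the laminar vertical-plate correlation above, with the ideal-gas form of the Grashof number; the air properties, plate height and temperatures are assumptions, and the use of the film temperature for T is also an assumption:

# Laminar natural convection from a vertical plate: Nu_m = 0.478 * Gr**0.25,
# with Gr = g*L^3*(ts - tinf) / (nu^2 * T) for an ideal gas.
g    = 9.81        # m/s^2
L    = 0.3         # m, height of the plate (assumed)
ts   = 330.0       # K, wall temperature (assumed)
tinf = 300.0       # K, ambient fluid temperature (assumed)
T    = 0.5 * (ts + tinf)   # K, film temperature used for the 1/T approximation (assumption)
nu   = 1.6e-5      # m^2/s, kinematic viscosity of air (assumed)
k    = 0.027       # W/(m*K), thermal conductivity of air (assumed)

Gr = g * L**3 * (ts - tinf) / (nu**2 * T)
Nu_m = 0.478 * Gr**0.25
h_m = Nu_m * k / L

print(f"Gr = {Gr:.3g}, Nu_m = {Nu_m:.1f}, h_m = {h_m:.2f} W/(m^2*K)")
# The correlation applies only while the boundary layer stays laminar; at
# higher Gr*Pr a turbulent correlation based on the Rayleigh number is needed.

With the assumed values Gr is of order 10^8 and the mean heat transfer coefficient is a few W/(m²·K), typical of natural convection in air.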
Pattern formation
Convection, especially Rayleigh–Bénard convection, where the convecting fluid is contained by two rigid horizontal plates, is a convenient example of a pattern-forming system.
When heat is fed into the system from one direction (usually below), at small heat fluxes it merely diffuses (conducts) from below upward, without causing fluid flow. As the heat flow is increased, above a critical value of the Rayleigh number, the system undergoes a bifurcation from the stable conducting state to the convecting state, where bulk motion of the fluid due to heat begins. If fluid parameters other than density do not depend significantly on temperature, the flow profile is symmetric, with the same volume of fluid rising as falling. This is known as Boussinesq convection.
As the temperature difference between the top and bottom of the fluid becomes higher, significant differences in fluid parameters other than density may develop in the fluid due to temperature. An example of such a parameter is viscosity, which may begin to vary significantly across horizontal layers of fluid. This breaks the symmetry of the system, and generally changes the pattern of up- and down-moving fluid from stripes to hexagons. Such hexagons are one example of a convection cell.
As the Rayleigh number is increased even further above the value where convection cells first appear, the system may undergo other bifurcations, and other more complex patterns, such as spirals, may begin to appear.
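A minimal sketch of the onset criterion for Rayleigh–Bénard convection between two rigid horizontal plates, using the classical critical value Ra_c ≈ 1708; the fluid properties and layer depth are illustrative assumptions:

# Onset of Rayleigh-Benard convection: compare Ra to the critical value ~1708.
g     = 9.81        # m/s^2
beta  = 2.1e-4      # 1/K, thermal expansion coefficient of water (assumed)
nu    = 1.0e-6      # m^2/s, kinematic viscosity of water (assumed)
alpha = 1.4e-7      # m^2/s, thermal diffusivity of water (assumed)
d     = 0.005       # m, depth of the fluid layer (assumed)
Ra_c  = 1708.0      # critical Rayleigh number for rigid-rigid boundaries

def rayleigh(dT):
    return g * beta * dT * d**3 / (nu * alpha)

for dT in (0.1, 0.5, 1.0, 5.0):
    Ra = rayleigh(dT)
    state = "convecting" if Ra > Ra_c else "conducting"
    print(f"dT = {dT:4.1f} K  ->  Ra = {Ra:9.0f}  ({state})")

For this assumed 5 mm water layer the bifurcation from conduction to convection occurs at a temperature difference of roughly 1 K, illustrating how sharply the critical Rayleigh number separates the two states.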
| Physical sciences | Fluid mechanics | null |
47527 | https://en.wikipedia.org/wiki/Cryosphere | Cryosphere | The cryosphere is an umbrella term for those portions of Earth's surface where water is in solid form. This includes sea ice, ice on lakes or rivers, snow, glaciers, ice caps, ice sheets, and frozen ground (which includes permafrost). Thus, there is an overlap with the hydrosphere. The cryosphere is an integral part of the global climate system and has important feedbacks on it. These feedbacks come from the cryosphere's influence on surface energy and moisture fluxes, clouds, the water cycle, and atmospheric and oceanic circulation.
Through these feedback processes, the cryosphere plays a significant role in the global climate and in climate model response to global changes. Approximately 10% of the Earth's surface is covered by ice, but this is rapidly decreasing. Current reductions in the cryosphere (caused by climate change) are measurable in ice sheet melt, glacier decline, sea ice decline, permafrost thaw and snow cover decrease.
Definition and terminology
The cryosphere describes those portions of Earth's surface where water is in solid form. Frozen water is found on the Earth's surface primarily as snow cover, freshwater ice in lakes and rivers, sea ice, glaciers, ice sheets, and frozen ground and permafrost (permanently frozen ground).
The cryosphere is one of five components of the climate system. The others are the atmosphere, the hydrosphere, the lithosphere and the biosphere.
The term cryosphere comes from the Greek word kryos, meaning cold, frost or ice and the Greek word sphaira, meaning globe or ball.
Cryospheric sciences is an umbrella term for the study of the cryosphere. As an interdisciplinary Earth science, many disciplines contribute to it, most notably geology, hydrology, and meteorology and climatology; in this sense, it is comparable to glaciology.
The term deglaciation describes the retreat of cryospheric features.
Properties and interactions
There are several fundamental physical properties of snow and ice that modulate energy exchanges between the surface and the atmosphere. The most important properties are the surface reflectance (albedo), the ability to transfer heat (thermal diffusivity), and the ability to change state (latent heat). These physical properties, together with surface roughness, emissivity, and dielectric characteristics, have important implications for observing snow and ice from space. For example, surface roughness is often the dominant factor determining the strength of radar backscatter. Physical properties such as crystal structure, density, length, and liquid water content are important factors affecting the transfers of heat and water and the scattering of microwave energy.
Residence time and extent
The residence time of water in each of the cryospheric sub-systems varies widely. Snow cover and freshwater ice are essentially seasonal, and most sea ice, except for ice in the central Arctic, lasts only a few years if it is not seasonal. A given water particle in glaciers, ice sheets, or ground ice, however, may remain frozen for 10–100,000 years or longer, and deep ice in parts of East Antarctica may have an age approaching 1 million years.
Most of the world's ice volume is in Antarctica, principally in the East Antarctic Ice Sheet. In terms of areal extent, however, Northern Hemisphere winter snow and ice extent comprises the largest area, amounting to an average 23% of hemispheric surface area in January. The large areal extent and the important climatic roles of snow and ice are related to their unique physical properties. This also indicates that the ability to observe and model snow and ice-cover extent, thickness, and physical properties (radiative and thermal properties) is of particular significance for climate research.
Surface reflectance
The surface reflectance of incoming solar radiation is important for the surface energy balance (SEB). It is the ratio of reflected to incident solar radiation, commonly referred to as albedo. Climatologists are primarily interested in albedo integrated over the shortwave portion of the electromagnetic spectrum (~300 to 3500 nm), which coincides with the main solar energy input. Typically, albedo values for non-melting snow-covered surfaces are high (~80–90%) except in the case of forests.
The higher albedos for snow and ice cause rapid shifts in surface reflectivity in autumn and spring in high latitudes, but the overall climatic significance of this increase is spatially and temporally modulated by cloud cover. (Planetary albedo is determined principally by cloud cover, and by the small amount of total solar radiation received in high latitudes during winter months.) Summer and autumn are times of high-average cloudiness over the Arctic Ocean so the albedo feedback associated with the large seasonal changes in sea-ice extent is greatly reduced. It was found that snow cover exhibited the greatest influence on Earth's radiative balance in the spring (April to May) period when incoming solar radiation was greatest over snow-covered areas.
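A simple arithmetic sketch of how albedo controls the shortwave energy a surface absorbs, absorbed = (1 − albedo) × incoming; the albedo values and the incoming flux below are illustrative assumptions within the ranges discussed above:

# Absorbed shortwave radiation for surfaces of different albedo.
incoming_sw = 250.0   # W/m^2, incident shortwave flux (assumed spring value)

surfaces = {
    "fresh snow":   0.85,
    "melting snow": 0.60,
    "bare tundra":  0.20,
    "open ocean":   0.07,
}

for name, albedo in surfaces.items():
    absorbed = (1.0 - albedo) * incoming_sw
    print(f"{name:13s} albedo {albedo:.2f} -> absorbs {absorbed:6.1f} W/m^2")
# Replacing snow or ice with a darker surface multiplies the absorbed energy,
# which is the essence of the snow/ice-albedo feedback.

The jump in absorbed energy when a bright surface gives way to a dark one is what makes the spring transition, when insolation is high, the period of greatest radiative influence.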
Thermal properties of cryospheric elements
The thermal properties of cryospheric elements also have important climatic consequences. Snow and ice have much lower thermal diffusivities than air. Thermal diffusivity is a measure of the speed at which temperature waves can penetrate a substance. Snow and ice are many orders of magnitude less efficient at diffusing heat than air. Snow cover insulates the ground surface, and sea ice insulates the underlying ocean, decoupling the surface-atmosphere interface with respect to both heat and moisture fluxes. The flux of moisture from a water surface is eliminated by even a thin skin of ice, whereas the flux of heat through thin ice continues to be substantial until it attains a thickness in excess of 30 to 40 cm. However, even a small amount of snow on top of the ice will dramatically reduce the heat flux and slow down the rate of ice growth. The insulating effect of snow also has major implications for the hydrological cycle. In non-permafrost regions, the insulating effect of snow is such that only near-surface ground freezes and deep-water drainage is uninterrupted.
While snow and ice act to insulate the surface from large energy losses in winter, they also act to retard warming in the spring and summer because of the large amount of energy required to melt ice (the latent heat of fusion, 3.34 × 10⁵ J/kg at 0 °C). However, the strong static stability of the atmosphere over areas of extensive snow or ice tends to confine the immediate cooling effect to a relatively shallow layer, so that associated atmospheric anomalies are usually short-lived and local to regional in scale. In some areas of the world such as Eurasia, however, the cooling associated with a heavy snowpack and moist spring soils is known to play a role in modulating the summer monsoon circulation.
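A worked sketch of why melt retards spring warming: the energy needed to melt a snowpack per square metre, and how long that takes at a given net energy input. The snowpack depth, density and net flux are illustrative assumptions; only the latent heat of fusion is taken from the text above:

# Energy budget of snowmelt using the latent heat of fusion.
L_f = 3.34e5          # J/kg, latent heat of fusion of ice at 0 degC
snow_depth = 0.5      # m, snowpack depth (assumed)
snow_density = 300.0  # kg/m^3, settled snow density (assumed)
net_flux = 150.0      # W/m^2, net energy available for melt (assumed)

mass_per_m2 = snow_depth * snow_density        # kg/m^2
energy_per_m2 = mass_per_m2 * L_f              # J/m^2
melt_time_days = energy_per_m2 / net_flux / 86400.0

print(f"Energy to melt the snowpack: {energy_per_m2/1e6:.1f} MJ/m^2")
print(f"Time to melt at {net_flux:.0f} W/m^2: {melt_time_days:.1f} days")

With these assumed values, melting a half-metre snowpack consumes roughly 50 MJ per square metre and ties up several days of net energy input, energy that would otherwise go into warming the surface.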
Climate change feedback mechanisms
There are numerous cryosphere-climate feedbacks in the global climate system. These operate over a wide range of spatial and temporal scales from local seasonal cooling of air temperatures to hemispheric-scale variations in ice sheets over time scales of thousands of years. The feedback mechanisms involved are often complex and incompletely understood. For example, Curry et al. (1995) showed that the so-called "simple" sea ice-albedo feedback involved complex interactions with lead fraction, melt ponds, ice thickness, snow cover, and sea-ice extent.
The role of snow cover in modulating the monsoon is just one example of a short-term cryosphere-climate feedback involving the land surface and the atmosphere.
Components
Glaciers and ice sheets
Ice sheets and glaciers are flowing ice masses that rest on solid land. They are controlled by snow accumulation, surface and basal melt, calving into surrounding oceans or lakes, and internal dynamics. The latter results from gravity-driven creep flow ("glacial flow") within the ice body and sliding on the underlying land, which leads to thinning and horizontal spreading. Any imbalance of this dynamic equilibrium between mass gain, loss and transport due to flow results in either growing or shrinking ice bodies. Relationships between global climate and changes in ice extent are complex. The mass balance of land-based glaciers and ice sheets is determined by the accumulation of snow, mostly in winter, and warm-season ablation due primarily to net radiation and turbulent heat fluxes to melting ice and snow from warm-air advection. Where ice masses terminate in the ocean, iceberg calving is the major contributor to mass loss. In this situation, the ice margin may extend out into deep water as a floating ice shelf, such as that in the Ross Sea.
Sea ice
Sea ice covers much of the polar oceans and forms by freezing of sea water. Satellite data since the early 1970s reveal considerable seasonal, regional, and interannual variability in the sea ice covers of both hemispheres. Seasonally, sea-ice extent in the Southern Hemisphere varies by a factor of 5, from a minimum of 3–4 million km2 in February to a maximum of 17–20 million km2 in September. The seasonal variation is much less in the Northern Hemisphere where the confined nature and high latitudes of the Arctic Ocean result in a much larger perennial ice cover, and the surrounding land limits the equatorward extent of wintertime ice. Thus, the seasonal variability in Northern Hemisphere ice extent varies by only a factor of 2, from a minimum of 7–9 million km2 in September to a maximum of 14–16 million km2 in March.
The ice cover exhibits much greater regional-scale interannual variability than it does at the hemispheric scale. For instance, in the region of the Sea of Okhotsk and Japan, maximum ice extent decreased from 1.3 million km2 in 1983 to 0.85 million km2 in 1984, a decrease of 35%, before rebounding the following year to 1.2 million km2. The regional fluctuations in both hemispheres are such that for any several-year period of the satellite record some regions exhibit decreasing ice coverage while others exhibit increasing ice cover.
Frozen ground and permafrost
Snow cover
Most of the Earth's snow-covered area is located in the Northern Hemisphere, and varies seasonally from 46.5 million km2 in January to 3.8 million km2 in August.
Snow cover is an extremely important storage component in the water balance, especially seasonal snowpacks in mountainous areas of the world. Though limited in extent, seasonal snowpacks in the Earth's mountain ranges account for the major source of the runoff for stream flow and groundwater recharge over wide areas of the midlatitudes. For example, over 85% of the annual runoff from the Colorado River basin originates as snowmelt. Snowmelt runoff from the Earth's mountains fills the rivers and recharges the aquifers that over a billion people depend on for their water resources.
Furthermore, over 40% of the world's protected areas are in mountains, attesting to their value both as unique ecosystems needing protection and as recreation areas for humans.
Ice on lakes and rivers
Ice forms on rivers and lakes in response to seasonal cooling. The sizes of the ice bodies involved are too small to exert anything other than localized climatic effects. However, the freeze-up/break-up processes respond to large-scale and local weather factors, such that considerable interannual variability exists in the dates of appearance and disappearance of the ice. Long series of lake-ice observations can serve as a proxy climate record, and the monitoring of freeze-up and break-up trends may provide a convenient integrated and seasonally-specific index of climatic perturbations. Information on river-ice conditions is less useful as a climatic proxy because ice formation is strongly dependent on river-flow regime, which is affected by precipitation, snow melt, and watershed runoff as well as being subject to human interference that directly modifies channel flow, or that indirectly affects the runoff via land-use practices.
Lake freeze-up depends on the heat storage in the lake and therefore on its depth, the rate and temperature of any inflow, and water-air energy fluxes. Information on lake depth is often unavailable, although some indication of the depth of shallow lakes in the Arctic can be obtained from airborne radar imagery during late winter (Sellman et al. 1975) and spaceborne optical imagery during summer (Duguay and Lafleur 1997). The timing of breakup is modified by snow depth on the ice as well as by ice thickness and freshwater inflow.
Changes caused by climate change
Ice sheet melt
Decline of glaciers
Sea ice decline
Permafrost thaw
Snow cover decrease
Studies in 2021 found that Northern Hemisphere snow cover has been decreasing since 1978, along with snow depth. Paleoclimate observations show that such changes are unprecedented over recent millennia in Western North America.
North American winter snow cover increased during the 20th century, largely in response to an increase in precipitation.
Because of its close relationship with hemispheric air temperature, snow cover is an important indicator of climate change.
Global warming is expected to result in major changes to the partitioning of snow and rainfall, and to the timing of snowmelt, which will have important implications for water use and management. These changes also involve potentially important decadal and longer time-scale feedbacks to the climate system through temporal and spatial changes in soil moisture and runoff to the oceans (Walsh 1995). Freshwater fluxes from the snow cover into the marine environment may be important, as the total flux is probably of the same magnitude as that from desalinated ridging and rubble areas of sea ice. In addition, there is an associated pulse of precipitated pollutants, which accumulate over the Arctic winter in snowfall and are released into the ocean upon ablation of the sea ice.
| Physical sciences | Water: General | null |
47530 | https://en.wikipedia.org/wiki/Cumulonimbus%20cloud | Cumulonimbus cloud | Cumulonimbus () is a dense, towering, vertical cloud, typically forming from water vapor condensing in the lower troposphere that builds upward carried by powerful buoyant air currents. Above the lower portions of the cumulonimbus the water vapor becomes ice crystals, such as snow and graupel, the interaction of which can lead to hail and to lightning formation, respectively.
When causing thunderstorms, these clouds may be called thunderheads. Cumulonimbus can form alone, in clusters, or along squall lines. These clouds are capable of producing lightning and other dangerous severe weather, such as tornadoes, hazardous winds, and large hailstones. Cumulonimbus progress from overdeveloped cumulus congestus clouds and may further develop as part of a supercell. Cumulonimbus is abbreviated as Cb.
Appearance
Towering cumulonimbus clouds are typically accompanied by smaller cumulus clouds. The cumulonimbus base may extend several kilometres (miles) across, or be as small as several tens of metres (yards) across, and occupy low to upper altitudes within the troposphere - formed at altitude from approximately . Normal peaks usually reach to as much as , with unusually high ones typically topping out around and extreme instances claimed to be as high as or more. Well-developed cumulonimbus clouds are characterized by a flat, anvil shaped top (anvil dome), caused by wind shear or inversion at the equilibrium level near the tropopause. The shelf of the anvil may precede the main cloud's vertical component for many kilometres (miles), and be accompanied by lightning. Occasionally, rising air parcels surpass the equilibrium level (due to momentum) and form an overshooting top culminating at the maximum parcel level. When vertically developed, this largest of all clouds usually extends through all three cloud regions. Even the smallest cumulonimbus cloud dwarfs its neighbors in comparison.
Species
Cumulonimbus calvus: cloud with puffy top, similar to cumulus congestus which it develops from; under the correct conditions it can become a cumulonimbus capillatus.
Cumulonimbus capillatus: cloud with cirrus-like, fibrous-edged top.
Types
Cumulonimbus flammagenitus (pyrocumulonimbus): rapidly growing cloud forming from non-atmospheric heat and condensation nuclei sources such as wildfires and volcanic eruptions.
Supplementary features
Accessory clouds
Arcus (including roll and shelf clouds): low, horizontal cloud formation associated with the leading edge of thunderstorm outflow.
Pannus: accompanied by a lower layer of fractus species cloud forming in precipitation.
Pileus (species calvus only): small cap-like cloud over parent cumulonimbus.
Velum: a thin horizontal sheet that forms around the middle of a cumulonimbus.
Supplementary features
Incus (species capillatus only): cumulonimbus with flat anvil-like cirriform top caused by wind shear where the rising air currents hit the inversion layer at the tropopause.
Mamma or mammatus: consisting of bubble-like protrusions on the underside.
Tuba: column hanging from the cloud base which can develop into a funnel cloud or tornado. They are known to drop very low, sometimes just above ground level.
Flanking line is a line of small cumulonimbus or cumulus generally associated with severe thunderstorms.
An overshooting top is a dome that rises above the thunderstorm; it is associated with severe weather.
Precipitation-based supplementary features
Rain: precipitation that reaches the ground as liquid, often in a precipitation shaft.
Virga: precipitation that evaporates before reaching the ground.
Effects
Cumulonimbus storm cells can produce torrential rain of a convective nature (often in the form of a rain shaft) and flash flooding, as well as straight-line winds. Most storm cells die after about 20 minutes, when the precipitation causes more downdraft than updraft, causing the energy to dissipate. If there is sufficient instability and moisture in the atmosphere, however (on a hot summer day, for example), the outflowing moisture and gusts from one storm cell can lead to new cells forming just a few kilometres (miles) from the former one a few tens of minutes later or in some cases hundreds of kilometres (miles) away many hours later. This process causes thunderstorm formation (and decay) to last for several hours or even over multiple days. Cumulonimbus clouds can also occur as dangerous winter storms called "thundersnow", which are associated with particularly intense snowfall rates and with blizzard conditions when accompanied by strong winds that further reduce visibility. However, cumulonimbus clouds are most common in tropical regions and are also frequent in moist environments during the warm season in the middle latitudes. A dust storm caused by a cumulonimbus downburst is a haboob.
Hazards to aviation
Cumulonimbus are a notable hazard to aviation, due most importantly to powerful wind currents, but also to reduced visibility and lightning, as well as to icing and hail when flying inside the cloud. Within thunderstorms and in their vicinity there is significant turbulence and clear-air turbulence (particularly downwind), respectively. Wind shear within and under a cumulonimbus is often intense, with downbursts being responsible for many accidents in earlier decades before training and technological detection and nowcasting measures were implemented. A small form of downburst, the microburst, is the most often implicated in crashes because of its rapid onset and the swift changes in wind and aerodynamic conditions it produces over short distances. Most downbursts are associated with visible precipitation shafts; dry microbursts, however, are generally invisible to the naked eye. At least one fatal commercial airline accident was associated with flying through a tornado.
Life cycle or stages
In general, cumulonimbus require moisture, an unstable air mass, and a lifting force in order to form. Cumulonimbus typically go through three stages: the developing stage, the mature stage (where the main cloud may reach supercell status in favorable conditions), and the dissipation stage. The average thunderstorm has a diameter and a height of approximately . Depending on the conditions present in the atmosphere, these three stages take an average of 30 minutes to go through.
Cloud types
Clouds form when the dew point temperature of water is reached in the presence of condensation nuclei in the troposphere. The atmosphere is a dynamic system, and the local conditions of turbulence, uplift, and other parameters give rise to many types of clouds. Various types of cloud occur frequently enough to have been categorized. Furthermore, some atmospheric processes can make the clouds organize in distinct patterns such as wave clouds or actinoform clouds. These are large-scale structures and are not always readily identifiable from a single point of view.
| Physical sciences | Clouds | null |