Zodiacal light
https://en.wikipedia.org/wiki/Zodiacal%20light

The zodiacal light (also called false dawn when seen before sunrise) is a faint glow of diffuse sunlight scattered by interplanetary dust. Brighter around the Sun, it appears in a particularly dark night sky to extend from the Sun's direction in a roughly triangular shape along the zodiac, and appears with less intensity and visibility along the whole ecliptic as the zodiacal band. Zodiacal light spans the entire sky and contributes to the natural light of a clear and moonless night sky. A related phenomenon is gegenschein (or counterglow), sunlight backscattered from the interplanetary dust, which appears directly opposite to the Sun as a faint but slightly brighter oval glow.
Zodiacal light contributes to the natural light of the sky, though since zodiacal light is very faint, it is often outshone and rendered invisible by moonlight or light pollution.
The interplanetary dust in the Solar System forms a thick, pancake-shaped cloud called the zodiacal cloud which straddles the ecliptic plane. The particle sizes range from 10 to 300 micrometres, implying masses from one nanogram to tens of micrograms.
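The quoted size range maps onto the quoted mass range through simple sphere geometry. The sketch below assumes a typical silicate grain density of 2,500 kg/m^3 (the density value is an assumption for illustration, not a figure from the text):

```python
import math

RHO = 2500.0  # assumed grain density in kg/m^3 (typical silicate; an assumption)

def grain_mass_kg(diameter_m: float) -> float:
    """Mass of a spherical dust grain of the given diameter."""
    r = diameter_m / 2
    return RHO * (4 / 3) * math.pi * r**3

# The 10-300 micrometre size range quoted in the text:
print(f"10 µm grain  ≈ {grain_mass_kg(10e-6) * 1e12:.1f} ng")   # ~1 nanogram
print(f"300 µm grain ≈ {grain_mass_kg(300e-6) * 1e9:.0f} µg")   # tens of micrograms
```

The two endpoints reproduce the "one nanogram to tens of micrograms" range stated above.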
The Pioneer 10 and Helios spacecraft observations in the 1970s revealed zodiacal light to be scattered by the interplanetary dust cloud in the Solar System.
Analysis of images of impact debris from the Juno spacecraft shows that the distribution of the dust extends from Earth's orbit out to the 4:1 orbital resonance with Jupiter, and suggests that the dust comes from Mars. However, no dedicated dust instrument on Pioneer 10, Pioneer 11, Galileo, Ulysses, or Cassini found any indication that Mars is a significant source of dust besides comets and asteroids.
Viewing
In the mid-latitudes, the zodiacal light is best observed in the western sky in spring after the evening twilight has completely disappeared, or in the eastern sky in autumn just before the morning twilight appears. It appears as a column, brighter toward the horizon and tilted at the angle of the ecliptic. Although the zodiacal light actually extends all the way around the sky, light scattered by extremely small dust particles is strongly forward-scattered, so the glow is brightest when observed at a small angle from the Sun. This is why it is most clearly visible near sunrise or sunset, when the Sun itself is blocked below the horizon but the dust particles nearest the line of sight to the Sun are not. The dust band that causes the zodiacal light is uniform across the whole ecliptic.
The dust farther from the ecliptic is almost undetectable except when viewed at a small angle from the Sun. Thus more of the band's width can be seen at small angles toward the Sun, and the glow appears wider near the horizon, where the line of sight passes closer to the Sun below the horizon.
Origin
The source of the dust has long been debated. Until recently, it was thought to originate from the tails of active comets and from collisions between asteroids in the asteroid belt; yet many meteor showers have no known active comet parent bodies. Over 85 percent of the dust is now attributed to occasional fragmentations of Jupiter-family comets that are nearly dormant. Jupiter-family comets have orbital periods of less than 20 years and are considered dormant when not actively outgassing, though they may outgas again in the future. The first fully dynamical model of the zodiacal cloud demonstrated that only if the dust is released on orbits that approach Jupiter is it stirred up enough to explain the thickness of the zodiacal dust cloud. The dust in meteoroid streams is much larger, 300 to 10,000 micrometres in diameter, and breaks apart into smaller zodiacal dust grains over time.
The Poynting–Robertson effect forces the dust into more circular (but still elongated) orbits, while spiralling slowly into the Sun. Hence a continuous source of new particles is needed to maintain the zodiacal cloud. Cometary dust and dust generated by collisions among the asteroids are believed to be mostly responsible for the maintenance of the dust cloud producing the zodiacal light and the gegenschein.
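The need for a continuous source can be made concrete with an often-quoted order-of-magnitude approximation for the Poynting-Robertson infall time, t ≈ 7×10^6 · ρ · s · R^2 years (ρ in g/cm^3, grain radius s in cm, heliocentric distance R in AU). Both the coefficient and the density used below are illustrative assumptions, not figures from the text:

```python
def pr_infall_time_years(radius_um: float, density_g_cm3: float = 2.5,
                         dist_au: float = 1.0) -> float:
    """Approximate Poynting-Robertson infall time for a dust grain.

    Uses the commonly quoted approximation t ~ 7e6 * rho * s * R^2 years
    (rho in g/cm^3, grain radius s in cm, distance R in AU). The grain
    density and the coefficient are assumptions for illustration.
    """
    s_cm = radius_um * 1e-4  # micrometres -> centimetres
    return 7.0e6 * density_g_cm3 * s_cm * dist_au**2

# A 100-micrometre-radius grain at Earth's distance survives only on the
# order of 10^5 years, far shorter than the Solar System's age, so the
# zodiacal cloud must be continuously replenished.
print(f"{pr_infall_time_years(100):.2g} years")
```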
Particles can be reduced in size by collisions or by space weathering. When ground down to sizes less than 10 micrometres, the grains are removed from the inner Solar System by solar radiation pressure. The dust is then replenished by the infall from comets. Zodiacal dust around nearby stars is called exozodiacal dust; it is a potentially important source of noise in attempts to directly image extrasolar planets. It has been pointed out that this exozodiacal dust, or hot debris disks, can be an indicator of planets, as planets tend to scatter the comets to the inner Solar System.
In 2015, new results from the secondary ion dust spectrometer COSIMA on board the ESA/Rosetta orbiter confirmed that the parent bodies of interplanetary dust are most probably Jupiter-family comets such as comet 67P/Churyumov–Gerasimenko. Data from the Juno mission indicate that the dust close to Earth has a local origin in the inner Solar System, best fitting the planet Mars as a source.
Appearance
Zodiacal light is produced by sunlight reflecting off dust particles in the Solar System known as cosmic dust. Consequently, its spectrum is the same as the solar spectrum. The material producing the zodiacal light is located in a lens-shaped volume of space centered on the Sun and extending well beyond the orbit of Earth, known as the interplanetary dust cloud. Since most of the material lies near the plane of the Solar System, the zodiacal light is seen along the ecliptic. The amount of material needed to produce the observed zodiacal light is quite small: if it were in the form of 1 mm particles, each with the same albedo (reflecting power) as the Moon, each particle would be 8 km from its neighbors. The gegenschein may be caused by particles directly opposite the Sun as seen from Earth, which would be in full phase.
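How small "quite small" is can be illustrated by turning the quoted figures (1 mm particles spaced 8 km apart) into a number density and a mass density of the cloud. The grain density used is an assumption; the text specifies only the particles' albedo:

```python
import math

SPACING_M = 8_000.0   # quoted spacing between neighbouring particles (8 km)
DIAMETER_M = 1e-3     # 1 mm particles, as in the text
RHO = 2500.0          # assumed grain density, kg/m^3 (not given in the text)

number_density = 1.0 / SPACING_M**3                          # particles per m^3
grain_mass = RHO * (4 / 3) * math.pi * (DIAMETER_M / 2)**3   # kg per particle
mass_density = number_density * grain_mass                   # kg per m^3 of space

# Roughly 2e-12 particles/m^3 and a few times 1e-18 kg/m^3:
# an extraordinarily tenuous cloud, despite its visibility.
print(f"{number_density:.2e} particles/m^3, {mass_density:.2e} kg/m^3")
```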
According to Nesvorný and Jenniskens, when the dust grains are as small as about 150 micrometres in size, they will hit the Earth at an average speed of 14.5 km/s, many as slowly as 12 km/s. If so, they pointed out, this comet dust can survive entry in partially molten form, accounting for the unusual attributes of the micrometeorites collected in Antarctica, which do not resemble the larger meteorites known to originate from asteroids. In recent years, observations by a variety of spacecraft have shown significant structure in the zodiacal light including dust bands associated with debris from particular asteroid families and several cometary trails.
Cultural significance
According to Alexander von Humboldt's Kosmos, Mesoamericans were aware of the zodiacal light before 1500. It was perhaps first reported in print by Joshua Childrey in 1661. The phenomenon was investigated by the astronomer Giovanni Domenico Cassini in 1683. According to some sources, he explained it by dust particles around the Sun. Other sources state that it was first explained this way by Nicolas Fatio de Duillier, in 1684, whom Cassini advised to study the zodiacal light.
Importance to Islam
The Islamic prophet Muhammad described zodiacal light in reference to the timing of the five daily prayers, calling it the "false dawn" ( ). Muslim oral tradition preserves numerous sayings, or hadith, in which Muhammad describes the difference between the light of false dawn, appearing in the sky long after sunset, and the light of the first band of horizontal light at sunrise, the "true dawn" ( ). According to the vast majority of Muslim scholars, astronomical dawn is considered the true dawn. Practitioners of Islam use Muhammad's descriptions of zodiacal light to avoid errors in determining the timing of fasting and daily prayers.
Brian May
In 2007, Brian May, lead guitarist with the band Queen, completed his thesis, A Survey of Radial Velocities in the Zodiacal Dust Cloud, thirty-six years after abandoning it to pursue a career in music. He was able to submit it only because of the minimal amount of research on the topic undertaken during the intervening years. May described the subject as being one that became "trendy" again in the 2000s.
Other planets
Other planets, such as Venus and Mercury, have been shown to have rings of interplanetary dust in their orbital spaces.
Air purifier
https://en.wikipedia.org/wiki/Air%20purifier

An air purifier or air cleaner is a device which removes contaminants from the air in a room to improve indoor air quality. These devices are commonly marketed as being beneficial to allergy sufferers and asthmatics, and at reducing or eliminating second-hand tobacco smoke.
The commercially graded air purifiers are manufactured as either small stand-alone units or larger units that can be affixed to an air handler unit (AHU) or to an HVAC unit found in the medical, industrial, and commercial industries. Air purifiers may also be used in industry to remove impurities from air before processing. Pressure swing adsorbers or other adsorption techniques are typically used for this.
History
In 1830, a patent was awarded to Charles Anthony Deane for a device comprising a copper helmet with an attached flexible collar and garment. A long leather hose attached to the rear of the helmet was to be used to supply air, the original concept being that it would be pumped using a double bellows. A short pipe allowed breathed air to escape. The garment was to be constructed from leather or airtight cloth, secured by straps.
In the 1860s, John Stenhouse filed two patents applying the absorbent properties of wood charcoal to air purification (patents 19 July 1860 and 21 May 1867), thereby creating the first practical respirator.
In 1871, the physicist John Tyndall wrote about his invention, a fireman's respirator, as a result of a combination of protective features of the Stenhouse's respirator and other breathing devices. This invention was later described in 1875.
In the 1950s, HEPA filters were commercialized as highly efficient air filters, after being put to use in the 1940s in the United States' Manhattan Project to control airborne radioactive contaminants.
The first residential HEPA filter was reportedly sold in 1963 by brothers Manfred and Klaus Hammes in Germany, who created the Incen Air Corporation which was the precursor to the IQAir corporation.
Use and benefits
Dust, pollen, pet dander, mold spores, and dust mite feces can act as allergens, triggering allergies in sensitive people. Smoke particles and volatile organic compounds (VOCs) can pose a risk to health. Exposure to various components such as VOCs increases the likelihood of experiencing symptoms of sick building syndrome.
COVID-19
Joseph Allen, director of the Healthy Buildings program at Harvard's School of Public Health, recommends that school classrooms use an air purifier with a HEPA filter as a way to reduce transmission of COVID-19 virus, saying "Portables with a high-efficiency HEPA filter and sized for the appropriate room can capture 99.97 percent of airborne particles."
One fluid dynamic modelling study from January 2021 suggests that operating air purifiers or air ventilation systems in confined spaces, such as an elevator, while they are occupied by multiple people leads to air circulation effects that could, theoretically, enhance viral transmission. However, real-life testing of portable HEPA/UV air filters in COVID-19 wards in hospitals demonstrated complete elimination of airborne SARS-CoV-2. This report also showed a significant reduction in other bacterial, fungal and viral bioaerosols, suggesting that such portable filters may be able to prevent not only nosocomial spread of COVID-19 but also other hospital-acquired infections. The Addenbrooke's Air Disinfection Study (AAirDS) undertook a quasi-experimental comparison of paired wards with and without air purifying devices. The researchers found an association between the deployment of air purifying devices and reduced nosocomial transmission of SARS-CoV-2, but the size of the effect and the uncertainty around it were high. Acceptability of the devices in the hospital environment was imperfect, and as other restrictions such as masking and room-occupancy limits were relaxed, compliance with the air purifying devices also declined.
Purifying techniques
There are two types of air purifying technologies, active and passive. Active air purifiers release negatively charged ions into the air, causing pollutants to stick to surfaces, while passive air purification units use air filters to remove pollutants. Passive purifiers are more efficient since all the dust and particulate matter is permanently removed from the air and collected in the filters.
Several different processes of varying effectiveness can be used to purify air. As of 2005, the most common methods were high-efficiency particulate air (HEPA) filters and ultraviolet germicidal irradiation (UVGI).
Filtration
Air filter purification traps airborne particles by size exclusion. Air is forced through a filter and particles are physically captured by the filter. Notable filter types include:
High-efficiency particulate arrestance (HEPA) filters remove at least 99.97% of 0.3-micrometer particles and are usually more effective at removing larger and smaller particles. HEPA purifiers, which filter all the air going into a clean room, must be arranged so that no air bypasses the HEPA filter. In dusty environments, a HEPA filter may follow an easily cleaned conventional filter (prefilter) which removes coarser impurities so that the HEPA filter needs cleaning or replacing less frequently. HEPA filters do not generate ozone or harmful byproducts in the course of operation.
HVAC filters rated MERV 14 or above can remove airborne particles of 0.3 micrometres or larger. A high-efficiency MERV 14 filter has a capture rate of at least 75% for particles between 0.3 and 1.0 micrometres. Although the capture rate of a MERV filter is lower than that of a HEPA filter, a central air system can move significantly more air in the same period of time, so using a high-grade MERV filter can be more effective than using a high-powered HEPA machine, at a fraction of the initial capital expenditure. Unfortunately, most furnace filters are slid into place without an airtight seal, which allows air to pass around them. This problem is worse for higher-efficiency MERV filters because of their increased air resistance: they are usually denser, require a greater air pressure drop across the central system, and consequently increase energy costs.
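The effect of air bypassing an unsealed filter can be sketched with a simple single-pass model in which effective efficiency = (1 − bypass fraction) × filter efficiency. Both the model and the 10% leakage figure below are illustrative assumptions:

```python
def effective_efficiency(filter_eff: float, bypass_frac: float) -> float:
    """Single-pass capture efficiency when a fraction of air bypasses the filter.

    Simplified model: air that leaks around the filter is assumed to be
    entirely unfiltered (an assumption; real leakage paths are messier).
    """
    return (1.0 - bypass_frac) * filter_eff

# A well-sealed HEPA filter vs. the same media with 10% of the air leaking
# around it: the leak, not the media grade, dominates overall performance.
print(effective_efficiency(0.9997, 0.00))  # sealed HEPA: 0.9997
print(effective_efficiency(0.9997, 0.10))  # leaky HEPA: ~0.90
print(effective_efficiency(0.75, 0.00))    # sealed MERV 14: 0.75
```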
There is ongoing research to enable viable and effective biocide treated air filters (i.e. air filters coated with antimicrobial agents) for preventing the spread of airborne pathogens.
Other methods
Ultraviolet germicidal irradiation - UVGI can be used to sterilize air that passes UV lamps via forced air. Air purification UVGI systems can be freestanding units with shielded UV lamps that use a fan to force air past the UV light. Other systems are installed in forced air systems so that the circulation for the premises moves micro-organisms past the lamps. Key to this form of sterilization is the placement of the UV lamps and a good filtration system to remove the dead micro-organisms. For example, forced air systems by design impede line-of-sight, thus creating areas of the environment that will be shaded from the UV light. However, a UV lamp placed at the coils and drain pan of the cooling system will keep micro-organisms from forming in these naturally damp places. The most effective method for treating the air, rather than the coils, is an in-line duct system; such systems are placed in the center of the duct, parallel to the airflow.
Activated carbon is a porous material that can adsorb volatile chemicals on a molecular basis, but does not remove larger particles. The adsorption process must reach equilibrium, so it may be difficult to completely remove contaminants. Activated carbon merely transfers contaminants from the gaseous phase to a solid phase; if the carbon is agitated or disturbed, contaminants can be re-released into indoor air. Activated carbon can be used at room temperature and has a long history of commercial use. It is normally used in conjunction with other filter technology, especially HEPA. Other materials can also adsorb chemicals, but at a higher cost.
Polarized-media electronic air cleaners use active electronically enhanced media to combine elements of both electronic air cleaners and passive mechanical filters. Most polarized-media electronic air cleaners use safe 24-volt DC voltage to establish the polarizing electric field. Most airborne particles have a charge and many are even bi-polar. As airborne particles pass through the electric field the polarized field re-orients the particle to adhere to a disposable fiber media pad. Ultra-fine particles (UFPs) that are not collected on their initial pass through the media pad are polarized and agglomerate to other particles, odor and VOC molecules and are collected on subsequent passes. The collection efficiency varies significantly by particle size. The efficiency of polarized-media electronic air cleaners increases as they load, providing high-efficiency filtration, with air resistance typically equal to or less than passive filters. Polarized-media technology is non-ionizing, which means no ozone is produced.
Photocatalytic oxidation (PCO) is an emerging technology in the HVAC industry. In addition to prospective indoor air quality (IAQ) benefits, it has the added potential for limiting the introduction of unconditioned air to the building space, presenting an opportunity to achieve energy savings over previous prescriptive designs. However, Lawrence Berkeley National Laboratory data have raised the concern that PCO may significantly increase the amount of formaldehyde in real indoor environments. As with other advanced technologies, sound engineering principles and practices should be employed by the HVAC designer to ensure proper application of the technology. Photocatalytic oxidation systems are able to completely oxidize and degrade organic contaminants; volatile organic compounds present at low concentrations, a few hundred ppmv or less, are the most likely to be completely oxidized. PCO uses short-wave ultraviolet light (UVC), commonly used for sterilization, to energize the catalyst (usually titanium dioxide, TiO2) and oxidize bacteria and viruses. PCO in-duct units can be mounted to an existing forced-air HVAC system. PCO is not a filtering technology, as it does not trap or remove particles. Like polarized-media electronic air cleaners, the effectiveness of PCO approaches is highly dependent on particle size, and system geometries must be tailored accordingly. PCO systems are sometimes coupled with other filtering technologies for air purification. UV sterilization bulbs must be replaced about once a year; manufacturers may require periodic replacement as a condition of warranty. Photocatalytic oxidation systems often have high commercial costs.
A related technology relevant to air purification is photoelectrochemical oxidation (PECO). While technically a type of PCO, PECO involves electrochemical interactions between the catalyst material and reactive species (e.g., through emplacement of cathodic materials) to improve quantum efficiency; in this way, it is possible to use lower-energy UVA radiation as the light source and still achieve improved effectiveness.
Ionizer purifiers use charged electrical surfaces or needles to generate electrically charged air or gas ions. These ions attach to airborne particles which are then electrostatically attracted to a charged collector plate. This mechanism produces trace amounts of ozone and other oxidants as by-products. Most ionizers produce less than 0.05 ppm of ozone, an industrial safety standard. There are two major subdivisions: the fanless ionizer and fan-based ionizer. Fanless ionizers are noiseless and use little power, but are less efficient at air purification. Fan-based ionizers clean and distribute air much faster. Permanently mounted home and industrial ionizer purifiers are called electrostatic precipitators.
Plasma air purifiers are a form of ionizing air purifier. Instead of precipitating particles on a plate, they are primarily intended to destroy volatile organic compounds, bacteria, and viruses by chemical reactions with generated ions. While promising in laboratory conditions, their usefulness and safety has not been established in air purification.
Far-UVC air purification systems are under development.
Immobilized cell technology removes microfine particulate matter from the air by attracting charged particulates to a bio-reactive mass, or bioreactor, which enzymatically renders them inert.
Ozone generators are designed to produce ozone and are sometimes sold as whole-house air cleaners. Unlike ionizers, ozone generators are intended to produce significant amounts of ozone, a strong oxidant gas which can oxidize many other chemicals. The only safe use of ozone generators is in unoccupied rooms, utilising "shock treatment" commercial ozone generators that produce over 3000 mg of ozone per hour. Restoration contractors use these types of ozone generators to remove smoke odors after fire damage, musty smells after flooding, mold (including toxic molds), and the stench caused by decaying flesh which cannot be removed by bleach or anything else except for ozone. However, it is not healthy to breathe ozone gas, and one should use extreme caution when buying a room air purifier that also produces ozone.
Titanium dioxide (TiO2) technology - nanoparticles of TiO2, together with calcium carbonate to neutralize any acidic gasses that may be adsorbed, is mixed into slightly porous paint. Photocatalysis initiates the decomposition of airborne contaminants at the surface.
Thermodynamic sterilization (TSS) - This technology uses heat sterilization via a ceramic core with microcapillaries, which are heated to . It is claimed that 99.9% of microbiological particles - bacteria, viruses, dust mite allergens, mold and fungus spores - are incinerated. The air passes through the ceramic core by the natural process of air convection, and is then cooled using heat transfer plates and released. TSS is not a filtering technology, as it does not trap or remove particles. TSS is claimed not to emit harmful by-products (although the byproducts of partial thermal decomposition are not addressed) and also reduces the concentration of ozone in the atmosphere.
Reactive oxygen species (ROS) technology, also known as an "ROS purifier" - there are seven airborne ROS, some short-lived and some long-lived. The five short-lived species are the hydroxyl radical, singlet oxygen, superoxide, atomic oxygen, and peroxynitrite; the two long-lived species are gas-phase hydrogen peroxide and ozone. Owing to the long-lived gas-phase hydrogen peroxide combined with low levels of ozone (30-50 ppb), the technology is claimed to be effective at killing pathogens, including mold, bacteria and viruses, in the air and on surfaces, and at controlling odors. Unlike ozone generators, whose high ozone output ("shock treatment") makes them usable only in empty rooms, ROS purifiers are claimed to operate safely around the clock with people present, provided ozone is kept at 30-50 ppb. The gas-phase hydrogen peroxide and low-level ozone are also claimed to give effective long-distance surface treatment, unlike titanium dioxide, which produces only two ROS (hydroxyl radicals and superoxide) that act over very short distances on surfaces.
Consumer concerns
Other concerns with some air cleaners include hazardous gaseous by-products from ozone-generating units, noise level, frequency of filter replacement, electrical consumption, and visual appeal. Ozone production is typical of air-ionizing purifiers. A high concentration of ozone is dangerous; most air ionizers produce only low amounts, though low ozone output also corresponds to reduced effectiveness, and a build-up can cause detrimental health effects, especially for vulnerable people. The noise level of a purifier can often be obtained through a customer service department and is usually reported in decibels (dB). Noise levels vary between purifiers and may depend on fan speed. Frequency of filter replacement and electrical consumption are the major operating costs for any purifier. There are many types of filters; some can be cleaned with water, by hand or by vacuum cleaner, while others need to be replaced every few months or years. Sometimes suitable filters are sold only by the manufacturer at a high cost, and some have DRM controls so that only replacement filters authorised by the manufacturer can be used. In the United States, some purifiers are certified as Energy Star and are energy efficient.
HEPA technology is used in portable air purifiers as it removes common airborne allergens. The US Department of Energy has requirements that manufacturers must pass to meet the HEPA standard. The HEPA specification requires removal of at least 99.97% of 0.3-micrometre airborne particles. Products that claim to be "HEPA-type", "HEPA-like", or "99% HEPA" do not satisfy these requirements and may not have been tested in independent laboratories.
Air purifiers may be rated on a variety of factors, including Clean Air Delivery Rate (which determines how well air has been purified); efficient area coverage; air changes per hour; energy usage; and the cost of the replacement filters. Two other important factors to consider are the length that the filters are expected to last (measured in months or years) and the noise produced (measured in decibels) by the various settings that the purifier runs on. This information is available from most manufacturers.
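The relationship between Clean Air Delivery Rate (CADR), room size, and air changes per hour can be sketched with a standard well-mixed-room decay model. The CADR and room volume below are hypothetical values chosen for illustration:

```python
import math

def ach(cadr_m3_per_h: float, room_volume_m3: float) -> float:
    """Air changes per hour delivered by a purifier in a given room."""
    return cadr_m3_per_h / room_volume_m3

def concentration(c0: float, cadr_m3_per_h: float, room_volume_m3: float,
                  hours: float) -> float:
    """Particle concentration after `hours`, assuming a well-mixed room,
    no ongoing particle sources, and no other removal (all assumptions)."""
    return c0 * math.exp(-ach(cadr_m3_per_h, room_volume_m3) * hours)

# Hypothetical example: a unit with 300 m^3/h CADR in a 50 m^3 room
# delivers 6 air changes per hour; after one hour the concentration has
# fallen to exp(-6), about 0.25% of the starting value.
print(ach(300.0, 50.0))
print(concentration(100.0, 300.0, 50.0, 1.0))
```

In practice, natural ventilation, deposition, and continuing sources make real decay slower or faster than this idealized curve.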
Potential ozone hazards
As with other health-related appliances, there is controversy surrounding the claims of certain companies, especially involving ionic air purifiers. Many air purifiers generate some ozone, an energetic allotrope of three oxygen atoms, and in the presence of humidity, small amounts of NOx. Because of the nature of the ionization process, ionic air purifiers tend to generate the most ozone. This is a serious concern because ozone is a criteria air pollutant regulated by health-related US federal and state standards. In a controlled experiment, in many cases ozone concentrations were well in excess of public and/or industrial safety levels established by the US Environmental Protection Agency, particularly in poorly ventilated rooms.
Ozone can damage the lungs, causing chest pain, coughing, shortness of breath and throat irritation. It can also worsen chronic respiratory diseases such as asthma and compromise the ability of the body to fight respiratory infections—even in healthy people. People who have asthma and allergy are most prone to the adverse effects of high levels of ozone. For example, increasing ozone concentrations to unsafe levels can increase the risk of asthma attacks.
Due to their below-average performance and potential health risks, Consumer Reports has advised against using ozone-producing air purifiers. Some manufacturers falsely claim that outdoor and indoor ozone are different. Claims that these devices restore a hypothesized ionic balance are unsupported by science.
Ozone generators are used by cleanup contractors on unoccupied rooms to oxidize and permanently remove smoke, mold, and odor damage, and are considered a valuable and effective industrial tool. However, these machines can produce undesirable by-products.
In September 2007, the California Air Resources Board announced a ban of indoor air cleaning devices which produce ozone above a legal limit. This law, which took effect in 2010, requires testing and certification of all types of indoor air cleaning devices to verify that they do not emit excessive ozone.
Industry and markets
As of 2015, the United States residential air purifier total addressable market was estimated at $2 billion per year.
Alunite
https://en.wikipedia.org/wiki/Alunite

Alunite is a hydroxylated aluminium potassium sulfate mineral, formula KAl3(SO4)2(OH)6. It was first observed in the 15th century at Tolfa, near Rome, where it was mined for the manufacture of alum. First called aluminilite by J.C. Delamétherie in 1797, this name was contracted by François Beudant three decades later to alunite.
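From the formula KAl3(SO4)2(OH)6 one can compute the molar mass and the weight fractions of potassium and aluminium, the two metals for which alunite is mined as an ore. The atomic masses below are standard rounded values:

```python
# Standard atomic masses in g/mol, rounded
MASS = {"K": 39.098, "Al": 26.982, "S": 32.06, "O": 15.999, "H": 1.008}

# KAl3(SO4)2(OH)6 expanded to element counts: 2x(SO4) gives 8 O, (OH)6 gives 6 more
ALUNITE = {"K": 1, "Al": 3, "S": 2, "O": 14, "H": 6}

molar_mass = sum(MASS[el] * n for el, n in ALUNITE.items())
print(f"molar mass ≈ {molar_mass:.1f} g/mol")       # ~414.2 g/mol
for el in ("K", "Al"):
    frac = MASS[el] * ALUNITE[el] / molar_mass
    print(f"{el}: {100 * frac:.1f} wt%")            # K ~9.4%, Al ~19.5%
```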
Alunite crystals morphologically are rhombohedra with interfacial angles of 90° 50', causing them to resemble cubes. Crystal symmetry is trigonal. Minute glistening crystals have also been found loose in cavities in altered rhyolite. Alunite varies in color from white to yellow gray. The hardness on the Mohs scale is 4 and the specific gravity is between 2.6 and 2.8. It is insoluble in water or weak acids, but soluble in sulfuric acid.
Sodium can substitute for potassium in the mineral; when the sodium content is high, the mineral is called natroalunite.
Alunite is an analog of jarosite, where aluminium replaces Fe3+. Alunite occurs as a secondary mineral on iron sulfate ores.
Alunite occurs as veins and replacement masses in trachyte, rhyolite, and similar potassium rich volcanic rocks. It is formed by the action of sulfuric acid bearing solutions on these rocks during the oxidation and leaching of metal sulfide deposits. Alunite also is found near volcanic fumaroles. The white, finely granular masses closely resemble finely granular limestone, dolomite, anhydrite, and magnesite in appearance. The more compact kinds from Hungary are so hard and tough that they have been used for millstones.
Historically extensive deposits were mined in Tuscany and Hungary, and at Bulahdelah, New South Wales, Australia. It is currently mined at Tolfa, Italy. In the United States it is found in the San Juan district of Colorado; Goldfield, Nevada; the ghost town of Alunite, Utah near Marysvale; and Red Mountain near Patagonia, Arizona. The Arizona occurrence lies appropriately above a canyon named Alum Gulch. Alunite is mined as an ore of both potassium and aluminium at Marysvale. Some of the ore deposits were located by airborne and satellite multispectral imaging.
An article in the May/June 2019 issue of Archaeology magazine reports that an assortment of ceramic objects and jars dating back 2,000 years was found in Henan province, China. One of the jars contained a mixture of alunite and potassium nitrate, thought to be a "mixture of immortality" mentioned in ancient Chinese texts. Evidently, it did not succeed.
Hamadryas baboon
https://en.wikipedia.org/wiki/Hamadryas%20baboon

The hamadryas baboon (Papio hamadryas; gawina; Al Robah) is a species of baboon within the Old World monkey family. It is the northernmost of all the baboons, being native to the Horn of Africa and the southwestern region of the Arabian Peninsula. These regions provide habitats with the advantage for this species of fewer natural predators than central or southern Africa where other baboons reside. The hamadryas baboon was a sacred animal to the ancient Egyptians and appears in various roles in ancient Egyptian religion, hence its alternative name of 'sacred baboon'.
Description
Apart from the striking sexual dimorphism (males are nearly twice as large as females, which is common to most baboons) this species also shows differences in coloration among adults. Adult males have a pronounced cape (mane and mantle), silver-white in color, which they develop around the age of ten, while the females are capeless and brown all over. Their faces range in color from reddish to tan to a dark brown.
Males may have a body measurement of up to and weigh ; females weigh and have a body length of . The tail adds a further to the length, and ends in a small tuft. Infants are very dark brown or black in coloration and lighten after about one year. Hamadryas baboons reach sexual maturity at about four years for females and between five and seven years for males.
Behaviour and ecology
The hamadryas baboon's range extends from the Red Sea in Eritrea to Ethiopia, Djibouti and Somalia. It is also native to the Sarawat Mountains of southwestern Arabia, in both Yemen and Saudi Arabia. It is locally extinct in Egypt. The hamadryas baboon lives in arid areas, savannas, and rocky areas, requiring cliffs for sleeping and finding water. Like all baboons, the hamadryas baboon is omnivorous and is adapted to its relatively dry habitat. During the wet seasons, the baboon feeds on a variety of foods, including blossoms, seeds, grasses, wild roots, bark and leaves from acacia trees. During the dry season, the baboons eat leaves of the Dobera glabra and sisal leaves. Hamadryas baboons also eat insects, spiders, worms, scorpions, reptiles, birds, and small mammals, including antelope.
The baboon's drinking activities also depend on the season. During the wet seasons, the baboons do not have to go far to find pools of water. During the dry seasons, they frequent up to three permanent waterholes. Baboons rest at the waterholes during midafternoon and also dig drinking holes only a short distance from natural waterholes.
Group organization
The hamadryas baboon has an unusual four-level social system called a multilevel society. Most social interaction occurs within small groups called one-male units or harems containing one male and up to 10 females, which the males lead and guard. A harem often includes a younger "follower" male that may be related to the leader. Two or more harems unite repeatedly to form clans. Within clans, males are close relatives of one another and have an age-related dominance hierarchy. Bands are the next level. Two to four clans form bands of up to 400 individuals which usually travel and sleep as a group. Males rarely leave their bands, and females are occasionally transferred or traded between bands by males. Bands may fight with one another over food or territory, and the adult male leaders of the units are the usual combatants. Bands also contain solitary males that are not harem leaders or followers and move freely within the band. Several bands may come together to form a troop, usually at sleeping cliffs.
Group behavior
The hamadryas baboon is unusual among baboon and macaque species in that its society is strictly patriarchal. The males limit the movements of the females, herding them with visual threats and grabbing or biting any that wander too far away. Males sometimes raid harems for females, resulting in aggressive fights. Many males succeed in taking a female from another's harem, in what is called a "takeover". These fights are usually accompanied by visual threats, including a quick flashing of the eyelids and a yawn that shows off the teeth. As in many species, infant baboons are taken by the males as hostages during fights. However, males within the same clan tend to be related and respect the social bonds of their kin. In addition, females demonstrate definite preferences for certain males, and rival males heed these preferences. The less a female favors her harem male, the more likely she is to be successfully taken by a rival. Young males, often "follower" males, may start their own harems by maneuvering immature females into following them. The male may also abduct a young female by force. Either way, the male will mate with the female when she matures. Aging males often lose their females to followers; they soon lose weight, and their hair color changes to brown like a female's. While males in most other baboon species are transferred away from their male relatives and into different troops, male hamadryas baboons remain in their natal clans or bands and have associations with their male kin.
Hamadryas baboons have traditionally been thought of as having a female-transfer society, with females being moved away from their relatives of the same sex. However, later studies show female baboons retain close associations with at least some female kin. Females can spend about as much time with other females as they do with the harem males, and some females will even interact with each other outside of their harems. In addition, it is not uncommon for females of the same natal group to end up in the same harem. Females can still associate with and help their extended families despite their interactions being controlled by the harem males.
Females within a harem do not display any dominance relationships as seen in many other baboon and macaque species. The harem males suppress aggression between the females and prevent any dominance hierarchies from arising. Despite this, some social differences between the females occur. Some females are more socially active and have a stronger social bond with the harem male. These females, known as the "central females", stay in closer proximity to the harem male than the other females. Females that spend most of their time farther from the harem male are called "peripheral females".
Reproduction and parenting
Like other baboons, the hamadryas baboon breeds aseasonally. The dominant male of a one-male unit does most of the mating, though other males may occasionally sneak in copulations, as well. Females do most of the parenting. They nurse and groom the infant and one female in a unit may groom an infant that is not hers. Like all baboons, hamadryas baboons are intrigued by infants and give much attention to them. Dominant male baboons prevent other males from coming into close contact with their infants. They also protect the young from predators. The dominant male tolerates the young and will carry and play with them. When a new male takes over a female, she develops sexual swellings which may be an adaptation that functions to prevent the new male from killing the offspring of the previous male. When males reach puberty, they show a playful interest in young infants. They will kidnap the infants by luring them away from their harems and inviting them to ride on their backs. This is more often done by "follower" males. This kidnapping can lead to dehydration or starvation for the infant. The harem leader would retrieve the infants from their kidnappers, which is mostly an act to protect their offspring.
Thermoregulation
Because bipedalism is thought to help reduce thermoregulatory stress, research has investigated how baboons deal with water restriction and thermal loads as quadrupeds. Using implanted data loggers and simulated desert conditions, researchers found baboon internal temperatures increased significantly with water deprivation. When water was given to baboons, their internal temperatures dropped quickly. Therefore, it seems that access to water helps baboons maintain homeothermy and that water restrictions are a major threat to this species. However, baboons can maintain their plasma volume during water deprivation due to an increase in blood colloid osmotic pressure (COP). Hamadryas baboons do this by increasing albumin synthesis. This helps baboons retain fluids when their bodies are experiencing dehydration.
In culture
Hamadryas baboons often appear in ancient Egyptian art, as they were considered sacred to Thoth, a major and powerful deity with many roles that included being the scribe of the gods. Astennu, attendant to Thoth, is represented as a hamadryas in his roles as recorder of the result of the Weighing of the Heart and as one of the four hamadryas baboons guarding the lake of fire in Duat, the ancient Egyptian underworld. A predynastic precursor to Astennu was Babi, or "Bull of the Baboons", a bloodthirsty god said to eat the entrails of the unrighteous dead. Babi was also said to give the righteous dead continued virility, and to use his penis as the mast of a boat to convey them to the Egyptian paradise.
Sometimes, Thoth himself appears in the form of a hamadryas (often shown carrying the moon on his head), as an alternative to his more common representation as an ibis-headed figure. Hapi, one of the Four Sons of Horus that guarded the organs of the deceased in ancient Egyptian religion, is also represented as hamadryas-headed; Hapi protected the lungs, hence the common sculpting of a stone or clay hamadryas head as the lid of the canopic jar that held the lungs and/or represented the protection of the lungs. Hamadryas baboons were revered because certain behaviors that they perform were seen as worshiping the sun.
Modern art
The Grand Babouin sacré "hamadryas" is among Rembrandt Bugatti's most celebrated sculptures.
Threats and conservation
Transformation of field and pastureland represents the main threat to the hamadryas baboon; its only natural predators are the striped hyena, spotted hyena, and a diminishing number of African leopards that can still be found in the same area of distribution. The IUCN Red List listed this species as "least concern" in 2008. No major range-wide threats exist at present, although locally it may be at risk through loss of habitat due to major agricultural expansion and irrigation projects. The species occurs in the proposed Yangudi Rassa National Park, the Harar Wildlife Sanctuary, and a number of wildlife reserves in the lower Awash valley and in northern Eritrea.
| Biology and health sciences | Old World monkeys | Animals |
1199428 | https://en.wikipedia.org/wiki/Whelk | Whelk | Whelks are any of several carnivorous sea snail species with a swirling, tapered shell. Many are eaten by humans, such as the common whelk of the North Atlantic. Most whelks belong to the family Buccinidae and are known as "true whelks." Others, such as the dog whelk, belong to several sea snail families that are not closely related.
True whelks (family Buccinidae) are carnivorous, and feed on annelids, crustaceans, mussels and other molluscs, drilling holes through shells to gain access to the soft tissues. Whelks use chemoreceptors to locate their prey.
Many have historically been used, or are still used, by humans and other animals as food. In a reference serving of whelk, there are of food energy, 24 g of protein, 0.34 g of fat, and 8 g of carbohydrates.
Dog whelk, a predatory species, was used in antiquity to make a rich red dye that improves in color as it ages.
Usage
The common name "whelk" is also spelled welk or even wilk.
The species, genera and families referred to by this common name vary a great deal from one geographic area to another.
United States
In the United States, whelk refers to several large edible species in the genera Busycon and Busycotypus, which are now classified in the family Buccinidae. These are sometimes called Busycon whelks.
In addition, the unrelated invasive murex Rapana venosa (family Muricidae) is referred to as the veined rapa whelk or Asian rapa whelk.
Brazil
In Brazil, there is a very popular Afro-Brazilian divination game practiced by older women of African ancestry called jogo de búzios (game of whelks), which uses empty shells of these gastropods.
United Kingdom and Ireland, Belgium, Netherlands
In the British Isles, Belgium and the Netherlands (wulk/wullok), the word is used for a number of species in the family Buccinidae, especially Buccinum undatum, an edible European and Northern Atlantic species.
In the British Isles, the common name "dog whelk" is used for Nucella lapillus (family Muricidae) and for Nassarius species (family Nassariidae). Historically, they were a popular street food in Victorian London, typically located close to public houses and theatres.
Scotland
In Scotland, the word "whelk" is also used to mean the periwinkle (Littorina littorea), family Littorinidae.
West Indies
In the English-speaking islands of the West Indies, the word whelks or wilks (this word is both singular and plural) is applied to a large edible top shell, Cittarium pica, also known as the magpie or West Indian top shell, family Trochidae.
Asia
In Japan, whelks are frequently used in sashimi and sushi. In Vietnam, they are served in a dish called bún ốc, vermicelli with sea snails. A Korean dish consists of whelks with chili sauce in a salad with cold noodles; it has been a very popular side dish with alcohol for many generations.
Australia, New Zealand
In Australia and New Zealand, species of the genus Cabestana (family Ranellidae) are called predatory whelks, and species of Penion (family Buccinidae) are called siphon whelks.
Some common examples
Channeled whelk
Common whelk
Knobbed whelk, the state shell of Georgia and New Jersey
Lightning whelk
Red whelk
Speckled whelk
"Wrinkled whelk", "inflated whelk", and "lyre whelk", common names for Neptunea lyrata
Wrinkled purple whelk
| Biology and health sciences | Gastropods | Animals |
1200103 | https://en.wikipedia.org/wiki/Moa-nalo | Moa-nalo | The moa-nalo are a group of extinct aberrant, goose-like ducks that lived on the larger Hawaiian Islands, except Hawaii itself, in the Pacific. They were the major herbivores on most of these islands until they became extinct after human settlement.
Description
The moa-nalo (the name literally means "lost fowl"; the plural and the singular are the same) were long unknown to science, having been wiped out before the arrival of James Cook (1778). In the early 1980s, their subfossil remains were discovered in sand dunes on the islands of Molokai and Kauai. Subsequently, bones were found on Maui, Oahu, and Lānai, in lava tubes, lake beds, and sinkholes. They represent four species in three genera so far:
Chelychelynechen quassus (turtle-jawed moa-nalo) from Kauai
Ptaiochen pau (small-billed moa-nalo) from Maui
Thambetochen xanion (O'ahu moa-nalo) from Oahu
Thambetochen chauliodous (Maui Nui large-billed moa-nalo) from Maui, Lānai and Molokai (Maui Nui)
Chelychelynechen, meaning turtle-jawed goose, had a large, heavy bill like that of a tortoise, while the other two genera, Thambetochen and Ptaiochen, both had serrations in their bills known as pseudoteeth, similar to those of mergansers. All species were flightless and large, with an average mass of .
Evolution
Some moa-nalo fossils have been found to contain traces of mitochondrial DNA which were compared to living duck species in order to establish their place in the duck family, Anatidae. Contrary to the expectations of some scientists, the moa-nalo were not related to the large geese (Anserinae), such as the surviving nēnē, but instead to the dabbling ducks of the genus Anas, which includes the mallard. The present DNA analysis' resolution is not high enough to determine their relationships to different species of Anas, but biogeography strongly suggests that their closest living relative is the widespread Pacific black duck.
Ecology
The unusual bill shape and size of the moa-nalo can be attributed to their role in the ecology of prehistoric Hawaii. A study of coprolites (fossil dung) of Thambetochen chauliodous found in Puu Naio Cave on lowland Maui has shown they were folivorous, at least in dry shrub or mesic forest habitats, feeding particularly on fern fronds (possibly Asplenium nidus or Dryopteris wallichiana). This conclusion is backed up by the shapes of their beaks (James & Burney 1997). This indicates they were the principal browsers on the island.
The presence of prominent spines on the leaves and soft young stems of several Hawaiian lobelioids in the genus Cyanea—unusual in an island flora where such defenses are frequently lost, as in the ākala (Hawaiian raspberry)—suggests that the Cyanea evolved these thornlike prickles on new growth because they protect against browsing by the moa-nalo. The moa-nalo themselves filled the niche of herbivore usually filled by mammals such as goats and deer, or the giant tortoises of Galápagos and other archipelagoes. This has implications for the ecology of Hawaiian Islands today, as a major group of species have been lost.
Like island taxa from Mauritius, New Zealand and Polynesia, the moa-nalo were unused to mammals and were easily preyed upon by human hunters or by introduced animals that became feral, such as domestic pigs.
| Biology and health sciences | Anseriformes | Animals |
1201234 | https://en.wikipedia.org/wiki/Kabocha | Kabocha | Kabocha (; from Japanese , ) is a type of winter squash, a Japanese variety of the species Cucurbita maxima. It is also called kabocha squash or Japanese pumpkin in North America. In Japan, "kabocha" may refer to either this squash, to the Western pumpkin, or indeed to other squashes. In Australia, "Japanese pumpkin" is a synonym of Kent pumpkin, a variety of winter squash (C. moschata).
Many of the kabocha in the market are kuri kabocha, a type created from seiyo kabocha (buttercup squash). Varieties of kabocha include Ajihei, Ajihei No. 107, Ajihei No. 331, Ajihei No. 335, Cutie, Ebisu, Emiguri, Marron d'Or and Miyako.
Description
Kabocha is hard on the outside with knobbly-looking skin. It is shaped like a squat pumpkin and has a dull-finished, deep-green skin with some celadon-to-white stripes and an intense yellow-orange color on the inside. In many respects it is similar to buttercup squash, but without the characteristic protruding "cup" on the blossom (bottom) end. An average kabocha weighs two to three pounds, but a large squash can weigh as much as eight pounds.
Culinary use
Kabocha has an exceptionally sweet flavor, even sweeter than butternut squash. It is similar in texture and flavor to a pumpkin and sweet potato combined. Some kabocha can taste like Russet potatoes or chestnuts. The rind is edible, although some cooks may peel it to speed up the cooking process or to suit their personal taste preferences. Kabocha is commonly used in side dishes and soups, or as a substitute for potato or other squash varieties. It can be roasted after cutting the squash in half, scooping out the seeds, and then cutting the squash into wedges. With a little cooking oil and seasoning, it can be baked in the oven. Likewise, cut kabocha halves can be added to a pressure cooker and steamed under high pressure for 15–20 minutes. One can also slowly bake kabocha whole and uncut in a convection oven, after which the entire squash becomes soft and edible, including the rind.
Kabocha is available all year but is best in late summer and early fall.
Kabocha is primarily grown in Japan, South Korea, Thailand, California, Florida, Hawaii, Southwestern Colorado, Mexico, Tasmania, Tonga, New Zealand, Chile, Jamaica, and South Africa, but is widely adapted for climates that provide a growing season of 100 days or more. Most of the kabocha grown in California, Colorado, Tonga and New Zealand is actually exported to Japan.
Japan
In Japan, kabocha is a common ingredient in vegetable tempura and is also made into soup and croquettes. Less traditional but popular usages include its incorporation in desserts such as pies, pudding, and ice cream.
Korea
In Korea, danhobak () is commonly used for making hobak-juk (pumpkin porridge).
Thailand
Fak thong (Thai: ฟักทอง) is used in traditional Thai desserts and main courses. Kabocha is used in Jamaican chicken foot soup.
Nutrition
This squash is rich in beta carotene, with iron, vitamin C, potassium, and smaller traces of calcium, folic acid, and minute amounts of B vitamins.
Ripeness
When kabocha is just harvested, it is still maturing; therefore, unlike other vegetables and fruits, freshness is not as important. To become flavorful, it should first be fully matured by ripening the kabocha in a warm place (77 °F/25 °C) for 13 days to convert some of the starch to sugar. The kabocha is then transferred to a cool place (50 °F/10 °C) and stored for about a month in order to increase carbohydrate content. In this way the just-harvested, dry, bland-tasting kabocha is transformed into a smooth, sweet kabocha. Fully ripened, succulent kabocha will have reddish-yellow flesh, a hard skin, and a dry, corky stem. It reaches the peak of ripeness about 1.5–3 months after it is harvested.
History
All squashes were domesticated in Mesoamerica. In 1997, new evidence suggested that domestication occurred 8,000 to 10,000 years ago, a few thousand years earlier than previous estimates. That would be 4,000 years earlier than the domestication of maize and beans, the other major food plant groups in Mesoamerica. Archeological and genetic plant research in the 21st century suggests that the peoples of eastern North America independently domesticated squash, sunflower, marsh elder, and chenopod.
Portuguese sailors introduced kabocha to Japan in 1541, bringing it with them from Cambodia. The Portuguese name for the squash, Camboja abóbora, was shortened by the Japanese to kabocha. Alternatively, the Portuguese origin may be the word cabaça, for gourd. In kanji, kabocha is written with characters literally meaning "southern melon", and it is also occasionally referred to by a name meaning "Nanking melon". In China, this term is applied to many types of squashes with harder skin and beefier flesh (including pumpkins), not just kabochas.
Gallery
| Biology and health sciences | Botanical fruits used as culinary vegetables | Plants |
1201321 | https://en.wikipedia.org/wiki/Superposition%20principle | Superposition principle | The superposition principle, also known as the superposition property, states that, for all linear systems, the net response caused by two or more stimuli is the sum of the responses that would have been caused by each stimulus individually. That is, if input A produces response X, and input B produces response Y, then input (A + B) produces response (X + Y).
A function that satisfies the superposition principle is called a linear function. Superposition can be defined by two simpler properties: additivity

F(x1 + x2) = F(x1) + F(x2)

and homogeneity

F(a x) = a F(x)

for scalar a.
This principle has many applications in physics and engineering because many physical systems can be modeled as linear systems. For example, a beam can be modeled as a linear system where the input stimulus is the load on the beam and the output response is the deflection of the beam. The importance of linear systems is that they are easier to analyze mathematically; there is a large body of mathematical techniques, frequency-domain linear transform methods such as Fourier and Laplace transforms, and linear operator theory, that are applicable. Because physical systems are generally only approximately linear, the superposition principle is only an approximation of the true physical behavior.
The superposition principle applies to any linear system, including algebraic equations, linear differential equations, and systems of equations of those forms. The stimuli and responses could be numbers, functions, vectors, vector fields, time-varying signals, or any other object that satisfies certain axioms. Note that when vectors or vector fields are involved, a superposition is interpreted as a vector sum. If the superposition holds, then it automatically also holds for all linear operations applied on these functions (due to definition), such as gradients, differentials or integrals (if they exist).
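As a minimal numeric sketch (using NumPy, with an arbitrary fixed matrix standing in for a generic linear system), the two defining properties can be checked directly:

```python
import numpy as np

# A stand-in linear system: response F(x) = A @ x for a fixed matrix A.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

def F(x):
    return A @ x

x1 = rng.standard_normal(4)
x2 = rng.standard_normal(4)
a = 2.5

# Additivity: F(x1 + x2) = F(x1) + F(x2)
assert np.allclose(F(x1 + x2), F(x1) + F(x2))
# Homogeneity: F(a x) = a F(x)
assert np.allclose(F(a * x1), a * F(x1))
# Superposition combines both properties.
assert np.allclose(F(a * x1 + x2), a * F(x1) + F(x2))
```

Any map that fails either assertion is not linear, and the superposition principle will not hold for it.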
Relation to Fourier analysis and similar methods
By writing a very general stimulus (in a linear system) as the superposition of stimuli of a specific and simple form, often the response becomes easier to compute.
For example, in Fourier analysis, the stimulus is written as the superposition of infinitely many sinusoids. Due to the superposition principle, each of these sinusoids can be analyzed separately, and its individual response can be computed. (The response is itself a sinusoid, with the same frequency as the stimulus, but generally a different amplitude and phase.) According to the superposition principle, the response to the original stimulus is the sum (or integral) of all the individual sinusoidal responses.
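This per-sinusoid bookkeeping can be sketched with NumPy; the 5-point moving-average filter here is only a stand-in for any linear system:

```python
import numpy as np

t = np.linspace(0, 1, 500, endpoint=False)
s1 = np.sin(2 * np.pi * 5 * t)         # a 5 Hz sinusoidal stimulus
s2 = 0.5 * np.sin(2 * np.pi * 20 * t)  # a 20 Hz sinusoidal stimulus

def respond(x):
    # A linear system: a 5-point moving-average filter.
    return np.convolve(x, np.ones(5) / 5, mode="same")

# The response to the combined stimulus equals the sum of the
# individual sinusoidal responses, so each can be analyzed separately.
assert np.allclose(respond(s1 + s2), respond(s1) + respond(s2))
```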
As another common example, in Green's function analysis, the stimulus is written as the superposition of infinitely many impulse functions, and the response is then a superposition of impulse responses.
Fourier analysis is particularly common for waves. For example, in electromagnetic theory, ordinary light is described as a superposition of plane waves (waves of fixed frequency, polarization, and direction). As long as the superposition principle holds (which is often but not always; see nonlinear optics), the behavior of any light wave can be understood as a superposition of the behavior of these simpler plane waves.
Wave superposition
Waves are usually described by variations in some parameters through space and time—for example, height in a water wave, pressure in a sound wave, or the electromagnetic field in a light wave. The value of this parameter is called the amplitude of the wave and the wave itself is a function specifying the amplitude at each point.
In any system with waves, the waveform at a given time is a function of the sources (i.e., external forces, if any, that create or affect the wave) and initial conditions of the system. In many cases (for example, in the classic wave equation), the equation describing the wave is linear. When this is true, the superposition principle can be applied. That means that the net amplitude caused by two or more waves traversing the same space is the sum of the amplitudes that would have been produced by the individual waves separately. For example, two waves traveling towards each other will pass right through each other without any distortion on the other side. (See image at the top.)
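The pass-through behavior can be sketched numerically with two counter-propagating pulses (a Gaussian pulse shape and unit wave speed are assumed), using the sum of a right-moving and a left-moving solution:

```python
import numpy as np

x = np.linspace(-20, 20, 4001)
c = 1.0  # assumed wave speed

def pulse(s):
    return np.exp(-s**2)  # an assumed Gaussian pulse shape

def u(t):
    # Net amplitude = right-moving pulse + left-moving pulse (superposition).
    return pulse(x + 5 - c * t) + pulse(x - 5 + c * t)

# At t = 5 the pulses coincide and their amplitudes simply add...
assert np.isclose(u(5).max(), 2 * pulse(0.0))
# ...and by t = 10 each pulse has passed through the other undistorted.
assert np.allclose(u(10), pulse(x - 5) + pulse(x + 5))
```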
Wave diffraction vs. wave interference
With regard to wave superposition, Richard Feynman wrote:
Other authors elaborate:
Yet another source concurs:
Wave interference
The phenomenon of interference between waves is based on this idea. When two or more waves traverse the same space, the net amplitude at each point is the sum of the amplitudes of the individual waves. In some cases, such as in noise-canceling headphones, the summed variation has a smaller amplitude than the component variations; this is called destructive interference. In other cases, such as in a line array, the summed variation will have a bigger amplitude than any of the components individually; this is called constructive interference.
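A minimal numeric illustration of both cases, with two equal-amplitude sinusoids whose relative phase is chosen by hand:

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)

def wave(phase):
    return np.sin(2 * np.pi * 10 * t + phase)

constructive = wave(0) + wave(0)      # in phase: amplitudes add
destructive = wave(0) + wave(np.pi)   # opposite phase: amplitudes cancel

assert np.isclose(np.abs(constructive).max(), 2.0)  # doubled amplitude
assert np.allclose(destructive, 0.0, atol=1e-9)     # near-total cancellation
```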
Departures from linearity
In most realistic physical situations, the equation governing the wave is only approximately linear. In these situations, the superposition principle only approximately holds. As a rule, the accuracy of the approximation tends to improve as the amplitude of the wave gets smaller. For examples of phenomena that arise when the superposition principle does not exactly hold, see the articles nonlinear optics and nonlinear acoustics.
Quantum superposition
In quantum mechanics, a principal task is to compute how a certain type of wave propagates and behaves. The wave is described by a wave function, and the equation governing its behavior is called the Schrödinger equation. A primary approach to computing the behavior of a wave function is to write it as a superposition (called "quantum superposition") of (possibly infinitely many) other wave functions of a certain type—stationary states whose behavior is particularly simple. Since the Schrödinger equation is linear, the behavior of the original wave function can be computed through the superposition principle this way.
The projective nature of quantum-mechanical-state space causes some confusion, because a quantum mechanical state is a ray in projective Hilbert space, not a vector.
According to Dirac: "if the ket vector corresponding to a state is multiplied by any complex number, not zero, the resulting ket vector will correspond to the same state [italics in original]."
However, the sum of two rays to compose a superpositioned ray is undefined. As a result, Dirac himself uses ket vector representations of states to decompose or split, for example, a ket vector |ψ_i⟩ into a superposition of component ket vectors |φ_j⟩ as:

|ψ_i⟩ = Σ_j C_j |φ_j⟩,

where the C_j ∈ ℂ. The equivalence class of |ψ_i⟩ allows a well-defined meaning to be given to the relative phases of the C_j, but an absolute phase change (by the same amount for all the C_j) does not affect the equivalence class of |ψ_i⟩.
There are exact correspondences between the superposition presented in the main sections of this page and the quantum superposition. For example, the Bloch sphere representing the pure states of a two-level quantum mechanical system (qubit) is also known as the Poincaré sphere, which represents different types of classical pure polarization states.
Nevertheless, on the topic of quantum superposition, Kramers writes: "The principle of [quantum] superposition ... has no analogy in classical physics".
According to Dirac: "the superposition that occurs in quantum mechanics is of an essentially different nature from any occurring in the classical theory [italics in original]."
Though the reasoning by Dirac includes atomicity of observation, which is valid, as for phase, they actually mean phase translation symmetry derived from time translation symmetry, which is also applicable to classical states, as shown above with classical polarization states.
Boundary-value problems
A common type of boundary value problem is (to put it abstractly) finding a function y that satisfies some equation

F(y) = 0

with some boundary specification

G(y) = z.
For example, in Laplace's equation with Dirichlet boundary conditions, F would be the Laplacian operator in a region R, G would be an operator that restricts y to the boundary of R, and z would be the function that y is required to equal on the boundary of R.
In the case that F and G are both linear operators, then the superposition principle says that a superposition of solutions to the first equation is another solution to the first equation:

F(y1) = F(y2) = 0  ⇒  F(y1 + y2) = 0,

while the boundary values superpose:

G(y1) + G(y2) = G(y1 + y2).
Using these facts, if a list can be compiled of solutions to the first equation, then these solutions can be carefully put into a superposition such that it will satisfy the second equation. This is one common method of approaching boundary-value problems.
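A small sketch for the one-dimensional Laplace equation y'' = 0 on [0, 1] (a hypothetical choice of problem: F is the second derivative, G restricts y to the two endpoints) shows the solutions superposing along with their boundary values:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
y1 = 1 - x  # solves y'' = 0 with boundary values (1, 0)
y2 = x      # solves y'' = 0 with boundary values (0, 1)

a, b = 3.0, -2.0
y = a * y1 + b * y2  # superpose the two solutions

# The superposition still solves the linear equation y'' = 0...
assert np.allclose(np.diff(y, 2), 0.0, atol=1e-9)
# ...and its boundary values are the superposed boundary values.
assert np.isclose(y[0], a) and np.isclose(y[-1], b)
```

Choosing a and b matches any prescribed endpoint values, which is exactly how a list of solutions is combined to satisfy the boundary specification.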
Additive state decomposition
Consider a simple linear system:

x′ = Ax + B(u1 + u2), x(0) = x0.

By the superposition principle, the system can be decomposed into

x1′ = Ax1 + Bu1, x1(0) = x0,

x2′ = Ax2 + Bu2, x2(0) = 0,

with x = x1 + x2.

The superposition principle applies only to linear systems. However, the additive state decomposition can be applied to both linear and nonlinear systems. Next, consider a nonlinear system

x′ = Ax + B(u1 + u2) + φ(x), x(0) = x0,

where φ is a nonlinear function. By the additive state decomposition, the system can be additively decomposed into

x1′ = Ax1 + Bu1 + φ(x1 + x2), x1(0) = x0,

x2′ = Ax2 + Bu2, x2(0) = 0,

with x = x1 + x2.
This decomposition can help to simplify controller design.
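A numerical sketch of the linear decomposition (forward-Euler integration of an arbitrary two-state system; the matrices and inputs below are made up for illustration) confirms that the two sub-system states sum to the original state at every step:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -0.5]])  # an arbitrary system matrix
B = np.array([1.0, 0.0])
dt, steps = 0.001, 5000

def simulate(u, x0):
    # Forward-Euler integration of x' = A x + B u(t).
    x, traj = np.array(x0, dtype=float), []
    for k in range(steps):
        x = x + dt * (A @ x + B * u(k * dt))
        traj.append(x)
    return np.array(traj)

u1 = lambda t: np.sin(t)  # first input
u2 = lambda t: 1.0        # second input (a step)

full = simulate(lambda t: u1(t) + u2(t), [1.0, 0.0])
part1 = simulate(u1, [1.0, 0.0])  # carries the initial condition
part2 = simulate(u2, [0.0, 0.0])  # starts from rest

assert np.allclose(full, part1 + part2)  # x = x1 + x2 throughout
```

The split is not unique: the initial condition and the inputs can be distributed between the two subsystems in any way, as long as their sums recover the original problem.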
Other example applications
In electrical engineering, in a linear circuit, the input (an applied time-varying voltage signal) is related to the output (a current or voltage anywhere in the circuit) by a linear transformation. Thus, a superposition (i.e., sum) of input signals will yield the superposition of the responses.
In physics, Maxwell's equations imply that the (possibly time-varying) distributions of charges and currents are related to the electric and magnetic fields by a linear transformation. Thus, the superposition principle can be used to simplify the computation of fields that arise from a given charge and current distribution. The principle also applies to other linear differential equations arising in physics, such as the heat equation.
In engineering, superposition is used to solve for beam and structure deflections of combined loads when the effects are linear (i.e., each load does not affect the results of the other loads, and the effect of each load does not significantly alter the geometry of the structural system). Mode superposition method uses the natural frequencies and mode shapes to characterize the dynamic response of a linear structure.
In hydrogeology, the superposition principle is applied to the drawdown of two or more water wells pumping in an ideal aquifer. This principle is used in the analytic element method to develop analytical elements capable of being combined in a single model.
In process control, the superposition principle is used in model predictive control.
The superposition principle can be applied when small deviations from a known solution to a nonlinear system are analyzed by linearization.
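A common linearization example, sketched with assumed parameters: the pendulum equation θ'' = −(g/l)·sin θ, linearized about θ = 0 to θ'' = −(g/l)·θ. For small amplitudes the linear model tracks the nonlinear one closely, and superposition then applies to the linearized deviations.

```python
import math

# Compare the nonlinear pendulum with its linearization for a small amplitude
# (g/l, the amplitude, and the step size are illustrative).
g_over_l = 9.81

def final_angle(rhs, theta0=0.05, dt=1e-4, steps=10000):
    """Semi-implicit Euler for theta'' = rhs(theta); returns theta at the end."""
    th, om = theta0, 0.0
    for _ in range(steps):
        om += dt * rhs(th)
        th += dt * om
    return th

nonlinear = final_angle(lambda th: -g_over_l * math.sin(th))
linear = final_angle(lambda th: -g_over_l * th)
assert abs(nonlinear - linear) < 1e-3  # small deviation, small model mismatch
```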
History
According to Léon Brillouin, the principle of superposition was first stated by Daniel Bernoulli in 1753: "The general motion of a vibrating system is given by a superposition of its proper vibrations." The principle was rejected by Leonhard Euler and then by Joseph Lagrange. Bernoulli argued that any sonorous body could vibrate in a series of simple modes with a well-defined frequency of oscillation. As he had earlier indicated, these modes could be superposed to produce more complex vibrations. In his reaction to Bernoulli's memoirs, Euler praised his colleague for having best developed the physical part of the problem of vibrating strings, but denied the generality and superiority of the multi-modes solution.
Later it became accepted, largely through the work of Joseph Fourier.
Apothecaries' system
The apothecaries' system, or apothecaries' weights and measures, is a historical system of mass and volume units that were used by physicians and apothecaries for medical prescriptions and also sometimes by scientists. The English version of the system is closely related to the English troy system of weights, the pound and grain being exactly the same in both. It divides a pound into 12 ounces, an ounce into 8 drachms, and a drachm into 3 scruples of 20 grains each. This exact form of the system was used in the United Kingdom; in some of its former colonies, it survived well into the 20th century. The apothecaries' system of measures is a similar system of volume units based on the fluid ounce. For a long time, medical recipes were written in Latin, often using special symbols to denote weights and measures.
The use of different measure and weight systems depending on the purpose was an almost universal phenomenon in Europe between the decline of the Roman Empire and metrication. This was connected with international commerce, especially with the need to use the standards of the target market and to compensate for a common weighing practice that caused a difference between actual and nominal weight. In the 19th century, most European countries or cities still had at least a "commercial" or "civil" system (such as the English avoirdupois system) for general trading, and a second system (such as the troy system) for precious metals such as gold and silver. The system for precious metals was usually divided in a different way from the commercial system, often using special units such as the carat. More significantly, it was often based on different weight standards.
The apothecaries' system often used the same ounces as the precious metals system, although even then the number of ounces in a pound could be different. The apothecaries' pound was divided into its own special units, which were inherited (via influential treatises of Greek physicians such as Dioscorides and Galen, 1st and 2nd century) from the general-purpose weight system of the Romans. Where the apothecaries' weights and the normal commercial weights were different, it was not always clear which of the two systems was used in trade between merchants and apothecaries, or by which system apothecaries weighed medicine when they actually sold it. In old merchants' handbooks, the former system is sometimes referred to as the pharmaceutical system and distinguished from the apothecaries' system.
English-speaking countries
Weight
From pound down to the scruple, the units of the traditional English apothecaries' system were a subset of the units of the Roman weight system, although the troy pound and its subdivisions were slightly heavier than the Roman pound and its subdivisions. Similar systems were used all over Europe, but with considerable local variation described below under Variants. The traditional English apothecaries' system of weights is shown in the first table in this section: the pound, ounce and grain were identical to the troy pound, ounce and grain (also used to measure weights of precious metals; distinct from the avoirdupois pounds and ounces that were used for measurements of other goods.) The confusing variety of definitions and conversions for pounds and ounces is covered elsewhere in a table of pound definitions.
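The English subdivision chain can be captured in a few lines of code; the metric value of the grain used here (64.79891 mg) is the modern exact definition.

```python
# Conversion sketch for the English apothecaries'/troy weight units.
GRAIN_G = 0.06479891  # grams per grain (exact modern definition)
GRAINS_PER_UNIT = {
    "grain": 1,
    "scruple": 20,    # 20 grains
    "drachm": 60,     # 3 scruples
    "ounce": 480,     # 8 drachms
    "pound": 5760,    # 12 ounces
}

def to_grams(quantity, unit):
    return quantity * GRAINS_PER_UNIT[unit] * GRAIN_G

# The troy/apothecaries' pound comes out at its statutory metric value:
assert abs(to_grams(1, "pound") - 373.2417216) < 1e-9
assert GRAINS_PER_UNIT["ounce"] == 8 * GRAINS_PER_UNIT["drachm"]
```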
To unify all weight systems used by apothecaries, the Irish pharmacopœia of 1850 introduced a new variant of the apothecaries' system which subdivided a new apothecaries' pound of 12 avoirdupois ounces instead of the troy pound. To allow effective use of the new system, new weight pieces were produced. Since an avoirdupois ounce corresponds to 28.35 g, the proposed system was very similar to that in use in Portugal and Spain, and in some locations in Italy. But it would have doubled the value of the avoirdupois drachm (an existing unit, but by then only used for weighing silk). Therefore, it conflicted with other non-standard variations that were based on that nearly obsolete unit.
The Irish proposal was not widely adopted, but British legislation, in the form of the Medical Act 1858 (21 & 22 Vict. c. 90), was more radical: it prescribed the use of the avoirdupois system for the United Kingdom (then including Ireland), with none of the traditional subdivisions. This innovation was first used in the United British pharmacopœia of 1864, which recommended the adoption in pharmacy of the imperial avoirdupois pound of 7000 grains and ounce of 437.5 grains, and discontinuation of the drachm and scruple. In practice, however, the old apothecaries' system based on divisions of the troy pound, or of the troy ounce of 480 grains, was still widely used until it was abolished by the Weights and Measures Act 1976 (c. 77), since when it may only be used to measure precious metals and stones. (The troy pound had already been declared illegal for most other uses by the Weights and Measures Act 1878 (41 & 42 Vict. c. 49), but this act allowed the sale of drugs by apothecaries' weight as an exception.)
Apothecaries' units of any kind became obsolete in the UK with the mandatory use of metric drug measurements from January 1, 1971.
In the United States, the apothecaries' system remained official until it was abolished in 1971 in favour of the metric system.
Volume
English-speaking countries also used a system of units of fluid measure, or in modern terminology volume units, based on the apothecaries' system. Originally, the terms and symbols used to describe the volume measurements of liquids were the same as or similar to those used to describe weight measurements of solids (for example, the pound by weight and the fluid pint were both referred to by the Latin libra and its symbol). A volume of liquid that was approximately that of an apothecaries' ounce of water was called a fluid ounce, and was divided into fluid drachms and sometimes also fluid scruples. The smallest unit of liquid volume was traditionally a drop (Latin gutta). Although nominally defined as a sixtieth of the fluid drachm, in practice the drop was not a standardized unit of volume but was measured by means of releasing literal drops of liquid from the lip of a bottle or vial.
The 1809 London Pharmacopoeia introduced modified terminology for apothecaries' measurements of liquid volume. New terms and symbols introduced by the London College of Physicians (octarius, minim) were subsequently adopted by the Pharmacopoeias of the United States and Dublin. The pint was given the Latin name octarius and represented by the symbol O, thus distinguishing its Latin name and symbol from that of the pound. The terms for fluid ounce and fluid drachm were distinguished from weight measures by the prefix fluid- (Latin Fluiduncia, Fluidrachma, English fluidounce and fluidrachm) or in abbreviations by the prefixed letter f (f℥, fʒ). (However, even in the 20th century the f of the symbols f℥, fʒ was sometimes omitted by physicians in the United States, with fluid ounces and fluid drams represented instead simply by the symbols ℥, ʒ.) The smallest unit of volume was standardized as precisely a sixtieth of a fluidrachm (1/61,440 of a wine gallon) and given the name minim. (Subsequently, the old term drop was sometimes viewed as an approximate equivalent of the new minim, based on their nominally equivalent definitions. In fact, the units were qualitatively different because the traditional drop, by the nature of how it was measured, did not actually correspond to a single definite volume.) Along with the new name of minim, the London Pharmacopoeia of 1809 prescribed a new method of measuring the smallest unit of volume using a graduated glass tube.
Before introduction of the imperial units in the U.K., all apothecaries' fluid measures were based on the unit of the wine gallon, which survived in the US under the name liquid gallon or wet gallon. The wine gallon was abolished in Britain in 1826, and this system was replaced by a new one based on the newly introduced imperial gallon. Since the imperial gallon is 20% more than the liquid gallon, the same is true for the imperial pint in relation to the liquid pint. Accordingly, in the new imperial system, the number of fluid ounces per pint was adjusted from 16 to 20 so that the fluid ounce was not changed too much by the reform. Even so, the modern U.K. fluid ounce is 4% less than the US fluid ounce, and the same is true for the smaller units. For some years both systems were used concurrently in the U.K.
As a result, the Imperial and U.S. systems differ in the size of the basic unit (the gallon or the pint, one gallon being equal to eight pints), and in the number of fluid ounces per pint.
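These gallon relationships can be verified numerically, using the modern exact metric values of the two gallons:

```python
# The wine gallon (231 cubic inches) survives as the US liquid gallon.
WINE_GALLON_L = 3.785411784
IMPERIAL_GALLON_L = 4.54609

us_floz = WINE_GALLON_L / 128       # 8 pints of 16 fluid ounces each
imp_floz = IMPERIAL_GALLON_L / 160  # 8 pints of 20 fluid ounces each
minim = WINE_GALLON_L / 61440       # 1/60 fluidrachm, as standardized in 1809

# The imperial gallon is about 20% larger than the wine gallon ...
assert abs(IMPERIAL_GALLON_L / WINE_GALLON_L - 1.20) < 0.01
# ... yet the imperial fluid ounce is only about 4% smaller than the US one,
# because the pint was re-divided into 20 rather than 16 fluid ounces.
assert abs(us_floz / imp_floz - 1.04) < 0.01
```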
Apothecaries' systems for volumes were internationally much less common than those for weights.
There were also commonly used but unofficial divisions of the apothecaries' system, consisting of:
Glass-tumbler 8 fl. oz.
Breakfast-cup about 8 fl. oz.
Tea-cup 5 fl. oz.
Wine-glass 2 fl. oz.
Table-spoon ½ fl. oz.
Dessert-spoon 2 fl. dr. (same as ¼ fl. oz.)
Tea-spoon 1 fl. dr. (same as ⅛ fl. oz.)
In the United States, similar measures in use were once:
Tumblerful — ƒ℥ viii (8 fl oz/ 1 cup/ 240 mL)
Teacupful — ƒ℥ iv (4 fl oz/ 1 gill/ 120 mL)
Wineglassful — ƒ℥ ij (2 fl oz/ 60 mL)
Tablespoonful — ƒ℥ ss (½ fl oz/ 3 tsp/ 1 Tbsp; 15 mL as once codified in the ninth edition of the United States Pharmacopeia (U.S.P. IX))
Dessertspoonful — ƒʒ ij (≡ ƒ℈ viij or 2⅔ ƒʒ/ 2 tsp; 10 mL as once codified in the U.S.P. IX)
Teaspoonful — ƒʒ j (≡ ƒ℈ iv or 1⅓ ƒʒ/ 1 tsp; 5 mL as once codified in the U.S.P. IX)
The cited book states, "In almost all cases the modern teacups, tablespoons, dessertspoons, and teaspoons, after careful test by the author, were found to average 25 percent greater capacity than the theoretical quantities given above, and thus the use of accurately graduated medicine glasses, which may be had now at a trifling cost, should be insisted upon."
Apothecaries' measures eventually fell out of use in the U.K. and were officially abolished in 1971. In the U.S., they are still occasionally used, for example with prescribed medicine being sold in four-ounce (℥ iv) bottles.
Medical prescriptions
Until around 1900, medical prescriptions and most European pharmacopoeias were written in Latin. Here is a typical example from the middle of the 19th century.
The use of Latin ensured that the prescriptions could be read by an international audience. There was a technical reason why 3 ʒ was written ʒiij, and ½ ʒ as ʒss: writing iii as iij would prevent tampering with or misinterpretation of a number after it is written. The letters "ss" are an abbreviation for the Latin "semis", meaning "half", which was sometimes written with a ß. In apothecaries' Latin, numbers were generally written in Roman numerals, immediately following the symbol. Since only the units of the apothecaries' system were used in this way, this made it clear that the civil weight system was not meant.
Variants
Diversity of local standards
The basic form of the apothecaries' system is essentially a subset of the Roman weight system. An apothecaries' pound normally consisted of 12 ounces. (In France this was changed to 16 ounces, and in Spain the customary unit was a mark of 8 ounces.) In the south of Europe and in France, the scruple was generally divided into 24 grains, so that one ounce consisted of 576 grains. Nevertheless, the subdivision of an ounce was somewhat more uniform than that of a pound, and a common feature of all variants is that 12 ounces are roughly 100 drachms (96–128 drachms) and a grain is roughly the weight of a physical grain.
It is most convenient to compare the various local weight standards by the metric weights of their ounces. The actual mass of an ounce varied by ±17% (5 g) around the typical value of 30 g. The table only shows approximate values for the most important standards; even the same nominal standard could vary slightly between one city and its neighbour. The range from 25 g to 31 g is filled with numerous variants, especially the Italian range up to 28 g. But there is a relatively large gap between the troy ounces of 31 g and the Habsburg ounce of 35 g. The latter is the product of an 18th-century weight reform.
Even in Turkey a system of weights similar to the European apothecaries' system was used for the same purpose. For medical purposes the tcheky (approx. 320 g) was divided in 100 drachms, and the drachm in (16 kilos or) 64 grains. This is close to the classical Greek weight system, where a mina (corresponding roughly to a Roman libra) was also divided into 100 drachms.
With the beginning of metrication, some countries standardized their apothecaries' pound to an easily remembered multiple of the French gramme. E.g. in the Netherlands the Dutch troy pound of 369.1 g was standardized in 1820 to 375.000 g, to match a similar reform in France. The British troy pound retained its value of 373.202 g until in 2000 it was legally defined in metric terms, as 373.2417216 g. (At this time its use was mainly confined to trading precious metals.)
Basic variants
In the Romance-speaking part of Europe the scruple was divided into 24 grains, in the rest of Europe into 20 grains. Notable exceptions were Venice and Sicily, where the scruple was likewise divided into 20 grains.
The Sicilian apothecaries' ounce was divided into 10 drachms. Since the scruple was divided into only 20 grains, as in the northern countries, an ounce consisted of 600 grains. This was not too different from the situation in most of the other Mediterranean countries, where an ounce consisted of 576 grains.
In France, at some stage the apothecaries' pound of 12 ounces was replaced by the larger civil pound of 16 ounces. The subdivisions of the apothecaries ounce were the same as in the other Romance countries, however, and were different from the subdivisions of the otherwise identical civil ounce.
Origins
Roman weight system
The basic apothecaries' system consists of the units pound, ounce, and scruple from the classical Roman weight system, together with the originally Greek drachm and a new subdivision of the scruple into either 20 ("barley") or 24 ("wheat") grains. In some countries other units of the original Roman system, such as the siliqua, remained in use, for example in Spain. In some cases the apothecaries' and civil weight systems had the same ounces ("an ounce is an ounce"), but the civil pound consisted of 16 ounces. (Siliqua is Latin for the seed of the carob tree.)
Many attempts were made to reconstruct the exact mass of the Roman pound. One method for doing this consists in weighing old coins; another uses the fact that Roman weight units were derived from Roman units of length similar to the way the kilogramme was originally derived from the metre, i.e. by weighing a known volume of water. Nowadays the Roman pound is often given as 327.45 g, but one should keep in mind that (apart from the other uncertainties that come with such a reconstruction) the Roman weight standard is unlikely to have remained constant to such a precision over the centuries, and that the provinces often had somewhat inexact copies of the standard. The weight and subdivision of the pound in the Holy Roman Empire were reformed by Charlemagne, but in the Byzantine Empire it remained essentially the same. Since Byzantine coins circulated up to Scandinavia, the old Roman standard continued to be influential through the Middle Ages.
Weight system of Salerno
The history of mediaeval medicine started roughly around the year 1000 with the school of medicine in Salerno, which combined elements of Latin, Greek, Arabic, and Jewish medicine. Galen and Dioscorides (who had used the Graeco-Roman weight system) were among the most important authorities, but also Arabic physicians, whose works were systematically translated into Latin.
According to a famous 13th-century text that exists in numerous variations and is often ascribed to Dino di Garbo, the system of weights used in Salerno was different from the systems used in Padua and Bologna. As can be seen from the table, it was also different from the Roman weight system used by Galen and Dioscorides and from all modern apothecaries' systems: the ounce was divided into 9 drachms, rather than 8 drachms.
Centuries later, the region around Salerno was the only exception to the rule that (except for skipping units that had regionally fallen out of use) the apothecaries' ounce was subdivided down to the scruple in exactly the same way as in the Roman system: It divided the ounce into 10 drachms.
Romance countries
While there will naturally have been some changes throughout the centuries, this section only tries to give a general overview of the situation that was recorded in detail in numerous 19th-century merchants' handbooks.
Iberian Peninsula
On the Iberian Peninsula, apothecaries' weights in the 19th century were relatively uniform, with 24 grains per scruple (576 grains per ounce), the standard in Romance countries. The weight of an apothecaries' pound was 345.1 g in Spain and 344.2 g in Portugal. As in Italy, some of the additional subdivisions of the Roman system were still in use there. It was standard to use the mark, defined as 8 ounces, instead of the pound.
France
In 18th-century France, there was a national weight standard, the mark of 8 ounces. The civil pound of 16 ounces was equivalent to 2 marks, and it was also used as the apothecaries' pound. At 30.6 g, the ounces were considerably heavier than other apothecaries' ounces in Romance countries, but otherwise the French system was not remarkable. Its history and connections to the English and Flemish standards are discussed below under Weight standards named after Troyes.
Italy
Due in part to the political conditions in what would become a united Kingdom of Italy only in 1861, the variation of apothecaries' systems and standard weights in this region was enormous. (For background information, see History of Italy during foreign domination and the unification.) The local pound generally consisted of the standard twelve ounces, however.
The civil weight systems were generally very similar to the apothecaries' system, and since the civil pound (or, where different systems were in use for light and heavy goods, one of these pounds) generally had a suitable weight for an apothecaries' pound, it was often used for this purpose. Extreme cases were Rome and Genoa, where the same system was used for everything, including medicine. On the other hand, there were relatively large differences even between two cities in the same state. E.g. Bologna (in the Papal States) had an apothecaries' pound that was lighter than the local civil pound, and 4% lighter than the pound used in Rome.
The weight of an apothecaries' pound ranged generally between 300 g and 320 g, slightly less than that of a pound in the Roman Empire. An important exception to this rule is that the Kingdom of Lombardy–Venetia was under the rule of the Habsburg monarchy 1814–1859 and therefore had the extremely large Habsburg apothecaries' pound of 420 g. (See below under Habsburg standard.) E.g. in the large city of Milan the apothecaries' system based on a pound of 326.8 g was officially replaced by the metric system as early as 1803, because Milan was part of the Napoleonic Italian Republic. Since the successor of this little state, the Napoleonic Kingdom of Italy, fell to Habsburg in 1814 (at a time when even France had reintroduced modified traditional measures because the metric system was not accepted by the population), an apothecaries' system was officially introduced again, but now based on the Habsburg apothecaries' pound, which weighed almost 30% more.
The apothecaries' pound in Venice had exactly the same subdivisions as those in the non-Romance countries, but its total weight of 301 g was at the bottom of the range. During the Habsburg reign of 1814–1859 an exception was made for Venice; as a result, the extreme weights of 301 g and 420 g coexisted within one state and in immediate proximity. The Venice standard was also used elsewhere, for example in Udine. In Dubrovnik (called "Ragusa" until 1909) its use was partially continued for a long time in spite of the official Habsburg weight reform.
The measure and weight systems for the large mainland part of the Kingdom of the Two Sicilies were unified in 1840. The area consisted of the southern half of the Italian Peninsula and included Naples and Salerno. The subdivision of apothecaries' weight in the unified system was essentially the same as that for gold, silver, coins, and silk. It was the most eccentric variant in that the ounce was divided into 10 drachms, rather than the usual 8. The scruple, as in Venice but unlike in the rest of the Romance region, was divided into 20 grains. The existence of a unit equivalent to 1½ drachms is interesting because 6 such units made 9 drachms. In the original Salerno weight system an ounce was divided into 9 drachms, and so this unit would have been one sixth of an ounce.
Troyes, Nuremberg, and Habsburg
Weight standards named after Troyes
As early as 1147, a unit of weight bearing the town's name was used in Troyes in Champagne, an important trading town in the Middle Ages.
The national French standard until 1799 was based on a famous artifact called the pile de Charlemagne, which probably dates back to the second half of the 15th century. It is an elaborate set of nesting weight pieces, with a total metric weight of 12.238 kg. The set is now shown in the Musée des Arts et Métiers in Paris. The total nominal value of the set is 50 marks or 400 ounces, a mark being 8 ounces. The ounce therefore had a metric equivalent of 30.59 g. The pile de Charlemagne was used as a national French standard for trading, for gold, silver, and jewels, and for weighing medicine. It was also used in international communications between scientists. In the time before the French Revolution, the civil pound also played the role of the apothecaries' pound in the French apothecaries' system, which otherwise remained a standard system of the Romance (24 grains per scruple) type.
In Bruges, Amsterdam, Antwerp and other Flemish cities, a "troy" unit was also in use as a standard for valuable materials and medicine. As in France, the way in which the Flemish troy ounce was subdivided depended on what was weighed. Unlike the French, the Flemish apothecaries divided the scruple into 20 grains. The Flemish troy pound became the standard for the gold and apothecaries' systems in the United Kingdom of the Netherlands; it was also used in this way in Lübeck. (After metrication, the London troy pound was referred to by its own name.)
The Dutch troy mark consisted of 8 Flemish troy ounces, with each ounce of 20 engels, and each engel divided into 32 assen. The Amsterdam Pound of two marks, used in commerce, weighed 10,280 assen, while the Amsterdam Troy pound weighed 10,240 assen, i.e. exactly two troy marks.
In 1414, six years before the Treaty of Troyes, a statute of Henry V of England gave directions to the goldsmiths in terms of the troy pound. (In 1304 it had apparently not yet been introduced, since it did not appear in the statute of weights and measures.) There is evidence from the 15th century that the troy pound was used for weighing metals and spices. After the abolition of the Tower pound in 1527 by Henry VIII of England, the troy pound was the official basis for English coin weights. The British apothecaries' system was based on the troy pound until metrication, and it survived in the United States and Australia well into the 20th century.
Since the modern (English, American and Imperial) troy ounces are roughly 1.5% heavier than the late Paris ounce, the exact historical relations between the original , the French , the Flemish and the English troy pound are unclear. It is known, however, that the numerical relation between the English and French troy ounces was exactly 64:63 in the 14th century.
Nuremberg standard
In the Middle Ages the Imperial Free City of Nuremberg, an important trading place in the south of Germany, produced large amounts of nesting weight pieces to various European standards. In the 1540s, the first pharmacopoeia in the modern sense was also printed there. In 1555, a weight standard for the apothecaries' pound of 12 ounces was set in Nuremberg. Under the name Nuremberg pharmaceutical weight it would become the standard for most of the north-east of Europe. However, some cities kept local copies of the standard.
As of 1800 all German states and cities except Lübeck (which had the Dutch troy standard) followed the Nuremberg standard. It was also the standard for Denmark, Norway, the Russian Empire, and most cantons of Switzerland. Poland and Sweden had their own variants of the standard, which differed from each other by 0.6%.
In 1811, Bavaria legally defined the apothecaries' pound as 360.00 g (an ounce of 30.00 g). In 1815, Nuremberg lost its status as a free city and became part of Bavaria. From then on the Nuremberg apothecaries' pound was no longer the official apothecaries' pound even in Nuremberg, though the difference was only 0.6%. In 1836 the Greek apothecaries' pound was officially defined by this standard, four years after Otto, the son of the king of Bavaria, became the first king of Greece. But only a few German states followed the example of Bavaria, and with a long delay. The apothecaries' pound of 360 g was also adopted in Lübeck, where it was official as of 1861.
Austria and the states of the Habsburg monarchy officially had a different standard since 1761, and Prussia, followed by its neighbours Anhalt, Lippe and Mecklenburg, would diverge in the opposite direction with a reform in 1816. But in both cases apothecaries continued to use the Nuremberg standard unofficially for a long time after it became illegal.
In Russia the apothecaries' system survived well into the 20th century. The Soviet Union officially abolished it only in January 1927.
Habsburg standard
Empress Maria Theresia of Austria reformed the measures and weights of the Habsburg monarchy in 1761. The weight of an apothecaries' pound of 12 ounces was increased to a value that was later (after the kilogramme was defined) found to be 420.009 g. It was defined as 12/16 of the unusually heavy Habsburg civil pound (itself defined in terms of the civil pound of Cologne) and corresponded to a record ounce weight of 35 g.
Before the reform, the Nuremberg standard had been in effect in the north of the empire, and in Italy the local standards had been even lighter. It is not surprising that an increase of 17% and more met with some inertia. The 1770 edition of the pharmacopoeia still used the Nuremberg standard, indicating that even in the Austrian capital Vienna it took some time for the reform to become effective. A 1774 edition used the new standard, and in 1783 all old apothecaries' weight pieces that were still in use were directed to be destroyed.
Venice was not part of these reforms and kept its standard of approximately 25 g per ounce.
When Austria started producing scales and weight pieces to the new standard with an excellent quality/price ratio, these were occasionally used by German apothecaries as well.
Metrication
Early metrication
At the time of the Industrial Revolution, the fact that each state had its own system of weights and measures became increasingly problematic. Serious work on a "scientific" system was started in France under Louis XVI, and completed in 1799 (after the French Revolution) with its implementation. The French population, however, was initially unhappy with the new system. In 1812, Napoleon Bonaparte reintroduced some of the old measurements, but in a modified form that was defined with respect to the metric system. This was finally abolished in 1837 and became illegal in 1840.
Due to the large expansion of the First French Empire under Napoleon I, French metrication also affected what would be (parts of) France's neighbour countries after the Congress of Vienna.
The Netherlands were partially metricated while under French rule, in the years 1810–1813. With full metrication, effective January 1821, the Netherlands reformed the troy pound: the new apothecaries' pound was 375.00 g. Apart from rounding issues concerning the subdivisions, this corresponded exactly to the reformed French standard. (The reform was not followed in the north German city of Lübeck, which continued to use the old Dutch troy standard.) In Belgium, apothecaries' weight was metricated effective 1856.
From 1803 to 1815, all German regions west of the River Rhine were French, organised in the départements of Roer, Sarre, Rhin-et-Moselle, and Mont-Tonnerre. As a result of the Congress of Vienna these became part of various German states. A large part of the Palatinate fell to Bavaria, but since it already had the metric system it was excepted from the Bavarian reform of weights and measures.
Prussia's path to metrication
In Prussia, a reform in 1816 defined the Prussian civil pound in terms of the Prussian foot and distilled water. It also redefined the apothecaries' pound as 12 ounces, i.e. 12/16 of the civil pound: 350.78 g. This reform was not popular with apothecaries, because it broke the uniformity of the apothecaries' pound in Germany at a time when a German national state was beginning to form. It seems that many apothecaries did not follow this reduction by 2%.
Another reform in 1856 increased the civil pound from 467.711 g to 500.000 g (the German civil pound defined by the Zollverein), as a first step towards metrication. As a consequence the official apothecaries' pound was now 375.000 g, i.e. it was increased by 7%, and it was now very close to the troy standards. §4 of the law that introduced this reform said: "Further, a pharmaceutical weight deviating from the civil weight does not take place." But this paragraph was suspended until further notice.
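The arithmetic common to the 1816 and 1856 definitions (12 apothecaries' ounces out of the civil pound's 16) can be checked in a couple of lines:

```python
# In both Prussian reforms the apothecaries' pound was 12/16 of the civil pound.
def apothecaries_pound(civil_pound_g):
    return civil_pound_g * 12 / 16

assert round(apothecaries_pound(467.711), 2) == 350.78  # 1816 reform
assert apothecaries_pound(500.000) == 375.0             # 1856 reform
```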
The abolition of the apothecaries' system meant that doctors' prescriptions had to be written in terms of the current civil weight: grammes and kilogrammes. This was considered unfeasible by many, and the state received numerous protests and commissioned expert opinions. Nevertheless, by 1868 §4 of the earlier reform was finally put into force.
Metrication in countries using the troy and avoirdupois systems
Britain was initially involved in the development of the metric system, and the US was among the 17 initial signatories of the Metre Convention in 1875. Yet in spite of enthusiastic support for the new system by intellectuals such as Charles Dickens, these two countries were particularly slow to implement it.
In the US, the metric system replaced the apothecaries' system in the United States Pharmacopeia of 1971.
In the UK, metric drug measurements were required for dealings in drugs from January 1, 1971.
Rectus abdominis muscle (https://en.wikipedia.org/wiki/Rectus%20abdominis%20muscle)

The rectus abdominis muscle, also known as the "abdominal muscle" or simply the "abs", is a pair of segmented skeletal muscles on the ventral aspect of a person's abdomen (or "midriff"). The paired muscle is separated at the midline by a band of dense connective tissue called the linea alba, and the connective tissue defining each lateral margin of the rectus abdominis is the linea semilunaris. The muscle extends from the pubic symphysis, pubic crest and pubic tubercle inferiorly, to the xiphoid process and costal cartilages of the 5th–7th ribs superiorly.
The rectus abdominis muscle is contained in the rectus sheath, which consists of the aponeuroses of the lateral abdominal muscles. Each rectus abdominis is traversed by bands of connective tissue called the tendinous intersections, which interrupt it into distinct muscle bellies. In people with low body fat, these muscle bellies can be viewed externally in sets from as few as two to as many as twelve, although six is the most common.
Structure
The rectus abdominis is a very long flat muscle, which extends along the whole length of the front of the abdomen, and is separated from its fellow of the opposite side by the linea alba. Tendinous intersections (intersectiones tendineae) further subdivide each rectus abdominis muscle into a series of smaller muscle bellies. Tensing of the rectus abdominis causes the muscle to expand between each tendinous intersection.
The upper portion, attached principally to the cartilage of the fifth rib, usually has some fibers of insertion into the anterior extremity of the rib itself.
Size
It is typically around 10 mm thick, although some athletes can have a rectus abdominis up to 20 mm thick.
Typical volume is around 300 cm3 in non-active individuals and 500 cm3 in athletes.
Blood supply
The rectus abdominis has several sources of arterial blood supply. First, the inferior epigastric artery and vein (or veins) run superiorly on the posterior surface of the rectus abdominis, enter the rectus fascia at the arcuate line, and serve the lower part of the muscle. Second, the superior epigastric artery, a terminal branch of the internal thoracic artery, supplies the upper portion. Finally, numerous small segmental contributions come from the lower six intercostal arteries.
Nerve supply
The muscles are innervated by the thoraco-abdominal nerves, which are continuations of the T7–T11 intercostal nerves and pierce the anterior layer of the rectus sheath. Sensory supply is from thoracic nerves T7–T12.
Variation
The sternalis muscle may be a variant form of the pectoralis major or the rectus abdominis. Some fibers are occasionally connected with the costoxiphoid ligaments, and the side of the xiphoid process.
Function
The rectus abdominis is an important postural muscle. It is responsible for flexing the lumbar spine, as when doing a crunch. When the pelvis is fixed, the rib cage is drawn toward it; when the rib cage is fixed, the pelvis can be drawn toward the rib cage (posterior pelvic tilt), as in a leg-hip raise. The two can also be brought together simultaneously when neither is fixed in space.
The rectus abdominis assists with breathing and plays an important role in respiration when forcefully exhaling, as seen after exercise as well as in conditions where exhalation is difficult such as emphysema. It also helps in keeping the internal organs intact and in creating intra-abdominal pressure, such as when exercising or lifting heavy weights, during forceful defecation or parturition (childbirth).
Clinical significance
An abdominal muscle strain, also called a pulled abdominal muscle, is an injury to one of the muscles of the abdominal wall. A muscle strain occurs when the muscle is stretched too far. When this occurs the muscle fibers are torn. Most commonly, a strain causes microscopic tears within the muscle, but occasionally, in severe injuries, the muscle can rupture from its attachment.
A rectus sheath hematoma is an accumulation of blood in the sheath of the rectus abdominis muscle. It causes abdominal pain with or without a mass. The hematoma may be caused by either rupture of the epigastric artery or by a muscular tear. Causes of this include anticoagulation, coughing, pregnancy, abdominal surgery and trauma. With an ageing population and the widespread use of anticoagulant medications, there is evidence that this historically benign condition is becoming more common and more serious.
On abdominal examination, people may have a positive Carnett's sign.
Most hematomas resolve without treatment, but they may take several months to resolve.
Other animals
The rectus abdominis is similar in most vertebrates. The most obvious difference between animal and human abdominal musculature is that in other animals, the number of tendinous intersections differs.
Cercopithecinae (https://en.wikipedia.org/wiki/Cercopithecinae)

The Cercopithecinae are a subfamily of the Old World monkeys, which comprises roughly 71 species, including the baboons, the macaques, and the vervet monkeys. Most cercopithecine monkeys are limited to sub-Saharan Africa, although the macaques range from the far eastern parts of Asia through northern Africa, as well as on Gibraltar.
Characteristics
The various species are adapted to the different terrains they inhabit. Arboreal species are slim, delicate, and have a long tail, while terrestrial species are stockier and their tails can be small or completely nonexistent. All species have well-developed thumbs. Some species have ischial callosities on their rump, which can change their colour during their mating periods.
These monkeys are diurnal and live together in social groups. They live in all types of terrain and climate, from rain forests, savannah, and bald rocky areas, to cool or even snowy mountains, such as the Japanese macaque.
Most species are omnivorous, with diets ranging from fruits, leaves, seeds, buds, and mushrooms to insects, spiders, and smaller vertebrates. All species possess cheek pouches in which they can store food.
Gestation lasts around six to seven months. Young are weaned after three to 12 months and are fully mature within three to five years. The life expectancy of some species can be as long as 50 years.
Classification
The Cercopithecinae are often split into two tribes, Cercopithecini and Papionini, as shown in the list of genera below.
Family Cercopithecidae
Subfamily Cercopithecinae
Tribe Cercopithecini
Genus Allenopithecus – Allen's swamp monkey
Genus Miopithecus – talapoins
Genus Erythrocebus – patas monkeys
Genus Chlorocebus – vervet monkeys, etc.
Genus Allochrocebus – terrestrial guenons
Genus Cercopithecus – guenons
Tribe Papionini
Genus Macaca – macaques
Genus Lophocebus – crested mangabeys
Genus Rungwecebus – highland mangabey
Genus Papio – baboons
Genus Theropithecus – gelada
Genus Cercocebus – white-eyelid mangabeys
Genus Mandrillus – drill and mandrill
Subfamily Colobinae
Painted turtle (https://en.wikipedia.org/wiki/Painted%20turtle)

The painted turtle (Chrysemys picta) is the most widespread native turtle of North America. It lives in relatively slow-moving fresh waters, from southern Canada to northern Mexico, and from the Atlantic to the Pacific. It has been shown to prefer large wetlands with long periods of inundation and emergent vegetation. This species is one of the few that is specially adapted to tolerate freezing temperatures for extended periods of time due to an antifreeze-like substance in its blood that keeps its cells from freezing. This turtle is a member of the genus Chrysemys, which is part of the pond turtle family Emydidae. Fossils show that the painted turtle existed 15 million years ago. Three regionally based subspecies (the eastern, midland, and western) evolved during the last ice age. The southern painted turtle (C. dorsalis) is alternately considered the only other species in Chrysemys, or another subspecies of C. picta.
The adult painted turtle is long; the male is smaller than the female. The turtle's top shell is dark and smooth, without a ridge. Its skin is olive to black with red, orange, or yellow stripes on its extremities. The subspecies can be distinguished by their shells: the eastern has straight-aligned top shell segments; the midland has a large gray mark on the bottom shell; the western has a red pattern on the bottom shell.
The turtle eats aquatic vegetation, algae, and small water creatures including insects, crustaceans, and fish. Painted turtles primarily feed while in water and are able to locate and subdue prey even in heavily clouded conditions. Although they are frequently consumed as eggs or hatchlings by rodents, canines, and snakes, the adult turtles' hard shells protect them from most predators. Reliant on warmth from its surroundings, the painted turtle is active only during the day when it basks for hours on logs or rocks. During winter, the turtle hibernates, usually in the mud at the bottom of water bodies. The turtles mate in spring and autumn. Females dig nests on land and lay eggs between late spring and mid-summer. Hatched turtles grow until sexual maturity: 2–9 years for males, 6–16 for females.
In the traditional tales of Algonquian tribes, the colorful turtle played the part of a trickster. In modern times, four U.S. states (Colorado, Illinois, Michigan, and Vermont) have named the painted turtle their official reptile. While habitat loss and road killings have reduced the turtle's population, its ability to live in human-disturbed settings has helped it remain the most abundant turtle in North America. Adults in the wild can live for more than 55 years.
Taxonomy and evolution
The painted turtle (C. picta) is the only species in the genus Chrysemys. The parent family for Chrysemys is Emydidae: the pond turtles. Emydidae is split into two subfamilies; Chrysemys is part of the Deirochelyinae (Western Hemisphere) branch. The four subspecies of the painted turtle are the eastern (C. p. picta), midland (C. p. marginata), southern (C. p. dorsalis), and western (C. p. bellii).
The painted turtle's generic name is derived from the Ancient Greek words for "gold" () and "freshwater tortoise" (); the species name originates from the Latin for "colored" (pictus). The subspecies name, marginata, derives from the Latin for "border" and refers to the red markings on the outer (marginal) part of the upper shell; dorsalis is from the Latin for "back", referring to the prominent dorsal stripe; and bellii honors English zoologist Thomas Bell, a collaborator of Charles Darwin. An alternate East Coast common name for the painted turtle is "skilpot", from the Dutch for turtle, schildpad.
Classification
Originally described in 1783 by Johann Gottlob Schneider as Testudo picta, the painted turtle was called Chrysemys picta first by John Edward Gray in 1855. Four subspecies were then recognized: the eastern by Schneider in 1783, the western by Gray in 1831, and the midland and southern by Louis Agassiz in 1857, though the southern painted turtle is now generally considered a full species.
Subspecies
Although the subspecies of painted turtle intergrade (blend together) at range boundaries they are distinct within the hearts of their ranges.
The male eastern painted turtle (C. p. picta) is long, while the female is . The upper shell is olive green to black and may possess a pale stripe down the middle and red markings on the periphery. The segments (scutes) of the top shell have pale leading edges and occur in straight rows across the back, unlike all other North American turtles, including the other three subspecies of painted turtle, which have alternating segments. The bottom shell is plain yellow or lightly spotted, sometimes with as few as one dark grey spot near the lower center.
The midland painted turtle (C. p. marginata) is long. The centrally located midland is the hardest to distinguish from the other three subspecies. Its bottom shell has a characteristic symmetrical dark shadow in the center which varies in size and prominence.
The largest subspecies is the western painted turtle (C. p. bellii), which grows up to long. Its top shell has a mesh-like pattern of light lines, and the top stripe present in other subspecies is missing or faint. Its bottom shell has a large colored splotch that spreads to the edges (further than the midland) and often has red hues.
Until the 1930s, many of the subspecies of the painted turtle were labeled by biologists as full species within Chrysemys, but this varied by the researcher. The painted turtles in the border region between the western and midland subspecies were sometimes considered a full species, treleasei. In 1931, Bishop and Schmidt defined the current "four in one" taxonomy of species and subspecies. Based on comparative measurements of turtles from throughout the range, they subordinated species to subspecies and eliminated treleasei.
Since at least 1958, the subspecies were thought to have evolved in response to geographic isolation during the last ice age, 100,000 to 11,000 years ago. At that time painted turtles were divided into three different populations: eastern painted turtles along the southeastern Atlantic coast; southern painted turtles around the southern Mississippi River; and western painted turtles in the southwestern United States. The populations were not completely isolated for sufficiently long, hence wholly different species never evolved. When the glaciers retreated, about 11,000 years ago, all three subspecies moved north. The western and southern subspecies met in Missouri and hybridized to produce the midland painted turtle, which then moved east and north through the Ohio and Tennessee river basins.
Biologists have long debated the genera of closely related subfamily-mates Chrysemys, Pseudemys (cooters), and Trachemys (sliders). After 1952, some combined Pseudemys and Chrysemys because of similar appearance. In 1964, based on measurements of the skull and feet, Samuel B. McDowell proposed all three genera be merged into one. However, further measurements, in 1967, contradicted this taxonomic arrangement. Also in 1967, J. Alan Holman, a paleontologist and herpetologist, pointed out that, although the three turtles were often found together in nature and had similar mating patterns, they did not crossbreed. In the 1980s, studies of turtles' cell structures, biochemistries, and parasites further indicated that Chrysemys, Pseudemys, and Trachemys should remain in separate genera.
In 2003, Starkey et al. proposed that Chrysemys dorsalis, formerly considered a subspecies of C. picta, be recognized as a distinct species, sister to all the subspecies of C. picta. Although this proposal went largely unrecognized at the time because of evidence of hybridization between dorsalis and picta, the Turtle Taxonomy Working Group and the Reptile Database have since adopted it, although both the subspecific and specific names remain in use.
Fossils
Although its evolutionary history—what the forerunner to the species was and how the close relatives branched off—is not well understood, the painted turtle is common in the fossil record. The oldest samples, found in Nebraska, date to about 15 million years ago. Fossils from 15 million to about 5 million years ago are restricted to the Nebraska-Kansas area, but more recent fossils are gradually more widely distributed. Fossils newer than 300,000 years old are found in almost all the United States and southern Canada.
DNA
The turtle's karyotype (nuclear DNA, rather than mitochondrial DNA) consists of 50 chromosomes, the same number as the rest of its subfamily-mates and the most common number for Emydidae turtles in general. More distantly related turtles have from 26 to 66 chromosomes. Little systematic study of variations of the painted turtle's karyotype among populations has been done. (However, in 1967, research on protein structure of offshore island populations in New England showed differences from mainland turtles.)
Comparison of subspecies chromosomal DNA has been discussed, to help address the debate over Starkey's proposed taxonomy, but as of 2009 had not been reported. The complete sequencing of the genetic code for the painted turtle was at a "draft assembled" state in 2010. The turtle was one of two reptiles chosen to be first sequenced.
Description
Adult painted turtles can grow to long, with males being smaller. The shell is oval, smooth with little grooves where the large scale-like plates overlap, and flat-bottomed. The color of the top shell (carapace) varies from olive to black. Darker specimens are more common where the bottom of the water body is darker. The bottom shell (plastron) is yellow, sometimes red, sometimes with dark markings in the center. Similar to the top shell, the turtle's skin is olive to black, but with red and yellow stripes on its neck, legs, and tail. As with other pond turtles, such as the bog turtle, the painted turtle's feet are webbed to aid swimming.
The head of the turtle is distinctive. The face has only yellow stripes, with a large yellow spot and streak behind each eye, and on the chin two wide yellow stripes that meet at the tip of the jaw. The turtle's upper jaw is shaped into an inverted "V" (philtrum), with a downward-facing, tooth-like projection on each side.
The hatchling has a proportionally larger head, eyes, and tail, and a more circular shell than the adult. The adult female is generally longer than the male, versus . For a given length, the female has a higher (more rounded, less flat) top shell. The female weighs around on average, against the males' average adult weight of roughly . The female's greater body volume supports her egg-production. The male has longer foreclaws and a longer, thicker tail, with the anus (cloaca) located further out on the tail.
Similar species
The painted turtle has a very similar appearance to the red-eared slider (the most common pet turtle) and the two are often confused. The painted turtle can be distinguished because it is flatter than the slider. Also, the slider has a prominent red marking on the side of its head (the "ear") and a spotted bottom shell, both features missing in the painted turtle.
Distribution
Range
The most widespread North American turtle, the painted turtle is the only turtle whose native range extends from the Atlantic to the Pacific. It is native to eight of Canada's ten provinces, forty-five of the fifty United States, and one of Mexico's thirty-one states. On the East Coast, it lives from the Canadian Maritimes to the U.S. state of Georgia. On the West Coast, it lives in British Columbia, Washington, and Oregon and offshore on southeast Vancouver Island. The northernmost American turtle, its range includes much of southern Canada. To the south, its range reaches the U.S. Gulf Coast in Louisiana and Alabama. In the southwestern United States there are only dispersed populations. It is found in one river in extreme northern Mexico. It is absent in a part of southwestern Virginia and the adjacent states as well as in north-central Alabama. There is a sharper divide between midland and eastern painted turtles in the southeast, where they are separated by the Appalachian Mountains, but the two subspecies tend to mix in the northeast.
The borders between the four subspecies are not sharp, because the subspecies interbreed. Many studies have been performed in the border regions to assess the intermediate turtles, usually by comparing the anatomical features of hybrids that result from intergradation of the classical subspecies. Despite the imprecision, the subspecies are assigned nominal ranges.
Eastern painted turtle
The eastern painted turtle ranges from southeastern Canada to Georgia with a western boundary at approximately the Appalachians. At its northern extremes, the turtle tends to be restricted to the warmer areas closer to the Atlantic Ocean. It is uncommon in far north New Hampshire and in Maine is common only in a strip about 50 miles from the coast. In Canada, it lives in New Brunswick and Nova Scotia but not in Quebec or Prince Edward Island. To the south it is not found in the coastal lowlands of southern North Carolina, South Carolina, or Georgia, or in southern Georgia in general or at all in Florida.
In the northeast, there is extensive mixing with the midland subspecies, and some writers have called these turtles a "hybrid swarm". In the southeast, the border between the eastern and midland subspecies is sharper, as mountain chains separate the subspecies into different drainage basins.
Midland painted turtle
The midland painted turtle lives from southern Ontario and Quebec, through the eastern U.S. Midwest states, to Kentucky, Tennessee and northwestern Alabama, where it intergrades with the southern painted turtle. It also is found eastward through West Virginia, western Maryland and Pennsylvania. The midland painted turtle appears to be moving east, especially in Pennsylvania. To the northeast it is found in western New York and much of Vermont, and it intergrades extensively with the eastern subspecies.
Western painted turtle
The western painted turtle's northern range includes southern parts of western Canada from Ontario through Manitoba, Saskatchewan, Alberta and British Columbia. In Ontario, the western subspecies is found north of Minnesota and directly north of Lake Superior, but there is a gap to the east of Lake Superior (in the area of harshest winter climate) where no painted turtles of any subspecies occur. Thus Ontario's western subspecies does not intergrade with the midland painted turtle of southeastern Ontario. In Manitoba, the turtle is numerous and ranges north to Lake Manitoba and the lower part of Lake Winnipeg. The turtle is also common in south Saskatchewan, but in Alberta, there may only be 100 individuals, all found very near the U.S. border, mostly in the southeast.
In British Columbia, populations exist in the interior in the vicinity of the Kootenai, Columbia, Okanagan, and Thompson river valleys. At the coast, turtles occur near the mouth of the Fraser and a bit further north, as well as the bottom of Vancouver Island, and some other nearby islands. Within British Columbia, the turtle's range is not continuous and can better be understood as northward extensions of the range from the United States. High mountains present barriers to east–west movement of the turtles within the province or from Alberta. Some literature has shown isolated populations much further north in British Columbia and Alberta, but these were probably pet-releases.
In the United States, the western subspecies forms a wide intergrade area with the midland subspecies covering much of Illinois as well as a strip of Wisconsin along Lake Michigan and part of the Upper Peninsula of Michigan (UP). Further west, the rest of Illinois, Wisconsin and the UP are part of the range proper, as are all of Minnesota and Iowa, as well as all of Missouri except a narrow strip in the south. All of North Dakota is within range, all of South Dakota except a very small area in the west, and all of Nebraska. Almost all of Kansas is in range; the border of that state with Oklahoma is roughly the species range border, but the turtle is found in three counties of north central Oklahoma.
To the northwest, almost all of Montana is in range. Only a narrow strip in the west, along most of the Idaho border (which is at the Continental Divide) lacks turtles. Wyoming is almost entirely out of range; only the lower elevation areas near the eastern and northern borders have painted turtles. In Idaho, the turtles are found throughout the far north (upper half of the Idaho Panhandle). Recently, separate Idaho populations have been observed in the southwest (near the Payette and Boise rivers) and the southeast (near St. Anthony). In Washington state, turtles are common throughout the state within lower elevation river valleys. In Oregon, the turtle is native to the northern part of the state throughout the Columbia River Valley as well as the Willamette River Valley north of Salem.
To the southwest, the painted turtle's range is fragmented. In Colorado, while range is continuous in the eastern, prairie, half of the state, it is absent in most of the western, mountainous, part of the state. However, the turtle is confirmed present in the lower elevation southwest part of the state (Archuleta and La Plata counties), where a population ranges into northern New Mexico in the San Juan River basin. In New Mexico, the main distribution follows the Rio Grande and the Pecos River, two waterways that run in a north–south direction through the state. Within the aforementioned rivers, it is also found in the northern part of Far West Texas. In Utah, the painted turtle lives in an area to the south (Kane County) in streams draining into the Colorado River, although it is disputed if they are native. In Arizona, the painted turtle is native to an area in the east, Lyman Lake. The painted turtle is not native to Nevada or California.
In Mexico, painted turtles have been found about 50 miles south of New Mexico near Galeana in the state of Chihuahua. There, two expeditions found the turtles in the Rio Santa Maria which is in a closed basin.
Human-introduced range
Pet releases are starting to establish the painted turtle outside its native range. It has been introduced into waterways near Phoenix, Arizona, and to Germany, Indonesia, the Philippines, and Spain.
Habitat
To thrive, painted turtles need fresh waters with soft bottoms, basking sites, and aquatic vegetation. They find their homes in shallow waters with slow-moving currents, such as creeks, marshes, ponds, and the shores of lakes. The subspecies have evolved different habitat preferences.
The eastern painted turtle is very aquatic, leaving the immediate vicinity of its water body only when forced by drought to migrate. Along the Atlantic, painted turtles have appeared in brackish waters. They can be found in wetland areas like swamps and marshes with a thick layer of mud as well as sandy bottoms with lots of vegetation.
The midland and southern painted turtles seek especially quiet waters, usually shores and coves. They favor shallows that contain dense vegetation and have an unusual tolerance of pollution.
The western painted turtle lives in streams and lakes, similar to the other painted turtles, but also inhabits pasture ponds and roadside pools. It is found as high as .
Population features
Within much of its range, the painted turtle is the most abundant turtle species. Population densities range from 10 to 840 turtles per hectare (2.5 acres) of water surface. Warmer climates produce higher relative densities among populations, and habitat desirability also influences density. Rivers and large lakes have lower densities because only the shore is desirable habitat; the central, deep waters skew the surface-based estimates. Also, lake and river turtles have to make longer linear trips to access equivalent amounts of foraging space.
Adults outnumber juveniles in most populations, but gauging the ratios is difficult because juveniles are harder to catch; with current sampling methods, estimates of age distribution vary widely. Annual survival rate of painted turtles increases with age. The probability of a painted turtle surviving from the egg to its first birthday is only 19%. For females, the annual survival rate rises to 45% for juveniles and 95% for adults. The male survival rates follow a similar pattern, but are probably lower overall than females, as evidenced by the average male age being lower than that of the female. Natural disasters can confound age distributions. For instance, a hurricane can destroy many nests in a region, resulting in fewer hatchlings the next year. Age distributions may also be skewed by migrations of adults.
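As an illustration of how these annual rates compound (this is a back-of-the-envelope sketch, not a figure from the literature): assuming a female matures at six years, the earliest maturity age given above, her chance of surviving from egg to maturity can be estimated from the rates in the text.

```python
# Survival rates quoted in the text (female values).
egg_to_one = 0.19       # probability of surviving from egg to first birthday
juvenile = 0.45         # annual survival rate, juvenile females
maturity_age = 6        # assumed: earliest female maturity age for illustration

# One egg-to-yearling step, then five juvenile years (ages 1 through 6).
p_maturity = egg_to_one * juvenile ** (maturity_age - 1)
print(f"{p_maturity:.2%}")   # roughly 0.35% under these assumptions
```

The steep drop comes almost entirely from the early years; once adult, the 95% annual survival rate implies most mortality risk is behind the turtle.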
To understand painted turtle adult age distributions, researchers require reliable methods. Turtles younger than four years (up to 12 years in some populations) can be aged based on "growth rings" in their shells. For older turtles, some attempts have been made to determine age based on size and shape of their shells or legs using mathematical models, but this method is more uncertain. The most reliable method to study the long-lived turtles is to capture them, permanently mark their shells by notching with a drill, release the turtles, and then recapture them in later years. The longest-running study, in Michigan, has shown that painted turtles can live more than 55 years.
Adult sex ratios of painted turtle populations average around 1:1. Many populations are slightly male-heavy, but some are strongly female-imbalanced; one population in Ontario has a female to male ratio of 4:1. Hatchling sex ratio varies based on egg temperature. During the middle third of incubation, temperatures of produce males, and anything above or below that, females. It does not appear that females choose nesting sites to influence the sex of the hatchlings; within a population, nests will vary sufficiently to give both male and female-heavy broods.
Ecology
Diet
The painted turtle is a bottom-dwelling hunter. It quickly juts its head into and out of vegetation to stir potential victims out into the open water, where they are pursued. Large prey is ripped apart with the forefeet as the turtle holds it in its mouth. It also consumes plants and skims the surface of the water with its mouth open to catch small particles of food.
Although all subspecies of painted turtle eat both plants and animals (in the form of leaves, algae, fish, crustaceans, aquatic insects and carrion), their specific diets vary. Young painted turtles are mostly carnivorous and as they mature they become more herbivorous.
Painted turtles obtain coloration from carotenoids in their natural diet by eating algae and a variety of aquatic plants from their environment. Stripes and spots increase red and yellow chroma and decrease UV chroma and brightness in turtles with large amounts of carotenoids in their diet compared to the stripes and spots of turtles with only moderate amounts of carotenoids in their diet.
The eastern painted turtle's diet is the least studied. It prefers to eat in the water, but has been observed eating on land. The fish it consumes are typically dead or injured.
The midland painted turtle eats mostly aquatic insects and both vascular and non-vascular plants.
The western painted turtle's consumption of plants and animals changes seasonally. In early summer, 60% of its diet comprises insects. In late summer, 55% includes plants. Of note, the western painted turtle aids in the dispersal of white water-lily seeds. The turtle consumes the hard-coated seeds, which remain viable after passing through the turtle, and disperses them through its feces.
Predators
Painted turtles are most vulnerable to predators when young. Nests are frequently ransacked and the eggs eaten by garter snakes, crows, chipmunks, thirteen-lined ground squirrels, gray squirrels, skunks, groundhogs, raccoons, badgers, gray and red foxes, and humans. The numerous, sometimes bite-size hatchlings fall prey to water bugs, bass, catfish, bullfrogs, snapping turtles, three types of snakes (copperheads, racers, and water snakes), herons, rice rats, weasels, muskrats, minks, and raccoons. As adults, the turtles' armored shells protect them from many potential predators, but they still occasionally fall prey to alligators, ospreys, crows, red-shouldered hawks, bald eagles, and especially raccoons.
Painted turtles defend themselves by kicking, scratching, biting, or urinating. In contrast to land tortoises, painted turtles can right themselves if they are flipped upside down.
Life cycle
Mating
The painted turtles mate in spring and fall in waters of . Males start producing sperm in early spring, when they can bask to an internal temperature of . Females begin their reproductive cycles in mid-summer, and ovulate the following spring.
Courtship begins when a male follows a female until he meets her face-to-face. He then strokes her face and neck with his elongated front claws, a gesture returned by a receptive female. The pair repeat the process several times, with the male retreating from and then returning to the female until she swims to the bottom, where they copulate. As the male is smaller than the female, he is not dominant. Although not directly observed, evidence indicates that males may attempt to coerce females, inflicting injury with the tooth-like cusps on their beaks and with their foreclaws. The female stores sperm, to be used for up to three clutches, in her oviducts; the sperm may remain viable for up to three years. A single clutch may have multiple fathers.
Egg-laying
Nesting is done, by the females only, between late May and mid-July. The nests are vase-shaped and are usually dug in sandy soil, often at sites with southern exposures. Nests are often within of water, but may be as far away as , with older females tending to nest further inland. Nest sizes vary depending on female sizes and locations but are about deep. Females may return to the same sites several consecutive years, but if several females make their nests close together, the eggs become more vulnerable to predators. Female eastern painted turtles have been shown to nest together, possibly even participating in communal nesting.
The female's optimal body temperature while digging her nest is . If the weather is unsuitable, for instance an overly hot night in the Southeast, she delays the process until later at night. Painted turtles in Virginia have been observed waiting three weeks to nest because of a hot drought.
While preparing to dig her nest, the female sometimes exhibits a mysterious preliminary behavior. She presses her throat against the ground of different potential sites, perhaps sensing moisture, warmth, texture, or smell, although her exact motivation is unknown. She may further temporize by excavating several false nests, as wood turtles also do.
The female relies on her hind feet for digging. She may accumulate so much sand and mud on her feet that her mobility is reduced, making her vulnerable to predators. To lighten her labors, she lubricates the area with her bladder water. Once the nest is complete, the female deposits her eggs into the hole. The freshly laid eggs are white, elliptical, porous, and flexible. From start to finish, the female's work may take four hours. Sometimes she remains on land overnight afterwards, before returning to her home water.
Females can lay five clutches per year, but two is a normal average after including the 30–50% of a population's females that do not produce any clutches in a given year. In some northern populations, no females lay more than one clutch per year. Bigger females tend to lay bigger eggs and more eggs per clutch. Clutch sizes of the subspecies vary, although the differences may reflect different environments, rather than different genetics. The two more northerly subspecies, western and midland, are larger and have more eggs per clutch—11.9 and 7.6, respectively—than the eastern (4.9). Within subspecies, also, the more northerly females lay larger clutches.
Growth
Incubation lasts 72–80 days in the wild and for a similar period in artificial conditions. In August and September, the young turtle breaks out from its egg, using a special projection of its jaw called the egg tooth. Not all offspring leave the nest immediately, though. Hatchlings north of a line from Nebraska to northern Illinois to New Jersey typically arrange themselves symmetrically in the nest and overwinter to emerge the following spring.
The hatchling's ability to survive winter in the nest has allowed the painted turtle to extend its range farther north than any other American turtle. The painted turtle is genetically adapted to survive extended periods of subfreezing temperatures with blood that can remain supercooled and skin that resists penetration from ice crystals in the surrounding ground. The hardest freezes nevertheless kill many hatchlings.
Immediately after hatching, turtles are dependent on egg yolk material for sustenance. About a week to a week and a half after emerging from their eggs (or the following spring if emergence is delayed), hatchlings begin feeding to support growth. The young turtles grow rapidly at first, sometimes doubling their size in the first year. Growth slows sharply at sexual maturity and may stop completely. Likely owing to differences of habitat and food by water body, growth rates often differ from population to population in the same area. Among the subspecies, the western painted turtles are the quickest growers.
Females grow faster than males overall, and must be larger to mature sexually. In most populations males reach sexual maturity at 2–4 years old, and females at 6–10. Size and age at maturity increase with latitude; at the northern edge of their range, males reach sexual maturity at 7–9 years of age and females at 11–16.
Behavior
Daily routine and basking
A cold-blooded reptile, the painted turtle regulates its temperature through its environment, notably by basking. All ages bask for warmth, often alongside other species of turtle. Sometimes more than 50 individuals are seen on one log together. Turtles bask on a variety of objects, often logs, but have even been seen basking on top of common loons that were covering eggs.
The turtle starts its day at sunrise, emerging from the water to bask for several hours. Warmed for activity, it returns to the water to forage. After becoming chilled, the turtle re-emerges for one to two more cycles of basking and feeding. At night, the turtle drops to the bottom of its water body or perches on an underwater object and sleeps.
To be active, the turtle must maintain an internal body temperature between . When fighting infection, it manipulates its temperature up to higher than normal.
Seasonal routine and hibernation
In the spring, when the water reaches , the turtle begins actively foraging. However, if the water temperature exceeds , the turtle will not feed. In fall, the turtle stops foraging when temperatures drop below the spring set-point.
During the winter, the turtle hibernates. In the north, the inactive season may be as long as from October to March, while the southernmost populations may not hibernate at all. While hibernating, the body temperature of the painted turtle averages . Periods of warm weather bring the turtle out of hibernation, and even in the north, individuals have been seen basking in February.
The painted turtle hibernates by burying itself, either on the bottom of a body of water, near water in the shore-bank or the burrow of a muskrat, or in woods or pastures. When hibernating underwater, the turtle prefers shallow depths, no more than . Within the mud, it may dig down an additional . In this state, the turtle does not breathe, although if surroundings allow, it may get some oxygen through its skin. The species is one of the best-studied vertebrates able to survive long periods without oxygen. Adaptations of its blood chemistry, brain, heart, and particularly its shell allow the turtle to survive extreme lactic acid buildup while oxygen-deprived.
Anoxia tolerance
During the winter months, painted turtles become ice-locked and spend their time in either hypoxic (low oxygen) or anoxic (no oxygen) regions of the pond or lake. Painted turtles essentially hold their breath until the following spring when the ice melts. As a result, painted turtles rely on anaerobic respiration, which leads to the production of lactic acid. However, painted turtles can tolerate long periods of anoxia due to three factors: a depressed metabolic rate, large glycogen stores in the liver, and sequestering lactate in the shell and releasing carbonate buffers to the extracellular fluid.
The shell of an adult painted turtle has the largest concentration of carbonate content recorded among animals. This large carbonate content helps the painted turtle buffer the accumulation of lactic acid during anoxia. Both the shell and skeleton release calcium and magnesium carbonates to buffer extracellular lactic acid. A painted turtle can also sequester 44% of total body lactate in their shell. Despite the shell's large buffering contribution, it does not experience any significant decrease in mechanical properties under natural conditions.
The duration of anoxia tolerance varies depending on the sub-species of painted turtle. The western painted turtle (C. picta bellii) can survive 170 days of anoxia, followed by the midland painted turtle (C. picta marginata) which can survive 150 days, and finally the eastern painted turtle (C. picta picta), which can survive 125 days. Differences in anoxia tolerance are partially attributed to the rate of lactate production and buffering capability in painted turtles. Furthermore, northern populations of painted turtles have a higher anoxia tolerance than southern populations.
Other anoxia tolerant freshwater turtles include: the southern painted turtle (Chrysemys dorsalis), which can survive 75–86 days of anoxia, the snapping turtle (Chelydra serpentina), which can survive 100 days under anoxia, and the map turtle (Graptemys geographica), which can survive 50 days of anoxia. One reason for the difference in duration between more anoxia-tolerant species and less anoxia-tolerant species is the turtle's ability to buffer lactic acid accumulation during anoxia.
Unlike adult painted turtles, hatchlings can survive only 40 days of anoxia, but this is still a high anoxia and freeze tolerance compared to the hatchlings of other species (30 days for Chelydra serpentina, and 15 days for Graptemys geographica), an adaptation to cold winters.
Movement
Searching for water, food, or mates, the painted turtles travel up to several kilometers at a time. During summer, in response to heat and water-clogging vegetation, the turtles may vacate shallow marshes for more permanent waters. Short overland migrations may involve hundreds of turtles together. If heat and drought are prolonged, the turtles will bury themselves and, in extreme cases, die.
Foraging turtles frequently cross lakes or travel linearly down creeks. Daily crossings of large ponds have been observed. Tag and release studies show that sex also drives turtle movement. Males travel the most, up to , between captures; females the second most, up to , between captures; and juveniles the least, less than , between captures. Males move the most and are most likely to change wetlands because they seek mates.
The painted turtles, through visual recognition, have homing capabilities. Many individuals can return to their collection points after being released elsewhere, trips that may require them to traverse land. One experiment placed 98 turtles at varying distances of several kilometers from their home wetland; 41 returned. When living in a single large body of water, the painted turtles can home from up to away. Another experiment found that if placed far enough away from water, the turtles walk in straight paths and do not orient towards water or in any specific direction, indicating a limit to their homing ability. Females may use homing to help locate suitable nesting sites.
Eastern painted turtle movements may contribute to aquatic plant seed dispersal. A study done in Massachusetts found that the quantity of intact macrophyte seeds defecated by eastern painted turtles can be high, and that the seeds of Nymphaea odorata in particular that were found in feces were capable of moderate to high levels of germination. As turtles move between ponds and habitats, they carry seeds along with them to new locations.
Interaction with humans
Conservation
The species is currently classified as least concern by the IUCN but populations have been subject to decline locally.
The decline in painted turtle populations is not a simple matter of dramatic range reduction, like that of the American bison. Instead the turtle is classified as G5 (demonstrably widespread) in its Natural Heritage Global Rank, and the IUCN rates it as a species of least concern. The painted turtle's high reproduction rate and its ability to survive in polluted wetlands and artificially made ponds have allowed it to maintain its range, but the post-Columbus settlement of North America has reduced its numbers.
Only within the Pacific Northwest is the turtle's range eroding. Even there, in Washington, the painted turtle is designated S5 (demonstrably widespread). However, in Oregon, the painted turtle is designated S2 (imperiled), and in British Columbia, the turtle's populations in the Coast and Interior regions are labeled "endangered" and "of special concern", respectively.
Much is written about the different factors that threaten the painted turtle, but they are unquantified, with only inferences of relative importance. A primary threat category is habitat loss in various forms. Related to water habitat, there is drying of wetlands, clearing of aquatic logs or rocks (basking sites), and clearing of shoreline vegetation, which allows more predator access or increased human foot traffic. Related to nesting habitat, urbanization or planting can remove needed sunny soils.
Another significant human impact is roadkill—dead turtles, especially females, are commonly seen on summer roads. In addition to direct killing, roads genetically isolate some populations. Localities have tried to limit roadkill by constructing underpasses, highway barriers, and crossing signs. Oregon has introduced public education on turtle awareness, safe swerving, and safely assisting turtles across the road.
In the West, human-introduced bass, bullfrogs, and especially snapping turtles, have increased the predation of hatchlings. Outside the Southeast, where sliders are native, released pet red-eared slider turtles increasingly compete with painted turtles. In cities, increased urban predators (raccoons, canines, and felines) may impact painted turtles by eating their eggs.
Other factors of concern for the painted turtles include over-collection from the wild, released pets introducing diseases or reducing genetic variability, pollution, boating traffic, angler's hooks (the turtles are noteworthy bait-thieves), wanton shooting, and crushing by agricultural machines or golf course lawnmowers or all-terrain vehicles. Gervais and colleagues note that research itself impacts the populations and that much funded turtle trapping work has not been published. They advocate discriminating more on what studies are done, thereby putting fewer turtles into scientists' traps. Global warming represents an uncharacterized future threat.
As the most common turtle in Nova Scotia, the eastern painted turtle is not listed under the Species at Risk Act for conservation requirements.
Pets and other uses
According to a trade data study, painted turtles were the second most popular pet turtles after red-eared sliders in the early 1990s. As of 2010, most U.S. states allow, but discourage, painted turtle pets, although Oregon forbids keeping them as pets, and Indiana prohibits their sale. U.S. federal law prohibits sale or transport of any turtle less than , to limit human exposure to salmonella. However, a loophole for scientific samples allows some small turtles to be sold, and illegal trafficking also occurs.
Painted turtle pet-keeping requirements are similar to those of the red-eared slider. Keepers are urged to provide them with adequate space and a basking site, and water that is regularly filtered and changed. Aquatic turtles are generally unsuitable pets for children, as they do not enjoy being held. Hobbyists have maintained turtles in captivity for decades. Painted turtles are long-lived pets, and have a lifespan of up to 40 years in captivity.
The painted turtle is sometimes eaten but is not highly regarded as food, as even the largest subspecies, the western painted turtle, is inconveniently small and larger turtles are available. Schools frequently dissect painted turtles, which are sold by biological supply companies; specimens often come from the wild but may be captive-bred. In the Midwest, turtle racing is popular at summer fairs.
Capture
Commercial harvesting of painted turtles in the wild is controversial and, increasingly, restricted. Wisconsin formerly had virtually unrestricted trapping of painted turtles but based on qualitative observations forbade all commercial harvesting in 1997. Neighboring Minnesota, where trappers collected more than 300,000 painted turtles during the 1990s, commissioned a study of painted turtle harvesting. Scientists found that harvested lakes averaged half the painted turtle density of off-limit lakes, and population modeling suggested that unrestricted harvests could produce a large decline in turtle populations. In response, Minnesota forbade new harvesters in 2002 and limited trap numbers. Although harvesting continued, subsequent takes averaged half those of the 1990s. In 2023, Minnesota banned the practice of commercial turtle trapping. As of 2009, painted turtles faced virtually unlimited harvesting in Arkansas, Iowa, Missouri, Ohio, and Oklahoma; since then, Missouri has prohibited their harvesting.
Individuals who trap painted turtles typically do so to earn additional income, selling a few thousand a year at $1–2 each. Many trappers have been involved in the trade for generations, and value it as a family activity. Some harvesters disagree with limiting the catch, saying the populations are not dropping.
Many U.S. state fish and game departments allow non-commercial taking of painted turtles under a creel limit, and require a fishing (sometimes hunting) license; others completely forbid the recreational capture of painted turtles. Trapping is not allowed in Oregon, where western painted turtle populations are in decline, and in Missouri, where there are populations of both southern and western subspecies. In Canada, Ontario protects both subspecies present, the midland and western, and British Columbia protects its dwindling western painted turtles.
Capture methods are also regulated by locality. Typically trappers use either floating "basking traps" or partially submerged, baited "hoop traps". Trapper opinions, commercial records, and scientific studies show that basking traps are more effective for collecting painted turtles, while the hoop traps work better for collecting "meat turtles" (snapping turtles and soft-shell turtles). Nets, hand capture, and fishing with set lines are generally legal, but shooting, chemicals, and explosives are forbidden.
Culture
Native American tribes were familiar with the painted turtle—young braves were trained to recognize its splashing into water as an alarm—and incorporated it in folklore. A Potawatomi myth describes how the talking turtles, "Painted Turtle" and allies "Snapping Turtle" and "Box Turtle", outwit the village women. Painted Turtle is the star of the legend and uses his distinctive markings to trick a woman into holding him so he can bite her. An Illini myth recounts how Painted Turtle put his paint on to entice a chief's daughter into the water.
As of 2010, four U.S. states designated the painted turtle as official reptile. Vermont honored the reptile in 1994, following the suggestion of Cornwall Elementary School students. In 1995, Michigan followed, based on the recommendation of Niles fifth graders, who discovered the state lacked an official reptile. On February 2, 2005, Representative Bob Biggins introduced a bill to make the tiger salamander the official state amphibian of Illinois and to make the painted turtle the official state reptile. The bill was signed into law by Governor Rod Blagojevich on July 19, 2005. Colorado chose the western painted turtle in 2008, following the efforts of two succeeding years of Jay Biachi's fourth grade classes. In New York, the painted turtle narrowly lost (5,048 to 5,005, versus the common snapping turtle) a 2006 statewide student election for state reptile.
In the border town of Boissevain, Manitoba, a western painted turtle, Tommy the Turtle, is a roadside attraction. The statue was built in 1974 to celebrate the Canadian Turtle Derby, a festival including turtle races that ran from 1972 to 2001.
Another Canadian admirer of the painted turtle is Jon Montgomery, who won the 2010 Olympic gold medal in skeleton (a form of sled) racing, while wearing a painted turtle painting on the crown of his helmet, prominently visible when he slid downhill. Montgomery, who also iconically tattooed his chest with a maple-leaf, explained his visual promotion of the turtle, saying that he had assisted one to cross the road. BC Hydro referred to Montgomery's action when describing its own sponsorship of conservation research for the turtle in British Columbia.
Several private entities use the painted turtle as a symbol. Wayne State University Press operates an imprint "named after the Michigan state reptile" that "publishes books on regional topics of cultural and historical interest". In California, The Painted Turtle is a camp for ill children, founded by Paul Newman. Painted Turtle Winery of British Columbia trades on the "laid back and casual lifestyle" of the turtle with a "job description to bask in the sun". Also, there is an Internet company in Michigan, a guesthouse in British Columbia, and a café in Maine that use the painted turtle commercially.
In children's books, the painted turtle is a popular subject, with at least seven books published between 2000 and 2010.
Hummocky cross-stratification
Hummocky cross-stratification is a type of sedimentary structure found in sandstones. It is a form of cross-bedding usually formed by the action of large storms, such as hurricanes. It takes the form of a series of "smile"-like shapes crosscutting each other. It forms only at water depths below fair-weather wave base and above storm-weather wave base. It is not related to "hummocks" except in shape.
History
The name was introduced by Harms et al. in 1975. Before then, these structures were recognized under many different names; Campbell (1966, 1971) had originally called them "truncated wave-ripple laminae". The main diagnostic features were listed by Bourgeois (1980), Harms et al. (1982), and Walker (1983) to identify the structure. Dott and Bourgeois proposed an idealized hummocky stratification sequence. From bottom to top, it comprises: a first-order scoured base (± sole marks); a characteristic hummocky zone with several second-order truncation surfaces separating individual undulating lamina sets; a zone of flat laminae; a zone of well-oriented ripple cross-laminae and symmetrical ripple forms; all overlain by a more or less burrowed mudstone or siltstone. Walker (1983) proposed a second sequence, but the Dott and Bourgeois sequence was judged to offer the best basis for future study of hummocky cross-stratification.
Composition
This structure is commonly found in silt to fine sand, typically interbedded with bioturbated mudstone. The tops of many laminae commonly contain concentrations of abundant mica and plant detritus, indicating sorting by particle shape. Although hummocky cross-stratification is usually found in shallow marine sedimentary rocks, it has also been found in some lacustrine sedimentary rocks.
Common characteristics
In plan view (seen from above), it takes on the form of hummocks and swales that are circular to elliptical, with long wavelengths (1–5 m) but with low height (tens of centimeters). Laminations drape these hummocks; in cross-section view, these laminations have an upward curvature, and low angle, curved intersections. Hummocky cross-stratification can form in sediments up to about 3 cm in diameter, with near-bed water particle velocities between about 40–100 cm/s.
Formation of structure
This structure is formed under a combination of unidirectional and oscillatory flow that is generated by relatively large storm waves in the ocean. Deposition involves fallout from suspension and lateral tractive flow due to wave oscillation. As the large waves drape sand over an irregular scoured surface, this strong storm-wave action erodes the seabed into low hummocks and swales that lack any significant orientation. It is usually formed by redeposition below normal fair weather wave base delivered offshore by flooding rivers and shoals by large waves.
Depositional environments
In the rock record, hummocky cross-stratification occurs in shallow marine environments, formed on the shoreface and shelf by waves. It can also form on land during especially large storms, when large amounts of water are pushed up onto the tidal flat; these landward deposits feature smaller bed forms due to the attenuation of storm waves as they move onto the land. While it is usually formed in marine settings by the action of storms (e.g., hurricanes), it may also be deposited in fluvial strata; a fluvial origin is more likely if the unit consists solely of sand.
Achatina
Achatina is a genus of medium-sized to very large, air-breathing, tropical land snails: terrestrial pulmonate gastropod mollusks in the family Achatinidae.
Distribution
There are some 200 species of Achatinidae in Sub-Saharan Africa. Some species are kept as terrarium animals because of their size, around three inches, and their colourful shells.
Shell description
Snails in this genus have medium to large shells which are ovate in shape and often colourfully streaked.
Species
Species within the genus Achatina include:
Achatina achatina Linnaeus, 1758 or giant African snail, agate snail or Ghana tiger snail, from Western Africa (Liberia through Nigeria) grows to be the largest land snail on Earth.
Achatina ampullacea Böttger, 1910
Achatina balteata Reeve, 1849 - Cameroon to Central Angola.
Achatina bandeirana Morelet, 1866
Achatina bayaona Morelet, 1866
Achatina bayoli Morelet, 1888
Achatina bisculpta E. A. Smith, 1878
Achatina connollyi Preston, 1912
Achatina coroca Bruggen, 1978
Achatina craveni E. A. Smith, 1881 - Congo, Tanzania.
Achatina dammarensis L. Pfeiffer, 1870 - Botswana
Achatina dohrniana L. Pfeiffer, 1870
Achatina greyi Da Costa, 1907
Achatina hortensiae Morelet, 1866
Achatina inaequalis L. Pfeiffer, 1855
Achatina iostoma L. Pfeiffer, 1854 - Cameroon
Achatina morrelli Preston, 1905
Achatina obscura Da Costa, 1907
Achatina osborni Pilsbry, 1919
Achatina passargei von Martens, 1900
Achatina perfecta Morelet, 1867
Achatina pfeifferi Dunker, 1845
Achatina polychroa Morelet, 1866
Achatina randabeli Bourguignat, 1889
Achatina rugosa Putzeys, 1898
Achatina schinziana Mousson, 1888 - Botswana
Achatina schweinfurthi von Martens, 1873 - East Africa.
Achatina semisculpta Dunker, 1845 - East Africa
Achatina smithii Craven, 1881
Achatina spekei Dohrn, 1864
Achatina stuhlmanni von Martens, 1892 - Uganda.
Achatina tavaresiana Morelet, 1866 - Angola
Achatina tincta Reeve, 1842 - Congo, Angola
Achatina tracheia Connolly, 1929 - Southeast Africa
Achatina transparens Da Costa, 1907
Achatina vignoniana Morelet, 1874
Achatina virgulata Da Costa, 1907
Achatina welwitschi Morelet, 1866
Achatina weynsi Dautzenberg, 1891 - Congo
incertae sedis:
Achatina vassei Germain, 1918 - The internal anatomy of this species is not known, and therefore the generic classification of ‘Achatina’ vassei cannot be made.
Species brought into synonymy
Achatina albopicta E. A. Smith, 1878 - along the coast of Kenya and Tanzania: synonym of Lissachatina albopicta (E. A. Smith, 1878)
Achatina allisa Reeve, 1849: synonym of Lissachatina allisa (L. Reeve, 1849)
Achatina antourtourensis Crosse, 1879 : synonym of Achatina immaculata Lamarck, 1822: synonym of Lissachatina immaculata (Lamarck, 1822) (junior synonym)
Achatina arctespirata Bourguignat, 1890: synonym of Achatina randabeli Bourguignat, 1890
Achatina barbigera Morelet, 1866: synonym of Petriola marmorea (Reeve, 1850)
Achatina bloyeti Bourguignat, 1890: synonym of Lissachatina bloyeti (Bourguignat, 1890)
Achatina capelloi Furtado, 1886: synonym of Lissachatina capelloi (Furtado, 1886)
Achatina fulgurata Pfeiffer, 1853 - Senegal: synonym of Achatina eleanorae Mead, 1995
Achatina drakensbergensis Melvill & Ponsonby, 1897 : synonym of Cochlitoma drakensbergensis (Melvill & Ponsonby, 1897)
Achatina eleanorae Mead, 1995: synonym of Lissachatina eleanorae (Mead, 1995)
Achatina ellioti E. A. Smith, 1895: synonym of Oreohomorus ellioti (E. A. Smith, 1895)
Achatina fulica Bowdich, 1822 or giant East African snail from Eastern Africa is a serious pest in the many tropical countries where it has been introduced, and is listed as an invasive species by some governments: synonym of Lissachatina fulica (Bowdich, 1822)
Achatina glaucina E. A. Smith, 1899: synonym of Lissachatina glaucina (E. A. Smith, 1899)
Achatina glutinosa Pfeiffer, 1854 - Mozambique: synonym of Lissachatina glutinosa (L. Pfeiffer, 1854)
Achatina gruveli Dautzenberg, 1921: synonym of Achatina iostoma L. Pfeiffer, 1854
Achatina gundlachi L. Pfeiffer, 1850: synonym of Geostilbia gundlachi (L. Pfeiffer, 1850)
Achatina hamillei Petit de la Saussaye, 1859: synonym of Lissachatina fulica hamillei (Petit de la Saussaye, 1859)
Achatina immaculata Lamarck, 1822 - Southeastern Africa: synonym of Lissachatina immaculata (Lamarck, 1822)
Achatina iredalei Preston, 1910: synonym of Lissachatina allisa (Reeve, 1849)
Achatina ivensi Furtado, 1886: synonym of Achatina pfeifferi Dunker, 1845
Achatina johnstoni E. A. Smith, 1899: synonym of Lissachatina johnstoni (E. A. Smith, 1899)
Achatina kilimae Dautzenberg, 1908: synonym of Lissachatina kilimae (Dautzenberg, 1908)
Achatina lowei Paiva, 1866: synonym of Amphorella oryza (R. T. Lowe, 1852)
Achatina lechaptoisi Ancey, 1894: synonym of Lissachatina immaculata (Lamarck, 1822)
Achatina letourneuxi Bourguignat, 1879: synonym of Lissachatina fulica hamillei (Petit de la Saussaye, 1859)
Achatina lhotellerii Bourguignat, 1879: synonym of Lissachatina zanzibarica (Bourguignat, 1879)
Achatina marginata Swainson, 1821: synonym of Archachatina marginata (Swainson, 1821)
Achatina milneedwardsiana Revoil, 1885: synonym of Lissachatina fulica hamillei (Petit de la Saussaye, 1859)
Achatina minima Siemaschko, 1847: synonym of Cochlicopa lubricella (Porro, 1838)
Achatina monochromatica Pilsbry (Deprecated. Old historical synonym of Achatina achatina var. monochromatic)
Achatina moreletiana Deshayes, 1851: taxon inquirendum
Achatina mulanjensis Crowley & Pain - Malawi: synonym of Achatina immaculata Lamarck, 1822 (junior synonym)
Achatina nigella Morelet, 1867: synonym of Homorus nigellus (Morelet, 1867)
Achatina nyikaensis Pilsbry, 1909 - Malawi: synonym of Bequaertina pintoi (Bourguignat, 1889) (junior synonym)
Achatina panthera Férussac, 1832 - Zimbabwe, Mauritius: synonym of Achatina immaculata Lamarck, 1822
Achatina pellucida L. Pfeiffer, 1840: synonym of Blauneria heteroclita (Montagu, 1808)
Achatina purpurea (Gmelin, 1790): synonym of Archachatina purpurea (Gmelin, 1790)
Achatina raffrayi Jousseaume, 1883: synonym of Leptocallista raffrayi (Jousseaume, 1883)
Achatina rediviva Mabille, 1901: synonym of Lissachatina fulica (Bowdich, 1822)
Achatina reticulata Pfeiffer, 1845 - Zanzibar: synonym of Lissachatina reticulata (L. Pfeiffer, 1845)
Achatina semitarum L. Pfeiffer, 1842: synonym of Laevaricella semitarum (L. Pfeiffer, 1842)
Achatina sylvatica Putzeys, 1898 - Congo: synonym of Leptocalina putzeysi (Dautzenberg & Germain, 1914)
Achatina varicosa Pfeiffer, 1845 - South Africa: synonym of Cochlitoma varicosa (L. Pfeiffer, 1861)
Achatina variegata Lamarck, 1801: synonym of Achatina achatina (Linnaeus, 1758)
Achatina variegata var. minima Germain, 1912: synonym of Achatina achatina elegans (Link, 1807)
Achatina wildemani Dautzenberg, 1907: synonym of Leptocalina specularis (Morelet, 1866)
Achatina yalaensis Germain, 1936: synonym of Oreohomorus connollyi (Odhner, 1932)
Achatina zanzibarica Bourguignat, 1879 - Tanzania: synonym of Lissachatina zanzibarica (Bourguignat, 1879)
Achatina zebra Bruguiere, 1792 - South Africa: synonym of Cochlitoma zebra (Bruguière, 1792)
| Biology and health sciences | Gastropods | Animals |
3430951 | https://en.wikipedia.org/wiki/Cardinal%20beetle | Cardinal beetle | Pyrochroa coccinea, commonly known as the black-headed cardinal beetle, is a species of cardinal beetle in the family Pyrochroidae. It is found mainly in wooded areas and pastures throughout central Europe, including southern Great Britain. Like ambrosia beetles, P. coccinea lives and reproduces on wooden logs in the early stages of decomposition. Larvae develop over the span of many years, with overlapping generations often inhabiting a single wooden territory. Adults, however, are short-lived and are present only during a brief season. They typically appear in April, become more numerous in May and early June, and become very rare in the remaining months.
Geographic range
Pyrochroa coccinea is widespread in southern England and is most commonly found in the southeastern regions. It occurs more locally and sporadically in northern regions toward the Lake District and extends well into the Welsh Border Counties. This species is found to a lesser extent in southwest England, north into southern Cumbria, and is generally not found in the West Country and western Wales, nor in Scotland.
In Europe, Pyrochroa coccinea is found mainly in the central regions, but its presence also extends south to the Pyrenees, central Italy, and Greece; north to southern Scandinavia and the United Kingdom; and east to Ukraine, western Russia, and Kazakhstan.
Morphology
Adults
This species is large, with an average length of 14–20 mm. Its pronotum and elytra are distinctively brightly coloured and smooth, either red or scarlet or shiny black. Compared to the red-headed common cardinal beetle (Pyrochroa serraticornis), this species is distinct in that its head is black with feathery antennae, its pronotum and elytra are characteristically structured, and it is larger and deeper blood red in colour. Adult P. coccinea has evolved aposematic coloration, which serves as a protective mechanism as the beetles disperse: the notable bright red coloration advertises its toxicity to predators.
Larvae
The larvae are distinguishable based on colour, as they are creamy grey or yellow. The larvae are also elongated and flattened in shape, and each thoracic and abdominal segment is distinctively rounded. Extending from each thoracic segment is a pair of short legs, with the eighth tergite containing a perpendicular raised line at its base that is otherwise absent in P. serraticornis. The final abdominal segment also has a pair of hardened, straight urogomphi, which allows these beetles to crawl in between the narrow crevices of wood and bark to establish their habitat.
Sex differences
There are distinctive anatomical features present in each sex. Males have pectinate antennae and a deep indentation between the eyes, whereas females have serrate antennae and a much shallower depression between the eyes.
Habitat
Pyrochroa coccinea is active during the day (diurnal) and inhabits vascular plant species in wooded environments under the bark of decaying broad-leaved timber and fallen logs. P. coccinea is less likely to be found in wooded areas with increased sun exposure, but its presence is unaffected by microhabitat factors such as the moisture or humidity within the tree bark. This species also does not have an obvious preference for different tree species, and its presence is unaffected by factors such as diameter, bark coverage, or the presence of fungi. This is in contrast to many species of Ambrosia beetles that live mutualistically in wooded areas with a fungal source.
Larvae mature over the span of many years and therefore require a habitat with abundant host resources.
Adults have a short life span and only exist during limited months of the year, typically showing up in April, becoming the most populous throughout May and June, declining in July, and appearing only very rarely in the remaining months.
Feeding behaviour and diet
While these beetles inhabit fallen timber, they are active during the day and live an exposed lifestyle easily detectable by predators and researchers. Larvae develop in small groups and feed upon the decomposing wood, nearby dead insects, and their own faeces as well as microorganisms inhabiting the wooded debris. However, when these beetles are highly populous, this species has been found to engage in cannibalism.
Adults are predatory and, in contrast to larvae, feed primarily on many different small insects living within the nearby foliage, as well as flowers and pollen.
Life cycle
Because it takes larvae many years to fully develop, there are often many overlapping generations simultaneously inhabiting a single wooded area. Small larvae typically develop under narrower bark areas, but the bark slackens as they grow and develop. This allows the developing larvae to find groupings of other large larvae within these more spacious regions that are filled with debris. This property of fallen timber also explains why this is where females choose to lay their eggs, as the bark initially remains solid, stably containing the eggs, until it eventually loosens once the eggs hatch and larvae begin to grow in size.
Once adults appear from underneath the tree bark, they remain confined to the host material and will often mate during this period. Adults eventually disperse by flying and can be found among foliage within a small distance from their native logs.
In contrast to adults, larvae are present throughout the entire year but preferentially pupate in the spring once they are fully developed.
Chemical signaling
Semiochemicals are chemical agents that facilitate communication between either members of the same or different species. Cantharidin (CTD) is a type of semiochemical that has different actions based on which species are relying on it for communication. CTD is a type of terpene, which is a volatile unsaturated hydrocarbon found in the essential oils of plants. CTD specifically is naturally produced by blister beetles and false blister beetles, and it is highly toxic such that it strongly discourages predators and parasites from feeding on these beetles throughout all stages of their development. Even though CTD is highly poisonous, it actually lures some arthropods known as canthariphilous species. These species can sense the blister and false blister beetles producing CTD and manipulate the compound to convert it into a defensive material.
This canthariphilous mechanism of transforming the properties of CTD is common within the family Pyrochroidae, where it functions as a pheromone for males and females in close proximity and is an important factor in sexual selection. All species within the genus Pyrochroa are drawn specifically toward blister beetles, and P. coccinea has previously been observed feeding on Meloe brevicollis, Meloe proscarabaeus, and Meloe violaceus.
Reproduction
Sexual anatomy
Specifically, CTD interacts with specialized glands found only in specific male anatomical structures. In this glandular cranial apparatus, CTD is released as secretions that are consumed by females when males are courting them, inducing sexual intercourse. In males, this cranial structure is located in the frontal region and contains one indentation extending to a modest depth in between the eyes. However, in different genera of Pyrochroinae, the structure can vary from a single shallow indentation interocularly, to two matching indentations behind the eyes, to a bulging frontal ledge in the frontoclypeal region.
The head also contains distinctive cuticular ducts that enter and pass through the cuticular wall of the structure, both of which are involved in transporting the chemicals produced by the secretory cells. These two different kinds of ducts comprise two different types of ectodermal glands.
Copulation
After the male ingests CTD, it gradually approaches the female following a brief settling phase and presents itself face to face. The male and female proceed to approach each other and interact by touching their antennae, with the male positioning its antennae sideways. This exposes the cranial structure to the female, and she immediately tests it with her mouthparts. The male proceeds to mount the female and begin copulating, securely grasping the female's body at the level of the pronotum and elytra and pressing his open mandibles onto the pronotum. Following the conclusion of copulation, the male once again orients itself face to face with the female, repeatedly revealing his cranial structure and allowing the female to test the apparatus again and potentially consume the secretions. This repeated sampling of the apparatus is a relatively unusual post-copulatory behaviour, because when gifts are imparted by males, it typically occurs before or during copulation, rather than after.
Mating preferences & oviposition
Although this species does not have a significant preference for the tree species that it inhabits, mating occurs most frequently on the decomposing bark of broadleaf wooded plants, including oak and beech trees, as well as fallen timber. Reproduction takes place early in the season of the adult beetle's presence (mainly during the early spring months). Females lay their eggs in small groups, primarily beneath the wood and rarely on upright trunks, where they will have access to nearby insects to feed on.
| Biology and health sciences | Beetles (Coleoptera) | Animals |
3433115 | https://en.wikipedia.org/wiki/Chronostratigraphy | Chronostratigraphy | Chronostratigraphy is the branch of stratigraphy that studies the ages of rock strata in relation to time.
The ultimate aim of chronostratigraphy is to arrange the sequence of deposition and the time of deposition of all rocks within a geological region, and eventually, the entire geologic record of the Earth.
The standard stratigraphic nomenclature is a chronostratigraphic system based on palaeontological intervals of time defined by recognised fossil assemblages (biostratigraphy). The aim of chronostratigraphy is to give a meaningful age date to these fossil assemblage intervals and interfaces.
Methodology
Chronostratigraphy relies heavily upon isotope geology and geochronology to derive hard dating of known and well-defined rock units which contain the specific fossil assemblages defined by the stratigraphic system. In practice, as it is very difficult to isotopically date most fossils and sedimentary rocks directly, inferences must be made in order to arrive at an age date which reflects the beginning of the interval.
The methodology used is derived from the law of superposition and the principles of cross-cutting relationships.
Because igneous rocks occur at specific intervals in time and are essentially instantaneous on a geologic time scale, and because they contain mineral assemblages which may be dated more accurately and precisely by isotopic methods, the construction of a chronostratigraphic column relies heavily upon intrusive and extrusive igneous rocks.
Metamorphism, often associated with faulting, may also be used to bracket depositional intervals in a chronostratigraphic column. Metamorphic rocks can occasionally be dated, and this may give some limitations to the age in which a bed could have been laid down. For example, if a bed containing graptolites overlies crystalline basement at some point, dating the crystalline basement will give a maximum age of that fossil assemblage.
This process requires a considerable degree of effort and checking of field relationships and age dates. For instance, there may be many millions of years between a bed being laid down and an intrusive rock cutting it; the estimate of age must necessarily be between the oldest cross-cutting intrusive rock in the fossil assemblage and the youngest rock upon which the fossil assemblage rests.
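The bracketing logic described above can be sketched in a short snippet; the helper function and all age values are hypothetical, purely for illustration.

```python
# Hypothetical sketch: bracketing the depositional age of a fossil-bearing
# bed using dated cross-cutting relationships. Ages in millions of years
# (Ma); the numbers are illustrative, not real measurements.

def bracket_age(underlying_ages, crosscutting_ages):
    """Return (max_age, min_age) bounds for a bed that rests on dated
    basement rocks and is cut by dated intrusions.

    A bed must be younger than everything it rests upon, so its maximum
    possible age is the YOUNGEST underlying unit; it must be older than
    anything that cuts it, so its minimum age is the OLDEST intrusion."""
    max_age = min(underlying_ages)    # youngest rock the bed rests upon
    min_age = max(crosscutting_ages)  # oldest intrusion cutting the bed
    return max_age, min_age

# A graptolite-bearing bed resting on basement dated at 470 and 455 Ma,
# cut by dikes dated at 430 and 442 Ma:
print(bracket_age([470, 455], [430, 442]))  # (455, 442)
```

The same interval-intersection idea underlies real chronostratigraphic columns, just with many more field relationships to check.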
Units
Chronostratigraphic units, with examples:
eonothem – Phanerozoic
erathem – Paleozoic
system – Ordovician
series – Upper Ordovician
stage – Ashgill
Differences from geochronology
It is important not to confuse geochronologic and chronostratigraphic units. Chronostratigraphic units are geological material, so it is correct to say that fossils of the species Tyrannosaurus rex have been found in the Upper Cretaceous Series. Geochronological units are periods of time and take the same name as standard stratigraphic units but replacing the terms upper/lower with late/early. Thus it is also correct to say that Tyrannosaurus rex lived during the Late Cretaceous Epoch.
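The naming correspondence described above lends itself to a small lookup. The helper function below is a hypothetical illustration, though the term pairs (eonothem/eon, erathem/era, system/period, series/epoch, stage/age) and the upper/lower vs. late/early substitution are the standard ones.

```python
# The parallel vocabularies described above: chronostratigraphic units are
# bodies of rock, geochronologic units are intervals of time. The term
# mapping is standard; the conversion function is just an illustration.

CHRONO_TO_GEO = {
    "eonothem": "eon",
    "erathem": "era",
    "system": "period",
    "series": "epoch",
    "stage": "age",
}

def to_geochronologic(name):
    """Convert e.g. 'Upper Cretaceous Series' -> 'Late Cretaceous Epoch'."""
    words = name.split()
    # positional qualifier: rocks are Upper/Lower, time is Late/Early
    qualifier = {"Upper": "Late", "Lower": "Early"}.get(words[0], words[0])
    unit = CHRONO_TO_GEO[words[-1].lower()].capitalize()
    return " ".join([qualifier] + words[1:-1] + [unit])

print(to_geochronologic("Upper Cretaceous Series"))  # Late Cretaceous Epoch
print(to_geochronologic("Upper Ordovician Series"))  # Late Ordovician Epoch
```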
Chronostratigraphy is an important branch of stratigraphy because the age correlations derived are crucial in drawing accurate cross sections of the spatial organization of rocks and in preparing accurate paleogeographic reconstructions.
| Physical sciences | Stratigraphy | Earth science |
3433678 | https://en.wikipedia.org/wiki/Kipp%27s%20apparatus | Kipp's apparatus | Kipp's apparatus, also called a Kipp generator, is an apparatus designed for preparation of small volumes of gases. It was invented around 1844 by the Dutch pharmacist Petrus Jacobus Kipp and widely used in chemical laboratories and for demonstrations in schools into the second half of the 20th century.
It later fell out of use, at least in laboratories, because most gases then became available in small gas cylinders. These industrial gases are much purer and drier than those initially obtained from a Kipp apparatus without further processing.
Design and operation
The apparatus is usually made of glass, or sometimes of polyethylene, and consists of three vertically stacked chambers, roughly resembling a snowman. The upper chamber extends downward as a tube that passes through the middle chamber into the lower chamber. There is no direct path between the middle and upper chambers, but the middle chamber is separated from the lower chamber by a retention plate, such as a conical piece of glass with small holes, which permits the passage of liquid and gas. The solid material (e.g., iron sulfide) is placed into the middle chamber in lumps sufficiently large to avoid falling through the retention plate. The liquid, such as an acid, is poured into the top chamber. Although the acid is free to flow down through the tube into the bottom chamber, it is prevented from rising there by the pressure of the gas contained above it, which is able to leave the apparatus only by a stopcock near the top of the middle chamber. This stopcock may be opened, initially to permit the air to leave the apparatus, allowing the liquid in the bottom chamber to rise through the retention plate into the middle chamber and react with the solid material. Gas is evolved from this reaction, which may be drawn off through the stopcock as desired. When the stopcock is closed, the pressure of the evolved gas in the middle chamber rises and pushes the acid back down into the bottom chamber, until it is not in contact with the solid material anymore. At that point the chemical reaction comes to a stop, until the stopcock is opened again and more gas is drawn off.
Kipp generators only work properly in the described manner if the solid material is insoluble in the acid, as otherwise the dissolved material would continue to evolve gas even after the level dropped. The produced gas often requires further purification and/or drying, due to content of water vapor and possibly mist if the reaction is vigorous.
Examples of prepared gases and their precursors
For successful use in a Kipp's apparatus, the solid material has to be available in lumps large enough to stay on the retention plate without falling through its holes.
Hydrogen from iron flakes or zinc and hydrochloric acid or diluted sulfuric acid respectively.
Carbon dioxide from pieces of marble (calcium carbonate) and hydrochloric acid
Hydrogen sulfide from iron(II) sulfide and hydrochloric acid
Acetylene from calcium carbide and water
Methane from aluminium carbide and lukewarm water, deuterated methane (CD4) from aluminium carbide and heavy water
Chlorine from potassium permanganate, calcium hypochlorite, or manganese dioxide and hydrochloric acid; also from barium ferrate and hydrochloric acid
Oxygen from calcium hypochlorite and hydrogen peroxide with a bit of nitric acid; also from barium ferrate and dilute sulfuric acid
Ozone from barium peroxide and concentrated sulfuric acid
Nitric oxide from copper turnings and diluted nitric acid
Nitrogen dioxide from copper turnings and concentrated nitric acid
Ammonia from magnesium nitride and water, deuterated ammonia when heavy water is used; also from calcium oxide and solution of ammonium chloride
Carbon monoxide from pumice impregnated with oxalic acid and concentrated sulfuric acid
Sulfur dioxide from pumice impregnated with sodium metabisulfite (or sufficiently large pieces of sodium metabisulfite) and concentrated sulfuric acid, or from sodium hydrogen sulfite and concentrated sulfuric acid
Hydrogen chloride can be prepared from lumps of ammonium chloride and concentrated sulfuric acid
Generally, weak acidic gases can be released from their metal salts by dilute acids, and sometimes just with water:
Hydrogen sulfide from metal sulfides
Hydrogen selenide from selenides, e.g. aluminium selenide
Hydrogen telluride from tellurides, e.g. aluminium telluride
Some hydrocarbons can be prepared from certain carbides
Methane from methanides
Acetylene from acetylides
Methylacetylene and propadiene from sesquicarbides, e.g. magnesium carbide
Ammonia from certain nitrides, e.g. magnesium nitride
Phosphine from phosphides, e.g. calcium phosphide (often produced together with small amount of diphosphane)
Arsine from arsenides, e.g. zinc arsenide
Stibine from antimonides, e.g. magnesium antimonide
Silanes from some silicides (analogue of hydrocarbons, with number of silicon atoms corresponding to the silicide anion structure, sometimes more are produced from the same compound; e.g. silane, disilane and trisilane from decomposition of magnesium silicide)
Germanes from germanides, e.g. magnesium germanide
Stannanes from stannides, e.g. magnesium stannide
Boranes from borides (e.g. tetraborane from magnesium boride, aluminium boride, or beryllium boride and an acid)
Hydrogen fluoride can be made from concentrated sulfuric acid and e.g. calcium fluoride
Hydrogen bromide can be prepared from bromides with concentrated phosphoric acid (conc. sulfuric acid is too oxidizing)
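As a rough illustration of the scale of gas production from the pairings listed above, a minimal sketch for the marble/hydrochloric acid reaction (assuming pure calcium carbonate, excess acid, and ideal-gas behaviour):

```python
# Back-of-the-envelope yield estimate for marble + hydrochloric acid:
# CaCO3 + 2 HCl -> CaCl2 + H2O + CO2. Assumes pure calcium carbonate
# and ideal-gas behaviour; the 10 g figure is illustrative.

M_CACO3 = 100.09           # g/mol, molar mass of CaCO3
MOLAR_VOLUME_STP = 22.414  # L/mol for an ideal gas at 0 degC, 1 atm

def co2_volume_litres(marble_grams):
    """Litres of CO2 (at STP) evolved from a given mass of marble."""
    moles_caco3 = marble_grams / M_CACO3
    return moles_caco3 * MOLAR_VOLUME_STP  # 1 mol CO2 per mol CaCO3

v = co2_volume_litres(10.0)
print(round(v, 2))  # 2.24 L from 10 g of marble
```

A few grams of marble thus yield litres of gas, which is why lump size, rather than total mass, is the practical constraint in the apparatus.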
A version of the apparatus can be used for reaction between two liquid precursors. A mercury trap has to be added as a check valve, and the middle bulb is filled with an inert porous material, e.g. pumice, onto which one of the precursors is dropped.
Hydrogen chloride is prepared from hydrochloric acid and concentrated sulfuric acid
Hydrogen sulfide from concentrated sodium sulfide solution and diluted sulfuric acid
Sulfur dioxide from 40% solution of sodium metabisulfite and concentrated sulfuric acid
Nitric oxide from ferrous chloride in hydrochloric acid and 20% solution of sodium nitrite
Dinitrogen trioxide, aka nitrous anhydride, from 20% solution of sodium nitrite and concentrated sulfuric acid
Carbon monoxide, from concentrated formic acid and concentrated sulfuric acid.
Further gas treatments
The prepared gas is usually impure, contaminated with fine aerosol of the reagents and water vapor. The gases may need to be filtered, washed and dried before further use.
Hydrogen can be washed free of hydrogen sulfide, arsine and oxygen by successive bubbling through solutions of lead acetate, silver nitrate, and alkaline pyrogallic acid.
Acidic gases (e.g. hydrogen sulfide, hydrogen chloride, sulfur dioxide) can be dried with concentrated sulfuric acid, or with phosphorus pentoxide. Basic gases (e.g. ammonia) can be dried with calcium oxide, sodium hydroxide or soda lime.
Disposal of the gases can be done by burning the flammable ones (carbon monoxide, hydrogen, hydrocarbons), absorbing them in water (ammonia, hydrogen sulfide, sulfur dioxide, chlorine), or reacting them with a suitable reagent.
Variants
Many variants of the gas production apparatus exist. Some are suitable for production of larger amounts of gases (Gay-Lussac and Verkhovsky), some for smaller amounts (Kiryushkin, U-tube).
A Döbereiner's lamp is a small modified Kipp's apparatus for the production of hydrogen. The hydrogen is led over a platinum sponge catalyst, where it reacts with atmospheric oxygen, heating the catalyst until the gas ignites, producing a gentle flame. It was commercialized for lighting fires and pipes. It is said that in the 1820s over a million of these "tinderboxes" ("Feuerzeug") were sold.
| Physical sciences | Other reactions | Chemistry |
3436583 | https://en.wikipedia.org/wiki/Atomic%20packing%20factor | Atomic packing factor | In crystallography, atomic packing factor (APF), packing efficiency, or packing fraction is the fraction of volume in a crystal structure that is occupied by constituent particles. It is a dimensionless quantity and always less than unity. In atomic systems, by convention, the APF is determined by assuming that atoms are rigid spheres. The radius of the spheres is taken to be the maximum value such that the atoms do not overlap. For one-component crystals (those that contain only one type of particle), the packing fraction is represented mathematically by

APF = (N_particle × V_particle) / V_unit cell,

where N_particle is the number of particles in the unit cell, V_particle is the volume of each particle, and V_unit cell is the volume occupied by the unit cell. It can be proven mathematically that for one-component structures, the most dense arrangement of atoms has an APF of about 0.74 (see Kepler conjecture), obtained by the close-packed structures. For multiple-component structures (such as with interstitial alloys), the APF can exceed 0.74.
The atomic packing factor of a unit cell is relevant to the study of materials science, where it explains many properties of materials. For example, metals with a high atomic packing factor will have a higher "workability" (malleability or ductility), similar to how a road is smoother when the stones are closer together, allowing metal atoms to slide past one another more easily.
Single component crystal structures
Common sphere packings taken on by atomic systems are listed below with their corresponding packing fraction.
Hexagonal close-packed (HCP): 0.74
Face-centered cubic (FCC): 0.74 (also called cubic close-packed, CCP)
Body-centered cubic (BCC): 0.68
Simple cubic: 0.52
Diamond cubic: 0.34
The majority of metals take on either the HCP, FCC, or BCC structure.
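The listed packing fractions follow directly from unit-cell geometry and can be checked with a short sketch (radii set to 1; the touching condition that fixes the cell edge is noted per structure):

```python
# Verifying the packing fractions listed above from unit-cell geometry.
# For each cubic structure: n atoms per conventional cell, and the cell
# edge a expressed in atomic radii r (here r = 1).
from math import pi, sqrt

def apf(n_atoms, cell_edge, r=1.0):
    """Atomic packing factor: n * sphere volume / cell volume."""
    return n_atoms * (4.0 / 3.0) * pi * r**3 / cell_edge**3

packings = {
    # structure: apf(atoms per cell, edge length in units of r)
    "simple cubic":        apf(1, 2.0),            # touch along edge: a = 2r
    "body-centered cubic": apf(2, 4.0 / sqrt(3)),  # touch along body diagonal: sqrt(3) a = 4r
    "face-centered cubic": apf(4, 2.0 * sqrt(2)),  # touch along face diagonal: sqrt(2) a = 4r
    "diamond cubic":       apf(8, 8.0 / sqrt(3)),  # touch along 1/4 of body diagonal
}
for name, fraction in packings.items():
    print(f"{name}: {fraction:.2f}")  # 0.52, 0.68, 0.74, 0.34
```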
Simple cubic
For a simple cubic packing, the number of atoms per unit cell is one. The side of the unit cell is of length 2r, where r is the radius of the atom, so the APF is

APF = (4/3)πr³ / (2r)³ = π/6 ≈ 0.52.
Face-centered cubic
For a face-centered cubic unit cell, the number of atoms is four. A line can be drawn from the top corner of a cube diagonally to the bottom corner on the same face of the cube; this face diagonal is equal to 4r. Using geometry, the side length a can be related to r as:

a = 2√2 r.

Knowing this and the formula for the volume of a sphere, it becomes possible to calculate the APF as follows:

APF = 4 × (4/3)πr³ / (2√2 r)³ = π/(3√2) ≈ 0.74.
Body-centered cubic
The primitive unit cell for the body-centered cubic crystal structure contains fractions taken from nine atoms (if the particles in the crystal are atoms): one-eighth of an atom on each corner of the cube and one whole atom in the center. Because the volume of each of the eight corner atoms is shared between eight adjacent cells, each BCC cell contains the equivalent volume of two atoms (the central atom plus the eight corner eighths).
Each corner atom touches the center atom. A line that is drawn from one corner of the cube through the center to the opposite corner passes through 4r, where r is the radius of an atom. By geometry, the length of this body diagonal is √3 a. Therefore, the length of each side of the BCC structure can be related to the radius of the atom by

a = 4r/√3.

Knowing this and the formula for the volume of a sphere, it becomes possible to calculate the APF as follows:

APF = 2 × (4/3)πr³ / (4r/√3)³ = √3 π/8 ≈ 0.68.
Hexagonal close-packed
For the hexagonal close-packed structure the derivation is similar. Here the unit cell (equivalent to 3 primitive unit cells) is a hexagonal prism containing six atoms (if the particles in the crystal are atoms). Indeed, three are the atoms in the middle layer (inside the prism); in addition, for the top and bottom layers (on the bases of the prism), the central atom is shared with the adjacent cell, and each of the six atoms at the vertices is shared with other six adjacent cells. So the total number of atoms in the cell is 3 + (1/2)×2 + (1/6)×6×2 = 6. Each atom touches twelve other atoms. Now let a be the side length of the base of the prism and c be its height. The latter is twice the distance between adjacent layers, i. e., twice the height of the regular tetrahedron whose vertices are occupied by (say) the central atom of the lower layer, two adjacent non-central atoms of the same layer, and one atom of the middle layer "resting" on the previous three. Obviously, the edge of this tetrahedron is a. If a = 2r, then its height can be easily calculated to be √(2/3) a, and, therefore, c = 2√(2/3) a = 4√(2/3) r. So the volume of the hcp unit cell turns out to be (3√3/2) a²c, that is 24√2 r³.
It is then possible to calculate the APF as follows:

APF = 6 × (4/3)πr³ / (24√2 r³) = π/(3√2) ≈ 0.74.
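As a numerical check of the HCP derivation above (a minimal sketch with r = 1):

```python
# Checking the HCP numbers: with a = 2r the cell height is c = 2*sqrt(2/3)*a,
# the hexagonal prism volume (3*sqrt(3)/2)*a^2*c equals 24*sqrt(2)*r^3,
# and six atoms per cell give APF = pi/(3*sqrt(2)) ~ 0.74.
from math import pi, sqrt

r = 1.0
a = 2 * r                      # atoms touch within a close-packed layer
c = 2 * sqrt(2.0 / 3.0) * a    # twice the height of the inscribed tetrahedron
cell_volume = (3 * sqrt(3) / 2) * a**2 * c
apf_hcp = 6 * (4.0 / 3.0) * pi * r**3 / cell_volume

print(round(cell_volume / sqrt(2), 1))  # 24.0  (i.e. volume = 24*sqrt(2)*r^3)
print(round(apf_hcp, 4))                # 0.7405
```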
| Physical sciences | Crystallography | Physics |
3439285 | https://en.wikipedia.org/wiki/Forbidden%20mechanism | Forbidden mechanism | In spectroscopy, a forbidden mechanism (forbidden transition or forbidden line) is a spectral line associated with absorption or emission of photons by atomic nuclei, atoms, or molecules which undergo a transition that is not allowed by a particular selection rule but is allowed if the approximation associated with that rule is not made. For example, in a situation where, according to usual approximations (such as the electric dipole approximation for the interaction with light), the process cannot happen, but at a higher level of approximation (e.g. magnetic dipole, or electric quadrupole) the process is allowed but at a low rate.
An example is phosphorescent glow-in-the-dark materials, which absorb light and form an excited state whose decay involves a spin flip, and is therefore forbidden by electric dipole transitions. The result is emission of light slowly over minutes or hours.
Should an atomic nucleus, atom or molecule be raised to an excited state and should the transitions be nominally forbidden, then there is still a small probability of their spontaneous occurrence. More precisely, there is a certain probability that such an excited entity will make a forbidden transition to a lower energy state per unit time; by definition, this probability is much lower than that for any transition permitted or allowed by the selection rules. Therefore, if a state can de-excite via a permitted transition (or otherwise, e.g. via collisions) it will almost certainly do so before any transition occurs via a forbidden route. Nevertheless, most forbidden transitions are only relatively unlikely: states that can only decay in this way (so-called meta-stable states) usually have lifetimes on the order of milliseconds to seconds, compared to less than a microsecond for decay via permitted transitions. In some radioactive decay systems, multiple levels of forbiddenness can stretch life times by many orders of magnitude for each additional unit by which the system changes beyond what is most allowed under the selection rules. Such excited states can last years, or even for many billions of years (too long to have been measured).
In radioactive decay
Gamma decay
The most common mechanism for suppression of the rate of gamma decay of excited atomic nuclei, and thus for making possible the existence of a metastable isomer of the nucleus, is lack of a decay route for the excited state that will change nuclear angular momentum (along any given direction) by the most common (allowed) amount of 1 quantum unit of spin angular momentum. Such a change is necessary to emit a gamma-ray photon, which has a spin of 1 unit in this system. Integral changes of 2, 3, 4, and more units in angular momentum are possible (the emitted photons carry off the additional angular momentum), but changes of more than 1 unit are known as forbidden transitions. Each degree of forbiddenness (each additional unit of spin change larger than 1 that the emitted gamma ray must carry) inhibits the decay rate by about 5 orders of magnitude. The highest known spin change of 8 units occurs in the decay of Ta-180m, which suppresses its decay by a factor of 10^35 relative to that associated with 1 unit, so that instead of a natural gamma decay half-life of 10^−12 seconds, it has a half-life of more than 10^23 seconds, or at least 3 × 10^15 years, and thus has yet to be observed to decay.
Although gamma decays with nuclear angular momentum changes of 2, 3, 4, etc., are forbidden, they are only relatively forbidden, and do proceed, but with a slower rate than the normal allowed change of 1 unit. However, gamma emission is absolutely forbidden when the nucleus begins and ends in a zero-spin state, as such an emission would not conserve angular momentum. These transitions cannot occur by gamma decay, but must proceed by another route, such as beta decay in some cases, or internal conversion where beta decay is not favored.
Beta decay
Beta decay is classified according to the -value of the emitted radiation. Unlike gamma decay, beta decay may proceed from a nucleus with a spin of zero and even parity to a nucleus also with a spin of zero and even parity (Fermi transition). This is possible because the electron and neutrino emitted may be of opposing spin (giving a radiation total angular momentum of zero), thus preserving angular momentum of the initial state even if the nucleus remains at spin-zero before and after emission. This type of emission is super-allowed meaning that it is the most rapid type of beta decay in nuclei that are susceptible to a change in proton/neutron ratios that accompanies a beta decay process.
The next possible total angular momentum of the electron and neutrino emitted in beta decay is a combined spin of 1 (electron and neutrino spinning in the same direction), and is allowed. This type of emission (Gamow-Teller transition) changes nuclear spin by 1 to compensate. States involving higher angular momenta of the emitted radiation (2, 3, 4, etc.) are forbidden and are ranked in degree of forbiddenness by their increasing angular momentum.
Specifically, when L > 0 the decay is referred to as forbidden. Nuclear selection rules require L-values greater than two to be accompanied by changes in both nuclear spin (ΔI) and parity (Δπ). The selection rules for the Lth forbidden transitions are

ΔI = L − 1, L, L + 1;  Δπ = (−1)^L,

where Δπ = 1 or −1 corresponds to no parity change or parity change, respectively. As noted, the special case of a Fermi 0+ → 0+ transition (which in gamma decay is absolutely forbidden) is referred to as super-allowed for beta decay, and proceeds very quickly if beta decay is possible. The ΔI and Δπ values for the first few values of L are:

Superallowed (0+ → 0+): ΔI = 0; no parity change
Allowed (L = 0): ΔI = 0, 1; no parity change
First forbidden (L = 1): ΔI = 0, 1, 2; parity change
Second forbidden (L = 2): ΔI = 1, 2, 3; no parity change
Third forbidden (L = 3): ΔI = 2, 3, 4; parity change
As with gamma decay, each degree of increasing forbiddenness increases the half-life of the beta decay process involved by a factor of about 4 to 5 orders of magnitude.
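As a check on the selection rules for nth forbidden beta transitions (ΔJ = n − 1, n, n + 1; Δπ = (−1)^n), a short sketch that enumerates them:

```python
def selection_rules(n):
    """Allowed spin changes and parity behavior for the n-th forbidden
    beta transition: dJ = n - 1, n, n + 1 and parity factor (-1)**n."""
    delta_j = sorted({abs(n - 1), n, n + 1})   # non-negative dJ values
    parity_change = (n % 2 == 1)               # parity flips for odd n
    return delta_j, parity_change

for n in range(1, 4):
    dj, flip = selection_rules(n)
    label = "parity change" if flip else "no parity change"
    print(f"{n}th forbidden: dJ in {dj}, {label}")

# Each added degree of forbiddenness slows the decay by roughly
# 4-5 orders of magnitude, i.e. half-life multipliers of ~1e4-1e5 per step.
```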
Double beta decay has been observed in the laboratory. Geochemical experiments have also found this rare type of forbidden decay in several isotopes, with mean half-lives over 10^18 yr.
In solid-state physics
Forbidden transitions in rare-earth atoms such as erbium and neodymium make them useful as dopants for solid-state lasing media. In such media, the atoms are held in a matrix which keeps them from de-exciting by collision, and the long half-life of their excited states makes them easy to optically pump, creating a large population of excited atoms. Neodymium-doped glass derives its unusual coloration from forbidden f-f transitions within the neodymium atom, and is used in extremely high-power solid-state lasers. Bulk semiconductor transitions can also be forbidden by symmetry, which changes the functional form of the absorption spectrum, as can be seen in a Tauc plot.
In astrophysics and atomic physics
Forbidden emission lines have been observed in extremely low-density gases and plasmas, either in outer space or in the extreme upper atmosphere of the Earth. In space environments, densities may be only a few atoms per cubic centimetre, making atomic collisions unlikely. Under such conditions, once an atom or molecule has been excited for any reason into a meta-stable state, then it is almost certain to decay by emitting a forbidden-line photon. Since meta-stable states are rather common, forbidden transitions account for a significant percentage of the photons emitted by the ultra-low density gas in space. Forbidden transitions in highly charged ions resulting in the emission of visible, vacuum-ultraviolet, soft x-ray and x-ray photons are routinely observed in certain laboratory devices such as electron beam ion traps and ion storage rings, where in both cases residual gas densities are sufficiently low for forbidden line emission to occur before atoms are collisionally de-excited. Using laser spectroscopy techniques, forbidden transitions are used to stabilize atomic clocks and quantum clocks that have the highest accuracies currently available.
Forbidden lines of nitrogen ([N II] at 654.8 and 658.4 nm), sulfur ([S II] at 671.6 and 673.1 nm), and oxygen ([O II] at 372.7 nm, and [O III] at 495.9 and 500.7 nm) are commonly observed in astrophysical plasmas. These lines are important to the energy balance of planetary nebulae and H II regions. The forbidden 21-cm hydrogen line is particularly important for radio astronomy as it allows very cold neutral hydrogen gas to be seen. Also, the presence of [O I] and [S II] forbidden lines in the spectra of T Tauri stars implies low gas density.
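Wavelengths like these convert to frequencies via ν = c/λ; a quick check, using an approximate value of c and the more precise 21.106 cm wavelength for the hydrogen line:

```python
C = 2.998e8  # approximate speed of light, m/s

def frequency_hz(wavelength_m):
    """nu = c / lambda"""
    return C / wavelength_m

# 21-cm neutral hydrogen line (more precisely 21.106 cm): ~1420 MHz
print(f"H I: {frequency_hz(0.21106) / 1e6:.0f} MHz")
# [O III] 500.7 nm, in the visible band: ~599 THz
print(f"[O III]: {frequency_hz(500.7e-9) / 1e12:.0f} THz")
```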
Notation
Forbidden line transitions are noted by placing square brackets around the atomic or molecular species in question, e.g. [O III] or [S II].
Lantern

A lantern is a source of lighting, often portable. It typically features a protective enclosure for the light source (historically usually a candle, a wick in oil, or a thermoluminescent mesh, and often a battery-powered light in modern times) to make it easier to carry and hang up, and to make it more reliable outdoors or in drafty interiors. Lanterns may also be used for signaling, as torches, or as general light sources outdoors.
Uses
The lantern enclosure was primarily used to prevent a burning candle or wick being extinguished by wind, rain or other causes. Some antique lanterns have only a metal grid, indicating that their function was to protect the candle or wick during transportation, while letting excess heat escape from the top to prevent accidental fires.
Another important function was to reduce the risk of fire should a spark leap from the flame or the light be dropped. This was especially important below deck on ships: a fire on a wooden ship was a major catastrophe. Use of unguarded lights was taken so seriously that obligatory use of lanterns, rather than unprotected flames, below decks was written into one of the few known remaining examples of a pirate code, on pain of severe punishment.
Lanterns may also be used for signaling. In naval operations, ships used lights to communicate at least as far back as the Middle Ages; the use of a lantern that blinks code to transmit a message dates to the mid-1800s. In railroad operations, lanterns have multiple uses. Permanent lanterns on poles are used to signal trains about the operational status of the track ahead, sometimes with color gels in front of the light to signify stop, etc. Historically, a flagman at a level crossing used a lantern to stop cars and other vehicular traffic before a train arrived. Lanterns also provided a means to signal from train-to-train or from station-to-train.
A "dark lantern" was a candle lantern with a sliding shutter so that a space could be conveniently made dark without extinguishing the candle. For example, in the Sherlock Holmes story "The Red-Headed League", the detective and police make their way down to a bank vault by lantern light but then put a 'screen over that dark lantern' in order to wait in the dark for thieves to finish tunneling. This type of lantern could also preserve the light source for sudden use when needed.
Lanterns may be used in religious observances. In the Eastern Orthodox Church, lanterns are used in religious processions and liturgical entrances, usually coming before the processional cross. Lanterns are also used to transport the Holy Fire from the Church of the Holy Sepulchre on Great Saturday during Holy Week.
Lanterns are used in many Asian festivals. During the Ghost Festival, lotus shaped lanterns are set afloat in rivers and seas to symbolically guide the lost souls of forgotten ancestors to the afterlife. During the Lantern Festival, the displaying of many lanterns is still a common sight on the 15th day of the first lunar month throughout China. During other Chinese festivities, kongming lanterns (sky lanterns) can be seen floating high into the air. However, some jurisdictions, such as in Canada, some states in the U.S., and parts of India, as well as some organizations, ban the use of sky lanterns because of concerns about fire and safety.
The term "lantern" can be used more generically to mean a light source, or the enclosure for a light source, even if it is not portable. Decorative lanterns exist in a wide range of designs. Some hang from buildings, such as street lights enclosed in glass panes. Others are placed on or just above the ground; low-light varieties can function as decoration or landscape lighting and can be a variety of colors and sizes. The housing for the top lamp and lens section of a lighthouse may be called a lantern.
Etymology
The word lantern comes via French from a Latin word meaning "lamp, torch," possibly itself derived from Greek.
An alternate historical spelling was "lanthorn", possibly derived from the ancient use of animal horn to cover window apertures, but allow in light. A lanthorn might have been significantly larger and brighter than a lantern.
Construction
Lanterns were usually made from a metal frame with several sides (usually four, but up to eight) or round, commonly with a hook or a hoop of metal on top. Windows of some translucent material may be fitted in the sides; these are now usually glass or plastic but formerly were thin sheets of animal horn, or tinplate punched with holes or decorative patterns.
Paper lanterns are made in societies around the world.
A lantern generally contains a burning light source: a candle, liquid oil with a wick, or gas with a mantle. The ancient Chinese sometimes captured fireflies in transparent or semi-transparent containers and used them as (short-term) lanterns, and use of fireflies in transparent containers was also a widespread practice in ancient India; however, since these were short-term solutions, the use of fire torches was more prevalent.
Modern varieties often place an electric light in a decorative glass case.
History
In 1417, the Mayor of London ordered that all homes must hang lanterns outdoors after nightfall during the winter months. This marked the first organized public street lighting.
Lanterns have been used functionally, for light rather than decoration, since antiquity. Some used a wick in oil, while others were essentially protected candle-holders. Before the development of glass sheets, animal horns were scraped thin and flattened to create a translucent window.
Beginning in the Middle Ages, Middle Eastern towns hired watchmen to patrol the streets at night as a crime deterrent. Each watchman carried a lantern or oil lamp against the darkness. The practice continued up through at least the 18th century.
In March 1764 and twice in October 1764, George Allsopp, a British-born Canadian, was arrested in Quebec for violating an order to carry lanterns during the night. There was violence every time he was arrested and Allsopp would denounce the military. In October he prosecuted the soldiers involved in his arrests.
On April 18, 1775, Paul Revere's midnight ride took place after two lanterns were held up in the Old North Church to signal to patriots in Charlestown that the British troops were crossing the Charles River to disarm the rebel colonial militias. The Battles of Lexington and Concord occurred the day after, on April 19, starting the American Revolution.
Public spaces became increasingly lit with lanterns in the 1500s, especially following the invention of lanterns with glass windows, which greatly improved the quantity of light. In 1588 the Parisian Parlement decreed that a torch be installed and lit at each intersection, and in 1594 the police changed this to lanterns. Beginning in 1667, during the reign of King Louis XIV, thousands of street lights were installed in Parisian streets and intersections. Under this system, streets were lit with lanterns suspended on a cord over the middle of the street; as an English visitor described in 1698, "The streets are lit all winter and even during the full moon!" In London, a diarist wrote in 1712 that "All the way, quite through Hyde Park to the Queen's Palace at Kensington, lanterns were placed for illuminating the roads on dark nights."
Modern lanterns
Fueled lanterns
All fueled lanterns are somewhat hazardous owing to the danger of handling flammable and toxic fuel, danger of fire or burns from the high temperatures involved, and potential dangers from carbon monoxide poisoning if used in an enclosed environment.
Simple wick lanterns remain available. They are cheap and durable and usually can provide enough light for reading. They require periodic trimming of the wick and regular cleaning of soot from the inside of the glass chimney.
Mantle lanterns use a woven ceramic impregnated gas mantle to accept and re-radiate heat as visible light from a flame. The mantle does not burn (but the cloth matrix carrying the ceramic must be "burned out" with a match prior to its first use). When heated by the operating flame the mantle becomes incandescent and glows brightly. The heat may be provided by a gas, by kerosene, or by a pressurized liquid such as "white gas", which is essentially naphtha. For protection from the high temperatures produced and to stabilize the airflow, a cylindrical glass shield called the globe or chimney is placed around the mantle.
Manually pressurized lanterns using white gas (also marketed as Coleman fuel or "Camp Fuel") are manufactured by the Coleman Company in one- and two-mantle models. Some models are dual-fuel and can also use gasoline. These are being supplanted by battery-powered fluorescent and LED models, which are safer in the hands of young people and inside tents. Liquid-fuel lanterns remain popular where the fuel is easily obtained and in common use.
Many portable mantle-type fuel lanterns now use fuel gases that become liquid when compressed, such as propane, either alone or combined with butane. Such lamps usually use a small disposable steel container to provide the fuel. The ability to refuel without liquid fuel handling increases safety. Additional fuel supplies for such lamps have an indefinite shelf life if the containers are protected from moisture (which can cause corrosion of the container) and excess heat.
Electric lanterns
Lanterns designed as permanently mounted electric lighting fixtures are used in interior, landscape, and civic lighting applications. Styles can evoke former eras, unify street furniture themes, or enhance aesthetic considerations. They are manufactured for use with various wired voltage supplies.
Various battery types are used in portable light sources. They are more convenient, safer, and produce less heat than combustion lights. Solar-powered lanterns have become popular in developing countries, where they provide a safer and cheaper alternative to kerosene lamps.
Lanterns utilizing LEDs are popular as they are more energy-efficient and rugged than other types, and prices of LEDs suitable for lighting have dropped.
Some rechargeable fluorescent lanterns may be plugged in at all times and may be set up to illuminate upon a power failure, a useful feature in some applications. During extensive power failures (or for remote use), supplemental recharging may be provided from an automobile's 12-volt electrical system or from a modest solar-powered charger.
Gallery
Hand-held lanterns
Paper lanterns
Exterior lighting
In popular culture
The derived term "lantern jaw[ed]" is used in two quite different, still current, ways, comparing faces with different types of lantern. According to the Oxford English Dictionary, it refers to "long thin jaws, giving a hollow appearance to the cheek"; this use was recorded in 1361, referring to a lantern with concave horn sides before glass was in use. Another meaning of "lantern jaw" compares a lantern with a jutting base – such as the 15th-century example above – to the face of a person with the extended chin of mandibular prognathism; this condition was also known as Habsburg jaw or Habsburg lip, as it was a hereditary feature of the House of Habsburg.
Raise the Red Lantern, a 1991 Chinese film, prominently features lanterns as a motif.
"The Tell-Tale Heart", a short story by Edgar Allan Poe, features the use of a dark lantern by the protagonist to shine a single ray of light on his victim's eye.
Heart sounds

Heart sounds are the noises generated by the beating heart and the resultant flow of blood through it. Specifically, the sounds reflect the turbulence created when the heart valves snap shut. In cardiac auscultation, an examiner may use a stethoscope to listen for these unique and distinct sounds that provide important auditory data regarding the condition of the heart.
In healthy adults, there are two normal heart sounds, often described as a lub and a dub that occur in sequence with each heartbeat. These are the first heart sound (S1) and second heart sound (S2),
produced by the closing of the atrioventricular valves and semilunar valves, respectively. In addition to these normal sounds, a variety of other sounds may be present including heart murmurs, adventitious sounds, and gallop rhythms S3 and S4.
Heart murmurs are generated by turbulent flow of blood. For a murmur to be audible, a pressure difference of at least 30 mmHg between the chambers is generally required; in disease, the pressure-dominant chamber shunts blood into the non-dominant chamber, producing a left-to-right or right-to-left shunt depending on which side's pressure dominates. Turbulence may occur inside or outside the heart; if it occurs outside the heart, the sound is called a bruit or vascular murmur. Murmurs may be physiological (benign) or pathological (abnormal). Abnormal murmurs can be caused by stenosis restricting the opening of a heart valve, resulting in turbulence as blood flows through it. Abnormal murmurs may also occur with valvular insufficiency (regurgitation), which allows backflow of blood when the incompetent valve closes with only partial effectiveness. Different murmurs are audible in different parts of the cardiac cycle, depending on the cause of the murmur.
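The pressure-gradient rule above can be expressed as a toy function; the 30 mmHg threshold is the figure quoted in the text, while the function and chamber names are illustrative simplifications, not clinical logic:

```python
def murmur_from_gradient(p_a_mmhg, p_b_mmhg, threshold_mmhg=30):
    """Toy model: audibility and shunt direction from the pressure
    difference between chamber A and chamber B (names are generic)."""
    gradient = p_a_mmhg - p_b_mmhg
    if abs(gradient) < threshold_mmhg:
        return "no audible murmur expected"
    return "shunt from A to B" if gradient > 0 else "shunt from B to A"

print(murmur_from_gradient(120, 25))  # large A-side dominance
print(murmur_from_gradient(40, 35))   # gradient below threshold
```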
Primary heart sounds
Normal heart sounds are associated with heart valves closing:
First heart sound
The first heart sound, or S1, forms the "lub" of "lub-dub" and is composed of components M1 (mitral valve closure) and T1 (tricuspid valve closure). Normally M1 slightly precedes T1. It is caused by the closure of the atrioventricular valves, i.e. the tricuspid and mitral (bicuspid) valves, at the beginning of ventricular contraction, or systole. When the ventricles begin to contract, so do the papillary muscles in each ventricle. The papillary muscles are attached to the cusps or leaflets of the tricuspid and mitral valves via the chordae tendineae (heart strings). When the papillary muscles contract, the chordae tendineae become tense and thereby prevent the backflow of blood into the lower-pressure environment of the atria. The chordae tendineae act a bit like the strings on a parachute: they allow the leaflets of the valve to balloon up into the atria slightly, but not so much as to evert the cusp edges and allow backflow of blood. It is the pressure created by ventricular contraction that closes the valve, not the papillary muscles themselves. The contraction of the ventricle begins just prior to the AV valves closing and prior to the opening of the semilunar valves. The sudden tensing of the chordae tendineae and the squeezing of the ventricles against the closed semilunar valves send blood rushing back toward the atria, and the parachute-like valves catch the rush of blood in their leaflets, causing the valves to snap shut. The S1 sound results from reverberation within the blood associated with the sudden block of flow reversal by the valves. A delay of T1 greater than normal produces the split S1 heard in right bundle branch block.
Second heart sound
The second heart sound, or S2, forms the "dub" of "lub-dub" and is composed of components A2 (aortic valve closure) and P2 (pulmonary valve closure). Normally A2 precedes P2 especially during inspiration where a split of S2 can be heard. It is caused by the closure of the semilunar valves (the aortic valve and pulmonary valve) at the end of ventricular systole and the beginning of ventricular diastole. As the left ventricle empties, its pressure falls below the pressure in the aorta. Aortic blood flow quickly reverses back toward the left ventricle, catching the pocket-like cusps of the aortic valve, and is stopped by aortic valve closure. Similarly, as the pressure in the right ventricle falls below the pressure in the pulmonary artery, the pulmonary valve closes. The S2 sound results from reverberation within the blood associated with the sudden block of flow reversal.
Splitting of S2, also known as physiological splitting, normally occurs during inhalation because the decrease in intrathoracic pressure increases the time needed for pulmonary pressure to exceed right ventricular pressure. A widely split S2 can be associated with several different cardiovascular conditions; the split may be wide and variable, or wide and fixed. A wide and variable split occurs in right bundle branch block, pulmonary stenosis, pulmonary hypertension, and ventricular septal defects. A wide and fixed splitting of S2 occurs in atrial septal defect. The pulmonary component of S2 (P2) will be accentuated (loud P2) in pulmonary hypertension and pulmonary embolism. S2 becomes softer in aortic stenosis.
Extra heart sounds
The rarer extra heart sounds form gallop rhythms and are heard in both normal and abnormal situations.
Third heart sound
The third heart sound, or S3 is rarely heard, and is also called a protodiastolic gallop, ventricular gallop, or informally the "Kentucky" gallop as an onomatopoeic reference to the rhythm and stress of S1 followed by S2 and S3 together (S1=Ken; S2=tuck; S3=y).
"lub-dub-ta" or "slosh-ing-in" If new, indicates heart failure or volume overload.
It occurs at the beginning of diastole after S2 and is lower in pitch than S1 or S2 as it is not of valvular origin. The third heart sound is benign in youth, some trained athletes, and sometimes in pregnancy but if it re-emerges later in life it may signal cardiac problems, such as a failing left ventricle as in dilated congestive heart failure (CHF). S3 is thought to be caused by the oscillation of blood back and forth between the walls of the ventricles initiated by blood rushing in from the atria. The reason the third heart sound does not occur until the middle third of diastole is probably that during the early part of diastole, the ventricles are not filled sufficiently to create enough tension for reverberation.
It may also be a result of tensing of the chordae tendineae during rapid filling and expansion of the ventricle. In other words, an S3 heart sound indicates increased volume of blood within the ventricle. An S3 heart sound is best heard with the bell-side of the stethoscope (used for lower frequency sounds). A left-sided S3 is best heard in the left lateral decubitus position and at the apex of the heart, which is normally located in the 5th left intercostal space at the midclavicular line. A right-sided S3 is best heard at the lower left sternal border. The way to distinguish between left and right-sided S3 is to observe whether it increases in intensity with inhalation or exhalation. A right-sided S3 will increase on inhalation, while a left-sided S3 will increase on exhalation.
S3 can be a normal finding in young patients but is generally pathologic over the age of 40. The most common cause of pathologic S3 is congestive heart failure.
Fourth heart sound
The fourth heart sound, or S4, when audible in an adult, is called a presystolic gallop or atrial gallop. This gallop is produced by the sound of blood being forced into a stiff or hypertrophied ventricle.
"ta-lub-dub" or "a-stiff-wall"
It is a sign of a pathologic state, usually a failing or hypertrophic left ventricle, as in systemic hypertension, severe valvular aortic stenosis, and hypertrophic cardiomyopathy. The sound occurs just after atrial contraction at the end of diastole and immediately before S1, producing a rhythm sometimes referred to as the "Tennessee" gallop where S4 represents the "Ten-" syllable. It is best heard at the cardiac apex with the patient in the left lateral decubitus position and holding their breath. The combined presence of S3 and S4 is a quadruple gallop, also known as the "Hello-Goodbye" gallop. At rapid heart rates, S3 and S4 may merge to produce a summation gallop, sometimes referred to as S7.
Atrial contraction must be present for production of an S4. It is absent in atrial fibrillation and in other rhythms in which atrial contraction does not precede ventricular contraction.
Murmurs
Heart murmurs are produced as a result of turbulent flow of blood strong enough to produce audible noise. They are usually heard as a whooshing sound. The term murmur only refers to a sound believed to originate within blood flow through or near the heart; rapid blood velocity is necessary to produce a murmur. Most heart problems do not produce any murmur and most valve problems also do not produce an audible murmur.
Murmurs can be heard in many situations in adults without major congenital heart abnormalities:
Regurgitation through the mitral valve is by far the most commonly heard murmur, producing a pansystolic/holosystolic murmur which is sometimes fairly loud to a practiced ear, even though the volume of regurgitant blood flow may be quite small. Yet, though obvious using echocardiography visualization, probably about 20% of cases of mitral regurgitation do not produce an audible murmur.
Stenosis of the aortic valve is typically the next most common heart murmur, a systolic ejection murmur. This is more common in older adults or in those individuals having a two-leaflet, not a three-leaflet, aortic valve.
Regurgitation through the aortic valve, if marked, is sometimes audible to a practiced ear with a high-quality, especially electronically amplified, stethoscope. Generally, this murmur is very rarely heard, even though aortic valve regurgitation is not so rare. Aortic regurgitation, though obvious using echocardiography visualization, usually does not produce an audible murmur.
Stenosis of the mitral valve, if severe, also rarely produces an audible, low-frequency, soft rumbling murmur, best recognized by a practiced ear using a high-quality, especially electronically amplified, stethoscope.
Other audible murmurs are associated with abnormal openings between the left ventricle and right heart or from the aortic or pulmonary arteries back into a lower pressure heart chamber.
Though several different cardiac conditions can cause heart murmurs, the murmurs can change markedly with the severity of the cardiac disease. An astute physician can sometimes diagnose cardiac conditions with some accuracy based largely on the murmur, related physical examination, and experience with the relative frequency of different heart conditions. However, with the advent of better quality and wider availability of echocardiography and other techniques, heart status can be recognized and quantified much more accurately than formerly possible with only a stethoscope, examination, and experience. Another advantage to the use of the echocardiogram is that the devices can be handheld.
Effects of breathing
Inhalation decreases intrathoracic pressure, which allows more venous blood to return to the right heart (pulling blood into the right side of the heart via a vacuum-like effect). Therefore, right-sided heart murmurs generally increase in intensity with inhalation. The decreased (more negative) intrathoracic pressure has the opposite effect on the left side of the heart, making it harder for blood to exit into the circulation. Therefore, left-sided murmurs generally decrease in intensity during inhalation. Raising a patient's legs to a 45-degree angle while the patient lies supine increases venous return to the right side of the heart, producing an effect similar to inhalation. Inhalation can also produce a non-pathological split S2, which will be heard upon auscultation.
With exhalation, the opposite haemodynamic changes occur: left-sided murmurs generally increase in intensity with exhalation.
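The bedside rule from the two paragraphs above (right-sided murmurs intensify on inhalation, left-sided on exhalation) can be sketched as a toy lookup; the function name is invented for illustration:

```python
def murmur_side(intensifies_on):
    """Toy lookup: infer murmur side from the breath phase that
    makes the murmur louder."""
    sides = {"inhalation": "right-sided", "exhalation": "left-sided"}
    try:
        return sides[intensifies_on]
    except KeyError:
        raise ValueError("expected 'inhalation' or 'exhalation'")

print(murmur_side("inhalation"))  # right-sided
```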
Interventions that change murmurs
There are a number of interventions that can be performed that alter the intensity and characteristics of abnormal heart sounds. These interventions can differentiate the different heart sounds to more effectively obtain a diagnosis of the cardiac anomaly that causes the heart sound.
Other abnormal sounds
Clicks – Heart clicks are short, high-pitched sounds that can be appreciated with modern non-invasive imaging techniques.
Rubs – The pericardial friction rub can be heard in pericarditis, an inflammation of the pericardium, the sac surrounding the heart. This is a characteristic scratching, creaking, high-pitched sound emanating from the rubbing of both layers of inflamed pericardium. It is the loudest in systole, but can often be heard at the beginning and at the end of diastole. It is very dependent on body position and breathing, and changes from hour to hour.
Surface anatomy
The aortic area, pulmonic area, tricuspid area and mitral area are areas on the surface of the chest where the heart is auscultated.
Heart sounds result from reverberation within the blood associated with the sudden block of flow reversal by the valves closing. Because of this, auscultation to determine function of a valve is usually not performed at the position of the valve, but at the position to where the sound waves reverberate.
Recording heart sounds
Using electronic stethoscopes, it is possible to record heart sounds via direct output to an external recording device, such as a laptop or MP3 recorder. The same connection can be used to listen to the previously recorded auscultation through the stethoscope headphones, allowing for a more detailed study of murmurs and other heart sounds, for general research as well as evaluation of a particular patient's condition.
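As a rough illustration of what software can do with such a recording, the sketch below synthesizes a toy one-second "lub-dub" signal and locates the two loud events by thresholding a smoothed amplitude envelope; all frequencies, widths, and thresholds are invented for the example:

```python
import numpy as np

fs = 4000                       # sample rate, Hz (toy value)
t = np.arange(0, 1.0, 1 / fs)   # one second of "recording"

def burst(center_s, freq_hz, width_s=0.02):
    """A short tone burst with a Gaussian envelope (stand-in for S1/S2)."""
    return np.exp(-((t - center_s) / width_s) ** 2) * np.sin(2 * np.pi * freq_hz * t)

signal = burst(0.20, 100) + 0.7 * burst(0.50, 150)   # "lub" then "dub"

# Smoothed amplitude envelope: moving average of |signal| over 20 ms
win = int(0.02 * fs)
envelope = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")

# Candidate heart sounds = rising edges where the envelope crosses a threshold
above = envelope > 0.2
edges = np.flatnonzero(np.diff(above.astype(int)) == 1)
print(len(edges), "events, starting near t =", t[edges].round(2))
```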
Information system

An information system (IS) is a formal, sociotechnical, organizational system designed to collect, process, store, and distribute information. From a sociotechnical perspective, information systems comprise four components: task, people, structure (or roles), and technology. Information systems can be defined as an integration of components for the collection, storage and processing of data, comprising digital products that process data to facilitate decision-making, with the data used to provide information and contribute to knowledge.
A computer information system is a system that consists of people and computers that process or interpret information. The term is also sometimes used to refer simply to a computer system with software installed.
"Information systems" is also an academic field of study about systems with a specific reference to information and the complementary networks of computer hardware and software that people and organizations use to collect, filter, process, create and also distribute data. An emphasis is placed on an information system having a definitive boundary, users, processors, storage, inputs, outputs and the aforementioned communication networks.
In many organizations, the department or unit responsible for information systems and data processing is known as "information services".
Any specific information system aims to support operations, management and decision-making. An information system is the information and communication technology (ICT) that an organization uses, and also the way in which people interact with this technology in support of business processes.
Some authors make a clear distinction between information systems, computer systems, and business processes. Information systems typically include an ICT component but are not purely concerned with ICT, focusing instead on the end-use of information technology. Information systems are also different from business processes. Information systems help to control the performance of business processes.
Alter argues that viewing an information system as a special type of work system has its advantages. A work system is a system in which humans or machines perform processes and activities using resources to produce specific products or services for customers. An information system is a work system in which activities are devoted to capturing, transmitting, storing, retrieving, manipulating and displaying information.
As such, information systems inter-relate with data systems on the one hand and activity systems on the other. An information system is a form of communication system in which data represent and are processed as a form of social memory. An information system can also be considered a semi-formal language which supports human decision making and action.
Information systems are the primary focus of study for organizational informatics.
Overview
Silver et al. (1995) provided two views on IS that include software, hardware, data, people, and procedures.
The Association for Computing Machinery defines "Information systems specialists [as] focus[ing] on integrating information technology solutions and business processes to meet the information needs of businesses and other enterprises."
There are various types of information systems, including transaction processing systems, decision support systems, knowledge management systems, learning management systems, database management systems, and office information systems. Critical to most information systems are information technologies, which are typically designed to enable humans to perform tasks for which the human brain is not well suited, such as handling large amounts of information, performing complex calculations, and controlling many simultaneous processes.
Information technologies are a very important and malleable resource available to executives. Many companies have created a position of chief information officer (CIO) that sits on the executive board with the chief executive officer (CEO), chief financial officer (CFO), chief operating officer (COO), and chief technical officer (CTO). The CTO may also serve as CIO, and vice versa. The chief information security officer (CISO) focuses on information security management.
Six components
The six components that must come together in order to produce an information system are:
Hardware: The term hardware refers to machinery and equipment. In a modern information system, this category includes the computer itself and all of its support equipment. The support equipment includes input and output devices, storage devices and communications devices. In pre-computer information systems, the hardware might include ledger books and ink.
Software: The term software refers to computer programs and the manuals (if any) that support them. Computer programs are machine-readable instructions that direct the circuitry within the hardware parts of the system to function in ways that produce useful information from data. Programs are generally stored on some input/output medium, often a disk or tape. The "software" for pre-computer information systems included how the hardware was prepared for use (e.g., column headings in the ledger book) and instructions for using them (the guidebook for a card catalog).
Data: Data are facts that are used by systems to produce useful information. In modern information systems, data are generally stored in machine-readable form on disk or tape until the computer needs them. In pre-computer information systems, the data were generally stored in human-readable form.
Procedures: Procedures are the policies that govern the operation of an information system. "Procedures are to people what software is to hardware" is a common analogy that is used to illustrate the role of procedures in a system.
People: Every system needs people if it is to be useful. Often the most overlooked element of the system is the people, probably the component that most influences the success or failure of information systems. This includes "not only the users, but those who operate and service the computers, those who maintain the data, and those who support the network of computers".
Internet: The internet is a combination of data and people, although this component is not necessary for an information system to function.
Data is the bridge between hardware and people. This means that the data we collect is only data until we involve people. At that point, data becomes information.
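As an illustrative sketch of the six components listed above, they can be modeled as fields of a single structure. The class and method names below are hypothetical, not part of any standard; the point is the rule just stated, that data becomes information only once people are involved:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the six components of an information system.
# Component names follow the list above; everything else is illustrative.
@dataclass
class InformationSystem:
    hardware: list[str] = field(default_factory=list)    # machinery and equipment
    software: list[str] = field(default_factory=list)    # programs and manuals
    data: list[str] = field(default_factory=list)        # raw facts
    procedures: list[str] = field(default_factory=list)  # operating policies
    people: list[str] = field(default_factory=list)      # users, operators, maintainers
    internet: bool = False                               # optional for functionality

    def information(self) -> list[str]:
        # Data becomes information only once people are involved.
        return self.data if self.people else []

# A pre-computer example from the text: a ledger-book system.
ledger = InformationSystem(
    hardware=["ledger book", "ink"],
    software=["column headings", "guidebook"],
    data=["sales entries"],
    procedures=["double-entry bookkeeping"],
    people=["bookkeeper"],
)
print(ledger.information())  # with people involved, the data is information
```

The same structure with an empty `people` list would return no information, mirroring the claim that the data we collect "is only data until we involve people".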
Types
The "classic" view of information systems found in textbooks in the 1980s was a pyramid of systems that reflected the hierarchy of the organization, usually transaction processing systems at the bottom of the pyramid, followed by management information systems, decision support systems, and ending with executive information systems at the top. Although the pyramid model remains useful, a number of new technologies have been developed and new categories of information systems have emerged since it was first formulated, some of which no longer fit easily into the original pyramid model.
Some examples of such systems are:
Artificial intelligence system
Computing platform
Data warehouses
Decision support system
Enterprise resource planning
Enterprise systems
Expert systems
Geographic information system
Global information system
Management information system
Multimedia information system
Office automation
Process control system
Search engines
Social information systems
A computer(-based) information system is essentially an IS using computer technology to carry out some or all of its planned tasks. The basic components of computer-based information systems are:
Hardware are the devices such as the monitor, processor, printer, and keyboard, all of which work together to accept, process, and display data and information.
Software are the programs that allow the hardware to process the data.
Databases are the gathering of associated files or tables containing related data.
Networks are a connecting system that allows diverse computers to distribute resources.
Procedures are the commands for combining the components above to process information and produce the preferred output.
The first four components (hardware, software, database, and network) make up what is known as the information technology platform.
Information technology workers could then use these components to create information systems that watch over safety measures, risk and the management of data. These actions are known as information technology services.
Certain information systems support parts of organizations, others support entire organizations, and still others support groups of organizations. Each department or functional area within an organization has its own collection of application programs or information systems. These functional area information systems (FAIS) are supporting pillars for more general IS, namely business intelligence systems and dashboards. As the name suggests, each FAIS supports a particular function within the organization, e.g.: accounting IS, finance IS, production-operation management (POM) IS, marketing IS, and human resources IS. In finance and accounting, managers use IT systems to forecast revenues and business activity, to determine the best sources and uses of funds, and to perform audits to ensure that the organization is fundamentally sound and that all financial reports and documents are accurate.
Other types of organizational information systems are FAIS, transaction processing systems, enterprise resource planning, office automation system, management information system, decision support system, expert system, executive dashboard, supply chain management system, and electronic commerce system. Dashboards are a special form of IS that support all managers of the organization. They provide rapid access to timely information and direct access to structured information in the form of reports. Expert systems attempt to duplicate the work of human experts by applying reasoning capabilities, knowledge, and expertise within a specific domain.
Development
Information technology departments in larger organizations tend to strongly influence the development, use, and application of information technology in the business.
A series of methodologies and processes can be used to develop and use an information system. Many developers use a systems engineering approach such as the system development life cycle (SDLC) to systematically develop an information system in stages. The stages of the system development life cycle are planning, system analysis and requirements, system design, development, integration and testing, implementation and operations, and maintenance.
Recent research aims at enabling and measuring the ongoing, collective development of such systems within an organization by all of the human actors involved.
An information system can be developed in house (within the organization) or outsourced. This can be accomplished by outsourcing certain components or the entire system. A specific case is the geographical distribution of the development team (offshoring, global information system).
A computer-based information system, following a definition of Langefors, is a technologically implemented medium for recording, storing, and disseminating linguistic expressions, as well as for drawing conclusions from such expressions.
Geographic information systems, land information systems, and disaster information systems are examples of emerging information systems, but they can be broadly considered as spatial information systems.
System development is done in stages which include:
Problem recognition and specification
Information gathering
Requirements specification for the new system
System design
System construction
System implementation
Review and maintenance
As an academic discipline
The field of study called information systems encompasses a variety of topics including systems analysis and design, computer networking, information security, database management, and decision support systems. Information management deals with the practical and theoretical problems of collecting and analyzing information in a business function area including business productivity tools, applications programming and implementation, electronic commerce, digital media production, data mining, and decision support. Communications and networking deals with telecommunication technologies.
Information systems bridges business and computer science, using the theoretical foundations of information and computation to study various business models and related algorithmic processes for building IT systems within a computer science discipline. Computer information systems (CIS) is a field studying computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society, whereas IS emphasizes functionality over design.
Several IS scholars have debated the nature and foundations of information systems, which has its roots in other reference disciplines such as computer science, engineering, mathematics, management science, cybernetics, and others. Information systems can also be defined as a collection of hardware, software, data, people, and procedures that work together to produce quality information.
Related terms
Similar to computer science, other disciplines can be seen as both related and foundation disciplines of IS. The domain of study of IS involves the study of theories and practices related to the social and technological phenomena, which determine the development, use, and effects of information systems in organizations and society. But, while there may be considerable overlap of the disciplines at the boundaries, the disciplines are still differentiated by the focus, purpose, and orientation of their activities.
In a broad scope, information systems is a scientific field of study that addresses the range of strategic, managerial, and operational activities involved in the gathering, processing, storing, distributing, and use of information and its associated technologies in society and organizations. The term information systems is also used to describe an organizational function that applies IS knowledge in the industry, government agencies, and not-for-profit organizations.
Information systems often refers to the interaction between algorithmic processes and technology. This interaction can occur within or across organizational boundaries. An information system is a technology an organization uses, and also the way in which the organization interacts with the technology and the way in which the technology works with the organization's business processes. Information systems are distinct from information technology (IT) in that an information system has an information technology component that interacts with the processes' components.
One problem with that approach is that it prevents the IS field from being interested in non-organizational use of ICT, such as in social networking, computer gaming, mobile personal usage, etc. A different way of differentiating the IS field from its neighbours is to ask, "Which aspects of reality are most meaningful in the IS field and other fields?" This approach, based on philosophy, helps to define not just the focus, purpose, and orientation, but also the dignity, destiny, and responsibility of the field among other fields.
Business informatics is a related discipline that is well-established in several countries, especially in Europe. While Information systems has been said to have an "explanation-oriented" focus, business informatics has a more "solution-oriented" focus and includes information technology elements and construction and implementation-oriented elements.
Career pathways
Information systems workers enter a number of different careers:
Information system strategy
Management information systems – A management information system (MIS) is an information system used for decision-making, and for the coordination, control, analysis, and visualization of information in an organization.
Project management – Project management is the practice of initiating, planning, executing, controlling, and closing the work of a team to achieve specific goals and meet specific success criteria at the specified time.
Enterprise architecture – A well-defined practice for conducting enterprise analysis, design, planning, and implementation, using a comprehensive approach at all times, for the successful development and execution of strategy.
IS development
IS organization
IS consulting
IS security
IS auditing
There is a wide variety of career paths in the information systems discipline. "Workers with specialized technical knowledge and strong communications skills will have the best prospects. Workers with management skills and an understanding of business practices and principles will have excellent opportunities, as companies are increasingly looking to technology to drive their revenue."
Information technology is important to the operation of contemporary businesses; it offers many employment opportunities. The information systems field includes the people in organizations who design and build information systems, the people who use those systems, and the people responsible for managing those systems.
The demand for traditional IT staff such as programmers, business analysts, systems analysts, and designers is significant. Many well-paid jobs exist in areas of information technology. At the top of the list is the chief information officer (CIO).
The CIO is the executive who is in charge of the IS function. In most organizations, the CIO works with the chief executive officer (CEO), the chief financial officer (CFO), and other senior executives. Therefore, he or she actively participates in the organization's strategic planning process.
Bachelor of Business Information Systems
Research
Information systems research is generally interdisciplinary, concerned with the study of the effects of information systems on the behaviour of individuals, groups, and organizations. Hevner et al. (2004) categorized research in IS into two scientific paradigms: behavioural science, which aims to develop and verify theories that explain or predict human or organizational behaviour, and design science, which extends the boundaries of human and organizational capabilities by creating new and innovative artifacts.
Salvatore March and Gerald Smith proposed a framework for researching different aspects of information technology including outputs of the research (research outputs) and activities to carry out this research (research activities). They identified research outputs as follows:
Constructs which are concepts that form the vocabulary of a domain. They constitute a conceptualization used to describe problems within the domain and to specify their solutions.
A model which is a set of propositions or statements expressing relationships among constructs.
A method which is a set of steps (an algorithm or guideline) used to perform a task. Methods are based on a set of underlying constructs and a representation (model) of the solution space.
An instantiation is the realization of an artefact in its environment.
They also identified the following research activities:
Build an artefact to perform a specific task.
Evaluate the artefact to determine if any progress has been achieved.
Given an artefact whose performance has been evaluated, determine why and how the artefact worked or did not work within its environment; that is, theorize about and justify theories of IT artefacts.
Although Information Systems as a discipline has been evolving for over 30 years now, the core focus or identity of IS research is still subject to debate among scholars. There are two main views around this debate: a narrow view focusing on the IT artifact as the core subject matter of IS research, and a broad view that focuses on the interplay between social and technical aspects of IT that is embedded into a dynamic evolving context. A third view calls on IS scholars to pay balanced attention to both the IT artifact and its context.
Since the study of information systems is an applied field, industry practitioners expect information systems research to generate findings that are immediately applicable in practice. This is not always the case however, as information systems researchers often explore behavioral issues in much more depth than practitioners would expect them to do. This may render information systems research results difficult to understand, and has led to criticism.
In the last ten years, the role of the information systems function (ISF) has increased considerably, especially with regard to supporting enterprise strategies and operations. It has become a key factor in increasing productivity and supporting value creation. To study an information system itself, rather than its effects, information systems models are used, such as EATPUT.
The international body of Information Systems researchers, the Association for Information Systems (AIS), and its Senior Scholars Forum Subcommittee on Journals (202), proposed a list of 11 journals that the AIS deems as 'excellent'. According to the AIS, this list of journals recognizes topical, methodological, and geographical diversity. The review processes are stringent, editorial board members are widely-respected and recognized, and there is international readership and contribution. The list is (or should be) used, along with others, as a point of reference for promotion and tenure and, more generally, to evaluate scholarly excellence.
A number of annual information systems conferences are run in various parts of the world, the majority of which are peer reviewed. The AIS directly runs the International Conference on Information Systems (ICIS) and the Americas Conference on Information Systems (AMCIS), while AIS affiliated conferences include the Pacific Asia Conference on Information Systems (PACIS), European Conference on Information Systems (ECIS), the Mediterranean Conference on Information Systems (MCIS), the International Conference on Information Resources Management (Conf-IRM) and the Wuhan International Conference on E-Business (WHICEB). AIS chapter conferences include Australasian Conference on Information Systems (ACIS), Scandinavian Conference on Information Systems (SCIS), Information Systems International Conference (ISICO), Conference of the Italian Chapter of AIS (itAIS), Annual Mid-Western AIS Conference (MWAIS) and Annual Conference of the Southern AIS (SAIS). EDSIG, which is the special interest group on education of the AITP, organizes the Conference on Information Systems and Computing Education and the Conference on Information Systems Applied Research which are both held annually in November.
Heron
Herons are long-legged, long-necked, freshwater and coastal birds in the family Ardeidae, with 74 recognised species, some of which are referred to as egrets or bitterns rather than herons. Members of the genus Botaurus are referred to as bitterns, and, together with the zigzag heron, or zigzag bittern, in the monotypic genus Zebrilus, form a monophyletic group within the Ardeidae. Egrets do not form a biologically distinct group from herons, and tend to be named differently because they are mainly white or have decorative plumes in breeding plumage. Herons, by evolutionary adaptation, have long beaks.
The classification of the individual heron/egret species is fraught with difficulty, and no clear consensus exists about the correct placement of many species into either of the two major genera, Ardea and Egretta. Similarly, the relationships of the genera in the family are not completely resolved. However, one species formerly considered to constitute a separate monotypic family, the Cochlearidae or the boat-billed heron, is now regarded as a member of the Ardeidae.
Although herons resemble birds in some other families, such as the storks, ibises, spoonbills, and cranes, they differ from these in flying with their necks retracted, not outstretched. They are also one of the bird groups that have powder down. Some members of this group nest colonially in trees, while others, notably the bitterns, use reed beds. A group of herons has been called a "siege".
Name
The word heron first appeared in the English language around 1300, originating from Old French hairon, eron (12th century), earlier hairo (11th century), from Frankish haigiro or from Proto-Germanic *haigrô, *hraigrô.
Herons are also known as shitepokes, or euphemistically as shikepokes or shypokes. Webster's Dictionary suggests that herons were given this name because of their habit of defecating when flushed.
The 1971 Compact Edition of the Oxford English Dictionary describes the use of shitepoke for the small green heron of North America (Butorides virescens) as originating in the United States, citing a published example from 1853. The OED also observes that shiterow or shederow are terms used for herons, and also applied as derogatory terms meaning a thin, weakly person. This name for a heron is found in a list of game birds in a royal decree of James VI (1566–1625) of Scotland. The OED speculates that shiterow is a corruption of shiteheron.
Another former name was heronshaw or hernshaw, derived from Old French heronçeau. Corrupted to handsaw, this name appears in Shakespeare's Hamlet. A possible further corruption took place in the Norfolk Broads, where the heron is often referred to as a harnser.
Description
The herons are medium- to large-sized birds with long legs and necks. They exhibit very little sexual dimorphism in size. The smallest species is usually considered the dwarf bittern, which measures in length, although all the species in the genus Ixobrychus are small and many broadly overlap in size. The largest species of heron is the goliath heron, which stands up to tall. All herons can retract their necks by folding them into a tight S-shape, due to the modified shape of the cervical vertebrae, of which they have 20 or 21; the neck is retracted during flight, unlike most other long-necked birds. The neck is longer in the day herons than the night herons and bitterns. The legs are long and strong, and in almost every species are unfeathered from the lower part of the tibia (the exception is the zigzag heron). In flight, the legs and feet are generally held in a horizontal position, pointing backwards. Toes are long and thin, with three pointing forwards and one backwards.
The bill is generally long and harpoon-like. It can vary from extremely narrow, as in the agami heron, to wider as in the grey heron. The most atypical heron bill is owned by the boat-billed heron, which has a broad, thick bill. Herons' bills and other bare parts of the body are usually yellow, black, or brown in colour, although this can vary during the breeding season. The wings are broad and long, exhibiting 10 or 11 primary feathers (the boat-billed heron has only nine), 15–20 secondaries, and 12 rectrices (10 in the bitterns). The feathers of the herons are soft and the plumage is usually blue, black, brown, grey, or white, and can often be strikingly complex. Amongst the day herons, little sexual dimorphism in plumage is seen (except in the pond-herons); however, for the night herons and smaller bitterns, plumage differences between the sexes are the rule. Many species also have different colour morphs. In the Pacific reef heron, both dark and light colour morphs exist, and the percentage of each morph varies geographically; its white morphs only occur in areas with coral beaches.
Distribution and habitat
The herons are a widespread family with a cosmopolitan distribution. They exist on all continents except Antarctica and are present in most habitats except the coldest extremes of the Arctic, extremely high mountains, and the driest deserts. Almost all species are associated with water; they are essentially non-swimming waterbirds that feed on the margins of lakes, rivers, swamps, ponds, and the sea. They are predominantly found in lowland areas, although some species live in alpine areas, and the majority of species occur in the tropics.
The herons are a highly mobile family, with most species being at least partially migratory; for example, the grey heron is mostly sedentary in Britain, but mostly migratory in Scandinavia. Birds are particularly inclined to disperse widely after breeding, but before the annual migration, where the species is colonial, searching out new feeding areas and reducing the pressures on feeding grounds near the colony. The migration typically occurs at night, usually as individuals or in small groups.
Behaviour and ecology
Diet
Herons, egrets, and bitterns are carnivorous. The members of this family are mostly associated with wetlands and water and feed on a variety of live aquatic prey. Their diet includes a wide variety of aquatic animals, including fish, reptiles, amphibians, crustaceans, molluscs, and aquatic insects. Individual species may be generalists or specialize in certain prey types, such as the yellow-crowned night heron, which specializes in crustaceans, particularly crabs. Many species also opportunistically take larger prey, including birds and bird eggs, rodents, and more rarely carrion. Even more rarely, herons eating acorns, peas, and grains have been reported, but most vegetable matter consumed is accidental.
The most common hunting technique is for the bird to sit motionless on the edge of or standing in shallow water and to wait until prey comes within range. Birds may either do this from an upright posture, giving them a wider field of view for seeing prey, or from a crouched position, which is more cryptic and means the bill is closer to the prey when it is located. Having seen prey, the head is moved from side to side, so that the heron can calculate the position of the prey in the water and compensate for refraction, and then the bill is used to spear the prey.
In addition to sitting and waiting, herons may feed more actively. They may walk slowly, around or less than 60 paces a minute, snatching prey when it is observed. Other active feeding behaviours include foot stirring and probing, where the feet are used to flush out hidden prey. The wings may be used to frighten prey (or possibly attract it to shade) or to reduce glare; the most extreme example of this is exhibited by the black heron, which forms a full canopy with its wings over its body.
Some species of heron, such as the little egret and grey heron, have been documented using bait to lure prey to within striking distance. Herons may use items already in place, or actively add items to the water to attract fish such as the banded killifish. Items used may be man-made, such as bread; alternatively, striated herons in the Amazon have been watched repeatedly dropping seeds, insects, flowers, and leaves into the water to catch fish.
Three species, the black-headed heron, whistling heron, and especially the cattle egret, are less tied to watery environments and may feed far away from water. Cattle egrets improve their foraging success by following large grazing animals, catching insects flushed by their movement. One study found that the success rate of prey capture increased 3.6 times over solitary foraging.
Breeding
While the family exhibits a range of breeding strategies, overall, the herons are monogamous and mostly colonial. Most day herons and night herons are colonial, or partly colonial depending on circumstances, whereas the bitterns and tiger herons are mostly solitary nesters. Colonies may contain several species, as well as other species of waterbirds. In a study of little egrets and cattle egrets in India, the majority of the colonies surveyed contained both species. Nesting is seasonal in temperate species; in tropical species, it may be seasonal (often coinciding with the rainy season) or year-round. Even in year-round breeders, nesting intensity varies throughout the year. Tropical herons typically have only one breeding season per year, unlike some other tropical birds which may raise up to three broods a year.
Courtship usually takes place on the nest. Males arrive first and begin the building of the nest, where they display to attract females. During courtship, the male employs a stretch display and uses erectile neck feathers; the neck area may swell. The female risks an aggressive attack if she approaches too soon and may have to wait up to four days. In colonial species, displays involve visual cues, which can include adopting postures or ritual displays, whereas in solitary species, auditory cues, such as the deep booming of the bitterns, are important. The exception to this is the boat-billed heron, which pairs up away from the nesting site. Having paired, they continue to build the nest in almost all species, although in the little bittern and least bittern, only the male works on the nest.
Some ornithologists have reported observing female herons attaching themselves to impotent mates, then seeking sexual gratification elsewhere.
The nests of herons are usually found near or above water. Although the nests of a few species have been found on the ground where suitable trees or shrubs are unavailable, they are typically placed in vegetation. Trees are used by many species, and here they may be placed high up from the ground, whereas species living in reed beds may nest very close to the ground. Though the majority of nesting of herons is seen in or immediately around water, colonies commonly occur in several cities when human persecution is absent.
Generally, herons lay between three and seven eggs. Larger clutches are reported in the smaller bitterns and more rarely some of the larger day herons, and single-egg clutches are reported for some of the tiger herons. Clutch size varies by latitude within species, with individuals in temperate climates laying more eggs than tropical ones. On the whole, the eggs are glossy blue or white, with the exception being the large bitterns, which lay olive-brown eggs.
Taxonomy and systematics
Analyses of skeletons, mainly skulls, suggested that the Ardeidae could be split into a diurnal and a crepuscular/nocturnal group which included the bitterns. From DNA studies, and from skeletal analyses that focussed more on bones of the body and limbs, that two-group division has been revealed to be incorrect. Rather, the similarities in skull morphology among certain herons reflect convergent evolution to cope with the different challenges of daytime and nighttime feeding. Today, it is believed that three major groups can be distinguished, which are:
tiger herons and the boatbill
bitterns
day herons and egrets, and night herons
The night herons may still warrant separation from the day herons and egrets (as subfamily Nycticoracinae, as it was traditionally done). However, the position of some genera (e.g. Butorides or Syrigma) is unclear at the moment, and molecular studies have so far suffered from studying only a small number of taxa. Especially among the subfamily Ardeinae, the relationships are very inadequately resolved. The arrangement presented here should be considered provisional.
A 2008 study suggests that this family belongs to the Pelecaniformes. In response to these findings, the International Ornithological Congress reclassified Ardeidae and their sister taxa Threskiornithidae under the order Pelecaniformes instead of the previous order of Ciconiiformes.
The cladogram shown below is based on a molecular phylogenetic study of the Ardeidae by Jack Hruska and collaborators published in 2023. For several species these results conflict with the taxonomy published online in July 2023 by Frank Gill, Pamela Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC). The least bittern (Ixobrychus exilis) and the stripe-backed bittern (Ixobrychus involucris) were nested with members of the genus Botaurus. Hruska and collaborators resurrected the genus Calherodius Peters, 1931 to contain two night herons (the white-backed night heron and the white-eared night heron) that were previously placed in Gorsachius. The western cattle egret (Bubulcus ibis) was embedded in the genus Ardea. The eastern cattle egret (Bubulcus coromandus) was not sampled. The placement of the forest bittern (Zonerodius heliosylus) was ambiguous, but the results suggest that it is probably closely related to members of the genus Ardeola rather than to the subfamily Tigriornithinae.
As of August 2024 the IOC lists 74 heron species, divided into 18 genera.
Subfamily Tigriornithinae
Genus Taphophoyx (fossil, Late Miocene of Levy County, Florida)
Genus Tigrisoma – typical tiger herons (three species)
Genus Tigriornis – white-crested tiger heron
Subfamily Cochleariinae
Genus Cochlearius – boat-billed heron
Subfamily Agamiinae
Genus Agamia – Agami heron
Subfamily Botaurinae
Genus Zebrilus – zigzag heron
Genus Botaurus – bitterns (14 species, one recently extinct; includes species formerly placed in Ixobrychus)
Genus Pikaihao – Saint Bathan's bittern (fossil, Early Miocene of Otago, New Zealand)
Subfamily Ardeinae
Genus Zeltornis (fossil, Early Miocene of Djebel Zelten, Libya)
Genus Nycticorax – typical night herons (two living species, four recently extinct; sometimes includes Nyctanassa)
Genus Nyctanassa – American night herons (one living species, one recently extinct)
Genus Gorsachius – Asian and African night herons (four species)
Genus Butorides – green-backed herons (three species; sometimes included in Ardea)
Genus Pilherodius – capped heron
Genus Zonerodius – forest bittern
Genus Ardeola – pond herons (six species)
Genus Bubulcus – cattle egrets (one or two species, sometimes included in Ardea)
Genus Proardea (fossil)
Genus Ardea – typical herons (16 species)
Genus Syrigma – whistling heron
Genus Egretta – typical egrets (7–13 species)
Genus undetermined
Easter Island heron, Ardeidae gen. et sp. indet. (prehistoric)
Fossil herons of unresolved affiliations
"Anas" basaltica (Late Oligocene of Varnsdorf, Czech Republic)
Ardeagradis
Proardeola – possibly same as Proardea
Matuku (Early Miocene of Otago, New Zealand)
Other prehistoric and fossil species are included in the respective genus accounts. In addition, Proherodius is a disputed fossil which was variously considered a heron or one of the extinct long-legged waterfowl, the Presbyornithidae. It is only known from a sternum; a tarsometatarsus that had been assigned to it actually belongs to the paleognath Lithornis vulturinus.
Symbolic meaning in mysticism
In Buddhism, a heron symbolizes purity, transformation and the wisdom of the Buddha. In addition, as a bird that transcends elements – on the earth, in the water and in the air – the heron symbolizes the expansion of awareness and the ubiquity of consciousness.
In some Native American cultures, this bird symbolizes renewal, rejuvenation and rebirth – an ever-present reminder that we are all part of a larger cycle of life and death.
Forearm

The forearm is the region of the upper limb between the elbow and the wrist. The term forearm is used in anatomy to distinguish it from the arm: in everyday usage, arm describes the entire appendage of the upper limb, but in anatomy it technically refers only to the upper arm, while the segment below the elbow is the forearm. It is homologous to the crus, the region of the leg that lies between the knee and the ankle joints.
The forearm contains two long bones, the radius and the ulna, forming the two radioulnar joints. The interosseous membrane connects these bones. Ultimately, the forearm is covered by skin, the anterior surface usually being less hairy than the posterior surface.
The forearm contains many muscles, including the flexors and extensors of the wrist, flexors and extensors of the digits, a flexor of the elbow (brachioradialis), and pronators and supinators that turn the hand to face down or upwards, respectively. In cross-section, the forearm can be divided into two fascial compartments. The posterior compartment contains the extensors of the hands, which are supplied by the radial nerve. The anterior compartment contains the flexors and is mainly supplied by the median nerve. The flexor muscles are more massive than the extensors because they work against gravity and act as anti-gravity muscles. The ulnar nerve also runs the length of the forearm.
The radial and ulnar arteries and their branches supply the blood to the forearm. These usually run on the anterior face of the radius and ulna down the whole forearm. The main superficial veins of the forearm are the cephalic, median antebrachial and the basilic vein. These veins can be used for cannulation or venipuncture, although the cubital fossa is the preferred site for drawing blood.
Structure
Bones and joints
The bones of the forearm are the radius (located on the lateral side) and the ulna (located on the medial side).
Radius
Proximally, the head of the radius articulates with the capitulum of the humerus and the radial notch of the ulna at the elbow. The articulation between the radius and the ulna at the elbow is known as the proximal radioulnar joint.
Distally, it articulates with the ulna again at the distal radioulnar joint. It forms part of the wrist joint by articulating with the scaphoid at its lateral aspect and with the lunate at its medial aspect.
Ulna
Proximally, the trochlear notch of the ulna articulates with the trochlea of the humerus and the radial notch articulates with the head of the radius at the elbow.
Distally it forms part of the distal radioulnar joint and also articulates with the wrist.
Muscles
"E/I" refers to "extrinsic" or "intrinsic". The intrinsic muscles of the forearm act on the forearm, meaning, across the elbow joint and the proximal and distal radioulnar joints (resulting in pronation or supination), whereas the extrinsic muscles act upon the hand and wrist. In most cases, the extrinsic anterior muscles are flexors, while the extrinsic posterior muscles are extensors.
The brachioradialis, a flexor of the forearm, is unusual in that it belongs to the posterior compartment yet lies in the anterior portion of the forearm.
The anconeus is considered by some as a part of the posterior compartment of the arm.
Nerves
See separate nerve articles for details on divisions proximal to the elbow and distal to the wrist; see Brachial plexus for the origins of the median, radial and ulnar nerves.
Median nerve – main nerve of the anterior compartment (PT, FCR, PL, FDS).
anterior interosseous nerve (supplies FPL, lat. 1/2 of FDP, PQ).
Radial nerve – supplies muscles of the posterior compartment (ECRL, ECRB).
Superficial branch of radial nerve
Deep branch of radial nerve, becomes posterior interosseous nerve and supplies muscles of the posterior compartment (ED, EDM, ECU, APL, EPB, EPL, EI).
Ulnar nerve – supplies some medial muscles (FCU, med. 1/2 of FDP).
Vessels
Brachial artery
Radial artery
Radial recurrent artery
dorsal metacarpal artery
Princeps pollicis artery
Ulnar artery
Anterior ulnar recurrent artery and posterior ulnar recurrent artery
Common interosseous artery
Posterior interosseous artery
Anterior interosseous artery
Other structures
Interosseous membrane of forearm
Annular ligament of ulna
Function
The forearm can be brought closer to the upper arm (flexed) and brought away from the upper arm (extended) due to movement at the elbow. The forearm can also be rotated so that the palm of the hand rotates inwards (pronated) and rotated back so that the palm rotates outwards (supinated) due to movement at the elbow and the distal radioulnar joint.
Clinical significance
A fracture of the forearm can be classified as to whether it involves only the ulna (ulnar fracture), only the radius (radius fracture), or both bones (radioulnar fracture).
For treatment of children with torus fractures of the forearm, splinting appears to work better than casting.
Genetically determined disorders such as hereditary multiple exostoses can lead to hand and forearm deformities. Hereditary multiple exostoses is due to a growth disturbance of the epiphyses of the radius and ulna, the two bones of the forearm.
Additional images
Owlet-nightjar

Owlet-nightjars are small crepuscular birds related to the nightjars and frogmouths. Most are native to New Guinea, but some species extend to Australia, the Moluccas, and New Caledonia. A flightless species from New Zealand is extinct. There is a single monotypic family, Aegothelidae, with the genus Aegotheles.
Owlet-nightjars are insectivores that hunt mostly in the air but sometimes on the ground. Their soft plumage is a cryptic mixture of browns and paler shades; they have fairly small, weak feet (though larger and stronger than those of a frogmouth or a nightjar) and a tiny bill that opens extraordinarily wide and is surrounded by prominent whiskers. The wings are short, with 10 primaries and about 11 secondaries; the tail is long and rounded.
Systematics
A comprehensive 2003 study analyzing mtDNA sequences of Cytochrome b and ATPase subunit 8 suggests that 12 living species of owlet-nightjar should be recognized, as well as another that became extinct early in the second millennium AD.
The relationship between the owlet-nightjars and the (traditional) Caprimulgiformes has long been controversial and obscure and remains so today: in the 19th century they were regarded as a subfamily of the frogmouths, and they are still generally considered to be related to the frogmouths and/or the nightjars. It appears though that they are not as closely related to either as previously thought, and that the owlet-nightjars share a more recent common ancestor with the Apodiformes. As has been suggested on occasion since morphological studies of the cranium in the 1960s, they are thus considered a distinct order, Aegotheliformes. This, the caprimulgiform lineage(s), and the Apodiformes, are postulated to form a clade called Cypselomorphae, with the owlet-nightjars and the Apodiformes forming the clade Daedalornithes.
In form and habits, however, they are very similar to both caprimulgiform groups – or, at first glance, to small owls with huge eyes. The ancestors of the swifts and hummingbirds, two groups of birds that are morphologically very specialized, seem to have looked much like a small owlet-nightjar, possessing strong legs and a wide gape, whereas the legs and feet are greatly reduced in today's swifts and hummingbirds, and the bill is narrow in the latter.
Owlet-nightjars are an exclusively Australasian group, but close relatives apparently thrived all over Eurasia in the late Paleogene.
Taxonomy
Family Aegothelidae
Genus Quipollornis Rich & McEvey 1977 (Early/Middle Miocene of New South Wales)
†Quipollornis koniberi Rich & McEvey 1977
Genus Aegotheles
†New Zealand owlet-nightjar, Aegotheles novaezealandiae (Scarlett 1968) (prehistoric; formerly Megaegotheles)
New Caledonian owlet-nightjar, Aegotheles savesi Layard & Layard 1881
Feline owlet-nightjar, Aegotheles insignis Salvadori 1876
Starry owlet-nightjar or spangled owlet-nightjar, Aegotheles tatei Rand 1941
Moluccan owlet-nightjar or long-whiskered owlet-nightjar, Aegotheles crinifrons (Bonaparte 1850)
Australian owlet-nightjar, Aegotheles cristatus (Shaw 1790)
A. c. cristatus (Shaw 1790)
A. c. tasmanicus Mathews 1918
Vogelkop owlet-nightjar, Aegotheles affinis Salvadori 1876
Karimui owlet-nightjar, Aegotheles terborghi Diamond 1967
Barred owlet-nightjar, Aegotheles bennettii Salvadori & Albertis 1875
A. b. bennettii Salvadori & Albertis 1875
A. b. plumifer Ramsay 1883
A. b. wiedenfeldi Laubmann 1914
Wallace's owlet-nightjar, Aegotheles wallacii Gray 1859
Mountain owlet-nightjar, Aegotheles albertisi Sclater 1874
A fossil proximal right tarsometatarsus (MNZ S42800) was found at the Bannockburn Formation of the Manuherikia Group near the Manuherikia River in Otago, New Zealand. Dating from the Early to Middle Miocene (Altonian, 19–16 million years ago), it seems to represent an owlet-nightjar ancestral to A. novaezealandiae. In 2022, an additional specimen from the same locality was described by Worthy et al. as a new extinct species of Aegotheles, A. zealandivetus. The holotype specimen is NMNZ S.52917, a distal right tarsometatarsus.
Saccharomyces cerevisiae

Saccharomyces cerevisiae (brewer's yeast or baker's yeast) is a species of yeast (single-celled fungal microorganisms). The species has been instrumental in winemaking, baking, and brewing since ancient times. It is believed to have been originally isolated from the skin of grapes. It is one of the most intensively studied eukaryotic model organisms in molecular and cell biology, much like Escherichia coli as the model bacterium. It is the microorganism which causes many common types of fermentation. S. cerevisiae cells are round to ovoid, 5–10 μm in diameter. It reproduces by budding.
Many proteins important in human biology were first discovered by studying their homologs in yeast; these proteins include cell cycle proteins, signaling proteins, and protein-processing enzymes. S. cerevisiae is currently the only yeast cell known to have Berkeley bodies present, which are involved in particular secretory pathways. Antibodies against S. cerevisiae are found in 60–70% of patients with Crohn's disease and 10–15% of patients with ulcerative colitis, and may be useful as part of a panel of serological markers in differentiating between inflammatory bowel diseases (e.g. between ulcerative colitis and Crohn's disease), their localization and severity.
Etymology
"Saccharomyces" derives from Latinized Greek and means "sugar-mould" or "sugar-fungus", saccharon (σάκχαρον) being the combining form "sugar" and myces (μύκης) being "fungus". cerevisiae comes from Latin and means "of beer". Other names for the organism are:
Brewer's yeast, though other species are also used in brewing
Ale yeast
Top-fermenting yeast
Baker's yeast
Ragi yeast, in connection to making tapai
Budding yeast
This species is also the main source of nutritional yeast and yeast extract.
History
In the 19th century, bread bakers obtained their yeast from beer brewers, and this led to sweet-fermented breads such as the Imperial "Kaisersemmel" roll, which in general lacked the sourness created by the acidification typical of Lactobacillus. However, beer brewers slowly switched from top-fermenting (S. cerevisiae) to bottom-fermenting (S. pastorianus) yeast. The Vienna Process was developed in 1846. While the innovation is often popularly credited for using steam in baking ovens, leading to a different crust characteristic, it is notable for including procedures for high milling of grains (see Vienna grits), cracking them incrementally instead of mashing them with one pass, as well as better processes for growing and harvesting top-fermenting yeasts, known as press-yeast.
Refinements in microbiology following the work of Louis Pasteur led to more advanced methods of culturing pure strains. In 1879, Great Britain introduced specialized growing vats for the production of S. cerevisiae, and in the United States around the turn of the 20th century centrifuges were used for concentrating the yeast, turning yeast production into a major industrial process which simplified its distribution, reduced unit costs and contributed to the commercialization and commoditization of bread and beer. Fresh "cake yeast" became the standard leaven for bread bakers in much of the Western world during the early 20th century.
During World War II, Fleischmann's developed a granulated active dry yeast for the United States armed forces, which did not require refrigeration and had a longer shelf-life and better temperature tolerance than fresh yeast; it is still the standard yeast for US military recipes. The company created yeast that would rise twice as fast, cutting down on baking time. Lesaffre would later create instant yeast in the 1970s, which has gained considerable use and market share at the expense of both fresh and dry yeast in their various applications.
Biology
Ecology
In nature, yeast cells are found primarily on ripe fruits such as grapes (before maturation, grapes are almost free of yeasts). S. cerevisiae can also be found year-round in the bark of oak trees. Since S. cerevisiae is not airborne, it requires a vector to move.
Queens of social wasps overwintering as adults (Vespa crabro and Polistes spp.) can harbor yeast cells from autumn to spring and transmit them to their progeny. The intestine of Polistes dominula, a social wasp, hosts S. cerevisiae strains as well as S. cerevisiae × S. paradoxus hybrids. Stefanini et al. (2016) showed that the intestine of Polistes dominula favors the mating of S. cerevisiae strains, both among themselves and with S. paradoxus cells, by providing environmental conditions that promote cell sporulation and spore germination.
The optimum temperature for growth of S. cerevisiae is .
Life cycle
Two forms of yeast cells can survive and grow: haploid and diploid. The haploid cells undergo a simple lifecycle of mitosis and growth, and under conditions of high stress will, in general, die. This is the asexual form of the fungus. The diploid cells (the preferential 'form' of yeast) similarly undergo a simple lifecycle of mitosis and growth. The rate at which the mitotic cell cycle progresses often differs substantially between haploid and diploid cells. Under conditions of stress, diploid cells can undergo sporulation, entering meiosis and producing four haploid spores, which can subsequently mate. This is the sexual form of the fungus. Under optimal conditions, yeast cells can double their population every 100 minutes. However, growth rates vary enormously between strains and between environments. Mean replicative lifespan is about 26 cell divisions.
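The quoted 100-minute doubling time implies simple exponential growth. As a rough illustration (a toy calculation, not from the article, and only valid while nutrients remain unlimited):

```python
def population(n0, minutes, doubling_time=100.0):
    """Exponential growth: N(t) = N0 * 2**(t / doubling_time).

    doubling_time defaults to the 100-minute optimum quoted above;
    real strains and environments vary widely.
    """
    return n0 * 2 ** (minutes / doubling_time)

# Starting from a single cell, a 10-hour culture under ideal
# conditions would pass through six doublings:
print(population(1, 600))  # 64.0
```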
In the wild, recessive deleterious mutations accumulate during long periods of asexual reproduction of diploids, and are purged during selfing: this purging has been termed "genome renewal".
Nutritional requirements
All strains of S. cerevisiae can grow aerobically on glucose, maltose, and trehalose and fail to grow on lactose and cellobiose. However, growth on other sugars is variable. Galactose and fructose are shown to be two of the best fermenting sugars. The ability of yeasts to use different sugars can differ depending on whether they are grown aerobically or anaerobically. Some strains cannot grow anaerobically on sucrose and trehalose.
All strains can use ammonia and urea as the sole nitrogen source, but cannot use nitrate, since they lack the ability to reduce it to ammonium ions. They can also use most amino acids, small peptides, and nitrogen bases as nitrogen sources. Histidine, glycine, cystine, and lysine are, however, not readily used. S. cerevisiae does not excrete proteases, so extracellular protein cannot be metabolized.
Yeasts also have a requirement for phosphorus, which is assimilated as a dihydrogen phosphate ion, and sulfur, which can be assimilated as a sulfate ion or as organic sulfur compounds such as the amino acids methionine and cysteine. Some metals, like magnesium, iron, calcium, and zinc, are also required for good growth of the yeast.
Concerning organic requirements, most strains of S. cerevisiae require biotin. Indeed, a S. cerevisiae-based growth assay laid the foundation for the isolation, crystallization, and later structural determination of biotin. Most strains also require pantothenate for full growth. In general, S. cerevisiae is prototrophic for vitamins.
Mating
Yeast has two mating types, a and α (alpha), which show primitive aspects of sex differentiation. As in many other eukaryotes, mating leads to genetic recombination, i.e. production of novel combinations of chromosomes. Two haploid yeast cells of opposite mating type can mate to form diploid cells that can either sporulate to form another generation of haploid cells or continue to exist as diploid cells. Mating has been exploited by biologists as a tool to combine genes, plasmids, or proteins at will.
The mating pathway employs a G protein-coupled receptor, G protein, RGS protein, and three-tiered MAPK signaling cascade that is homologous to those found in humans. This feature has been exploited by biologists to investigate basic mechanisms of signal transduction and desensitization.
Cell cycle
Growth in yeast is synchronized with the growth of the bud, which reaches the size of the mature cell by the time it separates from the parent cell. In well nourished, rapidly growing yeast cultures, all the cells have buds, since bud formation occupies the whole cell cycle. Both mother and daughter cells can initiate bud formation before cell separation has occurred. In yeast cultures growing more slowly, cells lacking buds can be seen, and bud formation only occupies a part of the cell cycle.
Cytokinesis
Cytokinesis enables the budding yeast Saccharomyces cerevisiae to divide into two daughter cells. S. cerevisiae forms a bud which grows throughout its cell cycle and later separates from its mother cell when mitosis has completed.
S. cerevisiae is relevant to cell cycle studies because it divides asymmetrically by using a polarized cell to make two daughters with different fates and sizes. Similarly, stem cells use asymmetric division for self-renewal and differentiation.
Timing
For many cells, M phase does not begin until S phase is complete; this is not true for entry into mitosis in S. cerevisiae. Cytokinesis begins with the budding process in late G1 and is not completed until about halfway through the next cycle. The assembly of the spindle can happen before S phase has finished duplicating the chromosomes. Additionally, there is no clearly defined G2 between M and S. Thus, yeast lacks the extensive regulation present in higher eukaryotes.
When the daughter emerges, the daughter is two-thirds the size of the mother. Throughout the process, the mother displays little to no change in size. The RAM pathway is activated in the daughter cell immediately after cytokinesis is complete. This pathway makes sure that the daughter has separated properly.
Actomyosin ring and primary septum formation
Two interdependent events drive cytokinesis in S. cerevisiae. The first is constriction of the contractile actomyosin ring (AMR), and the second is formation of the primary septum (PS), a chitinous cell wall structure that can only be formed during cytokinesis; PS formation resembles the extracellular matrix remodeling seen in animals. When the AMR constricts, the PS begins to grow. Disrupting the AMR misorients the PS, and disrupting the PS in turn disrupts the AMR, suggesting that the actomyosin ring and primary septum are interdependent.
The AMR, which is attached to the cell membrane facing the cytosol, consists of actin and myosin II molecules that coordinate the cells to split. The ring is thought to play an important role in ingression of the plasma membrane as a contractile force.
Proper coordination and correct positional assembly of the contractile ring depend on septins, which form the precursor to the septum ring. These GTPases assemble complexes with other proteins. The septins form a ring at the site where the bud will be created during late G1. They help promote the formation of the actin-myosin ring, although the mechanism is unknown; it is suggested that they provide structural support for other necessary cytokinesis processes. After a bud emerges, the septin ring forms an hourglass shape. The septin hourglass and the myosin ring together mark the future division site.
The septin and AMR complex progress to form the primary septum, consisting of glucans and other chitinous molecules sent by vesicles from the Golgi body. After AMR constriction is complete, two secondary septa are formed by glucans. How the AMR disassembles remains poorly understood.
Microtubules do not play as significant a role in cytokinesis compared to the AMR and septum. Disruption of microtubules did not significantly impair polarized growth. Thus, the AMR and septum formation are the major drivers of cytokinesis.
Differences from fission yeast
Budding yeast form a bud from the mother cell; the bud grows during the cell cycle and then detaches, whereas fission yeast divide by forming a cell wall
Cytokinesis begins at G1 for budding yeast, while cytokinesis begins at G2 for fission yeast. Fission yeast "select" the midpoint, whereas budding yeast "select" a bud site
In budding yeast, the actomyosin ring and septum continue to develop during early anaphase; in fission yeast, the actomyosin ring begins to develop at metaphase–anaphase
In biological research
Model organism
When researchers look for an organism to use in their studies, they look for several traits. Among these are size, short generation time, accessibility, ease of manipulation, genetics, conservation of mechanisms, and potential economic benefit. The yeast species Schizosaccharomyces pombe and S. cerevisiae are both well studied; these two species diverged approximately , and are significant tools in the study of DNA damage and repair mechanisms.
S. cerevisiae has developed as a model organism because it scores favorably on a number of criteria.
As a single-cell organism, S. cerevisiae is small with a short generation time (doubling time 1.25–2 hours at ) and can be easily cultured. These are all positive characteristics in that they allow for the swift production and maintenance of multiple specimen lines at low cost.
S. cerevisiae divides with meiosis, allowing it to be a candidate for sexual genetics research.
S. cerevisiae can be transformed allowing for either the addition of new genes or deletion through homologous recombination. Furthermore, the ability to grow S. cerevisiae as a haploid simplifies the creation of gene knockout strains.
As a eukaryote, S. cerevisiae shares the complex internal cell structure of plants and animals without the high percentage of non-coding DNA that can confound research in higher eukaryotes.
S. cerevisiae research is a strong economic driver, at least initially, as a result of its established use in industry.
In the study of aging
For more than five decades S. cerevisiae has been studied as a model organism to better understand aging, and it has contributed to the identification of more mammalian genes affecting aging than any other model organism. Topics studied using yeast include calorie restriction and the genes and cellular pathways involved in senescence. The two most common methods of measuring aging in yeast are replicative life span (RLS), which measures the number of times a cell divides, and chronological life span (CLS), which measures how long a cell can survive in a non-dividing stasis state. Limiting the amount of glucose or amino acids in the growth medium has been shown to increase RLS and CLS in yeast as well as in other organisms. At first, this was thought to increase RLS by up-regulating the sir2 enzyme; however, it was later discovered that this effect is independent of sir2. Over-expression of the genes sir2 and fob1 has been shown to increase RLS by preventing the accumulation of extrachromosomal rDNA circles, which are thought to be one of the causes of senescence in yeast. The effects of dietary restriction may be the result of decreased signaling in the TOR cellular pathway. This pathway modulates the cell's response to nutrients, and mutations that decrease TOR activity were found to increase CLS and RLS. This has also been shown to be the case in other animals. A yeast mutant lacking the genes and Ras2 has recently been shown to have a tenfold increase in chronological lifespan under conditions of calorie restriction, the largest increase achieved in any organism.
Mother cells give rise to progeny buds by mitotic divisions, but undergo replicative aging over successive generations and ultimately die. However, when a mother cell undergoes meiosis and gametogenesis, lifespan is reset. The replicative potential of gametes (spores) formed by aged cells is the same as that of gametes formed by young cells, indicating that meiosis removes age-associated damage from aged mother cells and rejuvenates the lineage. The nature of this damage, however, remains to be established.
During starvation of non-replicating S. cerevisiae cells, reactive oxygen species increase, leading to the accumulation of DNA damage such as apurinic/apyrimidinic sites and double-strand breaks. In non-replicating cells, the ability to repair endogenous double-strand breaks also declines during chronological aging.
Meiosis, recombination and DNA repair
S. cerevisiae reproduces by mitosis as diploid cells when nutrients are abundant. However, when starved, these cells undergo meiosis to form haploid spores.
Evidence from studies of S. cerevisiae bears on the adaptive function of meiosis and recombination. Mutations defective in genes essential for meiotic and mitotic recombination in S. cerevisiae cause increased sensitivity to radiation or DNA-damaging chemicals. For instance, the gene rad52 is required for both meiotic and mitotic recombination. rad52 mutants have increased sensitivity to killing by X-rays, methyl methanesulfonate and the DNA cross-linking agent 8-methoxypsoralen-plus-UVA, and show reduced meiotic recombination. These findings suggest that recombinational repair during meiosis and mitosis is needed for repair of the different damages caused by these agents.
Ruderfer et al. (2006) analyzed the ancestry of natural S. cerevisiae strains and concluded that outcrossing occurs only about once every 50,000 cell divisions. Thus, it appears that in nature, mating is likely most often between closely related yeast cells. Mating occurs when haploid cells of opposite mating type MATa and MATα come into contact. Ruderfer et al. pointed out that such contacts are frequent between closely related yeast cells for two reasons. The first is that cells of opposite mating type are present together in the same ascus, the sac that contains the cells directly produced by a single meiosis, and these cells can mate with each other. The second reason is that haploid cells of one mating type, upon cell division, often produce cells of the opposite mating type with which they can mate. The relative rarity in nature of meiotic events that result from outcrossing is inconsistent with the idea that production of genetic variation is the main selective force maintaining meiosis in this organism. However, this finding is consistent with the alternative idea that the main selective force maintaining meiosis is enhanced recombinational repair of DNA damage, since this benefit is realized during each meiosis, whether or not out-crossing occurs.
Genome sequencing
S. cerevisiae was the first eukaryotic genome to be completely sequenced. The genome sequence was released to the public domain on April 24, 1996. Since then, regular updates have been maintained at the Saccharomyces Genome Database. This database is a highly annotated and cross-referenced database for yeast researchers. Another important S. cerevisiae database is maintained by the Munich Information Center for Protein Sequences (MIPS). Further information is located at the Yeastract curated repository.
The S. cerevisiae genome is composed of about 12,156,677 base pairs and 6,275 genes, compactly organized on 16 chromosomes. Only about 5,800 of these genes are believed to be functional. It is estimated that at least 31% of yeast genes have homologs in the human genome. Yeast genes are classified using gene symbols (such as Sch9) or systematic names. In the latter case the 16 chromosomes of yeast are represented by the letters A to P; the gene is further classified by a sequence number on the left or right arm of the chromosome, and a letter showing which of the two DNA strands contains its coding sequence.
Examples:
YBR134C (aka SUP45 encoding eRF1, a translation termination factor) is located on the right arm of chromosome 2 and is the 134th open reading frame (ORF) on that arm, starting from the centromere. The coding sequence is on the Crick strand of the DNA.
YDL102W (aka POL3 encoding a subunit of DNA polymerase delta) is located on the left arm of chromosome 4; it is the 102nd ORF from the centromere and codes from the Watson strand of the DNA.
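The systematic-name convention just illustrated is regular enough to decode mechanically. A minimal sketch in Python (the function and its return format are illustrative, not any standard database API):

```python
import re

def parse_systematic_name(name):
    """Decode a yeast systematic gene name such as YBR134C.

    Y = yeast; second letter A-P = chromosome 1-16; L/R = left or
    right chromosome arm; digits = ORF number counted outward from
    the centromere; final W/C = Watson or Crick strand.
    """
    m = re.fullmatch(r"Y([A-P])([LR])(\d+)([WC])", name)
    if m is None:
        raise ValueError(f"not a systematic name: {name}")
    chrom_letter, arm, orf, strand = m.groups()
    return {
        "chromosome": ord(chrom_letter) - ord("A") + 1,
        "arm": "left" if arm == "L" else "right",
        "orf_number": int(orf),
        "strand": "Watson" if strand == "W" else "Crick",
    }

# The two examples from the text:
print(parse_systematic_name("YBR134C"))  # chromosome 2, right arm, ORF 134, Crick
print(parse_systematic_name("YDL102W"))  # chromosome 4, left arm, ORF 102, Watson
```

Applied to the two examples above, the parser recovers exactly the facts spelled out in the text.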
Gene function and interactions
The availability of the S. cerevisiae genome sequence and a set of deletion mutants covering 90% of the yeast genome has further enhanced the power of S. cerevisiae as a model for understanding the regulation of eukaryotic cells. A project underway to analyze the genetic interactions of all double-deletion mutants through synthetic genetic array analysis will take this research one step further. The goal is to form a functional map of the cell's processes.
The most comprehensive model of genetic interactions yet constructed contains "the interaction profiles for ~75% of all genes in the budding yeast". This model was made from 5.4 million two-gene comparisons, in which a double gene knockout for each combination of the genes studied was performed. The effect of the double knockout on the fitness of the cell was compared to the expected fitness. Expected fitness is determined from the sum of the effects on fitness of single-gene knockouts for each compared gene. When there is a change in fitness from what is expected, the genes are presumed to interact with each other. This was tested by comparing the results to what was previously known. For example, the genes Par32, Ecm30, and Ubp15 had interaction profiles similar to those of genes involved in the Gap1-sorting module cellular process. Consistent with the results, these genes, when knocked out, disrupted that process, confirming that they are part of it.
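The comparison described above can be sketched numerically. The snippet below uses the multiplicative null model common in synthetic genetic array work (expected double-mutant fitness = product of single-mutant fitnesses; for small defects this is nearly the same as summing the single-knockout fitness effects). The function name and example numbers are illustrative, not from any published pipeline:

```python
def interaction_score(f_a, f_b, f_ab):
    """Genetic interaction score: epsilon = observed - expected fitness.

    f_a, f_b : fitness of the two single knockouts (wild type = 1.0)
    f_ab     : measured fitness of the double knockout
    Expected fitness under no interaction is f_a * f_b.
    epsilon < 0 : aggravating (synthetic sick/lethal) interaction
    epsilon > 0 : alleviating interaction (e.g. same-pathway genes)
    """
    return f_ab - f_a * f_b

# No interaction: the double knockout is exactly as sick as predicted.
print(interaction_score(0.9, 0.8, 0.72))  # ~0.0
# Aggravating: the double knockout is far sicker than expected.
print(interaction_score(0.9, 0.8, 0.30))  # strongly negative epsilon
```

Genes whose epsilon profiles across many partners look alike are then grouped, which is the basis of the functional map described below.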
From this, 170,000 gene interactions were found and genes with similar interaction patterns were grouped together. Genes with similar genetic interaction profiles tend to be part of the same pathway or biological process. This information was used to construct a global network of gene interactions organized by function. This network can be used to predict the function of uncharacterized genes based on the functions of genes they are grouped with.
Other tools in yeast research
Approaches that can be applied in many different fields of biological and medicinal science have been developed by yeast scientists. These include the yeast two-hybrid system for studying protein interactions and tetrad analysis. Other resources include a gene deletion library of ~4,700 viable haploid single-gene deletion strains, a GFP fusion strain library used to study protein localisation, and a TAP tag library used to purify protein from yeast cell extracts.
Stanford University's yeast deletion project created knockout mutations of every gene in the S. cerevisiae genome to determine their function.
Synthetic yeast chromosomes and genomes
The yeast genome is highly accessible to manipulation, hence it is an excellent model for genome engineering.
The international Synthetic Yeast Genome Project (Sc2.0 or Saccharomyces cerevisiae version 2.0) aims to build an entirely designer, customizable, synthetic S. cerevisiae genome from scratch that is more stable than the wild type. In the synthetic genome all transposons, repetitive elements and many introns are removed, all UAG stop codons are replaced with UAA, and transfer RNA genes are moved to a novel neochromosome. Six of the 16 chromosomes have been synthesized and tested; no significant fitness defects have been found.
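The UAG-to-UAA stop-codon swap is simple enough to illustrate in code. A toy Python sketch, assuming plain coding-strand DNA strings (this illustrates the recoding rule, not the actual Sc2.0 design pipeline):

```python
def recode_stop_codon(orf):
    """Replace a terminal TAG stop codon with TAA (the Sc2.0 'stop swap').

    orf: coding-strand DNA of one open reading frame, with length a
    multiple of 3 and ending in a stop codon (TAA, TAG, or TGA).
    Only the final codon is touched, so the encoded protein is
    unchanged; both TAG and TAA terminate translation.
    """
    assert len(orf) % 3 == 0, "ORF length must be a multiple of 3"
    return orf[:-3] + "TAA" if orf.endswith("TAG") else orf

print(recode_stop_codon("ATGGCTTAG"))  # ATGGCTTAA  (TAG swapped for TAA)
print(recode_stop_codon("ATGGCTTGA"))  # ATGGCTTGA  (TGA left unchanged)
```

Freeing UAG this way is what allows the codon to be reassigned later without disturbing any existing gene.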
All 16 chromosomes can be fused into one single chromosome by successive end-to-end chromosome fusions and centromere deletions. The single-chromosome and wild-type yeast cells have nearly identical transcriptomes and similar phenotypes. The giant single chromosome can support cell life, although this strain shows reduced growth across environments, competitiveness, gamete production and viability.
Astrobiology
Among other microorganisms, a sample of living S. cerevisiae was included in the Living Interplanetary Flight Experiment, which would have completed a three-year interplanetary round-trip in a small capsule aboard the Russian Fobos-Grunt spacecraft, launched in late 2011. The goal was to test whether selected organisms could survive a few years in deep space by flying them through interplanetary space. The experiment would have tested one aspect of transpermia, the hypothesis that life could survive space travel, if protected inside rocks blasted by impact off one planet to land on another. Fobos-Grunt's mission ended unsuccessfully, however, when it failed to escape low Earth orbit. The spacecraft along with its instruments fell into the Pacific Ocean in an uncontrolled re-entry on January 15, 2012. The next planned exposure mission in deep space using S. cerevisiae is BioSentinel. (see: List of microorganisms tested in outer space)
In commercial applications
Brewing
Saccharomyces cerevisiae is used in brewing beer, when it is sometimes called a top-fermenting or top-cropping yeast. It is so called because during the fermentation process its hydrophobic surface causes the flocs to adhere to CO2 bubbles and rise to the top of the fermentation vessel. Top-fermenting yeasts ferment at higher temperatures than the lager yeast Saccharomyces pastorianus, and the resulting beers have a different flavor from the same beverage fermented with a lager yeast. "Fruity esters" may be formed if the yeast undergoes temperatures near , or if the fermentation temperature of the beverage fluctuates during the process. Lager yeast normally ferments at a temperature of approximately 278 K (5 °C), where Saccharomyces cerevisiae becomes dormant. A variant yeast known as Saccharomyces cerevisiae var. diastaticus is a beer spoiler which can cause secondary fermentations in packaged products.
In May 2013, the Oregon legislature made S. cerevisiae the official state microbe in recognition of the impact craft beer brewing has had on the state economy and the state's identity.
Baking
S. cerevisiae is used in baking; the carbon dioxide generated by the fermentation is used as a leavening agent in bread and other baked goods. Historically, this use was closely linked to the brewing industry's use of yeast, as bakers took or bought the barm or yeast-filled foam from brewing ale from the brewers (producing the barm cake); today, brewing and baking yeast strains are somewhat different.
Nutritional yeast
Saccharomyces cerevisiae is the main source of nutritional yeast, which is sold commercially as a food product. It is popular with vegans and vegetarians as an ingredient in cheese substitutes, or as a general food additive as a source of vitamins and minerals, especially amino acids and B-complex vitamins.
Uses in aquaria
Owing to the high cost of commercial CO2 cylinder systems, CO2 injection by yeast is one of the most popular DIY approaches followed by aquaculturists for providing CO2 to underwater aquatic plants. The yeast culture is, in general, maintained in plastic bottles, and typical systems provide one bubble every 3–7 seconds. Various approaches have been devised to allow proper absorption of the gas into the water.
Direct use in medicine
Saccharomyces cerevisiae is used as a probiotic in humans and animals. The strain Saccharomyces cerevisiae var. boulardii is industrially manufactured and used clinically as a medication.
Several clinical and experimental studies have shown that S. cerevisiae var. boulardii is, to a lesser or greater extent, useful for prevention or treatment of several gastrointestinal diseases. Moderate-quality evidence has shown that S. cerevisiae var. boulardii reduces the risk of antibiotic-associated diarrhoea in both adults and children, and reduces the risk of adverse effects of Helicobacter pylori eradication therapy. There is some evidence to support the efficacy of S. cerevisiae var. boulardii in prevention (but not treatment) of traveler's diarrhoea and, at least as an adjunct medication, in treatment of acute diarrhoea in adults and children and of persistent diarrhoea in children. It may also reduce symptoms of allergic rhinitis.
Administration of S. cerevisiae var. boulardii is considered generally safe. In clinical trials it was well tolerated by patients, and the rate of adverse effects was similar to that in control groups (i.e., groups given placebo or no treatment). No case of S. cerevisiae var. boulardii fungemia was reported during clinical trials.
In clinical practice, however, cases of fungemia caused by S. cerevisiae var. boulardii have been reported. Patients with compromised immunity or those with central vascular catheters are at particular risk. Some researchers have recommended avoiding the use of S. cerevisiae var. boulardii as a treatment in such patients, while others suggest only that caution must be exercised with its use in at-risk patients.
A human pathogen
Saccharomyces cerevisiae has proven to be an opportunistic human pathogen, though one of relatively low virulence. Despite widespread use of this microorganism at home and in industry, contact with it very rarely leads to infection.
Saccharomyces cerevisiae has been found in the skin, oral cavity, oropharynx, duodenal mucosa, digestive tract, and vagina of healthy humans (one review found it to be reported for 6% of samples from the human intestine). Some specialists consider S. cerevisiae to be part of the normal microbiota of the gastrointestinal tract, the respiratory tract, and the vagina of humans, while others believe that the species cannot be called a true commensal because it originates in food. Its presence in the human digestive system may be rather transient; for example, experiments show that after oral administration to healthy individuals it is eliminated from the intestine within 5 days after the end of administration.
Under certain circumstances, such as degraded immunity, Saccharomyces cerevisiae can cause infection in humans. Studies show that it causes 0.45–1.06% of the cases of yeast-induced vaginitis. In some cases, women suffering from S. cerevisiae-induced vaginal infection were intimate partners of bakers, and the strain was found to be the same one their partners used for baking. As of 1999, no cases of S. cerevisiae-induced vaginitis in women who worked in bakeries themselves had been reported in the scientific literature. Some cases were linked by researchers to use of the yeast in home baking. Cases of infection of the oral cavity and pharynx caused by S. cerevisiae are also known.
Invasive and systemic infections
Occasionally Saccharomyces cerevisiae causes invasive infections (i.e., it gets into the bloodstream or another normally sterile body fluid, or into deep-site tissue such as the lungs, liver or spleen) that can go systemic (involve multiple organs). Such conditions are life-threatening. More than 30% of cases of S. cerevisiae invasive infection lead to death even if treated. S. cerevisiae invasive infections, however, are much rarer than invasive infections caused by Candida albicans, even in patients weakened by cancer. S. cerevisiae causes 1% to 3.6% of nosocomial cases of fungemia. A comprehensive review of S. cerevisiae invasive infection cases found all patients to have at least one predisposing condition.
Saccharomyces cerevisiae may enter the bloodstream or reach other deep sites of the body by translocation from the oral or enteral mucosa, or through contamination of intravascular catheters (e.g., central venous catheters). Intravascular catheters, antibiotic therapy and compromised immunity are major predisposing factors for S. cerevisiae invasive infection.
A number of cases of fungemia were caused by intentional ingestion of living S. cerevisiae cultures for dietary or therapeutic reasons, including use of Saccharomyces boulardii (a strain of S. cerevisiae used as a probiotic for treatment of certain forms of diarrhea). Saccharomyces boulardii causes about 40% of cases of invasive Saccharomyces infection and is more likely (in comparison to other S. cerevisiae strains) to cause invasive infection in humans without general problems with immunity, though such an adverse effect is very rare relative to the frequency of its therapeutic administration.
S. boulardii may contaminate intravascular catheters through hands of medical personnel involved in administering probiotic preparations of S. boulardii to patients.
Systemic infection usually occurs in patients who have their immunity compromised due to severe illness (HIV/AIDS, leukemia, other forms of cancer) or certain medical procedures (bone marrow transplantation, abdominal surgery).
A case was reported in which a nodule was surgically excised from the lung of a man employed in the baking business, and examination of the tissue revealed the presence of Saccharomyces cerevisiae. Inhalation of dry baking yeast powder was presumed to be the source of infection in this case.
Virulence of different strains
Not all strains of Saccharomyces cerevisiae are equally virulent towards humans. Most environmental strains are not capable of growing at temperatures above 35 °C (i.e., at the body temperatures of humans and other mammals). Virulent strains, however, are capable of growing at least above 37 °C and often up to 39 °C (rarely up to 42 °C). Some industrial strains are also capable of growing above 37 °C. The European Food Safety Authority (as of 2017) requires that all S. cerevisiae strains capable of growth above 37 °C that are added to the food or feed chain in viable form must, to qualify for a presumption of safety, show no resistance to antimycotic drugs used for treatment of yeast infections.
The ability to grow at elevated temperatures is an important factor in a strain's virulence, but not the sole one.
Other traits usually believed to be associated with virulence are: the ability to produce certain enzymes, such as proteinase and phospholipase; invasive growth (i.e., growth with intrusion into the nutrient medium); the ability to adhere to mammalian cells; the ability to survive in the presence of hydrogen peroxide (which is used by macrophages to kill foreign microorganisms in the body); and other abilities allowing the yeast to resist or influence the immune response of the host body. The ability to form branching chains of cells, known as pseudohyphae, is also sometimes said to be associated with virulence, though some research suggests that this trait may be common to both virulent and non-virulent strains of Saccharomyces cerevisiae.
Rydberg formula
In atomic physics, the Rydberg formula calculates the wavelengths of spectral lines in many chemical elements. The formula was primarily presented as a generalization of the Balmer series for all atomic electron transitions of hydrogen. It was first empirically stated in 1888 by the Swedish physicist Johannes Rydberg, and later derived theoretically by Niels Bohr in 1913, who used a primitive form of quantum mechanics. The formula directly generalizes the equations used to calculate the wavelengths of the hydrogen spectral series.
History
In 1890, Rydberg proposed a formula describing the relation between the wavelengths in spectral lines of alkali metals. He noticed that lines came in series, and he found that he could simplify his calculations using the wavenumber (the number of waves occupying the unit length, equal to 1/λ, the inverse of the wavelength) as his unit of measurement. He plotted the wavenumbers (n) of successive lines in each series against consecutive integers which represented the order of the lines in that particular series. Finding that the resulting curves were similarly shaped, he sought a single function which could generate all of them, when appropriate constants were inserted.
First he tried the formula n = n₀ − C₀/(m + m′), where n is the line's wavenumber, n₀ is the series limit, m is the line's ordinal number in the series, m′ is a constant different for different series, and C₀ is a universal constant. This did not work very well.
Rydberg was trying n = n₀ − C₀/(m + m′)² when he became aware of Balmer's formula for the hydrogen spectrum, λ = hm²/(m² − 4). In this equation, m is an integer and h is a constant (not to be confused with the later Planck constant).
Rydberg therefore rewrote Balmer's formula in terms of wavenumbers, as n = n₀ − 4n₀/m².
This suggested that the Balmer formula for hydrogen might be a special case with m′ = 0 and C₀ = 4n₀, where n₀ = 1/h, the reciprocal of Balmer's constant (this constant h is written B in the Balmer equation article, again to avoid confusion with the Planck constant).
The term C₀ was found to be a universal constant common to all elements, equal to 4/h. This constant is now known as the Rydberg constant, and m′ is known as the quantum defect.
As stressed by Niels Bohr, expressing results in terms of wavenumber, not wavelength, was the key to Rydberg's discovery. The fundamental role of wavenumbers was also emphasized by the Rydberg–Ritz combination principle of 1908. The fundamental reason for this lies in quantum mechanics. Light's wavenumber is proportional to frequency, and therefore also proportional to light's quantum energy E. Thus, E = hcn (in this formula, h represents the Planck constant and n the wavenumber). The modern understanding is that Rydberg's findings were a reflection of the underlying simplicity of the behavior of spectral lines, in terms of fixed (quantized) energy differences between electron orbitals in atoms. Rydberg's 1888 classical expression for the form of the spectral series was not accompanied by a physical explanation. Walther Ritz's pre-quantum 1908 explanation for the mechanism underlying the spectral series was that atomic electrons behaved like magnets and that the magnets could vibrate with respect to the atomic nucleus (at least temporarily) to produce electromagnetic radiation, but this theory was superseded in 1913 by Niels Bohr's model of the atom.
Bohr's interpretation and derivation of the constant
Rydberg's published formula was
where is the observed wavenumber, is a constant for all spectral series and elements, and the remaining values are integers indexing the various lines. When Bohr analyzed his model for the atom, he wrote
where he used frequency (proportional to wavenumber).
Thus he was able to compute the value of Rydberg's heuristic constant from his atomic theory and set the integers and to zero. The effect is to predict new series corresponding to in the extreme ultraviolet unknown to Rydberg.
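Bohr's computation can be reproduced numerically: his model fixes the Rydberg constant in terms of fundamental quantities as R∞ = m_e·e⁴ / (8·ε₀²·h³·c). A quick check in Python with CODATA 2018 values:

```python
# Bohr's prediction for the Rydberg constant (infinite nuclear mass):
# R_inf = m_e * e^4 / (8 * eps0^2 * h^3 * c), CODATA 2018 values.
m_e  = 9.1093837015e-31   # electron mass, kg
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
h    = 6.62607015e-34     # Planck constant, J*s
c    = 2.99792458e8       # speed of light in vacuum, m/s

R_inf = m_e * e**4 / (8 * eps0**2 * h**3 * c)
print(R_inf)  # ~1.09737e7 per metre, matching the measured Rydberg constant
```

That a handful of fundamental constants reproduces Rydberg's empirically fitted value was one of the striking successes of Bohr's 1913 model.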
In Bohr's conception of the atom, the integer Rydberg (and Balmer) n numbers represent electron orbitals at different integral distances from the atom. A frequency (or spectral energy) emitted in a transition from n1 to n2 therefore represents the photon energy emitted or absorbed when an electron makes a jump from orbital 1 to orbital 2.
Later models found that the values for n1 and n2 corresponded to the principal quantum numbers of the two orbitals.
For hydrogen
where
is the wavelength of electromagnetic radiation emitted in vacuum,
is the Rydberg constant for hydrogen, approximately ,
is the principal quantum number of an energy level, and
is the principal quantum number of an energy level for the atomic electron transition.
Note: Here, n2 > n1.
By setting n1 to 1 and letting n2 run from 2 to infinity, the spectral lines known as the Lyman series, converging to 91 nm, are obtained; the other spectral series of hydrogen are obtained in the same manner.
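The calculation just described is short enough to write out. A small Python sketch, using the hydrogen Rydberg constant R_H ≈ 1.09678×10⁷ m⁻¹ (the function name is ours):

```python
R_H = 1.09678e7  # Rydberg constant for hydrogen, per metre

def wavelength_nm(n1, n2):
    """Vacuum wavelength of the hydrogen n2 -> n1 transition, in nm,
    from the Rydberg formula 1/lambda = R_H * (1/n1^2 - 1/n2^2)."""
    assert n2 > n1 >= 1
    return 1e9 / (R_H * (1 / n1**2 - 1 / n2**2))

# Lyman series: n1 = 1, n2 = 2, 3, 4, ...
for n2 in (2, 3, 4, 5):
    print(f"n2 = {n2}: {wavelength_nm(1, n2):.1f} nm")
# 121.6, 102.6, 97.3, 95.0 nm ... converging to 1e9 / R_H = 91.2 nm
```

The series limit 1/R_H is exactly the ~91 nm convergence point mentioned above; setting n1 to 2 instead yields the visible-light Balmer series.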
For any hydrogen-like element
The formula above can be extended for use with any hydrogen-like chemical elements with
where
is the wavelength (in vacuum) of the light emitted,
is the Rydberg constant for this element,
is the atomic number, i.e. the number of protons in the atomic nucleus of this element,
is the principal quantum number of the lower energy level, and
is the principal quantum number of the higher energy level for the atomic electron transition.
This formula can be directly applied only to hydrogen-like (also called hydrogenic) atoms of chemical elements, i.e., atoms with only one electron being affected by an effective nuclear charge (which is easily estimated). Examples would include He+, Li2+, Be3+ etc., where no other electrons exist in the atom.
But the Rydberg formula also provides correct wavelengths for distant electrons, where the effective nuclear charge can be estimated as the same as that for hydrogen, since all but one of the nuclear charges have been screened by other electrons, and the core of the atom has an effective positive charge of +1.
Finally, with certain modifications (replacement of Z by Z − 1, and use of the integers 1 and 2 for the ns to give a numerical value of 3/4 for the difference of their inverse squares), the Rydberg formula provides correct values in the special case of K-alpha lines, since the transition in question is the K-alpha transition of the electron from the 1s orbital to the 2p orbital. This is analogous to the Lyman-alpha line transition for hydrogen, and has the same frequency factor. Because the 2p electron is not screened by any other electrons in the atom from the nucleus, the nuclear charge is diminished only by the single remaining 1s electron, causing the system to be effectively a hydrogenic atom, but with a diminished nuclear charge Z − 1. Its frequency is thus the Lyman-alpha hydrogen frequency, increased by a factor of (Z − 1)2. This formula of f = c / λ = (Lyman-alpha frequency) ⋅ (Z − 1)2 is historically known as Moseley's law (having added a factor c to convert wavelength to frequency), and can be used to predict wavelengths of the Kα (K-alpha) X-ray spectral emission lines of chemical elements from aluminum to gold. See the biography of Henry Moseley for the historical importance of this law, which was derived empirically at about the same time it was explained by the Bohr model of the atom.
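Moseley's law as stated above can be checked against a known line: with the Z − 1 screening and the 1/1² − 1/2² = 3/4 factor, the predicted Kα wavelength for copper (Z = 29) lands near the measured 0.154 nm. A Python sketch (function name is ours):

```python
R = 1.0973731568e7  # Rydberg constant, per metre

def k_alpha_wavelength_nm(Z):
    """Moseley's-law estimate of the K-alpha wavelength for atomic number Z.

    Applies the Rydberg formula with effective charge Z - 1 (screening by
    the one remaining 1s electron) and the Lyman-alpha factor
    1/1^2 - 1/2^2 = 3/4.
    """
    return 1e9 / (R * (Z - 1)**2 * 0.75)

print(k_alpha_wavelength_nm(29))  # copper: ~0.155 nm (measured: ~0.154 nm)
print(k_alpha_wavelength_nm(42))  # molybdenum: ~0.072 nm
```

The small residual discrepancy reflects exactly the variable screening effects discussed in the next paragraph.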
For other spectral transitions in multi-electron atoms, the Rydberg formula generally provides incorrect results, since the magnitude of the screening of inner electrons for outer-electron transitions is variable and cannot be compensated for in the simple manner above. The correction to the Rydberg formula for these atoms is known as the quantum defect.
Taser
A TASER (also variously "Taser" or "taser") is a conducted energy device (CED) primarily used to incapacitate people by delivering an intense electric shock that briefly disrupts voluntary control of the muscles, allowing the person to be approached and handled without resistance. The brand-name product, sold by Axon (formerly TASER International), is a handheld device which fires two small barbed darts intended to puncture the skin and remain attached to the target until removed by the user of the device. The darts are connected to the main unit by thin wires that achieve a high dielectric strength and durability given the extremely high-voltage electric current they conduct (typically 50,000 volts, or 2,000 volts under load), which can be delivered in short-duration pulses from a core of copper wire in the main unit. This rush of current into the body produces effects ranging from localized pain to strong involuntary long muscle contractions, causing "neuromuscular incapacitation" (NMI), depending on the mode of use (tasing frequency and environmental factors) and the connectivity of the darts. When successfully used, the target is said to have been "tased".
The first TASER conducted energy weapon was introduced in 1993 as a less-lethal option for police to use to subdue belligerent or fleeing suspects, who might otherwise need to be subdued with more lethal means such as firearms. According to one study, over 15,000 law enforcement and military agencies around the world used TASERs as part of their use of force continuum. In the United States, TASERs are marketed as less-lethal (as opposed to non-lethal), since the possibility of serious injury or death still exists whenever the weapon is deployed. At least 49 people died in 2018 after being shocked by police with a TASER. Personal-use TASERs are marketed in the US but prohibited in Canada, where there is a categorical ban on all conducted energy weapons such as stun guns and TASERs, except for use by law enforcement.
A 2009 report by the Police Executive Research Forum in the United States found that police officer injuries dropped by 76% in large law enforcement agencies that deployed TASER devices in the first decade of the 21st century compared with those that did not use them at all. Axon and its CEO Rick Smith have claimed that unspecified "police surveys" show that the device has "saved 75,000 lives through 2011". A more recent academic study suggested police use of conducted energy weapons in the United States was less risky to police officers than hands-on tactics, and showed officer injury rates equal to those experienced by officers using alternative incapacitation methods such as pepper spray.
History
TASERs have a long history of use for preventing the escape of dangerous suspects without resorting to lethal force, or for capturing suspects without risking serious injury to either the officer or the suspect. A United States patent by Kunio Shimizu titled "Arrest device" filed in 1966 describes an electrical discharge gun with a projectile connected to a wire with a pair of electrode needles for skin attachment.
Jack Cover, a NASA researcher, began developing the first TASER in 1969. By 1974, Cover had completed the device, which he named TASER, using a loose acronym inspired by the title of the 1911 novel Tom Swift and His Electric Rifle, a book written by the Stratemeyer Syndicate under the pseudonym Victor Appleton and featuring Cover's childhood hero, Tom Swift. The name made sense, given that the TASER delivers an electric shock. This was also done on the pattern of laser, as both a TASER and a laser fire a "beam" of energy at an object.
The first TASER model that was offered for sale, called the TASER Public Defender, used gunpowder as its propellant, which led the Bureau of Alcohol, Tobacco and Firearms to classify it as a firearm in 1976.
Former TASER International CEO Patrick Smith testified in a TASER-related lawsuit that the catalyst for the development of the device was the "shooting death of two of his high school acquaintances" by a "guy with a legally licensed gun who lost his temper". The two decedents, Todd Bogers and Cory Holmes, died in 1991, not 1990 as Smith has claimed. Family members and friends of the two state that, contrary to Smith's claims, he was not friends with them and they were never "football teammates"; the two graduated before Smith attended Chaparral High School. Family members of the two have criticized his use of their deaths for profit.
In 1993, Rick Smith and his brother Thomas founded the original company, TASER, and began to investigate what they called "safer use of force option[s] for citizens and law enforcement". At their Scottsdale, Arizona, facilities, the brothers worked with Cover to develop a "non-firearm TASER electronic control device". The 1994 Air TASER Model 34000 conducted energy device had an "anti-felon identification (AFID) system" to prevent the likelihood that the device would be used by criminals; upon use, it released many small pieces of paper containing the serial number of the TASER device. The U.S. Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) stated that the Air TASER conducted energy device was not a firearm.
In 1999, TASER International developed an "ergonomically handgun-shaped device called the Advanced TASER M-series systems", which used a "patented neuromuscular incapacitation (NMI) technology". In May 2003, TASER International released a new weapon called the TASER X26 conducted energy device, which used "shaped pulse technology". On July 27, 2009, TASER International released a new type of TASER device called the X3, which can fire three shots before reloading. It holds three new type cartridges, which are much thinner than the previous model. On April 5, 2017, TASER announced that it was rebranding itself as Axon to reflect its expanded business into body cameras and software. In 2018, TASER 7 conducted energy device was released, the seventh generation of TASER devices from Axon.
Function
A TASER device fires two small dart-like electrodes, which stay connected to the main unit by thin insulated copper wire as they are propelled by small compressed nitrogen charges. The cartridge contains a pair of electrodes and propellant for a single shot and is replaced after each use. Once fired, the probes travel at per second, spread apart for every they travel, and must land at least apart from each other to complete the circuit and channel an electric pulse into the target's body. They deliver a modulated electric current designed to disrupt voluntary control of muscles, causing "neuromuscular incapacitation". The effects of a TASER device may only be localized pain or strong involuntary long muscle contractions, based on the mode of use, connectivity and location of the darts on the body. The TASER device is marketed as "less-lethal", since the possibility of serious injury or death exists whenever the weapon is deployed.
There are a number of cartridges designated by range, with the maximum at . Cartridges available to non-law enforcement consumers are limited to . Practically speaking, police officers must generally be within to use a TASER, though the X26's probes can travel as far as 35 feet.
The electrodes are pointed to penetrate clothing and barbed to prevent removal once in place. The original TASER device probes unspool the wire from the cartridge, causing a yaw effect before the dart stabilizes, which makes it difficult to penetrate thick clothing. Newer versions (X26, C2) use a "shaped pulse" that increases effectiveness in the presence of barriers.
The TASER 7 conducted energy device is a two-shot device with increased reliability over legacy products. The conductive wires spool from the dart when the TASER 7 conducted energy device is fired, instead of spooling from the TASER cartridge, which increases stability in flight and therefore accuracy. The spiral darts fly straighter and faster, with nearly twice the kinetic energy, for better connection to the target and penetration through thicker clothing. The body of the dart breaks away to allow for containment at tough angles. TASER 7 has a 93% increased probe spread at close range, where 85% of deployments occur, according to agency reports. Rapid arc technology with adaptive cross-connection helps enable full incapacitation even at close range. TASER 7 wirelessly connects to the Axon network, allowing for easier updates and inventory management.
A TASER device may provide a safety benefit to police officers. The use of a TASER device has a greater deployment range than batons, pepper spray, or empty hand techniques. This allows police to maintain a greater distance. A 2008 study of use-of-force incidents by the Calgary Police Service conducted by the Canadian Police Research Centre found that the use of the TASER device resulted in fewer injuries than the use of batons or empty hand techniques. The study found that only pepper spray was a safer intervention option.
A typical TASER device can operate with a peak voltage of 50 kilovolts (1,200 volts to the body) and an electric current of 1.9 milliamps, delivered as, for example, 19 pulses of 100 microseconds each per second. A supplier quotes a current of 3–4 milliamps.
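Under an idealized rectangular-pulse assumption (ours, not a manufacturer specification), those figures imply a very small duty cycle, which is why the time-averaged current stays in the milliamp range despite the high voltage:

```python
# Back-of-the-envelope pulse arithmetic from the figures quoted above,
# assuming idealized rectangular pulses of constant amplitude.
pulses_per_second = 19
pulse_width_s     = 100e-6   # 100 microseconds per pulse
avg_current_a     = 1.9e-3   # 1.9 milliamps, quoted time-averaged current

duty_cycle = pulses_per_second * pulse_width_s   # fraction of each second "on"
peak_current_a = avg_current_a / duty_cycle      # current during a pulse

print(f"duty cycle:   {duty_cycle:.2%}")         # 0.19%
print(f"peak current: {peak_current_a:.2f} A")   # 1.00 A under this idealization
```

The actual pulse shape is not rectangular, so the real peak current differs; the point of the sketch is only that a sub-percent duty cycle separates the pulse amplitude from the milliamp-range average.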
Models
As of September 30, 2024, Axon has three main models of TASER conducted electrical weapons (CEWs) available for law enforcement use but not necessarily civilian use. Civilians, however, have access to the TASER Pulse, which runs a 30-second cycle once fired to allow the victim an opportunity to escape.
The TASER X26P device is a single-shot CEW that is the smallest, most compact SMART WEAPON of all four Axon models.
The TASER X2 device is a two-shot TASER CEW with a warning arc and dual lasers. The warning arc is a function the officer can utilize with the push of a button to intimidate an aggressor, warn a potential assailant, and gain compliance of a suspect without having to deploy the loaded cartridges. During the warning arc mode, the TASER CEW will display an arc of electricity at the front of the device.
The TASER 7 device is the second newest of the four CEWs. It is a two-shot device whose spiral darts spool their wires from the dart itself, allowing the probes to fly straighter. Its rapid-arc technology with adaptive cross-connection allows for full incapacitation. The TASER 7 connects wirelessly to the Axon Evidence network, which includes inventory management capabilities among other features.
The TASER 10 device was officially announced by Axon on January 24, 2023, and dubbed by the company the "less-lethal weapon of its era". In addition to the functions of the TASER 7, the TASER 10 features an increased probe range of up to 45 feet, waterproofing, a higher probe velocity (205 feet per second), and the ability to deploy probes individually, letting the officer create their own "spread"; previous models relied heavily on precise aiming of prongs fired at a fixed angle with the assistance of two lasers.
Lethality
As with all less-lethal weapons, use of the TASER system is never risk-free. Because sharp metal projectiles and electricity are involved, misuse or abuse of the weapon increases the likelihood of serious injury or death. The manufacturer has also identified factors that may increase the risks of use: children, pregnant women, the elderly, and very thin individuals are considered at higher risk, as are persons with known medical problems such as heart disease, a history of seizures, or a pacemaker. Axon also warns that repeated, extended, or continuous exposure to the weapon is not safe. Because of this, the Police Executive Research Forum says that total exposure should not exceed 15 seconds.
There are other circumstances that pose higher secondary risks of serious injury or death, including:
Uncontrolled falls or subjects falling from elevated positions
Persons running on hard or rough surfaces, like asphalt
Persons operating machinery or conveyances (cars, motorcycles, bikes, skateboards)
Places where explosive or flammable substances are present
Fulton County, Georgia District Attorney Paul Howard Jr. said in 2020 that "under Georgia law, a taser is considered as a deadly weapon." A 2012 study published in the American Heart Association's journal Circulation found that Tasers can cause "ventricular arrhythmias, sudden cardiac arrest and even death". In 2014, NAACP State Conference President Scot X. Esdaile and the Connecticut NAACP argued that Tasers cause lethal results. Reuters reported that more than 1,000 people shocked with a Taser by police died through the end of 2018, nearly all of them since the early 2000s. At least 49 people died in the US in 2018 after being shocked by police with a Taser.
Drive Stun capability
Some TASER device models, particularly those used by police departments, also have a "Drive Stun" capability, where the TASER device is held against the target without firing the projectiles, and is intended to cause pain without incapacitating the target. "Drive Stun" is "the process of using the EMD (Electro Muscular Disruption) weapon as a pain compliance technique. This is done by activating the TASER [device] and placing it against an individual's body. This can be done without an air cartridge in place or after an air cartridge has been deployed."
Guidelines released in 2011 by the U.S. Department of Justice recommend that use of Drive Stun as a pain compliance technique be avoided. The guidelines were issued by a joint committee of the Police Executive Research Forum and the U.S. Department of Justice Office of Community Oriented Policing Services. The guidelines state "Using the CEW to achieve pain compliance may have limited effectiveness and, when used repeatedly, may even exacerbate the situation by inducing rage in the subject."
A study of U.S. police and sheriff departments found that 29.6% of the jurisdictions allowed the use of Drive Stun for gaining compliance in a passive resistance arrest scenario, with no physical contact between the officer and the subject. For a scenario that also includes non-violent physical contact, this number is 65.2%.
A Las Vegas police document says "The Drive Stun causes significant localized pain in the area touched by the TASER [CEW], but does not have a significant effect on the central nervous system. The Drive Stun does not incapacitate a subject but may assist in taking a subject into custody." The UCLA Taser incident and the University of Florida Taser incident involved university police officers using their TASER device's "Drive Stun" capability (referred to as a "contact tase" in the University of Florida Offense Report).
Amnesty International has expressed particular concern about Drive Stun, noting that "the potential to use TASERs in drive-stun mode—where they are used as 'pain compliance' tools when individuals are already effectively in custody—and the capacity to inflict multiple and prolonged shocks, renders the weapons inherently open to abuse."
Users
According to a 2011 study by the United States Department of Justice's National Institute of Justice entitled Police Use of Force, TASERs and Other Less-Lethal Weapons, over 15,000 law enforcement and military agencies around the world used TASER devices as part of their use-of-force continuum. Just as the number of agencies deploying TASER conducted energy weapons has continued to increase each year, so too has the number of TASER-related "incidents" between law enforcement officers and suspects.
Excited delirium syndrome
Some of the deaths associated with TASER devices have been blamed on excited delirium, a controversial medical diagnosis that supposedly involves extreme agitation and aggressiveness. It has typically been diagnosed postmortem in young adult black males who were physically restrained by law enforcement at the time of death. The diagnosis was supported by the American College of Emergency Physicians from 2009 to 2023 and the National Association of Medical Examiners until 2023.
Excited delirium is thought to involve delirium, psychomotor agitation, anxiety, hallucinations, speech disturbances, disorientation, violent and bizarre behavior, insensitivity to pain, elevated body temperature, and increased strength. Excited delirium is associated with sudden death (usually via cardiac or respiratory arrest), particularly following the use of physical control measures, including police restraint and TASER devices. Excited delirium is most commonly diagnosed in male subjects with a history of serious mental illness or acute or chronic drug abuse, particularly stimulant drugs such as cocaine. Alcohol withdrawal or head trauma may also contribute to the condition.
The diagnosis of excited delirium has been controversial. Excited delirium has been listed as a cause of death by some medical examiners for several years, mainly as a diagnosis of exclusion established on autopsy. Additionally, academic discussion of excited delirium has been largely confined to forensic science literature, providing limited documentation about patients that survive the condition. These circumstances have led some civil liberties groups to question the cause of death diagnosis, claiming that excited delirium has been used to "excuse and exonerate" law enforcement authorities following the death of detained subjects, a possible "conspiracy or cover-up for brutality" when restraining agitated individuals. Also contributing to the controversy is the role of TASER device use in excited delirium deaths.
Excited delirium is not found in the current version of the Diagnostic and Statistical Manual of Mental Disorders. The term excited delirium was accepted by the National Association of Medical Examiners and the American College of Emergency Physicians, who argued in a 2009 white paper that excited delirium may be described by several codes within the ICD-9. In 2017, investigative reporters from Reuters reported that three of the 19 members of the 2009 task force were paid consultants for Axon, the manufacturer of Tasers.
Usage worldwide
Australia
Tasers are prohibited for civilian ownership in every Australian state and territory; a weapons permit is required to purchase and own one.
Canada
Only members of law enforcement are allowed to own a taser legally. However, according to an article by The Globe and Mail, many Canadians illegally purchase tasers from the US, where they are legal.
China
Under the Law of the People's Republic of China on the Control of Firearms and Public Security Punishment Law, tasers are prohibited for civilian ownership in China without an application for a state licence. A weapons permit is required to purchase and own a taser.
Germany
Since April 2008, tasers can be legally purchased by persons 18 and older, but can only be carried by persons with a firearm carry permit, which is only issued under very restricted conditions.
In 2001, Germany approved a pilot project allowing individual states to issue tasers to their SEK teams (police tactical units); by 2018, 13 out of 16 states had done so. A number of states have also provided a limited number of tasers to their general police forces. Some states, such as Berlin, have use of force guidelines that only permit taser use where firearm use would also be justified.
The Bundeswehr (German armed forces) does not issue tasers nor are they used in training.
Ireland
Under the Firearms Act of 1925, tasers, pepper spray and stun guns are illegal to possess or purchase in Ireland, even with a valid firearms certificate.
Jamaica
Tasers are legal for civilians to own, provided they possess a valid permit under the Customs Act. Currently, police in Jamaica do not have access to tasers, but in February 2021, Corporal James Rohan, Chairman of the Police Federation, requested access to non-lethal weaponry in order to deal more effectively with encounters with mentally ill individuals.
Japan
Under the Firearm and Sword Possession Control Law, import, carrying, purchase and use of stun guns or tasers is prohibited in Japan.
Poland
Any electroshock weapon, including stun guns and tasers, with amperage under 10 mA can be purchased by anyone over the age of 18 without permit or background checks. As most tasers fulfil those criteria, they are widely available in self-defence stores.
Russia
Stun guns and tasers made in Russia can be purchased for self-defense without special permission; however, under Federal Law No. 150 "On Weapons" of the Russian Federation, it is illegal to import foreign stun devices or tasers into the country or to subsequently sell them. The ban has been in place since the first version of the law was approved in 1996.
Saudi Arabia
Tasers are classified as weapons under Federal Law No. 3 of 2009, and therefore require a valid license to own or import.
South Korea
Tasers have been adopted and used by the Korean National Police since 2004.
United Kingdom
Tasers have been in use by UK police forces since 2001; officers must complete 18 hours of initial training, followed by six hours of annual refresher training, to be allowed to carry and use one. Members of the general public are not allowed to own tasers, with possession or sale of a taser punishable by up to 10 years in prison. As of September 2019, 30,548 police officers (19%) were trained to use tasers. Tasers were deployed 23,000 times from March 2018 to March 2019, compared with only 10,000 times in 2013; however, the UK police definition of "deployed" means only that the weapon was drawn, and in the majority of cases it will not have been fired. In March 2020, extra funding was provided to purchase devices allowing more than 8,000 additional British police officers to carry a taser.
Use on children
There has been considerable controversy over the use of Taser devices on children and in schools.
Criminal use
The earliest known case of a taser being used on a child was on June 10, 1991, when one was used to incapacitate an 11-year-old girl in order to kidnap her. According to Jaycee Dugard, whenever she tried to escape, her kidnapper threatened to use the taser again.
Police use
In 2004, the parents of a 6-year-old boy in Miami sued the Miami-Dade County Police department for firing a Taser device at their child. The police said the boy was threatening to injure his own leg with a shard of glass, and said that using the device was the safest option to prevent the boy from injuring himself. The boy's mother told CNN that the three officers involved probably found it easier not to reason with her child. In the same county two weeks later, a 12-year-old girl skipping school and drinking alcohol was tased while she was running from police. The Miami-Dade County Police reported that the girl had started to run into traffic and that the Taser device was deployed to stop her from being hit by cars or causing an automobile accident. In March 2008, an 11-year-old girl was subdued with a Taser device. In March 2009, a 15-year-old boy in Michigan died from alcohol-induced excited delirium coupled with application of an electromuscular disruption device.
Police claim that the use of TASER conducted energy weapons on smaller subjects and elderly subjects is safer than alternative methods of subduing suspects, alleging that striking them or falling on them will cause much more injury than a TASER device, because the device is designed to only cause the contraction of muscles. Critics counter that TASER devices may interact with pre-existing medical complications such as medications, and may even contribute to someone's death as a result. Critics also suggest that using a Taser conducted electrical weapon on a minor, particularly a young child, is effectively cruel and abusive punishment, or unnecessary.
In May 2023, in Cooma, NSW, Australia, police tasered a 95-year-old dementia patient from less than away, after apparently giving up on negotiating with her to drop the knife she was holding. At the time, she was standing upright and holding onto her four-wheeled walker. She survived the tasering itself but succumbed to head injuries sustained in the subsequent fall and died a week later. Her estate sued the NSW Government, and in April 2024 the accused and suspended police officer pleaded not guilty to manslaughter; he remains free on bail awaiting trial.
Excessive use by law enforcement
In 2019, two Oklahoma police officers used Tasers more than 50 times on an unarmed man, resulting in his death; both officers were later convicted of second-degree murder. In January 2023, Los Angeles Police Department officers tasered a teacher at least six times, resulting in the man's death. In 2014, police officers in Catasauqua, Pennsylvania, inflicted serious injuries on a man during a DUI arrest when they tasered him eleven times while he was handcuffed and restrained in the back of a police vehicle.
A New York Times study published in 2025 collected Taser log documents from 36 police departments in Mississippi from 2020 through 2024. Data collection was incomplete, since several departments submitted no data or only partial data. The study identified 44,000 incidents in which one or more Tasers were triggered for at least one second each over the course of an hour. Reporters manually reviewed the nearly 1,000 cases that lasted at least 15 seconds. Once training operations were eliminated, the review found 611 incidents that lasted at least 15 seconds (the maximum shock duration per encounter recommended under national standards). In addition to 44 allegations of "Taser abuse over the past decade from lawsuits and department records", the Times reporters found hundreds more "incidents that raise red flags by examining Taser logs across the state". Cases described in the article include 11 people who were shocked while they were pinned down or handcuffed, such as Vivian Burks, an unarmed 65-year-old great-grandmother accused of marijuana use who was shocked 4 times in under one minute, and Keith Murriel, who died after being shocked at least 40 times for refusal to leave a hotel parking lot.
Use on non-human subjects
Tasers are used to immobilize wildlife for research, relocation, or treatment, although tranquilizer darts are more commonly used, since taser use on animals is considered by some to be a form of torture.
Use in torture
A report from a meeting of the United Nations Committee Against Torture states that "The Committee was worried that the use of TASER X26 weapons, provoking extreme pain, constituted a form of torture, and that in certain cases it could also cause death, as shown by several reliable studies and by certain cases that had happened after practical use." Amnesty International has also raised extensive concerns about the use of other electro-shock devices by American police and in American prisons, as they can be (and according to Amnesty International, sometimes are) used to inflict cruel pain on individuals.
In response to the claims that the pain inflicted by the use of the TASER device could potentially constitute torture, Tom Smith, the Chairman of the TASER Board, stated that the U.N. is "out of touch" with the needs of modern policing and asserted that "Pepper spray goes on for hours and hours, hitting someone with a baton breaks limbs, shooting someone with a firearm causes permanent damage, even punching and kicking—the intent of those tools is to inflict pain, ... with the TASER device, the intent is not to inflict pain; it's to end the confrontation. When it's over, it's over."
Legality
White stork
The white stork (Ciconia ciconia) is a large bird in the stork family, Ciconiidae. Its plumage is mainly white, with black on the bird's wings. Adults have long red legs and long pointed red beaks, and measure on average from beak tip to end of tail, with a wingspan. The two subspecies, which differ slightly in size, breed in Europe north to Finland, northwestern Africa, Palearctic east to southern Kazakhstan and southern Africa. The white stork is a long-distance migrant, wintering in Africa from tropical Sub-Saharan Africa to as far south as South Africa, or on the Indian subcontinent. When migrating between Europe and Africa, it avoids crossing the Mediterranean Sea and detours via the Levant in the east or the Strait of Gibraltar in the west, because the air thermals on which it depends for soaring do not form over water.
A carnivore, the white stork eats a wide range of animal prey, including insects, fish, amphibians, reptiles, small mammals and small birds. It takes most of its food from the ground, among low vegetation, and from shallow water. It is a monogamous breeder, and both members of the pair build a large stick nest, which may be used for several years. Each year the female can lay one clutch of usually four eggs, which hatch asynchronously 33–34 days after being laid. Both parents take turns incubating the eggs and both feed the young. The young leave the nest 58–64 days after hatching, and continue to be fed by the parents for a further 7–20 days.
The white stork has been rated as least concern by the International Union for Conservation of Nature (IUCN). It benefited from human activities during the Middle Ages as woodland was cleared, but changes in farming methods and industrialisation saw it decline and disappear from parts of Europe in the 19th and early 20th centuries. Conservation and reintroduction programs across Europe have resulted in the white stork resuming breeding in the Netherlands, Belgium, Switzerland, Sweden and the United Kingdom. It has few natural predators, but may harbour several types of parasite; the plumage is home to chewing lice and feather mites, while the large nests maintain a diverse range of mesostigmatic mites. This conspicuous species has given rise to many legends across its range, of which the best-known is the story of babies being brought by storks.
Taxonomy and evolution
English naturalist Francis Willughby wrote about the white stork in the 17th century, having seen a drawing sent to him by Sir Thomas Browne of Norwich. He named it Ciconia alba. They noted they were occasional vagrants to England, blown there by storms. It was one of the many bird species originally described by Carl Linnaeus in the 10th edition of Systema Naturae, where it was given the binomial name of Ardea ciconia. It was reclassified to and designated the type species of the new genus Ciconia by Mathurin Jacques Brisson in 1760. Both the genus and specific epithet, cĭcōnia, are the Latin word for "stork".
There are two subspecies:
C. c. ciconia, the nominate subspecies described by Linnaeus in 1758, breeds from Europe to northwestern Africa and westernmost Asia and in southern Africa, and winters mainly in Africa south of the Sahara Desert, though some birds winter in India.
C. c. asiatica, described by Russian naturalist Nikolai Severtzov in 1873, breeds in Turkestan and winters from Iran to India. It is slightly larger than the nominate subspecies.
The stork family contains six genera in three broad groups: the open-billed and wood storks (Mycteria and Anastomus), the giant storks (Ephippiorhynchus, Jabiru and Leptoptilos) and the "typical" storks (Ciconia). The typical storks include the white stork and six other extant species, which are characterised by straight pointed beaks and mainly black and white plumage. Its closest relatives are the larger, black-billed Oriental stork (Ciconia boyciana) of East Asia, which was formerly classified as a subspecies of the white stork, and the maguari stork (C. maguari) of South America. Close evolutionary relationships within Ciconia are suggested by behavioural similarities and, biochemically, through analysis of both mitochondrial cytochrome b gene sequences and DNA-DNA hybridization.
A Ciconia fossil representing the distal end of a right humerus has been recovered from Miocene beds of Rusinga Island, Lake Victoria, Kenya. The 24–6 million year old fossil could have originated from either a white stork or a black stork (C. nigra), which are species of about the same size with very similar bone structures. The Middle Miocene beds of Maboko Island have yielded further remains.
Description
The white stork is a large bird. It has a length of , and a standing height of . The wingspan is and its weight is . Like all storks, it has long legs, a long neck and a long straight pointed beak. The sexes are identical in appearance, except that males are larger than females on average. The plumage is mainly white with black flight feathers and wing coverts; the black is caused by the pigment melanin. The breast feathers are long and shaggy forming a ruff which is used in some courtship displays. The irises are dull brown or grey, and the peri-orbital skin is black. The adult has a bright red beak and red legs, the colouration of which is derived from carotenoids in the diet. In parts of Spain, studies have shown that the pigment is based on astaxanthin obtained from an introduced species of crayfish (Procambarus clarkii) and the bright red beak colours show up even in nestlings, in contrast to the duller beaks of young white storks elsewhere.
As with other storks, the wings are long and broad enabling the bird to soar. In flapping flight its wingbeats are slow and regular. It flies with its neck stretched forward and with its long legs extended well beyond the end of its short tail. It walks at a slow and steady pace with its neck upstretched. In contrast, it often hunches its head between its shoulders when resting. Moulting has not been extensively studied, but appears to take place throughout the year, with the primary flight feathers replaced over the breeding season.
Upon hatching, the young white stork is partly covered with short, sparse, whitish down feathers. This early down is replaced about a week later with a denser coat of woolly white down. By three weeks, the young bird acquires black scapulars and flight feathers. On hatching the chick has pinkish legs, which turn to greyish-black as it ages. Its beak is black with a brownish tip. By the time it fledges, the juvenile bird's plumage is similar to that of the adult, though its black feathers are often tinged with brown, and its beak and legs are a duller brownish-red or orange. The beak is typically orange or red with a darker tip. The bills gain the adults' red colour the following summer, although the black tips persist in some individuals. Young storks adopt adult plumage by their second summer.
Similar species
Within its range the white stork is distinctive when seen on the ground. The winter range of C. c. asiatica overlaps that of the Asian openbill, which has similar plumage but a different bill shape. When seen at a distance in flight, the white stork can be confused with several other species with similar underwing patterns, such as the yellow-billed stork, great white pelican and Egyptian vulture. The yellow-billed stork is identified by its black tail and a longer, slightly curved, yellow beak. The white stork also tends to be larger than the yellow-billed stork. The great white pelican has short legs which do not extend beyond its tail, and it flies with its neck retracted, keeping its head near to its stocky body, giving it a different flight profile. Pelicans also behave differently, soaring in orderly, synchronised flocks rather than in disorganised groups of individuals as the white stork does. The Egyptian vulture is much smaller, with a long wedge-shaped tail, shorter legs and a small yellow-tinged head on a short neck. The common crane, which can also look black and white in strong light, shows longer legs and a longer neck in flight.
Distribution and habitat
The nominate race of the white stork has a wide although disjunct summer range across Europe, clustered in the Iberian Peninsula and North Africa in the west, and much of eastern and central Europe, with 25% of the world's population concentrated in Poland, as well as parts of western Asia. The asiatica population of about 1450 birds is restricted to a region in central Asia between the Aral Sea and Xinjiang in western China. The Xinjiang population is believed to have become extinct around 1980. Migration routes extend the range of this species into many parts of Africa and India. Some populations adhere to the eastern migration route, which passes across Israel into eastern and central Africa.
In Africa the white stork may spend the winter in Tunisia, Morocco, Uganda, Angola, Zimbabwe, Djibouti, Botswana, Mozambique, Zambia, Swaziland, Gambia, Guinea, Algeria, and Ghana. A few records of breeding from South Africa have been known since 1933 at Calitzdorp, and about 10 birds have been known to breed since the 1990s around Bredasdorp. A small population of white storks winters in India and is thought to derive principally from the C. c. asiatica population as flocks of up to 200 birds have been observed on spring migration in the early 1900s through the Kurram Valley. However, birds ringed in Germany have been recovered in western (Bikaner) and southern (Tirunelveli) India. An atypical specimen with red orbital skin, a feature of the Oriental white stork, has been recorded and further study of the Indian population is required. North of the breeding range, it is a passage migrant or vagrant in Finland, Iceland, Ireland, Norway and Sweden, and west to the Azores and Madeira. Despite their geographical proximity, in Finland the species is rare, while in Estonia there are an estimated 5,000 breeding pairs. In recent years, the range has expanded into western Russia.
The white stork's preferred feeding grounds are grassy meadows, farmland and shallow wetlands. It avoids areas overgrown with tall grass and shrubs. In the Chernobyl area of northern Ukraine, white stork populations declined after the 1986 nuclear accident there as farmland was succeeded by tall grass and shrubs. In parts of Poland, poor natural foraging grounds have forced birds to seek food at rubbish dumps since 1999. White storks have also been reported foraging in rubbish dumps in the Middle East, North Africa and South Africa. Anthropogenic litter was found in the pellets of one third of breeding pairs in Poland, even though all pairs nested far from major dumps and landfills.
The white stork breeds in greater numbers in areas with open grasslands, particularly grassy areas which are wet or periodically flooded, and less in areas with taller vegetation cover such as forest and shrubland. They make use of grasslands, wetlands, and farmland on the wintering grounds in Africa. White storks were probably aided by human activities during the Middle Ages as woodland was cleared and new pastures and farmland were created, and they were found across much of Europe, breeding as far north as Sweden. The population in Sweden is thought to have established in the 16th century after forests were cut down for agriculture. About 5000 pairs were estimated to breed in the 18th century which declined subsequently. The first accurate census in 1917 found 25 pairs and the last pair failed to breed around 1955. A similar pattern was seen in Denmark where the white stork appears to have become established in the 15th century when forests were being replaced by farmland and meadows, followed by a rapid population increase in the next centuries and then a rapid decline due mainly to modern, high-intensity agriculture in the last 200 years. The white stork has been a rare visitor to the British Isles, with about 20 birds seen in Britain every year, and prior to 2020 there were no records of nesting since a pair nested atop St Giles High Kirk in Edinburgh, Scotland in 1416. In 2020, a pair bred in the United Kingdom for the first time in over 600 years, as part of a re-introduction initiative in West Sussex called the White Stork Project.
A decline in population began in the 19th century due to industrialisation and changes in agricultural methods. White storks no longer nest in many countries, and the current strongholds of the western population are in Portugal, Spain, Ukraine and Poland. In the Iberian Peninsula, populations are concentrated in the southwest, and have also declined due to agricultural practices. A study published in 2005 found that the Podhale region in the uplands of southern Poland had seen an influx of white storks, which first bred there in 1931 and have nested at progressively higher altitudes since, reaching 890 m (3000 ft) in 1999. The authors proposed that this was related to climate warming and the influx of other animals and plants to higher altitudes. White storks arriving in Poznań province (Greater Poland Voivodeship) in western Poland in spring to breed did so some 10 days earlier in the last twenty years of the 20th century than at the end of the 19th century.
Migration
Systematic research into migration of the white stork began with German ornithologist Johannes Thienemann who commenced bird ringing studies in 1906 at the Rossitten Bird Observatory, on the Curonian Spit in what was then East Prussia. Although not many storks passed through Rossitten itself, the observatory coordinated the large-scale ringing of the species throughout Germany and elsewhere in Europe. Between 1906 and the Second World War about 100,000, mainly juvenile, white storks were ringed, with over 2,000 long-distance recoveries of birds wearing Rossitten rings reported between 1908 and 1954.
Routes
White storks fly south from their summer breeding grounds in Europe in August and September, heading for Africa. There, they spend the winter in savannah from Kenya and Uganda south to the Cape Province of South Africa. In these areas, they congregate in large flocks which may exceed a thousand individuals. Some diverge westwards into western Sudan and Chad, and may reach Nigeria.
In spring, the birds return north; they are recorded from Sudan and Egypt from February to April. They arrive back in Europe around late March and April, after an average journey of 49 days. By comparison, the autumn journey is completed in about 26 days. Tailwinds and scarcity of food and water en route (birds fly faster over regions lacking resources) increase average speed.
To avoid a long sea crossing over the Mediterranean, birds from central Europe either follow an eastern migration route by crossing the Bosphorus in Turkey, traversing the Levant, then bypassing the Sahara Desert by following the Nile valley southwards, or follow a western route over the Strait of Gibraltar. These migration corridors maximise help from the thermals and thus save energy.
In winter 2013–2014, white storks were observed in southern India's Mudumalai National Park for the first time.
The eastern route is by far the more important with 530,000 white storks using it annually, making the species the second commonest migrant there (after the European honey buzzard). The flocks of migrating raptors, white storks and great white pelicans can stretch for 200 km (125 mi). The eastern route is twice as long as the western, but storks take the same time to reach the wintering grounds by either.
Juvenile white storks set off on their first southward migration in an inherited direction but, if displaced from that bearing by weather conditions, they are unable to compensate, and may end up in a new wintering location. Adults can compensate for strong winds and adjust their direction to finish at their normal winter sites, because they are familiar with the location. For the same reason, all spring migrants, even those from displaced wintering locations, can find their way back to the traditional breeding sites. An experiment with young birds raised in captivity in Kaliningrad and released in the absence of wild storks to show them the way revealed that they appeared to have an instinct to fly south, although the scatter in direction was large.
Energetics
White storks rely on the uplift of air thermals to soar and glide the long distances of their annual migrations between Europe and Sub-Saharan Africa. For many, the shortest route would take them over the Mediterranean Sea; however, since air thermals do not form over water, they generally detour over land to avoid the trans-Mediterranean flights that would require prolonged energetic wing flapping. It has been estimated that flapping flight metabolises 23 times more body fat than soaring flight per distance travelled. Thus, flocks spiral upwards on rising warm air until they emerge at the top, up to above the ground (though one record from Western Sudan observed an altitude of ).
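The trade-off described above can be sketched as a back-of-envelope calculation. This is a hypothetical illustration using only the relative figure quoted in the text (flapping flight metabolising roughly 23 times more body fat than soaring flight per distance travelled); the route distances are illustrative placeholders, not measurements from this article.

```python
# Relative fat cost per kilometre, per the ~23x estimate quoted above.
FLAP_COST_PER_KM = 23.0  # flapping flight (required over open water)
SOAR_COST_PER_KM = 1.0   # soaring/gliding flight on thermals (over land)

def route_cost(distance_km: float, cost_per_km: float) -> float:
    """Total relative fat cost of a route."""
    return distance_km * cost_per_km

# Hypothetical distances: a direct sea crossing versus an overland
# detour twice as long. Even doubling the distance, the soaring detour
# burns far less fat than flapping across the water.
direct_sea = route_cost(500, FLAP_COST_PER_KM)    # 500 km, flapping only
overland = route_cost(2 * 500, SOAR_COST_PER_KM)  # 1,000 km, soaring

print(f"sea crossing: {direct_sea:.0f}, overland detour: {overland:.0f}")
```

On these assumed numbers the detour costs less than a tenth of the fat of the direct crossing, which is consistent with storks detouring over land even when the overland route is much longer.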
Long flights over water may occasionally be undertaken. A young white stork ringed at the nest in Denmark subsequently appeared in England, where it spent some days before moving on. It was later seen flying over St Mary's, Isles of Scilly, and arrived in a poor condition in Madeira three days later. That island is 500 km (320 mi) from Africa, and twice as far from the European mainland. Migration through the Middle East may be hampered by the khamsin, winds bringing gusty overcast days unsuitable for flying. In these situations, flocks of white storks sit out the adverse weather on the ground, standing and facing into the wind.
Behaviour
The white stork is a gregarious bird; flocks of thousands of individuals have been recorded on migration routes and at wintering areas in Africa. Non-breeding birds gather in groups of 40 or 50 during the breeding season. The smaller dark-plumaged Abdim's stork is often encountered with white stork flocks in southern Africa. Breeding pairs of white stork may gather in small groups to hunt, and colony nesting has been recorded in some areas. However, groups among white stork colonies vary widely in size and the social structure is loosely defined; young breeding storks are often restricted to peripheral nests, while older storks attain higher breeding success by occupying the better quality nests toward the centres of breeding colonies. Social structure and group cohesion are maintained by altruistic behaviours such as allopreening. White storks exhibit this behaviour exclusively at the nest site. Standing birds preen the heads of sitting birds; sometimes these are parents grooming juveniles, and sometimes juveniles preen each other. Unlike most storks, it never adopts a spread-winged posture, though it is known to droop its wings (holding them away from its body with the primary feathers pointing downwards) when its plumage is wet.
A white stork's droppings, containing faeces and uric acid, are sometimes directed onto its own legs, making them appear white. The resulting evaporation provides cooling and is termed urohidrosis. Birds that have been ringed can sometimes be affected by the accumulation of droppings around the ring leading to constriction and leg trauma. The white stork has also been noted for tool use by squeezing moss in the beak to drip water into the mouths of its chicks.
Communication
The adult white stork's main sound is noisy bill-clattering, which has been likened to distant machine gun fire. The bird makes these sounds by rapidly opening and closing its beak so that a knocking sound is made each time its beak closes. The clattering is amplified by its throat pouch, which acts as a resonator. Used in a variety of social interactions, bill-clattering generally grows louder the longer it lasts, and takes on distinctive rhythms depending on the situation—for example, slower during copulation and briefer when given as an alarm call. The only vocal sound adult birds generate is a weak barely audible hiss; however, young birds can generate a harsh hiss, various cheeping sounds, and a cat-like mew they use to beg for food. Like the adults, young also clatter their beaks. The up-down display is used for a number of interactions with other members of the species. Here a stork quickly throws its head backwards so that its crown rests on its back before slowly bringing its head and neck forwards again, and this is repeated several times. The display is used as a greeting between birds, post coitus, and also as a threat display. Breeding pairs are territorial over the summer, and use this display, as well as crouching forward with the tails cocked and wings extended.
Breeding and lifespan
The white stork breeds in open farmland areas with access to marshy wetlands, building a large stick nest in trees, on buildings, or on purpose-built platforms. Each nest is in depth, in diameter, and in weight. Nests are built in loose colonies. Because it is viewed as a good omen and is not persecuted, the white stork often nests close to human habitation; in southern Europe, nests can be seen on churches and other buildings. The nest is typically used year after year, especially by older males. The males arrive earlier in the season and choose the nests. Larger nests are associated with greater numbers of young successfully fledged, and appear to be sought after. Nest change is often related to a change in the pairing and failure to raise young the previous year, and younger birds are more likely to change nesting sites. Although a pair may be found to occupy a nest, partners may change several times during the early stages and breeding activities begin only after a stable pairing is achieved.
Several bird species often nest within the large nests of the white stork. Regular occupants are house sparrows, tree sparrows, and common starlings; less common residents include Eurasian kestrels, little owls, European rollers, white wagtails, black redstarts, Eurasian jackdaws, and Spanish sparrows. Active nests may attract insectivorous birds such as swallows, martins, and swifts, which prey on the insects flying around them. Paired birds greet by engaging in up-down and head-shaking crouch displays, and clattering the beak while throwing back the head. Pairs copulate frequently throughout the month before eggs are laid. High-frequency pair copulation is usually associated with sperm competition and a high frequency of extra-pair copulation. Extra-pair copulation rates were long considered low, but a 2016 DNA sampling study suggests that extra-pair copulation does occasionally occur in white storks. Despite the relatively high occurrence of extra-pair paternity compared to other long-lived monogamous birds, white storks form strong pair bonds and maintain high nest fidelity across years.
A white stork pair raises a single brood a year. The female typically lays four eggs, though clutches of one to seven have been recorded. The eggs are white, but often look dirty or yellowish due to a glutinous covering. They typically measure , and weigh , of which about is shell. Incubation begins as soon as the first egg is laid, so the brood hatches asynchronously, beginning 33 to 34 days later. The first hatchling typically has a competitive edge over the others. While stronger chicks are not aggressive towards weaker siblings, as is the case in some species, weak or small chicks are sometimes killed by their parents. This behaviour occurs in times of food shortage to reduce brood size and hence increase the chance of survival of the remaining nestlings. White stork nestlings do not attack each other, and their parents' feeding method (disgorging large amounts of food at once) means that stronger siblings cannot outcompete weaker ones for food directly, hence parental infanticide is an efficient way of reducing brood size. Despite this, the behaviour has not commonly been observed.
The temperature and weather around the time of hatching in spring are important; cool temperatures and wet weather increase chick mortality and reduce breeding success rates. Somewhat unexpectedly, studies have found that later-hatching chicks which successfully reach adulthood produce more chicks than do their earlier-hatching nestmates. The body weight of the chicks increases rapidly in the first few weeks and reaches a plateau of about in 45 days. The length of the beak increases linearly for about 50 days. Young birds are fed with earthworms and insects, which are regurgitated by the parents onto the floor of the nest. Older chicks reach into the mouths of parents to obtain food. Chicks fledge 58 to 64 days after hatching.
White storks generally begin breeding when about four years old, although the age of first breeding has been recorded as early as two years and as late as seven years. The oldest known wild white stork lived for 39 years after being ringed in Switzerland, while captive birds have lived for more than 35 years.
Feeding
White storks consume a wide variety of animal prey. They prefer to forage in meadows that are within roughly 5 km (3 mi) of their nest, and at sites where the vegetation is shorter so that their prey is more accessible. Their diet varies according to season, locality and prey availability. Common food items include insects (primarily beetles, grasshoppers, locusts and crickets), earthworms, reptiles, amphibians, particularly frog species such as the edible frog (Pelophylax kl. esculentus) and common frog (Rana temporaria), and small mammals such as voles, moles and shrews. Less commonly, they also eat bird eggs and young birds, fish, molluscs, crustaceans and scorpions. They hunt mainly during the day, swallowing small prey whole, but killing and breaking apart larger prey before swallowing. Rubber bands are mistaken for earthworms and consumed, occasionally resulting in fatal blockage of the digestive tract.
Birds returning to Latvia during spring have been shown to locate their prey, moor frogs (Rana arvalis), by homing in on the mating calls produced by aggregations of male frogs.
The diet of non-breeding birds is similar to that of breeding birds, but food items are more often taken from dry areas. White storks wintering in western India have been observed to follow blackbuck to capture insects disturbed by them. Wintering white storks in India sometimes forage along with the woolly-necked stork (Ciconia episcopus). Food piracy has been recorded in India, with a white stork appropriating a rodent captured by a western marsh harrier; Montagu's harrier is known to harass white storks foraging for voles in some parts of Poland. White storks can exploit landfill sites for food during the breeding season, migration period and winter.
Parasites and diseases
White stork nests are habitats for an array of small arthropods, particularly over the warmer months after the birds arrive to breed. Nesting over successive years, the storks bring more material to line their nests and layers of organic material accumulate within them. Not only do their bodies tend to regulate temperatures within the nest, but excrement, food remains and feather and skin fragments provide nourishment for a large and diverse population of free-living mesostigmatic mites. A survey of twelve nests found 13,352 individuals of 34 species, the most common being Macrocheles merdarius, M. robustulus, Uroobovella pyriformis and Trichouropoda orbicularis, which together represented almost 85% of all the specimens collected. These feed on the eggs and larvae of insects and on nematodes, which are abundant in the nest litter. These mites are dispersed by coprophilous beetles, often of the family Scarabaeidae, or on dung brought by the storks during nest construction. Parasitic mites do not occur, perhaps being controlled by the predatory species. The overall effect of the mite population is unclear: the mites may have a role in suppressing harmful organisms (and hence be beneficial), or they may themselves have an adverse effect on nestlings.
The birds themselves host species belonging to more than four genera of feather mites. These mites, including Freyanopterolichus pelargicus, and Pelargolichus didactylus live on fungi growing on the feathers. The fungi found on the plumage may feed on the keratin of the outer feathers or on feather oil. Chewing lice such as Colpocephalum zebra tend to be found on the wings, and Neophilopterus incompletus elsewhere on the body.
The white stork also carries several types of internal parasites, including Toxoplasma gondii and intestinal parasites of the genus Giardia. A study of 120 white stork carcasses from Saxony-Anhalt and Brandenburg in Germany yielded eight species of trematode (fluke), four cestode (tapeworm) species, and at least three species of nematode. One species of fluke, Chaunocephalus ferox, caused lesions in the wall of the small intestine in a number of birds admitted to two rehabilitation centres in central Spain, and was associated with reduced weight. It is a recognised pathogen and cause of morbidity in the Asian openbill (Anastomus oscitans). More recently, a thorough study performed by J. Sitko and P. Heneberg in the Czech Republic, covering 1962–2013, suggested that central European white storks host 11 helminth species. Chaunocephalus ferox, Tylodelphys excavata and Dictymetra discoidea were reported to be the dominant ones. The other species found included Cathaemasia hians, Echinochasmus spinulosus, Echinostoma revolutum, Echinostoma sudanense, Duboisia syriaca, Apharyngostrigea cornu and Capillaria sp. Juvenile white storks were shown to host fewer species, but the intensity of infection was higher in the juveniles than in the adult storks.
West Nile virus (WNV) is mainly a bird infection that is transmitted between birds by mosquitoes. Migrating birds appear to be important in the spread of the virus, the ecology of which remains poorly known. On 26 August 1998, a flock of about 1,200 migrating white storks that had been blown off course on their southward journey landed in Eilat, in southern Israel. The flock was stressed as it had resorted to flapping flight to return to its migratory route, and a number of birds died. A virulent strain of West Nile virus was isolated from the brains of eleven dead juveniles. Other white storks subsequently tested in Israel have shown anti-WNV antibodies. In 2008 three juvenile white storks from a Polish wildlife refuge yielded seropositive results indicating exposure to the virus, but the context or existence of the virus in Poland is unclear.
Conservation
The white stork's decline due to industrialisation and agricultural changes (principally the draining of wetlands and conversion of meadows to crops such as maize) began in the 19th century: the last wild individual in Belgium was seen in 1895, in Sweden in 1955, in Switzerland in 1950 and in the Netherlands in 1991. However, the species has since been reintroduced to many regions. It has been rated as least concern by the IUCN since 1994, after being evaluated as near threatened in 1988. The white stork is one of the species to which the Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA) applies. Parties to the agreement are required to engage in a wide range of conservation strategies described in a detailed action plan. The plan is intended to address key issues such as species and habitat conservation, management of human activities, research, education, and implementation. Threats include the continued loss of wetlands, collisions with overhead power lines, use of persistent pesticides (such as DDT) to combat locusts in Africa, and largely illegal hunting on migration routes and wintering grounds.
A large population of white storks breeds in central (Poland, Ukraine and Germany) and southern Europe (Spain and Turkey). In a 2004/05 census, there were 52,500 pairs in Poland, 30,000 pairs in Ukraine, 20,000 pairs in Belarus, 13,000 pairs in Lithuania (the highest known density of this species in the world), 10,700 pairs in Latvia, and 10,200 in Russia. There were around 5,500 pairs in Romania, 5,300 in Hungary, and an estimated 4,956 breeding pairs in Bulgaria. In the former Yugoslavia there are 1,700 in Croatia, 1,400 in Serbia, 236 in Slovenia and an estimated 40 breeding pairs in Bosnia and Herzegovina. In Germany, the majority of the total 4,482 pairs were in the eastern region, especially in the states of Brandenburg and Mecklenburg-Vorpommern (1,296 and 863 pairs in 2008 respectively). Apart from Spain and Portugal (33,217 and 7,684 pairs in 2004/05 respectively), populations are generally much less stable. In the eastern Mediterranean region Turkey has a sizeable population of 6,195 pairs, and Greece 2,139 pairs. In Western Europe the white stork remains a rare bird despite conservation efforts. In 2004 France had only 973 pairs, and the Netherlands 528 pairs. In Denmark, the species had consistently bred since the 15th century, peaking at several thousand pairs around 1800. Afterwards it began declining mainly due to habitat loss (especially conversion of wetlands and meadows to modern farming), with only a few tens of breeding pairs in 1974 and none in 2008. Since then, it has reestablished itself and the population has slowly started to increase, reaching ten pairs in 2023. In Armenia the population of the white stork slightly increased in the period between 2005 and 2015, and according to the latest data had reached 652 pairs.
In the early 1980s, the population had fallen to fewer than nine pairs in the entire upper Rhine River valley, an area closely identified with the white stork for centuries. Conservation efforts successfully increased the population of birds there to 270 pairs (in 2008), largely due to the actions of the Association for the Protection and Reintroduction of Storks in Alsace and Lorraine. The reintroduction of zoo-reared birds has halted further declines in Italy, the Netherlands, and Switzerland. There were 601 pairs breeding in Armenia and around 700 pairs in the Netherlands in 2008, and a few pairs also breed in South Africa, typically recent colonists from within the normal wintering population. In Poland, electric poles have been modified with a platform at the top to prevent the white stork's large nest from disrupting the electricity supply, and sometimes nests are moved from an electric pole to a man-made platform. Introductions of zoo-reared birds in the Netherlands have been followed up by volunteer feeding and nest-building programmes. Similar reintroduction programmes are taking place in Sweden and Switzerland, where 175 pairs were recorded breeding in 2000. Long-term viability of the population in Switzerland is unclear as breeding success rates are low, and supplementary feeding does not appear to be of benefit. However, as of 2017, 470 adults and 757 young were recorded in Switzerland. Historically, the species' northern breeding limit was at Estonia, but it has moved slowly northwards (possibly due to warmer temperatures) into Karelia, and in 2015 the species bred in Finland for the first time on record.
In August 2019, 24 juveniles were released at the Knepp Estate in West Sussex, and others at a site near Tunbridge Wells and at the Wintershall Estate, near Godalming, as part of a project to reintroduce the white stork as a breeding species in South East England, for the first time since 1416. In 2020, the project recorded its first success when five chicks hatched.
Cultural associations
Due to its large size, predation on vermin, and nesting behaviour close to human settlements and on rooftops, the white stork has an imposing presence that has influenced human culture and folklore. The Hebrew word for the white stork is chasidah (חסידה), meaning "merciful" or "kind". Greek and Roman mythology portray storks as models of parental devotion. The 3rd-century Roman writer Aelian, citing the authority of Alexander of Myndus, noted in his De natura animalium (book 3, chapter 23) that aged storks flew away to oceanic islands where they were transformed into humans as a reward for their piety towards their parents. The bird is featured in at least three of Aesop's Fables: The Fox and the Stork, The Farmer and the Stork, and The Frogs Who Desired a King. Storks were also thought to care for their aged parents, feeding them and even transporting them, and children's books depicted them as a model of filial values. A Greek law called Pelargonia, from the Ancient Greek word pelargos for stork, required citizens to take care of their aged parents. The Greeks also held that killing a stork could be punished with death. Storks were allegedly protected in Ancient Thessaly as they hunted snakes, and widely held to be Virgil's "white bird". Roman writers noted the white stork's arrival in spring, which alerted farmers to plant their vines. On occasion ancient Egyptians mummified white storks.
Followers of Islam revered storks because they made an annual pilgrimage to Mecca on their migration. Some of the earliest understanding of bird migration came from an interest in white storks; Pfeilstörche ("arrow storks") were found in Europe with African arrows embedded in their bodies. A well-known example of such a stork, found in the summer of 1822 in the German town of Klütz in Mecklenburg, was made into a mounted taxidermy specimen, complete with the ornate African arrow, which is now held at the University of Rostock.
Storks have little fear of humans if not disturbed, and often nest on buildings in Europe. In Germany, the presence of a nest on a house was believed to protect against fires. They were also protected because of the belief that their souls were human. German, Dutch and Polish households would encourage storks to nest on houses, sometimes by constructing purpose-built high platforms, to bring good luck. Across much of Central and Eastern Europe it is believed that storks bring harmony to a family on whose property they nest.
The white stork is a popular motif on postage stamps, and it is featured on more than 120 stamps issued by more than 60 stamp-issuing entities. It is the national bird of Lithuania, Belarus and Poland, and it was a Polish mascot at the Expo 2000 Fair in Hanover. Storks nesting in Polish villages such as Żywkowo have made them tourist attractions, drawing 2000–5000 visitors a year in 2014. In the 19th century, storks were also thought to only live in countries having a republican form of government. Polish poet Cyprian Kamil Norwid mentioned storks in his poem Moja piosnka (II) ("My Song (II)"):
In 1942 Heinrich Himmler sought to use storks to carry Nazi propaganda leaflets so as to win support from the Boers in South Africa. The secret idea for this "Storchbein-Propaganda" plan was transmitted by Walter Schellenberg for examination by the German ornithologist Ernst Schüz at the Rossitten bird observatory, who pointed out that the probability of recovering marked storks in Africa was less than one percent, so that 1,000 birds would be needed to deliver 10 leaflets successfully. The plan was then dropped.
Storks and delivery of babies
According to European folklore, the stork is responsible for bringing babies to new parents. The legend is very ancient, but was popularised by a 19th-century Hans Christian Andersen story called "The Storks". German folklore held that storks found babies in caves or marshes and brought them to households in a basket on their backs or held in their beaks. These caves contained adebarsteine or "stork stones". The babies would then be given to the mother or dropped down the chimney. Households would signal that they wanted children by placing sweets for the stork on the window sill. From there the folklore has spread around the world to the Philippines and countries in South America. Birthmarks on the back of the head of a newborn baby, nevus flammeus nuchae, are sometimes referred to as stork bites.
In Slavic mythology and pagan religion, storks were thought to carry unborn souls from Vyraj to Earth in spring and summer. This belief still persists in the modern folk culture of many Slavic countries, in the simplified child story that "storks bring children into the world". Storks were seen by Early Slavs as bringing luck, and killing one would bring misfortune.
Likewise, in Norse mythology, the god Hœnir, responsible for giving reason to the first humans, Ask and Embla, has been connected with the stork through his epithets long-legs and mud-king, along with Indo-European cognates such as Greek κύκνος 'swan' and Sanskrit शकुन.
A long-term study that showed a spurious correlation between the numbers of stork nests and human births is widely used in the teaching of basic statistics as an example to highlight that correlation does not necessarily indicate causation.
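The classroom point can be sketched with synthetic data. This is a hypothetical illustration, not the published study's figures: a lurking variable (here, a district's area) drives both the number of stork nests and the number of human births, so the two counts correlate strongly even though neither causes the other.

```python
import random

random.seed(42)

# Synthetic districts: larger districts have both more stork nests
# (more habitat) and more human births (more people).
areas = [random.uniform(10, 100) for _ in range(50)]
nests = [a * 0.5 + random.gauss(0, 3) for a in areas]
births = [a * 20 + random.gauss(0, 100) for a in areas]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(nests, births)
print(f"correlation between nests and births: r = {r:.2f}")
```

The computed correlation is high, yet removing the storks would not change the number of births; conditioning on district area would make the association largely disappear, which is the point the teaching example makes.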
Psychoanalyst Marvin Margolis suggested that the enduring nature of the stork fable of the newborn is linked to its addressing a psychological need, in that it allays the discomfort of discussing sex and procreation with children. Birds have long been associated with the maternal symbols from pagan goddesses, such as Juno, to the Holy Spirit, and the stork may have been chosen for its white plumage depicting purity, size, and flight at high altitude likened to flying between Earth and Heaven. The fable and its relation to the internal world of the child have been discussed by Sigmund Freud and Carl Jung. In fact, Jung recalled being told the story himself upon the birth of his own sister. The traditional link with the newborn continues with their use in advertising for such products as nappies and baby announcements.
There were negative aspects to stork folklore as well; a Polish folk tale relates how God made the stork's plumage white, while the Devil gave it black wings, imbuing it with both good and evil impulses. They were also associated with handicapped or stillborn babies in Germany, explained as the stork having dropped the baby en route to the household, or as revenge or punishment for past wrongdoing. A mother who was confined to bed around the time of childbirth was said to have been "bitten" by the stork. In Denmark, storks were said to toss a nestling off the nest and then an egg in successive years. In medieval England, storks were also associated with adultery, possibly inspired by their courtship rituals. Their preening and posture saw them linked with the attribute of self-conceit. Children of African American slaves were sometimes told that white babies were brought by storks, while black babies were born from buzzard eggs.
Black stork
The black stork (Ciconia nigra) is a large bird in the stork family Ciconiidae. It was first described by Carl Linnaeus in the 10th edition of his Systema Naturae. Measuring on average from beak tip to end of tail with a wingspan, the adult black stork has mainly black plumage, with white underparts, long red legs and a long pointed red beak. A widespread but uncommon species, it breeds in scattered locations across Europe (predominantly in Portugal and Spain, and central and eastern parts), and east across the Palearctic to the Pacific Ocean. It is a long-distance migrant, with European populations wintering in tropical Sub-Saharan Africa, and Asian populations in the Indian subcontinent. When migrating between Europe and Africa, it avoids crossing broad expanses of the Mediterranean Sea and detours via the Levant in the east, the Strait of Sicily in the center, or the Strait of Gibraltar in the west. An isolated non-migratory population lives in Southern Africa.
Unlike the closely related white stork, the black stork is a shy and wary species. It is seen singly or in pairs, usually in marshy areas, rivers or inland waters. It feeds on amphibians, small fish and insects, generally wading slowly in shallow water stalking its prey. Breeding pairs usually build nests in large forest trees—most commonly deciduous but also coniferous—which can be seen from long distances, as well as on large boulders, or under overhanging ledges in mountainous areas. The female lays two to five greyish-white eggs, which become soiled over time in the nest. Incubation takes 32 to 38 days, with both sexes sharing duties, and fledging takes 60 to 71 days.
The black stork is considered to be a species of least concern by the International Union for Conservation of Nature, but its actual status is uncertain. Despite its large range, it is nowhere abundant, and it appears to be declining in parts of its range, such as in India, China and parts of Western Europe, though increasing in others such as the Iberian Peninsula. Various conservation measures have been taken for the black stork, such as the Conservation Action Plan for African black storks by Wetlands International. It is also protected under the African-Eurasian Waterbird Agreement and the Convention on International Trade in Endangered Species of Wild Fauna and Flora. On 31 May 1968, South Korea designated the species as natural monument 200.
Taxonomy and etymology
English naturalist Francis Willughby wrote about the black stork in the 17th century, having seen one in Frankfurt. He named it Ciconia nigra, from the Latin words for "stork" and "black" respectively. It was one of the many species originally described by Swedish zoologist Carl Linnaeus in the landmark 1758 10th edition of his Systema Naturae, where it was given the binomial name of Ardea nigra. It was moved to the new genus Ciconia by French zoologist Mathurin Jacques Brisson two years later. The word stork is derived from the Old English word storc, thought to be related to the Old High German storah, meaning "stork", and the Old English stearc, meaning "stiff".
The black stork is a member of the genus Ciconia, or typical storks, a group of seven extant species, characterised by straight bills and mainly black and white plumage. The black stork was long thought to be most closely related to the white stork (C. ciconia). However, genetic analysis via DNA–DNA hybridization and mitochondrial cytochrome b DNA by Beth Slikas in 1997 found that it was basal (an early offshoot) in the genus Ciconia. Fossil remains have been recovered from Miocene beds on Rusinga and Maboko Islands in Kenya, which are indistinguishable from the white and black storks.
Description
The black stork is a large bird, measuring between in length with a wingspan, and weighing around . Standing as tall as , it has long red legs, a long neck and a long, straight, pointed red beak. It bears some resemblance to Abdim's stork (C. abdimii), which can be distinguished by its much smaller build, predominantly green bill, legs and feet, and white rump and lower back. The plumage is black with a purplish green sheen, except for the white lower breast, belly, armpits, axillaries and undertail coverts. The breast feathers are long and shaggy, forming a ruff which is used in some courtship displays. The black stork has brown irises, and bare red skin around its eyes. The sexes are alike in appearance, though males are larger than females on average. Moulting takes place in spring, with the iridescent sheen brighter in new plumage. It walks slowly and steadily on the ground and, like all storks, flies with its neck outstretched.
The juvenile resembles the adult in plumage, but the areas corresponding to the adult black feathers are browner and less glossy. The scapulars, wing and upper tail coverts have pale tips. The legs, bill and bare skin around the eyes are greyish green. It could possibly be confused with the juvenile yellow-billed stork, but the latter has paler wings and mantle, a longer bill and white under the wings.
Distribution and habitat
During the summer, the black stork is found from Eastern Asia (Siberia and northern China) west to Central Europe, reaching Estonia in the north, Poland, Lower Saxony and Bavaria in Germany, the Czech Republic, Hungary, Italy and Greece in the south, with an outlying population in the central-southwest region of the Iberian Peninsula (Extremadura and surrounding provinces of Spain, plus Portugal). It is migratory, wintering in tropical Africa and Asia, although certain populations of black storks are sedentary or dispersive. An isolated population exists in Southern Africa, where the species is more numerous in the east, in eastern South Africa and Mozambique, and is also found in Zimbabwe, Eswatini, Botswana and less commonly Namibia.
Most of the black storks that summer in Europe migrate to Africa, with those from western Germany and points west heading south via the Iberian Peninsula and the rest via Turkey and the Levant. Those flying via Spain spend winter in the Falémé River basin of eastern Senegal, Guinea, southern Mauritania, Ivory Coast, Sierra Leone and western and central Mali, while those flying via the Sinai end up in northern Ethiopia, the Kotto River basin in the Central African Republic, the Mbokou river basin in Chad and northeastern Nigeria. Black storks summering in West Asia migrate to northern and northeastern India, ranging mainly from Punjab south to Karnataka, and Africa. They are occasional visitors to Sri Lanka. Those summering further east in eastern Russia and China winter mainly in southern China, and occasionally in Hong Kong, Myanmar, northern Thailand, and Laos. They were first recorded in western Myanmar in 1998.
The black stork prefers more wooded areas than the better-known white stork, and breeds in large marshy wetlands with interspersed coniferous or broadleaved woodlands, but also inhabits hills and mountains with sufficient networks of creeks. It usually inhabits ponds, rivers, edges of lakes, estuaries and other freshwater wetlands. The black stork does inhabit more agricultural areas in the Caspian lowlands, but even here it avoids close contact with people. Its wintering habitat in India comprises reservoirs or rivers with nearby scrub or forest, which provide trees that black storks can roost in at night. In southern Africa it is found in shallow water in rivers or lakes, or swamps, but is occasionally encountered on dry land.
After disappearing from Belgium before the onset of the 20th century, it has returned to breed in the Belgian Ardennes, Luxembourg and Burgundy, France, by 2000. It appears to be increasing in numbers in Spain and Portugal, where the population was estimated at 405 to 483 pairs in 2006. The black stork is a rare vagrant to the British Isles, turning up in the warmer months—particularly in spring—generally in the south and east. Sightings have become more common since the 1970s as its breeding range moves northwards. It has been recorded in Scotland six times between 1946 and 1983, including from Shetland, Orkney and the Highlands, as well as the Scottish Borders (Peebles). It is not abundant in the western parts of its distribution, but more densely inhabits eastern Transcaucasia. Further east, it has been recorded from locations across Iran, though little is known about its habits there; breeding has been recorded from near Aliabad in Fars province, Khabr National Park in Kerman province, Karun river in Khuzestan province, Qaranqu River in East Azarbaijan province, and Aliabad river in Razavi Khorasan province. The population has declined in Iran due to draining of wetlands. East of the Ural Mountains, the black stork is patchily found in forested and mountainous areas up to 60°‒63° N across Siberia to the Pacific Ocean. South of Siberia, it breeds in Xinjiang, northwestern China, northern Mongolia south to the Altai Mountains, and northeastern China south to the vicinity of Beijing. In the Korean peninsula, the black stork is an uncommon summer visitor, no longer breeding in the south since 1966. Birds have been seen in the northeast but it is not known whether they breed there. Similarly it has been seen in the summer in Afghanistan, but its breeding status is uncertain.
Migration
Migration takes place from early August to October, with a major exodus in September. Some of the Iberian populations, and also those in southern Africa, are essentially non-migratory, though they may wander freely in the non-breeding areas. A broad-winged soaring bird, the black stork is assisted by thermals of hot air for long-distance flight, although it is less dependent on them than is the white stork. Since thermals only form over land, the black stork, together with large raptors, must cross the Mediterranean at the narrowest points, and many black storks travel south through the Bosphorus and on through the Sinai, as well as through Gibraltar. The trip is around via the western route and via the eastern route, with satellite tracking yielding an average travel time of 37 and 80 days respectively. The western route goes over the Rock of Gibraltar or over the Bay of Gibraltar, generally on a southwesterly track that takes them to the central part of the strait, from where they reach Morocco. Many birds then fly around the Sahara next to the coast. About 10% of the western storks choose the passage between Sicily (Italy) and Cap Bon (Tunisia), crossing the 145 km wide Strait of Sicily.
Spain contains several important areas—Monfragüe National Park, Sierra de Gredos Regional Park, National Hunting Reserve in Cíjara, Natural Park of the Sierra Hornachuelos and Doñana National Park—where black storks stop over on the western migration route. Pesticide use has threatened birdlife in nearby Doñana. Further south, Lake Faguibine in Mali is another stopover point but it has been affected by drought in recent years.
Behaviour
A wary species, the black stork avoids contact with people. It is generally found alone or in pairs, or in flocks of up to 100 birds when migrating or during winter.
The black stork has a wider range of calls than the white stork, its main call being a chee leee, which sounds like a loud inhalation. It makes a hissing call as a warning or threat. Displaying males produce a long series of wheezy raptor-like squealing calls rising in volume and then falling. It rarely indulges in mutual bill-clattering when adults meet at the nest. Adults will do so as part of their mating ritual or when angered. The young clatter their bills when aroused.
The up-down display is used for a number of interactions with other members of the species. Here a stork positions its body horizontally and quickly bobs its head up from down-facing to around 30 degrees above horizontal and back again, while displaying the white segments of its plumage prominently, and this is repeated several times. The display is used as a greeting between birds, and—more vigorously—as a threat display. The species' solitary nature means that this threat display is rarely witnessed.
Breeding
The black stork breeds between April and May in the Northern Hemisphere, with eggs usually laid in late April. In southern Africa, breeding takes place in the months between September and March, possibly to take advantage of abundant water prey rendered easier to catch as the rivers dry up and recede—from April and May in Zimbabwe, Botswana and northern South Africa, and as late as July further south.
Pairs in courtship have aerial displays that appear to be unique among the storks. Paired birds soar in parallel, usually over the nest territory early in the morning or late in the afternoon, with one bird splaying the white undertail coverts to the sides of the narrowed black tail while the pair call to each other. These courtship flights are difficult to see due to the densely forested habitat in which they breed. The nest is large, constructed from sticks and twigs, and sometimes also large branches, at an elevation of . The black stork prefers to construct its nest in forest trees with large canopies where the nest can be built far from the main trunk—generally in places far from human disturbance. For the most part, deciduous trees are chosen for nesting sites, though conifers are used as well. A 2003 field study in Estonia found that the black stork preferred oak (Quercus robur), European aspen (Populus tremula), and to a lesser extent Scots pine (Pinus sylvestris), and ignored Norway spruce (Picea abies), in part due to the canopy structure of the trees. Trees with nests averaged around high and had a diameter at breast height of . Furthermore, 90% of the trees chosen were at least 80 years old, highlighting the importance of conserving old-growth forests. A 2004 field study of nesting sites in Dadia-Lefkimi-Soufli National Park in north-eastern Greece found that it preferred the Calabrian pine (Pinus brutia), which had large side branches that allowed it to build the nest away from the trunk, as well as black pine (Pinus nigra) and to a lesser extent Turkey oak (Quercus cerris). It chose the largest trees in an area, generally on steeper ground and near streams. Trees chosen were on average over 90 years old. In the Iberian peninsula it nests in pine and cork oak (Quercus suber).
In steeply mountainous areas such as parts of Spain, South Africa and the Carpathian Mountains it nests on cliffs, on large boulders, in caves and under overhanging ledges. The black stork's solitary nests are usually at least 1 km (0.6 mi) apart, even where the species is numerous. Although newly constructed nests may be significantly smaller, older nests can be in diameter. In southern Africa, the black stork may occupy the nests of other bird species such as hamerkop (Scopus umbretta) or Verreaux's eagle (Aquila verreauxi) and commonly reuses them in successive years. They are repaired with earth and grass, and lined with leaves, moss, grass, animal fur, paper, clay and rags. In a clutch, there are two to five, or rarely even six large oval grey-white eggs, which become soiled during incubation. They can be long and wide, averaging about in length and in width. The eggs are laid with an interval of two days. Hatching is asynchronous, and takes place at the end of May. Incubation takes 32 to 38 days, with both sexes sharing duties, which commence after the first or second egg is laid. The young start flying by the end of July. Fledging takes 60 to 71 days, after which the young joins the adults at their feeding grounds. However, for another two weeks, the young continue to return to the nest, to be fed and to roost at night.
At least one adult remains in the nest for two to three weeks after hatching to protect the young. Both parents feed the young by regurgitating onto the floor of the nest. Black stork parents have been known to kill one of their fledglings, generally the weakest, in times of food shortage to reduce brood size and hence increase the chance of survival of the remaining nestlings. Stork nestlings do not attack each other, and their parents' method of feeding them (disgorging large amounts of food at once) means that stronger siblings cannot outcompete weaker ones for food directly, hence parental infanticide is an efficient way of reducing brood size. This behaviour has only rarely been observed in the species, although the shyness of the species and difficulties in studying its nesting habits mean that it might not be an uncommon phenomenon.
Ringing recovery studies in Europe suggest that nearly 20% of chicks reach the breeding stage, at around three years of age; about 10% live beyond 10 years, and about 5% beyond 20 years. Captive individuals have lived for as long as 36 years.
Feeding
The black stork mainly eats fish, including small cyprinids, pike, roach, eels, rudd, perch, burbot, sticklebacks and loaches (Misgurnus and Cobitis). It may also feed on amphibians, small reptiles, crabs, mammals and birds, and invertebrates such as snails and other molluscs, earthworms, and insects like water beetles and their larvae.
Foraging for food takes place mostly in fresh water, though the black stork may look for food on dry land at times. The black stork wades patiently and slowly in shallow water, often alone or in a small group if food is plentiful. It has been observed shading the water with its wings while hunting. In India, it often forages in mixed species flocks with the white stork, woolly-necked stork (Ciconia episcopus), demoiselle crane (Grus virgo) and bar-headed goose (Anser indicus). The black stork also follows large mammals such as deer and livestock, presumably to eat the invertebrates and small animals flushed by their presence.
Parasites and symbionts
More than 12 species of parasitic helminth have been recorded from black storks with Cathaemasia hians and Dicheilonema ciconiae reported to be the most dominant. The juvenile black stork, although having a less diverse helminth population, is parasitized more frequently than the adult. A species of Corynebacterium—C. ciconiae—was isolated and described from the trachea of healthy black storks, and is thought to be part of the natural flora of the species. A herpes virus is known from black storks. Birdlice that have been recorded on the species include Neophilopterus tricolor, Colpocephalum nigrae, and Ardeicola maculatus. A diverse array of predatory mesostigmatid mites—particularly the genera Dendrolaelaps and Macrocheles—have been recovered from black stork nests. Their role is unknown, though they could prey on parasitic arthropods.
Status and conservation
Since 1998, the black stork has been rated as a species of least concern on the IUCN Red List of Threatened Species. This is because it has a large range—more than 20,000 km2 (7,700 mi2)—and because its population is not thought to have declined by 30% over ten years or three generations, the rate of decline required for a vulnerable rating. Even so, the state of the population overall is unclear, and although it is widespread, it is not abundant anywhere. Black stork numbers have declined for many years in western Europe, and the species has been extirpated as a breeding bird from the northwestern edge of its range, including the Netherlands and Scandinavia (for example, small numbers used to breed in Denmark and Sweden, but none verified after the 1950s). The population in India—a major wintering ground—is declining. Previously a regular winter visitor to the Mai Po Marshes, it is now seldom seen there, and appears to be in decline in China overall. Its habitat is changing rapidly in much of eastern Europe and Asia. Various conservation measures have been taken, including Wetlands International's Conservation Action Plan for African black storks, which focuses on improving the wintering conditions of the birds which breed in Europe. It is protected by the Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA) and the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES).
Hunters threaten the black stork in some countries of southern Europe and Asia, such as Pakistan, and breeding populations may have been eliminated there. The black stork vanished from the Ticino River valley in northern Italy, with hunting a likely contributor. In 2005, black storks were released into the Parco Lombardo del Ticino in an attempt to re-establish the species there.
Since October 2021, the black stork has been classified as Moderately Depleted by the IUCN Green Status of Species.
Product (category theory)

In category theory, the product of two (or more) objects in a category is a notion designed to capture the essence behind constructions in other areas of mathematics such as the Cartesian product of sets, the direct product of groups or rings, and the product of topological spaces. Essentially, the product of a family of objects is the "most general" object which admits a morphism to each of the given objects.
Definition
Product of two objects
Fix a category $\mathbf{C}$. Let $X_1$ and $X_2$ be objects of $\mathbf{C}$. A product of $X_1$ and $X_2$ is an object $X$, typically denoted $X_1 \times X_2$, equipped with a pair of morphisms $\pi_1 : X \to X_1$ and $\pi_2 : X \to X_2$ satisfying the following universal property:
For every object $Y$ and every pair of morphisms $f_1 : Y \to X_1$, $f_2 : Y \to X_2$, there exists a unique morphism $f : Y \to X_1 \times X_2$ such that $\pi_1 \circ f = f_1$ and $\pi_2 \circ f = f_2$ (that is, the evident diagram commutes).
Whether a product exists may depend on $\mathbf{C}$ or on $X_1$ and $X_2$. If it does exist, it is unique up to canonical isomorphism, because of the universal property, so one may speak of the product. This has the following meaning: if $X', \pi_1', \pi_2'$ is another product, there exists a unique isomorphism $h : X' \to X_1 \times X_2$ such that $\pi_1' = \pi_1 \circ h$ and $\pi_2' = \pi_2 \circ h$.
The morphisms $\pi_1$ and $\pi_2$ are called the canonical projections or projection morphisms; the letter $\pi$ alliterates with projection. Given $Y$ and $f_1 : Y \to X_1$, $f_2 : Y \to X_2$, the unique morphism $f$ is called the product of the morphisms $f_1$ and $f_2$ and is denoted $\langle f_1, f_2 \rangle$.
Product of an arbitrary family
Instead of two objects, we can start with an arbitrary family of objects indexed by a set $I$.
Given a family $(X_i)_{i \in I}$ of objects, a product of the family is an object $X$ equipped with morphisms $\pi_i : X \to X_i$ satisfying the following universal property:
For every object $Y$ and every $I$-indexed family of morphisms $f_i : Y \to X_i$, there exists a unique morphism $f : Y \to X$ such that $\pi_i \circ f = f_i$ for all $i \in I$ (that is, each of the corresponding diagrams commutes).
The product is denoted $\prod_{i \in I} X_i$. If $I = \{1, \ldots, n\}$, then it is denoted $X_1 \times \cdots \times X_n$ and the product of morphisms is denoted $\langle f_1, \ldots, f_n \rangle$.
Equational definition
Alternatively, the product may be defined through equations. So, for example, for the binary product:
Existence of $\langle f_1, f_2 \rangle$ is guaranteed by the existence of the operation $\langle \cdot, \cdot \rangle$.
Commutativity of the diagrams above is guaranteed by the equality: for all $f_1, f_2$ and all $i \in \{1, 2\}$, $\pi_i \circ \langle f_1, f_2 \rangle = f_i$.
Uniqueness of $\langle f_1, f_2 \rangle$ is guaranteed by the equality: for all $g : Y \to X_1 \times X_2$, $\langle \pi_1 \circ g, \pi_2 \circ g \rangle = g$.
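In the category of sets these three equations can be checked directly; a minimal Python sketch (the names `pairing`, `proj1`, `proj2` are ours):

```python
# The binary product in the category of sets, with the three equations
# above checked pointwise. Function names (pairing, proj1, proj2) are ours.

def proj1(p):
    """Canonical projection pi_1."""
    return p[0]

def proj2(p):
    """Canonical projection pi_2."""
    return p[1]

def pairing(f1, f2):
    """The unique morphism <f1, f2> : Y -> X1 x X2."""
    return lambda y: (f1(y), f2(y))

f1 = lambda y: str(y)          # f1 : Y -> X1
f2 = lambda y: y % 2 == 0      # f2 : Y -> X2
f = pairing(f1, f2)

for y in range(5):
    # commutativity: pi_i . <f1, f2> = f_i
    assert proj1(f(y)) == f1(y)
    assert proj2(f(y)) == f2(y)
    # uniqueness: <pi_1 . g, pi_2 . g> = g, checked here with g = f
    assert pairing(lambda z: proj1(f(z)), lambda z: proj2(f(z)))(y) == f(y)
```

In Set the operation $\langle \cdot, \cdot \rangle$ is just tupling, so both equations hold by construction.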
As a limit
The product is a special case of a limit. This may be seen by using a discrete category (a family of objects without any morphisms, other than their identity morphisms) as the diagram required for the definition of the limit. The discrete objects will serve as the index of the components and projections. If we regard this diagram as a functor, it is a functor from the index set $I$ considered as a discrete category. The definition of the product then coincides with the definition of the limit, with a family of morphisms $f_i : Y \to X_i$ forming a cone, and the product with its projections being the limit (limiting cone).
Universal property
Just as the limit is a special case of the universal construction, so is the product. Starting with the definition given for the universal property of limits, take $\mathbf{J}$ as the discrete category with two objects, so that $\mathbf{C}^{\mathbf{J}}$ is simply the product category $\mathbf{C} \times \mathbf{C}$. The diagonal functor $\Delta : \mathbf{C} \to \mathbf{C} \times \mathbf{C}$ assigns to each object $X$ the ordered pair $(X, X)$ and to each morphism $f$ the pair $(f, f)$. The product $X_1 \times X_2$ in $\mathbf{C}$ is given by a universal morphism from the functor $\Delta$ to the object $(X_1, X_2)$ in $\mathbf{C} \times \mathbf{C}$. This universal morphism consists of an object $X$ of $\mathbf{C}$ and a morphism $(X, X) \to (X_1, X_2)$, which contains the projections.
Examples
In the category of sets, the product (in the category theoretic sense) is the Cartesian product. Given a family of sets $X_i$, the product is defined as
$\prod_{i \in I} X_i := \{ (x_i)_{i \in I} \mid x_i \in X_i \text{ for all } i \in I \}$
with the canonical projections
$\pi_j : \prod_{i \in I} X_i \to X_j, \qquad \pi_j\left((x_i)_{i \in I}\right) := x_j.$
Given any set $Y$ with a family of functions
$f_i : Y \to X_i,$
the universal arrow $f : Y \to \prod_{i \in I} X_i$ is defined by
$f(y) := (f_i(y))_{i \in I}.$
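A small Python sketch of this construction for finite sets, with the product represented as the set of choice functions (all names below are ours):

```python
import itertools

def product_of_family(family):
    """family: dict mapping index i -> finite iterable X_i. Returns the
    product as a list of choice functions, represented as dicts i -> x_i."""
    keys = sorted(family)
    return [dict(zip(keys, combo))
            for combo in itertools.product(*(list(family[k]) for k in keys))]

def proj(i):
    """Canonical projection pi_i."""
    return lambda elem: elem[i]

def universal_arrow(fs):
    """fs: dict i -> (f_i : Y -> X_i). The unique f with pi_i . f = f_i."""
    return lambda y: {i: f(y) for i, f in fs.items()}

P = product_of_family({1: [0, 1], 2: ['a', 'b']})
assert len(P) == 4 and {1: 0, 2: 'a'} in P

u = universal_arrow({1: lambda y: y % 2, 2: lambda y: 'a'})
for y in range(4):
    assert proj(1)(u(y)) == y % 2   # pi_1 . u = f_1
    assert proj(2)(u(y)) == 'a'     # pi_2 . u = f_2
```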
Other examples:
In the category of topological spaces, the product is the space whose underlying set is the Cartesian product and which carries the product topology. The product topology is the coarsest topology for which all the projections are continuous.
In the category of modules over some ring $R$, the product is the Cartesian product with addition and scalar multiplication defined componentwise.
In the category of groups, the product is the direct product of groups given by the Cartesian product with multiplication defined componentwise.
In the category of graphs, the product is the tensor product of graphs.
In the category of relations, the product is given by the disjoint union. (This may come as a bit of a surprise given that the category of sets is a subcategory of the category of relations.)
In the category of algebraic varieties, the product is given by the Segre embedding.
In the category of semi-abelian monoids, the product is given by the history monoid.
In the category of Banach spaces and short maps, the product carries the $\ell^\infty$ (supremum) norm.
A partially ordered set can be treated as a category, using the order relation as the morphisms. In this case the products and coproducts correspond to greatest lower bounds (meets) and least upper bounds (joins).
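As a sketch of this correspondence, consider the positive integers ordered by divisibility, where the product (meet) of two objects is their greatest common divisor (names below are ours):

```python
# A poset viewed as a category: there is a unique morphism a -> b exactly
# when a <= b. The product of two objects is then their greatest lower
# bound (meet). Here: positive integers ordered by divisibility, where
# the meet of a and b is gcd(a, b). All names are ours.
import math

def leq(a, b):
    """Morphism a -> b exists iff a divides b."""
    return b % a == 0

def meet(a, b, universe):
    """The product of a and b: a common lower bound that every other
    common lower bound maps into (i.e. divides)."""
    lower_bounds = [p for p in universe if leq(p, a) and leq(p, b)]
    for p in lower_bounds:
        if all(leq(q, p) for q in lower_bounds):
            return p

assert meet(12, 18, range(1, 19)) == math.gcd(12, 18) == 6
```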
Discussion
An example in which the product does not exist: In the category of fields, the product $\mathbb{Q} \times \mathbb{F}_p$ does not exist, since there is no field with homomorphisms to both $\mathbb{Q}$ and $\mathbb{F}_p$: a homomorphism to $\mathbb{Q}$ forces characteristic 0, while a homomorphism to $\mathbb{F}_p$ forces characteristic $p$.
Another example: An empty product (that is, $I$ is the empty set) is the same as a terminal object, and some categories, such as the category of infinite groups, do not have a terminal object: given any infinite group $G$, there are infinitely many morphisms $\mathbb{Z} \to G$, so $G$ cannot be terminal.
If $I$ is a set such that all products for families indexed with $I$ exist, then one can treat each product as a functor $\mathbf{C}^I \to \mathbf{C}$. How this functor maps objects is obvious. Mapping of morphisms is subtle, because the product of morphisms defined above does not fit. First, consider the binary product functor, which is a bifunctor. For $f_1 : X_1 \to Y_1$ and $f_2 : X_2 \to Y_2$ we should find a morphism $X_1 \times X_2 \to Y_1 \times Y_2$. We choose $\langle f_1 \circ \pi_1, f_2 \circ \pi_2 \rangle$. This operation on morphisms is called the Cartesian product of morphisms. Second, consider the general product functor. For families $\{X_i\}_{i \in I}$, $\{Y_i\}_{i \in I}$ and morphisms $f_i : X_i \to Y_i$ we should find a morphism $\prod_{i \in I} X_i \to \prod_{i \in I} Y_i$. We choose the product of morphisms $\langle f_i \circ \pi_i \rangle_{i \in I}$.
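A Python sketch of the Cartesian product of morphisms in the category of sets, including a functoriality check (names are ours):

```python
# Cartesian product of morphisms in Set: given f1 : X1 -> Y1 and
# f2 : X2 -> Y2, define f1 x f2 = <f1 . pi_1, f2 . pi_2>, which acts
# componentwise on pairs. A sketch; names are ours.

def times(f1, f2):
    return lambda p: (f1(p[0]), f2(p[1]))

# Functoriality: (g1 x g2) . (f1 x f2) = (g1 . f1) x (g2 . f2)
f1, f2 = (lambda x: x + 1), (lambda x: x * 2)
g1, g2 = (lambda x: x ** 2), (lambda x: -x)
lhs = lambda p: times(g1, g2)(times(f1, f2)(p))
rhs = times(lambda x: g1(f1(x)), lambda x: g2(f2(x)))
for p in [(0, 0), (1, 2), (3, 4)]:
    assert lhs(p) == rhs(p)
```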
A category where every finite set of objects has a product is sometimes called a Cartesian category (although some authors use this phrase to mean "a category with all finite limits").
The product is associative. Suppose $\mathbf{C}$ is a Cartesian category, product functors have been chosen as above, and $1$ denotes a terminal object of $\mathbf{C}$. We then have natural isomorphisms
$X \times (Y \times Z) \simeq (X \times Y) \times Z, \qquad X \times 1 \simeq 1 \times X \simeq X, \qquad X \times Y \simeq Y \times X.$
These properties are formally similar to those of a commutative monoid; a Cartesian category with its finite products is an example of a symmetric monoidal category.
Distributivity
For any objects $X$, $Y$, and $Z$ of a category with finite products and coproducts, there is a canonical morphism $X \times Y + X \times Z \to X \times (Y + Z)$, where the plus sign here denotes the coproduct. To see this, note that the universal property of the coproduct guarantees the existence of unique arrows filling out the following diagram (the induced arrows are dashed):
The universal property of the product then guarantees a unique morphism $X \times Y + X \times Z \to X \times (Y + Z)$ induced by the dashed arrows in the above diagram. A distributive category is one in which this morphism is actually an isomorphism. Thus in a distributive category, there is the canonical isomorphism
$X \times (Y + Z) \simeq (X \times Y) + (X \times Z).$
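In the category of sets this canonical morphism is a bijection; a small sketch with coproducts modeled as tagged disjoint unions (all names are ours):

```python
# In Set (a distributive category), the canonical morphism
# X x Y + X x Z -> X x (Y + Z) is a bijection. Coproducts are modeled
# as tagged disjoint unions. A sketch; all names are ours.

def canonical(p):
    """X x Y + X x Z  ->  X x (Y + Z), i.e. [id x inl, id x inr]."""
    tag, (x, w) = p
    return (x, (tag, w))

X, Y, Z = [0, 1], ['a'], ['b', 'c']
domain = ([('L', (x, y)) for x in X for y in Y] +
          [('R', (x, z)) for x in X for z in Z])
image = [canonical(p) for p in domain]
codomain = [(x, w) for x in X
            for w in ([('L', y) for y in Y] + [('R', z) for z in Z])]
assert sorted(image) == sorted(codomain)   # bijection onto X x (Y + Z)
```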
Deformation (engineering)

In engineering, deformation (the change in size or shape of an object) may be elastic or plastic.
If the deformation is negligible, the object is said to be rigid.
Main concepts
Occurrence of deformation in engineering applications is based on the following background concepts:
Displacements are any change in position of a point on the object, including whole-body translations and rotations (rigid transformations).
Deformations are changes in the relative position between internal points of the object, excluding rigid transformations, causing the body to change shape or size.
Strain is the relative internal deformation, the dimensionless change in shape of an infinitesimal cube of material relative to a reference configuration. Mechanical strains are caused by mechanical stress, see stress-strain curve.
The relationship between stress and strain is generally linear and reversible up until the yield point, and the deformation is elastic. Elasticity in materials occurs when applied stress does not surpass the energy required to break molecular bonds, allowing the material to deform reversibly and return to its original shape once the stress is removed. The slope of this linear relationship for a material is known as Young's modulus. Above the yield point, some degree of permanent distortion remains after unloading and is termed plastic deformation. The determination of the stress and strain throughout a solid object is given by the field of strength of materials and for a structure by structural analysis.
In the above figure, it can be seen that the compressive loading (indicated by the arrow) has caused deformation in the cylinder so that the original shape (dashed lines) has changed (deformed) into one with bulging sides. The sides bulge because the material, although strong enough to not crack or otherwise fail, is not strong enough to support the load without change. As a result, the material is forced out laterally. Internal forces (in this case at right angles to the deformation) resist the applied load.
Types of deformation
Depending on the type of material, size and geometry of the object, and the forces applied, various types of deformation may result. The image to the right shows the engineering stress vs. strain diagram for a typical ductile material such as steel. Different deformation modes may occur under different conditions, as can be depicted using a deformation mechanism map.
Permanent deformation is irreversible; the deformation stays even after removal of the applied forces, while the temporary deformation is recoverable as it disappears after the removal of applied forces.
Temporary deformation is also called elastic deformation, while the permanent deformation is called plastic deformation.
Elastic deformation
The study of temporary or elastic deformation in the case of engineering strain is applied to materials used in mechanical and structural engineering, such as concrete and steel, which are subjected to very small deformations. Engineering strain is modeled by infinitesimal strain theory, also called small strain theory, small deformation theory, small displacement theory, or small displacement-gradient theory where strains and rotations are both small.
For some materials, e.g. elastomers and polymers, subjected to large deformations, the engineering definition of strain is not applicable, e.g. typical engineering strains greater than 1%, thus other more complex definitions of strain are required, such as stretch, logarithmic strain, Green strain, and Almansi strain. Elastomers and shape memory metals such as Nitinol exhibit large elastic deformation ranges, as does rubber. However, elasticity is nonlinear in these materials.
Normal metals, ceramics and most crystals show linear elasticity and a smaller elastic range.
Linear elastic deformation is governed by Hooke's law, which states:
$\sigma = E \varepsilon$
where
$\sigma$ is the applied stress;
$E$ is a material constant called Young's modulus or elastic modulus;
$\varepsilon$ is the resulting strain.
This relationship only applies in the elastic range and indicates that the slope of the stress vs. strain curve can be used to find Young's modulus ($E$). Engineers often use this calculation in tensile tests. The area under this elastic region is known as resilience.
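As a minimal numeric sketch of Hooke's law (the modulus value is a typical textbook figure for steel, an assumption not taken from this article):

```python
# Hooke's law in the linear elastic range: sigma = E * epsilon.
# E is taken as ~200 GPa, a typical textbook value for steel.

E = 200e9             # Young's modulus, Pa
stress = 100e6        # applied stress, Pa (100 MPa, well below yield)
strain = stress / E   # resulting elastic strain, dimensionless

assert abs(strain - 5e-4) < 1e-12   # 0.05% strain
```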
Note that not all elastic materials undergo linear elastic deformation; some, such as concrete, gray cast iron, and many polymers, respond in a nonlinear fashion. For these materials Hooke's law is inapplicable.
Plastic deformation
This type of deformation is not undone simply by removing the applied force. An object in the plastic deformation range, however, will first have undergone elastic deformation, which is undone simply by removing the applied force, so the object will return part way to its original shape. Soft thermoplastics have a rather large plastic deformation range as do ductile metals such as copper, silver, and gold. Steel does, too, but not cast iron. Hard thermosetting plastics, rubber, crystals, and ceramics have minimal plastic deformation ranges. An example of a material with a large plastic deformation range is wet chewing gum, which can be stretched to dozens of times its original length.
Under tensile stress, plastic deformation is characterized by a strain hardening region and a necking region and finally, fracture (also called rupture). During strain hardening the material becomes stronger through the movement of atomic dislocations. The necking phase is indicated by a reduction in cross-sectional area of the specimen. Necking begins after the ultimate strength is reached. During necking, the material can no longer withstand the maximum stress and the strain in the specimen rapidly increases. Plastic deformation ends with the fracture of the material.
Failure
Compressive failure
Usually, compressive stress applied to bars, columns, etc. leads to shortening.
Loading a structural element or specimen will increase the compressive stress until it reaches its compressive strength. According to the properties of the material, failure modes are yielding for materials with ductile behavior (most metals, some soils and plastics) or rupturing for brittle behavior (geomaterials, cast iron, glass, etc.).
In long, slender structural elements — such as columns or truss bars — an increase of compressive force F leads to structural failure due to buckling at lower stress than the compressive strength.
Fracture
A break occurs after the material has reached the end of the elastic, and then plastic, deformation ranges. At this point forces accumulate until they are sufficient to cause a fracture. All materials will eventually fracture, if sufficient forces are applied.
Types of stress and strain
Engineering stress and engineering strain are approximations to the internal state that may be determined from the external forces and deformations of an object, provided that there is no significant change in size. When there is a significant change in size, the true stress and true strain can be derived from the instantaneous size of the object.
Engineering stress and strain
Consider a bar of original cross sectional area $A_0$ being subjected to equal and opposite forces $F$ pulling at the ends so the bar is under tension. The material is experiencing a stress defined to be the ratio of the force to the cross sectional area of the bar, as well as an axial elongation:
$\sigma = \frac{F}{A_0}, \qquad \varepsilon = \frac{L - L_0}{L_0} = \frac{\Delta L}{L_0}.$
Subscript 0 denotes the original dimensions of the sample. The SI derived unit for stress is newtons per square metre, or pascals (1 pascal = 1 Pa = 1 N/m2), and strain is unitless. The stress–strain curve for this material is plotted by elongating the sample and recording the stress variation with strain until the sample fractures. By convention, the strain is set to the horizontal axis and stress is set to vertical axis. Note that for engineering purposes we often assume the cross-section area of the material does not change during the whole deformation process. This is not true since the actual area will decrease while deforming due to elastic and plastic deformation. The curve based on the original cross-section and gauge length is called the engineering stress–strain curve, while the curve based on the instantaneous cross-section area and length is called the true stress–strain curve. Unless stated otherwise, engineering stress–strain is generally used.
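A short numeric sketch of these definitions, with made-up sample dimensions:

```python
# Engineering stress and strain from the original (subscript-0) dimensions:
# sigma = F / A0, epsilon = (L - L0) / L0. The sample values are made up.

F = 10_000.0    # axial force, N
A0 = 1e-4       # original cross-sectional area, m^2 (1 cm^2)
L0 = 0.100      # original gauge length, m
L = 0.1002      # elongated length, m

sigma = F / A0             # engineering stress, Pa
epsilon = (L - L0) / L0    # engineering strain, dimensionless

assert abs(sigma - 100e6) < 1.0      # 100 MPa
assert abs(epsilon - 0.002) < 1e-12  # 0.2% strain
```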
True stress and strain
In the above definitions of engineering stress and strain, two behaviors of materials in tensile tests are ignored:
the shrinking of section area
compounding development of elongation
True stress and true strain are defined differently than engineering stress and strain to account for these behaviors. They are given as
$\sigma_t = \frac{F}{A}, \qquad \varepsilon_t = \int_{L_0}^{L} \frac{dL}{L} = \ln \frac{L}{L_0}.$
Here the dimensions are instantaneous values. Assuming the volume of the sample is conserved and deformation happens uniformly,
$A_0 L_0 = A L.$
The true stress and strain can be expressed in terms of engineering stress and strain. For true stress,
$\sigma_t = \frac{F}{A} = \frac{F}{A_0} \cdot \frac{A_0}{A} = \frac{F}{A_0} \cdot \frac{L}{L_0} = \sigma (1 + \varepsilon).$
For the strain,
$\delta \varepsilon_t = \frac{\delta L}{L}.$
Integrate both sides and apply the boundary condition $\varepsilon_t = 0$ at $L = L_0$:
$\varepsilon_t = \ln \frac{L}{L_0} = \ln (1 + \varepsilon).$
So in a tension test, true stress is larger than engineering stress and true strain is less than engineering strain. Thus, a point on the true stress–strain curve lies upwards and to the left of the corresponding point on the engineering stress–strain curve. The difference between the true and engineering stresses and strains will increase with plastic deformation; at low strains (such as elastic deformation), the differences between the two are negligible. The tensile strength point is the maximal point in the engineering stress–strain curve, but is not a special point in the true stress–strain curve. Because engineering stress is proportional to the force applied along the sample, the criterion for necking formation can be set as

dF = 0, or equivalently dσ_E = 0
This analysis reveals the nature of the ultimate tensile strength (UTS) point: at the UTS, the work-hardening effect is exactly balanced by the shrinking of the section area.
After the formation of necking, the sample undergoes heterogeneous deformation, so the equations above are not valid. The stress and strain at the necking can be expressed as

σ_E = F/A₀, ε_T = ln(A₀/A_neck)
An empirical equation is commonly used to describe the relationship between true stress and true strain:

σ_T = K (ε_T)^n

Here, n is the strain-hardening exponent and K is the strength coefficient. n is a measure of a material's work-hardening behavior. Materials with a higher n have a greater resistance to necking. Typically, metals at room temperature have n ranging from 0.02 to 0.5.
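This power law (often called Hollomon's equation, σ_T = K ε_T^n) also makes the necking criterion concrete: for such a material, dσ_T/dε_T = σ_T is satisfied exactly at ε_T = n. A sketch with illustrative values of K and n (not measured data):

```python
def hollomon(eps_t, K=500.0, n=0.2):
    # Hollomon power law: sigma_T = K * eps_T**n (K in MPa; values illustrative)
    return K * eps_t ** n

n = 0.2
sigma = hollomon(n, K=500.0, n=n)   # flow stress at eps_T = n
# Central-difference estimate of d(sigma_T)/d(eps_T) at eps_T = n:
dsigma = (hollomon(n + 1e-6, 500.0, n) - hollomon(n - 1e-6, 500.0, n)) / 2e-6
print(sigma, dsigma)  # the two agree: necking begins at eps_T = n
```

This is why a higher n (stronger work hardening) delays necking: the uniform true strain accumulated before necking equals n.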
Discussion
Since the change of area during deformation was disregarded above, the true stress and strain curve should be re-derived. For deriving the stress–strain curve, we can assume that the volume change is zero even as the material deforms. Therefore, we can assume that:

A₀ L₀ = A L

where A and L denote the instantaneous cross-sectional area and length.
Then, the true stress can be expressed as below:

σ_T = F/A = (F/A₀)(A₀/A) = σ_E (1 + ε_E)
Additionally, the true strain ε_T can be expressed as below:

ε_T = ∫ dL/L = ln(L/L₀) = ln(1 + ε_E)
Then, we can plot the curve in terms of σ_T and ε_T, as in the figure at right.
Additionally, based on the true stress–strain curve, we can estimate the region where necking starts to happen. Since necking starts to appear after the ultimate tensile stress, where the maximum force is applied, we can express this situation as below:

dF = 0 = σ_T dA + A dσ_T

so this form can be expressed as below:

dσ_T/σ_T = −dA/A
It indicates that necking starts to appear where the reduction of area becomes much more significant than the change in stress. The stress then becomes localized to the specific area where the necking appears.
Additionally, we can derive various relations based on the true stress–strain curve.
1) The true stress and strain curve can be expressed by an approximately linear relationship by taking the logarithm of the true stress and strain. The relation can be expressed as below:

log σ_T = log K + n log ε_T

where K is the strength coefficient and n is the strain-hardening coefficient. Usually, the value of n ranges from around 0.02 to 0.5 at room temperature. If n is 1, the material can be described as perfectly elastic.
2) In reality, stress is also highly dependent on the rate of strain variation. Thus, we can deduce an empirical equation based on the strain-rate variation:

σ_T = K′ (dε/dt)^m

where K′ is a constant related to the material's flow stress, dε/dt is the derivative of strain with respect to time (also known as the strain rate), and m is the strain-rate sensitivity. Moreover, the value of m is related to the material's resistance toward necking. Usually, the value of m is in the range of 0–0.1 at room temperature, and can be as high as 0.8 when the temperature is increased.
By combining 1) and 2), we can create the ultimate relation as below:

σ_T = K″ (ε_T)^n (dε/dt)^m

where K″ is the global constant relating strain, strain rate, and stress.
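As a numerical illustration of this combined relation (the constants K, n, and m below are illustrative assumptions, not values from the text):

```python
def flow_stress(eps, eps_dot, K=500.0, n=0.2, m=0.05):
    # Combined empirical relation: sigma_T = K * eps**n * eps_dot**m
    # (K in MPa; eps_dot is the strain rate in 1/s; all values illustrative)
    return K * (eps ** n) * (eps_dot ** m)

# A tenfold increase in strain rate raises the flow stress by a factor
# of 10**m, about 12% for m = 0.05:
slow = flow_stress(0.1, 1e-3)
fast = flow_stress(0.1, 1e-2)
print(fast / slow)
```

With a high-temperature value such as m = 0.8, the same tenfold rate increase would raise the flow stress more than sixfold, which is why strain-rate sensitivity matters so much at elevated temperature.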
3) Based on the true stress–strain curve and its derivative form, we can estimate the strain necessary for necking to start. This can be calculated from the intersection between the true stress–strain curve and its derivative curve, as shown at right.
This figure also shows the dependency of the necking strain on temperature. In the case of FCC metals, both the stress–strain curve and its derivative are highly dependent on temperature. Therefore, at higher temperatures, necking starts to appear even at lower strain values.
All of these properties indicate the importance of calculating the true stress–strain curve for further analysis of the behavior of materials in rapidly changing environments.
4) A graphical method, the so-called "Considère construction", can help determine whether necking or drawing occurs in the sample. By setting λ = L/L₀ as the determinant, the true stress and strain can be expressed with engineering stress and strain as below:

σ_T = σ_E (1 + ε_E) = σ_E λ, ε_T = ln λ
Therefore, the value of the engineering stress can be read off as the slope of the secant line drawn from the origin (λ = 0) to the point (λ, σ_T) on the true stress–λ curve. By analyzing the shape of the σ_T–λ diagram and the secant line, we can determine whether the material shows drawing or necking.
On figure (a), the Considère plot is concave upward only. It indicates that there is no yield drop, so the material will suffer fracture before it yields. On figure (b), there is a specific point where the tangent matches the secant line, at λ = λ_Y. Beyond this value, the slope becomes smaller than that of the secant line, and necking starts to appear. On figure (c), there is a point where yielding starts to appear, but when λ = λ_d, drawing happens. After drawing, all the material will stretch and eventually show fracture. Between λ_Y and λ_d, the material itself does not stretch; rather, only the neck starts to stretch out.
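The Considère construction can also be reproduced numerically. With λ = L/L₀, volume conservation gives engineering stress σ_E = σ_T/λ, the slope of the secant from the origin of the σ_T–λ plot, and necking begins where that secant is tangent to the curve. A sketch assuming a Hollomon material, σ_T = K(ln λ)^n, with illustrative K and n:

```python
import math

K, n = 500.0, 0.2  # illustrative Hollomon parameters (K in MPa)

def sigma_true(lam):
    # true stress versus extension ratio lam = L/L0, since eps_T = ln(lam)
    return K * math.log(lam) ** n

# Engineering stress is the secant slope sigma_T / lam; scan for its maximum.
lams = [1.001 + 0.001 * i for i in range(1500)]
eng = [sigma_true(lam) / lam for lam in lams]
lam_neck = lams[eng.index(max(eng))]

# The tangency condition predicts necking at eps_T = n, i.e. lam = e**n:
print(lam_neck, math.exp(n))
```

The scanned maximum of the engineering stress falls at λ ≈ e^n, consistent with the tangency condition and with the earlier result that the uniform strain before necking equals n.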
Misconceptions
A popular misconception is that all materials that bend are "weak" and those that do not are "strong". In reality, many materials that undergo large elastic and plastic deformations, such as steel, are able to absorb stresses that would cause brittle materials, such as glass, which have minimal plastic deformation ranges, to break.
Paddlefish

Paddlefish (family Polyodontidae) are a family of ray-finned fish belonging to order Acipenseriformes, and one of two living groups of the order alongside sturgeons (Acipenseridae). They are distinguished from other fish by their elongated rostra, which are thought to enhance electroreception to detect prey. Paddlefish have been referred to as "primitive fish" because the Acipenseriformes are among the earliest diverging lineages of ray-finned fish, having diverged from all other living groups over 300 million years ago. Both living and fossil paddlefish are found almost exclusively in North America and China.
Eight species are known: six are extinct and known only from fossils (five from North America, one from China). The sole living species is the American paddlefish (Polyodon spathula), native to the Mississippi River basin in the U.S. The other recent species is the Chinese paddlefish (Psephurus gladius), which was declared extinct in 2022 following a 2019 recommendation; the species has not been sighted in the Yangtze River Basin in China since 2003. Chinese paddlefish are also commonly referred to as "Chinese swordfish" or "elephant fish". The earliest known paddlefish is Protopsephurus, from the early Cretaceous (Aptian) of China, dating to around 120 million years ago.
Paddlefish populations have declined dramatically throughout their historic range as a result of overfishing, pollution, and the encroachment of human development, including the construction of dams that have blocked their seasonal upstream migration to ancestral spawning grounds. Other detrimental effects include alterations of rivers, which have changed natural flows and resulted in the loss of spawning habitat and nursery areas.
Morphology
Paddlefish as a group are one of the few organisms that retain a notochord past the embryonic stage. Paddlefish have very few bones and their bodies mostly consist of cartilage with the notochord functioning as a soft spine. During the initial stages of development from embryo to fry, paddlefish have no rostrum (snout). It begins to form shortly after hatching. The rostrum of the Chinese paddlefish was narrow and sword-like whereas the rostrum of the American paddlefish is broad and paddle-like. Some common morphological characteristics of paddlefish include a spindle-shaped, smooth-skinned scaleless body, heterocercal tail, and small poorly developed eyes. Unlike the filter-feeding American paddlefish, Chinese paddlefish were piscivores, and highly predatory. Their jaws were more forward pointing which suggested they foraged primarily on small fishes in the water column, and occasionally on shrimp, benthic fishes, and crabs. The jaws of the American paddlefish are distinctly adapted for filter feeding only. They are ram suspension filter feeders with a diet that consists primarily of zooplankton, and occasionally small insects, insect larvae, and small fish.
The largest Chinese paddlefish on record measured in length, and was estimated to weigh a few thousand pounds. They commonly reached and . Although the American paddlefish is one of the largest freshwater fishes in North America, their recorded lengths and weights fell short in comparison to the larger Chinese paddlefish. American paddlefish commonly reach or more in length and can weigh more than . The largest American paddlefish on record was caught in 1916 in Okoboji Lake, Iowa. The fish was taken with a spear, and measured long and in the girth. A report published by J.R. Harlan and E.B. Speaker (1969) in Iowa Fish and Fishing states that the fish weighed over . The world record paddlefish caught on rod and reel weighed and was long. The fish was caught by Clinton Boldridge in a 5 acre pond in Atchison County, Kansas on 5 May 2004. However, the record would be broken an additional two times in 2020: On 28 June 2020, an Oklahoma man caught a 146 pound paddlefish in Keystone Lake, west of Tulsa. Later on 23 July 2020, the record was broken again when another Oklahoma man caught a 151 pound, nearly 6 foot long paddlefish in the same lake.
Scientists once believed paddlefish used their rostrums to excavate bottom substrate, but have since determined with the aid of electron microscopy that paddlefish rostrums are covered in electroreceptors called ampullae. These ampullae are densely packed within star-shaped bone projections that branch out from the rostrum. The electroreceptors can detect weak electrical fields which not only signal the presence of prey items in the water column, such as zooplankton which is the primary diet of the American paddlefish, but they can also detect the individual feeding and swimming movements of zooplankton's appendages. Paddlefish have poorly developed eyes, and rely on their electroreceptors for foraging. However, the rostrum is not the paddlefish's sole means of food detection. Some reports incorrectly suggest that a damaged rostrum would render paddlefish less capable of foraging efficiently to maintain good health. Laboratory experiments, and field research indicate otherwise. In addition to electroreceptors on the rostrum, paddlefish also have sensory pores covering nearly half of the skin surface extending from the rostrum to the top of the head down to the tips of the operculum (gill flaps). Paddlefish with damaged or abbreviated rostrums are still able to forage adequately.
Habitat and historic range
Over the past half century, paddlefish populations have been in decline. Attributable causes are overfishing, pollution, and the encroachment of human development, including the construction of dams which block their seasonal upstream migration to ancestral spawning grounds. Other detrimental effects include alterations of rivers which have changed the natural flow and resulted in the loss of spawning habitat and nursery areas. American paddlefish have been extirpated from much of their northern peripheral range, including the Great Lakes and Canada, New York, Maryland and Pennsylvania. There is growing concern about their populations in other states.
The Chinese paddlefish was considered anadromous, with upstream migration; however, little is known about its migration habits and population structure. It was endemic to the Yangtze River Basin in China, living primarily in the broad-surfaced main stem rivers and shoal zones along the East China Sea. Research suggests the fish preferred to navigate the middle and lower layers of the water column, and occasionally swam into large lakes. There have been no sightings of Chinese paddlefish since 2003; extinction was recommended in 2019 and formally declared in 2022. Past attempts at artificial propagation for restoration purposes failed because of difficulties encountered in keeping captive fish alive.
American paddlefish are native to the Mississippi River basin from New York to Montana and south to the Gulf of Mexico. They have been found in several Gulf Slope drainages in medium to large rivers with long, deep sluggish pools, as well as in backwater lakes and bayous. In Texas, paddlefish occurred historically in the Angelina River, Big Cypress Bayou, Neches River, tributaries of the Red River, Sabine River, San Jacinto River, Sulphur River, and Trinity River. Their historical range also included occurrences in Canada in Lake Huron and Lake Helen, and in 26–27 other states in the United States. The Ontario Ministry of Natural Resources listed the paddlefish as extirpated from Ontario, Canada under their Endangered Species Act. The IUCN Red List lists the Canadian populations of paddlefish as extirpated, noting there have been no Canadian records since the early 1900s and distribution in Canada was highly peripheral. As a species, the American paddlefish is classified as vulnerable (VU) on the IUCN Red List, and its international trade has been restricted since June 1992 under Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES).
Life cycle
Paddlefish are long-lived, and sexually late maturing. Females do not begin spawning until they are six to twelve years old, some even as late as sixteen to eighteen years old. Males begin spawning around age four to seven, some as late as nine or ten years of age. Paddlefish spawn in late spring if the proper combination of events occur; these include water flow, temperature, photoperiod, and availability of gravel substrates suitable for spawning. If all the conditions are not met, paddlefish do not spawn. Research suggests females do not spawn every year, rather they spawn every second or third year while males spawn more frequently, typically every year or every other year.
Paddlefish migrate upstream to spawn, and prefer silt-free gravel bars that would otherwise be exposed to air, or covered by very shallow water were it not for the rises in the river from snow melt and annual spring rains that cause flooding. They are broadcast spawners, also referred to as mass spawners or synchronous spawners. Fertilization occurs externally: Gravid females release their eggs into the water over bare rocks or gravel at the same time males release their sperm. The eggs are adhesive and stick to the rocky substrate. The young are swept downstream after hatching and grow to adulthood in deep freshwater pools.
Propagation and culture
Advancements in biotechnology in paddlefish propagation and the rearing of captive stock have brought significant improvements in the reproduction success, adaptation, and survival rates of paddlefish cultured for broodstock development and stock rehabilitation. Such improvements have led to successful practices in reservoir ranching and pond rearing, creating an increasing interest in the global market for paddlefish polyculture.
In a cooperative scientific effort in the early 1970s between the U.S. Fish & Wildlife Service and its former USSR counterpart, American paddlefish were imported into the former USSR for aquaculture, beginning with five thousand hatched larvae from Missouri hatcheries in the United States. They were introduced into several rivers in Europe and Asia, and provided the first brood stock that was successfully reproduced in 1984–1986 in Russia. Paddlefish are now being raised in Germany, Austria, the Czech Republic, and the Plovdiv and Vidin regions in Bulgaria. Reproduction was successful in 1988 and 1989, and resulted in the exportation of juvenile paddlefish to Romania and Hungary. In May 2006, specimens of different sizes and weights were caught by professional fishermen near Prahovo in the Serbian part of the Danube River.
In 1988, fertilized paddlefish eggs and larvae from Missouri hatcheries were first introduced into China. Since that time, China has imported approximately 4.5 million fertilized eggs and larvae every year from hatcheries in Russia and the United States. Some of the paddlefish are polycultured in carp ponds and sold to restaurants, while others are cultured for brood stock and caviar production. China has also exported paddlefish to Cuba, where they are farmed for caviar production.
Classification
This family includes one currently extant genus, one recently extinct genus, and five extinct genera known exclusively from fossils.
Classification following , with Parapsephurus and Pugiopsephurus added in :
genus Protopsephurus Lu, 1994 (Early Cretaceous, China)
species Protopsephurus liui Lu, 1994
genus Pugiopsephurus Hilton et al., 2023 (Late Cretaceous, North America) (Incertae sedis)
species Pugiopsephurus inundatus Hilton et al., 2023
clade Polyodonti
genus Paleopsephurus MacAlpin, 1947 (Late Cretaceous, North America)
species Paleopsephurus wilsoni MacAlpin, 1947
genus Parapsephurus Hilton et al., 2023 (Late Cretaceous, North America)
species Parapsephurus willybemisi Hilton et al., 2023
subfamily Polyodontinae
genus Psephurus Günther, 1873
Psephurus gladius E. von Martens, 1862 Chinese paddlefish (extinct c. 2003)
tribe Polyodontini
genus Crossopholis Cope, 1883 (Paleogene, North America)
species Crossopholis magnicaudatus Cope, 1883
genus Polyodon Lacépède, 1797 (Paleocene-Recent, North America)
Polyodon spathula Walbaum, 1792 American paddlefish
Grande & Bemis, 1991
Relationships of the genera are from .
Aerial refueling

Aerial refueling (American English), or aerial refuelling (British English), also referred to as air refueling, in-flight refueling (IFR), air-to-air refueling (AAR), and tanking, is the process of transferring aviation fuel from one aircraft (the tanker) to another (the receiver) while both aircraft are in flight. The two main refueling systems are probe-and-drogue, which is simpler to adapt to existing aircraft, and the flying boom, which offers faster fuel transfer but requires a dedicated boom operator station.
The procedure allows the receiving aircraft to remain airborne longer, extending its range or loiter time. A series of air refuelings can give range limited only by crew fatigue/physical needs and engineering factors such as engine oil consumption. Because the receiver aircraft is topped-off with extra fuel in the air, air refueling can allow a takeoff with a greater payload which could be weapons, cargo, or personnel: the maximum takeoff weight is maintained by carrying less fuel and topping up once airborne. Aerial refueling has also been considered as a means to reduce fuel consumption on long-distance flights greater than . Potential fuel savings in the range of 35–40% have been estimated for long-haul flights (including the fuel used during the tanker missions).
Usually, the aircraft providing the fuel is specially designed for the task, although refueling pods may be fitted to existing aircraft designs in the case of "probe-and-drogue" systems. The cost of the refueling equipment on both tanker and receiver aircraft, and the specialized handling required of the aircraft to be refueled (very close "line astern" formation flying), has resulted in the activity being used only in military operations; there are no regular civilian in-flight refueling activities. Originally trialed shortly before World War II on a limited scale to extend the range of British civilian transatlantic flying boats, and then employed after World War II on a large scale to extend the range of strategic bombers, aerial refueling has, since the Vietnam War, been extensively used in large-scale military operations.
Development history
Early experiments
Some of the earliest experiments in aerial refueling took place in the 1920s; two slow-flying aircraft flew in formation, with a hose run down from a hand-held fuel tank on one aircraft and placed into the usual fuel filler of the other. The first mid-air refueling, based on the development of Alexander P. de Seversky, between two planes occurred on 25 June 1923, between two Airco DH-4B biplanes of the United States Army Air Service. An endurance record was set by three DH-4Bs (a receiver and two tankers) on 27–28 August 1923, in which the receiver airplane remained aloft for more than 37 hours using nine mid-air refuelings to transfer of aviation gasoline and of engine oil. The same crews demonstrated the utility of the technique on 25 October 1923, when a DH-4 flew from Sumas, Washington, on the Canada–United States border, to Tijuana, Mexico, landing in San Diego, using mid-air refuelings at Eugene, Oregon, and Sacramento, California.
Similar trial demonstrations of mid-air refueling technique took place at the Royal Aircraft Establishment in England and by the Armée de l'Air in France in the same year, but these early experiments were not yet regarded as a practical proposition, and were generally dismissed as stunts.
As the 1920s progressed, greater numbers of aviation enthusiasts vied to set new aerial long-distance records using in-flight refueling. One such enthusiast, who would revolutionize aerial refueling, was Sir Alan Cobham, a member of the Royal Flying Corps in World War I and a pioneer of long-distance aviation. During the 1920s, he made long-distance flights to places as far afield as Africa and Australia, and he began experimenting with the possibilities of in-flight refueling to extend the range of flight.
Cobham was one of the founding directors of Airspeed Limited, an aircraft manufacturing company that went on to produce a specially-adapted Airspeed Courier that Cobham used for his early experiments with in-flight refueling. This craft was eventually modified by Airspeed to Cobham's specification, for a non-stop flight from London to India, using in-flight refueling to extend the plane's flight duration.
Meanwhile, in 1929, a group of US Army Air Corps fliers, led by then Major Carl Spaatz, set an endurance record of over 150 hours with a Fokker C-2A named the Question Mark over Los Angeles. Between 11 June and 4 July 1930, the brothers John, Kenneth, Albert, and Walter Hunter set a new record of 553 hours 40 minutes over Chicago using two Stinson SM-1 Detroiters as refueler and receiver. Aerial refueling remained a very dangerous process until 1935, when brothers Fred and Al Key demonstrated a spill-free refueling nozzle, designed by A. D. Hunter. They exceeded the Hunters' record by nearly 100 hours in a Curtiss Robin monoplane, staying aloft for more than 27 days.
The US was mainly concerned about transatlantic flights for faster postal service between Europe and America. In 1931 W. Irving Glover, the second assistant postmaster, wrote an extensive article for Popular Mechanics concerning the challenges and the need for such a regular service. In his article he even mentioned the use of aerial refueling after takeoff as a possible solution.
At Le Bourget Airport near Paris, the Aéro-Club de France and the 34th Aviation Regiment of the French Air Force were able to demonstrate passing fuel between machines at the annual aviation fete at Vincennes in 1928. The UK's Royal Aircraft Establishment was also running mid-air refueling trials, with the aim to use this technique to extend the range of the long-distance flying boats that serviced the British Empire. By 1931 they had demonstrated refueling between two Vickers Virginias, with fuel flow controlled by an automatic valve on the hose which would cut off if contact was lost.
Royal Air Force officer Richard Atcherley had observed the dangerous aerial-refueling techniques in use at barnstorming events in the US and determined to create a workable system. While posted to the Middle East, he developed and patented his 'crossover' system in 1934, in which the tanker trailed a large hooked line that would reel in a similar dropped line from the receiver, allowing the refueling to commence. In 1935, Cobham sold off the airline Cobham Air Routes Ltd to Olley Air Service and turned to the development of in-flight refueling, founding the company Flight Refuelling Ltd. Atcherley's system was bought up by Cobham's company, and with some refinement and continuous improvement through the late '30s, it became the first practical refueling system.
Grappled-line looped-hose
Sir Alan Cobham's grappled-line looped-hose air-to-air refueling system borrowed from techniques patented by David Nicolson and John Lord, and was publicly demonstrated for the first time in 1935. In the system the receiver aircraft, at one time an Airspeed Courier, trailed a steel cable which was then grappled by a line shot from the tanker, a Handley Page Type W10. The line was then drawn back into the tanker where the receiver's cable was connected to the refueling hose. The receiver could then haul back in its cable bringing the hose to it. Once the hose was connected, the tanker climbed sufficiently above the receiver aircraft to allow the fuel to flow under gravity.
When Cobham was developing his system, he saw the need as purely for long-range transoceanic commercial aircraft flights, but modern aerial refueling is used exclusively by military aircraft.
In 1934, Cobham had founded Flight Refuelling Ltd (FRL) and by 1938 had used its looped-hose system to refuel aircraft as large as the Short Empire flying boat Cambria from an Armstrong Whitworth AW.23. Handley Page Harrows were used in the 1939 trials to perform aerial refueling of the Empire flying boats for regular transatlantic crossings. From 5 August to 1 October 1939, sixteen crossings of the Atlantic were made by Empire flying boats, with fifteen crossings using FRL's aerial refueling system. After the sixteen crossings further trials were suspended due to the outbreak of World War II.
During the closing months of World War II, it had been intended that Tiger Force's Lancaster and Lincoln bombers would be in-flight refueled by converted Halifax tanker aircraft, fitted with FRL's looped-hose units, in operations against the Japanese homelands, but the war ended before the aircraft could be deployed. After the war ended, the USAF bought a small number of FRL looped-hose units and fitted a number of B-29s as tankers to refuel specially equipped B-29s and later B-50s. The USAF made only one major change to the system used by the RAF: the USAF version had auto-coupling of the refueling nozzle, in which the leader line with the refueling hose is pulled to a refueling receptacle on the belly of the receiver aircraft. This allowed high-altitude air-to-air refueling and did away with the aircraft having to fly to a lower altitude and depressurize so that a crew member could make the coupling manually.
This air-to-air refueling system was used by the B-50 Superfortress Lucky Lady II of the 43rd Bomb Wing to make its famous first non-stop around-the-world flight in 1949. From 26 February to 3 March 1949, Lucky Lady II flew non-stop around the world in 94 hours and 1 minute, a feat made possible by four aerial refuelings from four pairs of KB-29M tankers of the 43d ARS. Before the mission, crews of the 43rd had experienced only a single operational air refueling contact. The flight started and ended at Carswell Air Force Base in Fort Worth, Texas with the refuelings accomplished over the Azores, Saudi Arabia, the Pacific Ocean near Guam, and between Hawaii and the West Coast.
Probe-and-drogue system
Cobham's company FRL soon realized that their looped-hose system left much to be desired and began work on an improved system that is now commonly called the probe-and-drogue air-to-air refueling system and today is one of the two systems chosen by air forces for air-to-air refueling, the other being the flying-boom system. In post-war trials the RAF used a modified Lancaster tanker employing the much improved probe-and-drogue system, with a modified Gloster Meteor F.3 jet fighter, serial EE397, fitted with a nose-mounted probe. On 7 August 1949, the Meteor flown by FRL test pilot Pat Hornidge took off from Tarrant Rushton and remained airborne for 12 hours and 3 minutes, receiving of fuel in ten refuelings from a Lancaster tanker. Hornidge flew an overall distance of , achieving a new jet endurance record. FRL still exists as part of Cobham plc.
Modern specialized tanker aircraft have equipment specially designed for the task of offloading fuel to the receiver aircraft, based on drogue and probe, even at the higher speeds modern jet aircraft typically need to remain airborne.
In January 1948, General Carl Spaatz, then the first Chief of Staff of the new United States Air Force, made aerial refueling a top priority of the service. In March 1948, the USAF purchased two sets of FRL's looped-hose in-flight refueling equipment, which had been in practical use with British Overseas Airways Corporation (BOAC) since 1946, and manufacturing rights to the system. FRL also provided a year of technical assistance. The sets were immediately installed in two Boeing B-29 Superfortresses, with plans to equip 80 B-29s.
Flight testing began in May 1948 at Wright-Patterson Air Force Base, Ohio, and was so successful that in June orders went out to equip all new B-50s and subsequent bombers with receiving equipment. Two dedicated air refueling units were formed on 30 June 1948: the 43d Air Refueling Squadron at Davis-Monthan Air Force Base, Arizona, and the 509th Air Refueling Squadron at Walker Air Force Base, New Mexico. The first ARS aircraft used FRL's looped-hose refueling system, but testing with a boom system followed quickly in the autumn of 1948.
The first use of aerial refueling in combat took place during the Korean War. Because Chinese and North Korean forces had overrun many of the bases suitable for jet aircraft in South Korea, F-84 fighter-bombers flew missions from Japanese airfields, refueling from converted B-29s using the probe-and-drogue in-flight refueling system, with the probe located in one of the F-84's wing-tip fuel tanks.
Systems
Flying boom
The flying boom is a rigid, telescoping tube with movable flight control surfaces that a boom operator on the tanker aircraft extends and inserts into a receptacle on the receiving aircraft. All boom-equipped tankers (e.g. KC-135 Stratotanker, KC-10 Extender, KC-46 Pegasus) have a single boom and can refuel one aircraft at a time with this mechanism.
History
In the late 1940s, General Curtis LeMay, commander of the Strategic Air Command (SAC), asked Boeing to develop a refueling system that could transfer fuel at a higher rate than had been possible with earlier systems using flexible hoses, resulting in the flying boom system. The B-29 was the first aircraft to employ the boom, and between 1950 and 1951, 116 original B-29s, designated KB-29Ps, were converted at the Boeing plant at Renton, Washington. Boeing went on to develop the world's first production aerial tanker, the KC-97 Stratofreighter, a piston-engined Boeing Stratocruiser (USAF designation C-97 Stratofreighter) with a Boeing-developed flying boom and extra kerosene (jet fuel) tanks feeding the boom; the Stratocruiser airliner itself had been developed from the B-29 bomber after World War II. In the KC-97, the mixed gasoline/kerosene fuel system was clearly undesirable, and it was obvious that the next development would be a jet-powered tanker carrying a single type of fuel for both its own engines and for passing to receiver aircraft. The 230 mph (370 km/h) cruise speed of the piston-engined KC-97 was also a serious limitation: newer jet-powered military aircraft had to slow down to mate with the tanker's boom, and the supersonic aircraft then coming into service could, in some situations, be forced to slow enough to approach their stall speed during the approach to the tanker. It was no surprise that, after the KC-97, Boeing began receiving contracts from the USAF to build jet tankers based on the Boeing 367-80 (Dash-80) airframe. The result was the Boeing KC-135 Stratotanker, of which 732 were built.
The flying boom is attached to the rear of the tanker aircraft. The attachment is gimballed, allowing the boom to move with the receiver aircraft. The boom contains a rigid pipe to transfer fuel. The fuel pipe ends in a nozzle with a flexible ball joint. The nozzle mates to the "receptacle" in the receiver aircraft during fuel transfer. A poppet valve in the end of the nozzle prevents fuel from exiting the tube until the nozzle properly mates with the receiver's refueling receptacle. Once properly mated, toggles in the receptacle engage the nozzle, holding it locked during fuel transfer.
The "flying" boom is so named because flight control surfaces, small movable airfoils that are often in a V-tail configuration, are used to move the boom by creating aerodynamic forces. They are actuated hydraulically and controlled by the boom operator using a control stick. The boom operator also telescopes the boom to make the connection with the receiver's receptacle.
To complete an aerial refueling, the tanker and receiver aircraft rendezvous, flying in formation. The receiver moves to a position behind the tanker, within safe limits of travel for the boom, aided by director lights or directions radioed by the boom operator. Once in position, the operator extends the boom to make contact with the receiver aircraft. Once in contact, fuel is pumped through the boom into the receiver aircraft.
While in contact, the receiver pilot must continue to fly within the "air refueling envelope", the area in which contact with the boom is safe. Moving outside of this envelope can damage the boom or lead to mid-air collision, for example the 1966 Palomares B-52 crash. If the receiving aircraft approaches the outer limits of the envelope, the boom operator will command the receiver pilot to correct their position and disconnect the boom if necessary.
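The envelope described above amounts to a set of position limits that the boom operator monitors continuously. The following sketch illustrates the idea as a simple limit check; the numeric limits are hypothetical placeholders chosen for illustration, not published figures for any real tanker.

```python
# Illustrative sketch only: checks whether a boom position lies inside a
# rectangular "air refueling envelope". The limit values below are
# hypothetical placeholders, not published boom limits for any aircraft.

ENVELOPE = {
    "elevation_deg": (20.0, 40.0),   # boom angle below tanker horizontal (hypothetical)
    "azimuth_deg": (-10.0, 10.0),    # left/right of tanker centerline (hypothetical)
    "extension_ft": (6.0, 18.0),     # telescoping extension (hypothetical)
}

def in_envelope(elevation_deg, azimuth_deg, extension_ft):
    """Return True if the boom position is within every envelope limit."""
    position = (elevation_deg, azimuth_deg, extension_ft)
    return all(lo <= value <= hi
               for value, (lo, hi) in zip(position, ENVELOPE.values()))

# A position near the center of the envelope is acceptable...
print(in_envelope(30.0, 0.0, 12.0))   # True
# ...while over-extending the boom would call for a disconnect.
print(in_envelope(30.0, 0.0, 19.0))   # False
```

In practice the real envelope is not a simple box, and the operator's judgment (and a disconnect command) backs up any automatic limit.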
When the desired amount of fuel has been transferred, the two aircraft disconnect and the receiver aircraft departs the formation. When not in use, the boom is stored flush with the bottom of the tanker's fuselage to minimize drag.
In the KC-97 and KC-135 the boom operator lies prone, while in the KC-10 the operator is seated; in all three, refueling operations are viewed through a window at the tail. The KC-46 seats two operators at the front of the aircraft, viewing camera video on 3D screens.
US Air Force fixed-wing aircraft use the flying boom system, as do those of Australia (KC-30A), the Netherlands (KDC-10), Israel (modified Boeing 707), Japan (KC-767), Turkey (KC-135Rs), and Iran (Boeing 707 and 747). The system allows higher fuel flow rates (up to / per minute for the KC-135), but it requires a boom operator and can only refuel one aircraft at a time.
Probe-and-drogue
The probe-and-drogue refueling method employs a flexible hose that trails from the tanker aircraft. The drogue (or para-drogue), sometimes called a basket, is a fitting resembling a shuttlecock, attached at its narrow end (like the "cork" nose of a shuttlecock) with a valve to a flexible hose. The drogue stabilizes the hose in flight and provides a funnel to aid insertion of the receiver aircraft probe into the hose. The hose connects to a Hose Drum Unit (HDU). When not in use, the hose/drogue is reeled completely into the HDU.
The receiver has a probe, which is a rigid, protruding or pivoted retractable arm placed on the aircraft's nose or fuselage to make the connection. Most modern versions of the probe are usually designed to be retractable, and are retracted when not in use, particularly on high-speed aircraft.
At the end of the probe is a valve that is closed until it mates with the drogue's forward internal receptacle, after which it opens and allows fuel to pass from tanker to receiver. The valves in the probe and drogue that are most commonly used are to a NATO standard and were originally developed by the company Flight Refuelling Limited in the UK and deployed in the late 1940s and 1950s. This standardization enables drogue-equipped tanker aircraft from many nations to refuel probe-equipped aircraft from other nations.
The NATO-standard probe system incorporates shear rivets that attach the refueling valve to the end of the probe. This is so that if a large side or vertical load develops while in contact with the drogue, the rivets shear and the fuel valve breaks off, rather than the probe or receiver aircraft suffering structural damage. A so-called "broken probe" (actually a broken fuel valve, as described above) may happen if poor flying technique is used by the receiver pilot, or in turbulence. Sometimes the valve is retained in the tanker drogue and prevents further refueling from that drogue until removed during ground maintenance.
Buddy store
A "buddy store" or "buddy pod" is an external pod loaded on an aircraft hardpoint that contains a hose-and-drogue system (HDU). Buddy stores allow fighter-bomber aircraft to be reconfigured for "buddy tanking" other aircraft. This allows an air combat force without dedicated/specialized tanker support (for instance, a carrier air wing) to extend the range of its strike aircraft. In other cases, the buddy store method allows a carrier-based aircraft to take off with a heavier-than-usual payload and less fuel than might be necessary for its tasking. The aircraft is then topped up with fuel from an HDU-equipped "buddy" tanker, a method previously used by the Royal Navy in operating its Supermarine Scimitar, de Havilland Sea Vixen, and Blackburn Buccaneer; in the Buccaneer's case using a bomb-bay-mounted tank and HDU.
The tanker aircraft flies straight and level and extends the hose/drogue, which is allowed to trail out behind and below the tanker under normal aerodynamic forces. The pilot of the receiver aircraft extends the probe (if required) and uses normal flight controls to "fly" the refueling probe directly into the basket. This requires a closure rate of about two knots (walking speed) to push the hose several feet into the HDU and solidly couple the probe and drogue. Too little closure will cause an incomplete connection and no fuel flow (or occasionally leaking fuel). Too much closure is dangerous because it can trigger a strong transverse oscillation in the hose, severing the probe tip.
The optimal approach is from behind and below (not level with) the drogue. Because the drogue is relatively light (typically soft canvas webbing) and subject to aerodynamic forces, it can be pushed around by the bow wave of approaching aircraft, complicating engagement even in smooth air. After initial contact, the hose and drogue are pushed forward by the receiver a certain distance (typically, a few feet), and the hose is reeled slowly back onto its drum in the HDU. This opens the tanker's main refueling valve, allowing fuel to flow to the drogue under the appropriate pressure (assuming the tanker crew has energized the pump). Tension on the hose is aerodynamically 'balanced' by a motor in the HDU so that as the receiver aircraft moves fore and aft, the hose retracts and extends, thus preventing bends in the hose that would cause undue side loads on the probe. Fuel flow is typically indicated by illumination of a green light near the HDU. If the hose is pushed in too far or not far enough, a cutoff switch will inhibit fuel flow, which is typically accompanied by an amber light. Disengagement is commanded by the tanker pilot with a red light.
The US Navy, Marine Corps, and some Army aircraft refuel using the "hose-and-drogue" system, as do most aircraft flown by western European militaries. The Soviet Union also used a hose-and-drogue system, dubbed UPAZ, and later Russian aircraft may likewise be equipped for probe-and-drogue refueling. The Chinese PLAAF has a fleet of Xian H-6 bombers modified for aerial refueling, and plans to add Russian Ilyushin Il-78 aerial refueling tankers.
Tankers can be equipped with multipoint hose-and-drogue systems, allowing them to refuel two (or more) aircraft simultaneously, reducing time spent refueling by as much as 75% for a four-aircraft strike package.
Boom drogue adapter units
USAF KC-135 and French Air Force KC-135FR refueling-boom equipped tankers can be field-converted to a probe-and-drogue system using a special adapter unit. In this configuration, the tanker retains its articulated boom, but has a hose/drogue at the end of it instead of the usual nozzle. The tanker boom operator holds the boom still while the receiver aircraft flies the probe into the basket. Unlike the soft canvas basket used in most drogue systems, the adapter units use a steel basket, grimly known as the "iron maiden" by naval aviators because of its unforgiving nature. Soft drogues can be contacted slightly off center, wherein the probe is guided into the hose receptacle by the canvas drogue. The metal drogue, when contacted even slightly off center, will pivot out of place, potentially "slapping" the aircraft's fuselage and causing damage.
The other major difference with this system is that when contacted, the hose does not "retract" into an HDU. Instead, the hose bends depending on how far it is pushed toward the boom. If it is pushed too far, it can loop around the probe or nose of the aircraft, damage the windscreen, or cause contact with the rigid boom. If not pushed far enough, the probe will disengage, halting fueling. Because of a much smaller position-keeping tolerance, staying properly connected to a KC-135 adapter unit is considerably more difficult than staying in a traditional hose/drogue configuration. When fueling is complete, the receiver carefully backs off until the probe refueling valve disconnects from the valve in the basket. Off center disengagements, like engagements, can cause the drogue to "prang" the probe and/or strike the aircraft's fuselage.
Multiple systems
Some tankers have both a boom and one or more complete hose-and-drogue systems. The USAF KC-10 has both a flying boom and a separate hose-and-drogue system manufactured by Cobham. Both are on the aircraft centerline at the tail of the aircraft, so only one can be used at once. However, such a system allows all types of probe- and receptacle-equipped aircraft to be refueled in a single mission, without landing to install an adapter. Other tankers are equipped with hose-and-drogue attachments that do not interfere with the operation of the centerline boom: many KC-135s are equipped with dual under-wing attachments known as Multi-point Refueling System (MPRSs), while some KC-10s and A330 MRTTs have similar under-wing refueling pods (referred to as Wing Air Refueling Pods or WARPs on the KC-10).
Wing-to-wing
A small number of Soviet Tu-4s and Tu-16s (the tanker variant was the Tu-16Z) used a wing-to-wing method. Similar to the probe-and-drogue method but more complicated, the tanker aircraft released a flexible hose from its wingtip. An aircraft flying alongside had to catch the hose with a special lock under its wingtip. After the hose was locked and the connection was established, the fuel was pumped.
Simple grappling
Some historic systems used for pioneering aerial refueling used the grappling method, where the tanker aircraft unreeled the fuel hose and the receiver aircraft would grapple the hose midair, reel it in and connect it so that fuel can be transferred either with the assistance of pumps or simply by gravity feed. This was the method used on the Question Mark endurance flight in 1929.
Compatibility issues
The probe-and-drogue system is not compatible with flying boom equipment, creating a problem for military planners where mixed forces are involved. Incompatibility can also complicate the procurement of new systems. The Royal Canadian Air Force currently wishes to purchase the F-35A, which can only refuel via the flying boom, but possesses only probe-and-drogue refuelers. The potential cost of converting F-35As to probe-and-drogue refueling (as used on US Navy & Marine Corps F-35Bs and F-35Cs) added to the early-2010s political controversy which surrounded F-35 procurement within the RCAF.
These concerns can be addressed by drogue adapters (see section "Boom drogue adapter units" above) that allow drogue aircraft to refuel from boom-equipped aircraft, and by refuelers that are equipped with both drogue and boom units and can thus refuel both types in the same flight, such as the KC-10, MPRS KC-135, or Airbus A330 MRTT.
Strategic
The development of the KC-97 and Boeing KC-135 Stratotankers was pushed by the Cold War requirement of the United States to be able to keep fleets of nuclear-armed B-47 Stratojet and B-52 Stratofortress strategic bombers airborne around-the-clock either to threaten retaliation against a Soviet strike for mutual assured destruction, or to bomb the USSR first had it been ordered to do so. The bombers would fly orbits around their assigned positions from which they were to enter Soviet airspace if they received the order, and the tankers would refill the bombers' fuel tanks so that they could keep a force in the air 24 hours a day, and still have enough fuel to reach their targets in the Soviet Union. This also ensured that a first strike against the bombers' airfields could not obliterate the US's ability to retaliate by bomber.
In 1958, Valiant tankers in the UK were developed with one HDU mounted in the bomb-bay. Valiant tankers of 214 Squadron were used to demonstrate radius of action by refueling a Valiant bomber non-stop from UK to Singapore in 1960 and a Vulcan bomber to Australia in 1961. Other UK exercises involving refueling aircraft from Valiant tankers included Javelin and Lightning fighters, also Vulcan and Victor bombers. For instance, in 1962 a squadron of Javelin air defense aircraft was refueled in stages from the UK to India and back (exercise "Shiksha"). After the retirement of the Valiant in 1965, the Handley Page Victor took over the UK refueling role and had three hoses (HDUs). These were a fuselage-mounted HDU and a refueling pod on each wing. The center hose could refuel any probe-equipped aircraft, the wing pods could refuel the more maneuverable fighter/ground attack types.
A byproduct of this development effort and the building of large numbers of tankers was that these tankers were also available to refuel cargo aircraft, fighter aircraft, and ground attack aircraft, in addition to bombers, for ferrying to distant theaters of operations. This was much used during the Vietnam War, when many aircraft could not have covered the transoceanic distances without aerial refueling, even with intermediate bases such as Hickam Air Force Base, Hawaii and Kadena Air Base, Okinawa. In addition to allowing the transport of the aircraft themselves, the cargo aircraft could also carry matériel, supplies, and personnel to Vietnam without landing to refuel. KC-135s were also frequently used for refueling of air combat missions from air bases in Thailand.
The USAF SR-71 Blackbird strategic reconnaissance aircraft made frequent use of air-to-air refueling. Indeed, design considerations of the aircraft made its mission impossible without aerial refueling. Based at Beale AFB in central California, SR-71s had to be forward-deployed to Europe and Japan prior to flying actual reconnaissance missions. These trans-Pacific and trans-Atlantic flights during deployment were impossible without aerial refueling. The SR-71's designers traded takeoff performance for better high-speed, high-altitude performance, necessitating takeoff with less-than-full fuel tanks from even the longest runways. Once airborne, the Blackbird would accelerate to supersonic speed using afterburners to facilitate structural heating and expansion. The magnitude of temperature changes experienced by the SR-71, from parked to its maximum speed, resulted in significant expansion of its structural parts in cruise flight. To allow for the expansion, the Blackbird's parts had to fit loosely when cold, so loosely, in fact, that the Blackbird constantly leaked fuel before heating expanded the airframe enough to seal its fuel tanks. Following the supersonic dash the SR-71 would then rendezvous with a tanker to fill its now nearly empty tanks before proceeding on its mission. This was referred to as the LTTR (for "Launch To Tanker Rendezvous") profile. LTTR had the added advantage of providing an operational test of the Blackbird's refueling capability within minutes after takeoff, enabling a Return-To-Launch-Site abort capability if necessary. At its most efficient altitude and speed, the Blackbird was capable of flying for many hours without refueling. The SR-71 used a special fuel, JP-7, with a very high flash point to withstand the extreme skin temperatures generated during Mach 3+ cruise flight. 
While JP-7 could be used by other aircraft, its burn characteristics posed problems in certain situations (such as high-altitude, emergency engine starts) that made it less than optimal for aircraft other than the SR-71.
Normally, all the fuel aboard a tanker aircraft may be either offloaded, or burned by the tanker as necessary. To make this possible, the KC-135 fuel system incorporated gravity draining and pumps to allow moving fuel from tank to tank depending on mission needs. Mixing JP-7 with JP-4 or Jet A, however, rendered it unsuitable for use by the SR-71, so the Air Force commissioned a specially modified KC-135 variant, the KC-135Q, which included changes to the fuel system and operating procedures preventing inadvertent inflight mixing of fuel intended for offload with fuel intended for use by the tanker. SR-71 aircraft were refueled exclusively by KC-135Q tankers.
Tactical
Tankers are considered "force multipliers", because they convey considerable tactical advantages. Primarily, aerial refueling adds to the combat radius of attack, fighter, and bomber aircraft, and allows patrol aircraft to remain airborne longer, thereby reducing the number of aircraft necessary to accomplish a given mission. Aerial refueling can also mitigate basing issues that might otherwise place limitations on combat payload. Combat aircraft operating from airfields with shorter runways must limit their takeoff weight, which could mean a choice between range (fuel) and combat payload (munitions). Aerial refueling, however, eliminates many of these basing difficulties because a combat aircraft can take off with a full combat payload and refuel immediately.
Operational history
Cold War
Even as the first practical methods for aerial refueling were being developed, military planners had already envisioned what missions could be greatly enhanced by using such techniques. In the emerging Cold War climate of the late 1940s, the ability for bombers to perform increasingly long distance missions would enable targets to be struck even from air bases on a different continent. Thus, it became commonplace for nuclear-armed strategic bombers to be equipped with aerial refueling apparatus and for it to be used to facilitate long distance patrols.
During the late 1950s, aerial refueling had become so prevalent amongst the bombers operated by the US Air Force's Strategic Air Command that many, such as the Convair B-58 Hustler, would operate largely or entirely out of bases in the continental United States while maintaining strategic reach. This practice was promoted to address security concerns as well as diplomatic objections from some overseas states that did not want nuclear weapons kept on their soil. In one early demonstration of the Boeing B-52 Stratofortress's global reach, performed between 16 and 18 January 1957, three B-52Bs made a non-stop flight around the world during Operation Power Flite, during which 24,325 miles (21,145 nmi, 39,165 km) was covered in 45 hours 19 minutes (an average of 536.8 mph) with multiple in-flight refuelings being performed from KC-97s.
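The Operation Power Flite average-speed figure quoted above follows directly from the distance and elapsed time, and can be checked with a line of arithmetic:

```python
# Check the Operation Power Flite figures: 24,325 statute miles
# covered in 45 hours 19 minutes.
distance_mi = 24_325
elapsed_hours = 45 + 19 / 60          # 45 h 19 min as decimal hours
avg_speed_mph = distance_mi / elapsed_hours
print(round(avg_speed_mph, 1))        # 536.8
```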
While development of the Avro Vulcan strategic bomber was underway, British officials recognised that its operational flexibility could be improved by the provision of in-flight refueling equipment. Accordingly, from the 16th aircraft to be completed onwards, the Vulcan was furnished with in-flight refueling receiving equipment. While continuous airborne patrols were flown by the RAF for a time, these were deemed to be untenable, and the refueling mechanisms across the Vulcan fleet largely fell into disuse during the 1960s. When the RAF chose to optimise its bomber fleet away from high-altitude flight and towards low-level penetration missions, bombers such as the Handley Page Victor were fitted with aerial refueling probes and additional fuel tanks to counter the decreased range from the shift in flight profile.
During the mid-1950s, to deliver France's independent nuclear deterrent, work commenced on what would become the Dassault Mirage IV supersonic bomber. The dimensions of this bomber were greatly determined by the viability of aerial refueling, with work on an enlarged variant of the Mirage IV ultimately being aborted in favour of a greater reliance upon aerial tanker aircraft instead. In order to refuel the Mirage IVA fleet, France purchased 14 (12 plus 2 spares) US Boeing C-135F tankers. Mirage IVAs also often operated in pairs, with one aircraft carrying a weapon and the other carrying fuel tanks and a buddy refueling pack, allowing it to refuel its partner en route to the target. While able to strike at numerous targets inside of the Soviet Union, the inability of the Mirage IV to return from some missions had been a point of controversy during the aircraft's design phase.
Korean War
On 6 July 1951, the first combat air refueling of fighter-type aircraft took place over Korea. Three RF-80As launched from Taegu with the modified tip-tanks and rendezvoused with a tanker offshore of Wonsan, North Korea. Through in-flight refueling, the RF-80s effectively doubled their range, which enabled them to photograph valuable targets in North Korea.
Vietnam War
During the Vietnam War, it was common for USAF fighter-bombers flying from Thailand to North Vietnam to refuel from KC-135s en route to their target. Besides extending their range, this enabled the F-105s and F-4 Phantoms to carry more bombs and rockets. Tankers were also available for refueling on the way back if necessary. In addition to ferrying aircraft across the Pacific Ocean, aerial refueling made it possible for battle-damaged fighters, with heavily leaking fuel tanks, to hook up to the tankers and let the tanker feed its engine(s) until the point where they could glide to the base and land. This saved numerous aircraft.
The US Navy frequently used carrier-based aerial tankers like the KA-3 Skywarrior to refuel Navy and Marine aircraft such as the F-4, A-4 Skyhawk, A-6 Intruder, and A-7 Corsair II. This was particularly useful when a pilot returning from an airstrike was having difficulty landing and was running low on jet fuel. This gave them fuel for more attempts at landing for a successful "trap" on an aircraft carrier. The KA-3 could also refuel fighters on extended Combat Air Patrol. USMC jets based in South Vietnam and Thailand also used USMC KC-130 Hercules transports for air-to-air refueling on missions.
During late August 1970, a pair of HH-53C helicopters performed the first Trans-Pacific flight by a helicopter, flying from Eglin AFB in Florida to Danang in South Vietnam. In addition to making multiple en route stops to refuel on the ground, aerial refueling was also used in this display of the type's long-range capabilities. The flight proved to be roughly four times faster than the traditional dispatching of rotorcraft to the theatre by ship.
Middle East
During the 1980s Iran–Iraq War, the Iranian Air Force maintained at least one KC 707-3J9C aerial tanker, which the Islamic Republic had inherited from the Shah's government. This was used most effectively on 4 April 1981, refueling eight IRIAF F-4 Phantoms on long-range sorties into Iraq to bomb the H-3 Al Walid airfield near the Jordanian border, destroying 27–50 Iraqi fighter jets and bombers. However, the Iranian Air Force was forced to cancel its 180-day air offensive and attempts to control Iranian airspace due to unsustainable rates of attrition.
The Israeli Air Force has a fleet of Boeing 707s equipped with a boom refueling system similar to the KC-135's; the system has the Israeli name Ram. These tankers are used to refuel and extend the range of fighter-bombers such as the F-15I and F-16I for deterrent and strike missions. They are nearing 60 years old, and Israel does not disclose the number of tankers in its fleet. In 1985, Israeli F-15s used heavily modified Boeing 707 aircraft to provide aerial refueling over the Mediterranean Sea in order to extend their range for Operation Wooden Leg, an air raid on the headquarters of the Palestine Liberation Organization (PLO) near Tunis, Tunisia, that necessitated a 2,000 km flight. As of 2021 Israel has ordered four of a planned eight Boeing KC-46 Pegasus boom refueling tankers and has requested that the first two aircraft be fast-tracked for delivery in 2022 rather than the scheduled 2023. The Jerusalem Post reports that Israeli commanders have made this request to enhance strategic deterrence against Iran; the same article reports that the US, whose air force is also taking its first deliveries of the aircraft type, has refused to move forward the deliveries while supporting Israel's deterrence. The Jpost editor writes: "The US State Department approved the possible sale of up to eight KC-46 tanker aircraft and related equipment to Israel for an estimated cost of $2.4 billion last March(i.e. 5/2020), marking the first time that Washington has allowed Jerusalem to buy new tankers."
Falklands War
During the Falklands War, aerial refueling played a vital role in all of the successful Argentine attacks against the Royal Navy. The Argentine Air Force had only two KC-130H Hercules available, and they were used to refuel both Air Force and Navy A-4 Skyhawks and Navy Super Etendards in their Exocet strikes. The Hercules on several occasions approached the islands (where the Sea Harriers were on patrol) to search for and guide the A-4s on their return flights. On one of those flights (callsign Jaguar) one of the KC-130s went to rescue a damaged A-4 and delivered of fuel while carrying it to its airfield at San Julian. However, the Mirage IIIs' and Daggers' lack of air refueling capability prevented them from achieving better results. The Mirages were unable to reach the islands with a strike payload, and the Daggers could do so only for a five-minute strike flight.
On the British side, air refueling was carried out by the Handley Page Victor K.2 and, after the Argentine surrender, by modified C-130 Hercules tankers. These aircraft aided deployments from the UK to the Ascension Island staging post in the Atlantic and further deployments south of bomber, transport and maritime patrol aircraft. The most famous refueling missions were the 8,000 nmi (15,000 km) "Operation Black Buck" sorties which used 14 Victor tankers to allow an Avro Vulcan bomber (with a flying reserve bomber) to attack the Argentine-captured airfield at Port Stanley on the Falkland Islands. With all the aircraft flying from Ascension, the tankers themselves needed refueling. The raids were the longest-range bombing raids in history until surpassed by the Boeing B-52s flying from the States to bomb Iraq in the 1991 Gulf War and later B-2 flights.
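The Black Buck distance quoted above can be sanity-checked with the exact definition of the nautical mile (1 nmi = 1.852 km); the article's 15,000 km is the kilometre figure rounded to the nearest thousand:

```python
# Unit check for the Black Buck sortie distance: 8,000 nautical miles,
# using the exact definition 1 nmi = 1.852 km.
distance_nmi = 8_000
distance_km = distance_nmi * 1.852
print(distance_km)              # 14816.0
print(round(distance_km, -3))   # 15000.0, matching the rounded figure in the text
```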
Gulf War
During the time of Operation Desert Shield, the military buildup to the Persian Gulf War, US Air Force Boeing KC-135s & McDonnell Douglas KC-10As, and USMC KC-130 Hercules aircraft were deployed to forward air bases in England, Diego Garcia, and Saudi Arabia. Aircraft stationed in Saudi Arabia normally maintained an orbit in the Saudi–Iraqi neutral zone, informally known as "Frisbee", and refueled coalition aircraft whenever necessary. Two side by side tracks over central Saudi Arabia called "Prune" and "Raisin" featured 2–4 basket equipped KC-135 tankers each and were used by Navy aircraft from the Red Sea Battle Force. Large Navy strike groups from the Red Sea would send A-6 tankers to the Prune and Raisin tracks ahead of the strike aircraft arriving to top off and take up station to the right of the Air Force tankers thereby providing an additional tanking point. RAF Handley Page Victor and Vickers VC10 tankers were also used to refuel British and coalition aircraft and were popular with the US Navy for their docile basket behavior and having three point refueling stations. An additional track was maintained close to the northwest border for the E-3 AWACS aircraft and any Navy aircraft needing emergency fuel. These 24-hour air-refueling zones enabled the intense air campaign during Desert Storm. An additional 24/7 tanker presence was maintained over the Red Sea itself to refuel Navy F-14 Tomcats maintaining Combat Air Patrol tracks. During the conflict's final week, KC-10s moved inside Iraq to support barrier CAP missions set up to block Iraqi fighters from escaping to Iran.
On 16–17 January 1991, the first combat sortie of Operation Desert Storm, and the longest combat sortie in history at that time, was launched from Barksdale AFB, Louisiana. Seven B-52Gs flew a thirty-five-hour mission to the region and back to launch 35 Boeing Air Launched Cruise Missiles (ALCMs) with the surprise use of conventional warheads. This attack, which successfully destroyed 85–95 percent of intended targets, would have been impossible without the support of refueling tankers.
An extremely useful tanker in Desert Storm was the USAF's KC-10A Extender. Besides being larger than the other tankers deployed, the KC-10A is equipped with both the USAF "boom" refueling system and the "hose-and-drogue" system, enabling it to refuel not only USAF aircraft but also USMC and US Navy jets that use the "probe-and-drogue" system, as well as allied aircraft, such as those from the UK and Saudi Arabia. KC-135s may be equipped with a drogue depending on the mission profile. With a full jet fuel load, the KC-10A is capable of flying from a base on the American east coast nonstop to Europe, transferring a considerable amount of fuel to other aircraft, and returning to its home base without landing anywhere else.
On 24 January 1991, the Iraqi Air Force launched the Attack on Ras Tanura, an attempt to bomb the Ras Tanura oil facility in Saudi Arabia. On their way to the target, the Iraqi attack aircraft were refueled by tanker at an altitude of 100 meters. The attack ultimately failed, with two aircraft turning back and the remaining two shot down.
Centennial Contact
On 27 June 2023 the United States Air Force, including the Air National Guard, commemorated the 100-year anniversary of the first aerial refueling by holding "Operation Centennial Contact." Aircraft from various bases conducted aerial refueling exercises across the United States, as well as conducting flyovers in 50 states. 152 aircraft were slated to participate in the operation, with 82 tanker aircraft providing refueling support to 70 other participating aircraft.
The participating tankers included 49 Boeing KC-135 Stratotankers, 29 Boeing KC-46 Pegasus tankers, and four McDonnell Douglas KC-10 Extenders. Aircraft receiving fuel included 21 Lockheed C-130 Hercules; 13 Boeing C-17 Globemaster IIIs; ten McDonnell Douglas F-15 Eagles; eight Lockheed Martin F-22 Raptors; six Fairchild Republic A-10 Thunderbolt IIs; five General Dynamics F-16 Fighting Falcons; four Lockheed Martin F-35 Lightning IIs; two Boeing B-52 Stratofortresses; and one Lockheed C-5 Galaxy.
Helicopters
Helicopter in-flight refueling (HIFR) is a variation of aerial refueling in which a naval helicopter approaches a warship (not necessarily one suited for landing operations) and receives fuel through the cabin while hovering. Alternatively, some helicopters, like the HH-60 Pave Hawk, are equipped with a probe extending out the front and can be refueled from a drogue-equipped tanker aircraft in a similar manner to fixed-wing aircraft, by matching a high forward speed for a helicopter to a slow speed for the fixed-wing tanker.
Longest crewed flight record
A mission-modified Cessna 172 Skyhawk with a crew of two set the world record for the longest continuous crewed flight without landing, at 64 days, 22 hours, 19 minutes, and five seconds, in 1958. A Ford truck was outfitted with a fuel pump, tank, and other paraphernalia required to support the aircraft in flight. The publicity flight for a Las Vegas-area hotel ended when the aircraft's performance had degraded to the point where the Cessna had difficulty climbing away from the refueling vehicle.
Developments
Commercial tankers are occasionally used by military forces. The Omega Aerial Refueling Services company and Metrea Strategic Mobility are contracted by the US Navy. In June 2023, Metrea completed the first commercial aerial refueling of US Air Force aircraft.
Autonomous (hands-off) refueling using probe-and-drogue systems was investigated by NASA under the KQ-X program, potentially for use by unmanned aerial vehicles.
Operators
Algerian Air Force
Argentine Air Force
Royal Australian Air Force
Brazilian Air Force
Royal Canadian Air Force
Chilean Air Force
People's Liberation Army Air Force
Colombian Air Force
Egyptian Air Force
French Air Force
French Navy
German Air Force
Indian Air Force
Indonesian Air Force
Islamic Republic of Iran Air Force
Israeli Air Force
Italian Air Force
Japan Air Self-Defense Force
Republic of Korea Air Force
Kuwait Air Force
Royal Malaysian Air Force
Royal Moroccan Air Force
Royal Netherlands Air Force
Pakistan Air Force
Portuguese Air Force
Russian Air Force
Republic of Singapore Air Force
Spanish Air and Space Force
Swedish Air Force
Turkish Air Force
United Arab Emirates Air Force
Royal Air Force
Royal Saudi Air Force
United States Air Force
United States Marine Corps
United States Navy
Venezuelan Air Force
Deinotherium
Deinotherium is an extinct genus of large, elephant-like proboscideans that lived from about the middle-Miocene until the early Pleistocene. Although its appearance is reminiscent of modern elephants, Deinotherium possessed a notably more flexible neck, with limbs adapted to a more cursorial lifestyle, as well as tusks which grew down and curved back from the mandible, as opposed to the forward-growing maxillary tusks of extant elephants. Deinotherium was a widespread genus, ranging from East Africa, north to Southern Europe, and east to the Indian subcontinent. They were primarily browsing animals, with a diet largely consisting of leaves. The genus most likely went extinct due to environmental changes, such as forested areas gradually being replaced by open grasslands, during the latter half of the Neogene. Deinotherium thrived the longest in Africa, where they survived until the end of the Early Pleistocene, around 1 million years ago.
History and naming
Deinotherium has a long history, possibly dating back as early as the 17th century, when a French surgeon named Matsorier found the bones of large animals in an area known as the "field of giants" near Lyon. Matsorier is said to have exhibited these bones across France and Germany as the supposed bones of a French monarch, until he was exposed and the bones were handed over to the French National Museum of Natural History. In 1775, researchers recognized the bones as belonging to an animal "similar to a mammoth", and during the late 18th and early 19th centuries Georges Cuvier hypothesized that they actually belonged to a large tapir with upwards-curving tusks, which he named Tapir gigantesque. Another early hypothesis suggested that Deinotherium was a sirenian that used its tusks to anchor itself to the sea floor while sleeping.
The genus Deinotherium was coined in 1829 by Johann Jakob von Kaup to describe a fossil skull and mandible discovered in Germany. The type species, D. giganteum, was at the time thought to be an evolutionary link between sloths and mastodonts. Further remains were discovered and named, including many that would later come to be considered part of the genus Prodeinotherium. These additional remains also helped solidify Deinotherium's position within Proboscidea, and finds in India described as D. indicum extended the range of the genus outside of Europe. Fossils of an exceptionally large specimen found near Mânzați, Romania between the late 19th and early 20th centuries were described as D. gigantissimum. In Bulgaria, Deinotherium remains have been found from 1897 onward, with one particular fossil of an almost complete animal found in 1965. These remains were officially described in December 2006 as D. thraceiensis, making it the most recently named species, although later studies have synonymized it with the other European species.
The name Deinotherium is derived from the Ancient Greek deinos (δεινός), meaning "terrible", and therion (θηρίον), meaning "beast". Some authors have on occasion referred to Deinotherium as Dinotherium, following latinization of the first element of the name. Although the pronunciation remains unchanged, Deinotherium remains the valid spelling, as it was coined first.
Description
Deinotherium was a large-bodied proboscidean displaying a continued increase in size across successive species. Two adult males of D. giganteum were around tall at the shoulder and weighed . This is similar to adult males of D. proavum, one of which weighed and was tall at the shoulder. The average male and female D. proavum has been estimated to have had a shoulder height of and a weight of . However, both these species are smaller than a 45-year-old male of D. "thraceiensis", at tall at the shoulder and . The most recent species, D. bozasi, was around tall at the shoulder and weighed . The general anatomy of Deinotherium is similar to that of modern elephants, with pillar-like limbs, although these are proportionally longer and more slender than those of other proboscideans. The bones of the toes are longer and less robust than in elephants, and the neck likewise differs notably in that it is relatively longer, though still quite short compared to that of tall modern browsers like giraffes.
The permanent tooth formula of D. giganteum was (deciduous ), with vertical cheek tooth replacement. Two sets of bilophodont and trilophodont teeth were present. The molars and rear premolars were vertical shearing teeth, and suggest that deinotheres became an independent evolutionary branch very early on; the other premolars were used for crushing. The cranium was short, low, and flattened on the top, in contrast to more advanced proboscideans, which have a higher and more domed forehead, with very large, elevated occipital condyles. The largest skulls of Deinotherium reached a length of . The nasal opening was retracted and large, indicating a large trunk. The rostrum was long and the rostral fossa broad. The mandibular symphyses (the lower jaw-bone) were very long and curved downward, which, with the backward-curved tusks, is a distinguishing feature of the group.
These tusks are without doubt the most immediately visible feature of Deinotherium. Unlike in modern proboscideans, which possess tusks that grow from the upper incisors, the tusks of Deinotherium grow from the lower incisors, with upper incisors and upper and lower canines missing entirely. The curvature is initially formed by the mandible itself, with the teeth themselves erupting at only the halfway point of the curve. The degree to which the tusks follow the direction predetermined by the mandible varies between specimens, with some tusks following the curve and pointing backwards, forming an almost semicircular shape, while in other specimens the tusks continue down almost vertically. The tusks have a roughly oval cross-section and could reach a length of .
Although the presence of an elephant-like proboscis or trunk in Deinotherium is evident thanks to the size and shape of the external nares, the exact shape and size of this trunk has long been debated. Historic depictions commonly portray it as very elephantine, with a long trunk and tusks breaking through the skin below an elephantine lower lip. In the early 2000s, Markov and colleagues published papers on the facial soft tissue of Deinotherium contesting these ideas, instead suggesting an alternative soft tissue reconstruction. In the first of these publications the authors argue that, due to the origin of these animals' tusks, the lower lip should be situated beneath them as they evolved their classic downturned appearance. They further suggest that, while a trunk would be present, it would likely not resemble that of modern elephants and would instead be more robust and muscular, which they reason is evidenced by the lack of a proper insertion surface. Although later research concurs that the trunk or proboscis of Deinotherium was likely notably different from those of modern proboscideans, the idea of a short tapir-like trunk has been questioned. In particular, it is pointed out that the tall stature and still relatively short neck of Deinotherium would make it very difficult for the animal to drink without assuming a more complex posture. Thus it is suggested that the trunk must have been at least long enough for the animal to drink effectively.
Species
Throughout the long history of deinotheriid research, 31 species have been described and assigned to the family, many on the basis of poorly sampled material, especially teeth of varying size. The number of species recognized varies between researchers, but the three species most commonly considered valid are listed below.
D. bozasi
Known from East Africa, Deinotherium bozasi was the last known species of Deinotherium, surviving in the Kanjera Formation, Kenya, until the early Pleistocene, roughly 1 million years ago. It is characterized by a narrower rostral trough, a smaller but higher nasal aperture, a higher and narrower cranium, and a shorter mandibular symphysis than the other two species. In a 2013 publication, Martin Pickford notes that D. bozasi has mandibles anatomically similar to those of D. proavum; however, most specimens are smaller than those of the European species. To explain this, two hypotheses have been suggested: one that the two share a common ancestor, and the other that D. bozasi may be an example of Bergmann's rule, under which animals at lower latitudes are typically smaller than relatives at higher latitudes. However, Markov and colleagues suggest that the similar mandibular anatomy may be a case of parallel evolution between late European species and D. bozasi in response to aridification and an increased need for effective mastication.
D. giganteum
The type species, D. giganteum, was found in Europe from the Middle Miocene to the Early Pliocene. However, the exact timing of the extinction of D. giganteum in Europe is unknown. The last known occurrences in Central and Western Europe appear to be in MN13 (Messinian to Zanclean), while material from Russia might extend the range of the species to MN15 (Ruscinian). Fossils of D. giganteum have also been found on the island of Crete in the upper Miocene Faneroméni Formation, dating to a time when the island was still connected to the mainland.
D. indicum
The Asian species D. indicum is distinguished by a more robust dentition as well as p4-m3 intravalley tubercles, and was found across the Indian subcontinent (India and Pakistan) during the Middle and Late Miocene. It disappeared from the fossil record about 7 million years ago (Late Miocene). Although it is generally regarded as valid, some researchers argue that it is synonymous with D. proavum and that the latter name would take precedence. Pickford, for instance, argues that fossils from Iran create a geographic link between European populations and the Indian specimens, concluding that they may be one single wide-ranging species.
One hypothesis opposing this three-species model suggests that, rather than being a single consistent species lasting throughout the Miocene, D. giganteum actually represents multiple chronospecies, with the type species only applying to the intermediate form.
Other species that have been described include:
D. levius (Jourdan, 1861)
D. levius is a European species of Deinotherium recovered from sediments dating to the late Astaracian to Aragonian. While it is considered a synonym of D. giganteum by some researchers, others propose that it is a stratigraphically distinct chronospecies and the earliest of the European Deinotherium. According to this hypothesis, D. levius would eventually give rise to D. giganteum by the Vallesian stage of the Miocene, after which the two species continued to coexist until the former's extinction.
D. proavum (Eichwald, 1831)
D. proavum is a large-bodied species of Deinotherium that may be a junior synonym of Deinotherium giganteum. Other research meanwhile proposes that it, alongside D. giganteum and D. levius, is part of a single anagenetic lineage of Deinotherium species. Under this hypothesis, it has been suggested that it evolved from D. giganteum during the late Vallesian to Turolian, with early members of the species still being similar in size to their ancestor before surpassing it later in the species' range. However, the assignment of specimens to D. proavum is largely based on stratigraphy and size, making the differentiation between species difficult, especially with some research suggesting that the two species continued to coexist.
D. gigantissimum
D. gigantissimum from Romania is typically considered to be a larger specimen belonging either to D. giganteum or D. proavum (depending on how many species are recognized by the respective author). The situation is similar for D. thraceiensis from Bulgaria, another notably large deinothere, described in 2006 but usually lumped into other European species by subsequent publications. The state of the Asian species is especially complex, with a multitude of specimens having been described from poor remains. These include D. sindiense (Lydekker, 1880), D. orlovii (Sahni and Tripathi, 1957), D. naricum (Pilgrim, 1908), and D. anguistidens (Koch, 1845), all of which are generally considered dubious by 21st-century publications. Only one other species from Africa was described, D. hopwoodi (Osborn, 1936), based on teeth from the Omo Basin in Ethiopia. However, Osborn's research was published posthumously and the name was predated by D. bozasi, described two years prior.
Another matter that complicates the number of Deinotherium species recognized by science is the state of the genus Prodeinotherium. One prevailing theory is that Prodeinotherium is a distinct genus ancestral to the larger Deinotherium species. Other researchers, however, argue that the anatomical differences, the difference in size in particular, are not enough to properly distinguish the two, which would subsequently render species of Prodeinotherium as Deinotherium instead. This would create the combinations D. bavaricum, D. cuvieri (both European), D. hobleyi (Africa), D. pentapotamiae, and possibly D. sinense (Asia).
Distribution
Deinotherium was a widespread genus, found across vast areas of East Africa, Europe, the Arabian Peninsula, and South to East Asia. In Europe fossils are especially common in the southeast, with up to half of known specimens in the region originating in Bulgaria. Especially significant specimens include those found in Ezerovo, Plovdiv Province (type specimen of D. thraceiensis) and near Varna. Romania has likewise yielded significant remains, with one notably large specimen found by Grigoriu Ștefănescu near Mânzați (type specimen of D. gigantissimum). The fossils of the two now-invalid species are displayed at the National Museum of Natural History, Bulgaria and the Grigore Antipa National Museum of Natural History, Romania, respectively. Multiple specimens have also been found in Greece and even on the island of Crete, indicating that the large animal had traveled there over a potential landbridge. Towards the east, Deinotherium is known from finds in Russia (Rostov-on-Don), Georgia, and Turkey. The range of Deinotherium furthermore extends over the Middle East, with the holotype of D. indicum having been found on the island of Perim (Yemen) in the Red Sea. Fossils are also known from Iran and multiple localities on the Indian subcontinent, such as the Siwalik Hills. The easternmost occurrence of the genus appears to be in the province of Gansu, Northwest China. The western range of Deinotherium spans most of West and Central Europe, including Hungary, the Czech Republic (Františkovy Lázně), Austria (Gratkorn locality), Switzerland (Jura Mountains), France ("field of giants"), Portugal, Spain, and Germany. Some of the earliest and most significant finds in Germany have been made in the Dinotheriensande (Eppelsheim Formation) of the Mainz Basin, named for their great abundance of deinothere remains. The holotype specimen of Deinotherium, described by Kaup in the early 1800s, stems from this part of Europe.
Outside of Eurasia, Deinotherium bozasi is found in East Africa, with specimens known from the Olduvai Gorge in Tanzania, the Omo Basin and Middle Awash of Ethiopia, and multiple localities in Kenya. D. bozasi remains have also been found in the Kenyan Chemoigut Beds around Lake Baringo, as well as the Kubi Algi Formation and Koobi Fora Formation in East Rudolf. An additional tooth is known from Sahabi, Libya, and it is possible that both Deinotherium and Prodeinotherium coexisted in the Kenyan Ngorora Formation.
Evolution
The origin of deinotheriids can be found in the Oligocene of Africa with the relatively small-bodied Chilgatherium. Initially restricted to Africa, the continued northward movement of the African Plate eventually caused the Proboscidean Datum Event, during which proboscideans diversified and spread into Eurasia, among them the ancestral Prodeinotherium, thought to be the direct predecessor of the larger Deinotherium. Generally, Deinotherium displays relatively little change in morphology throughout its evolution, but a steady increase in body size, from 2 meters shoulder height in Prodeinotherium to up to 4 meters in later Deinotherium species, with a mass far exceeding even large African elephants. This rapid increase in body size is interpreted as the product of multiple factors. First, increased size is an effective predator deterrent, especially during the Miocene, when carnivorans had reached great diversity, including hyaenodonts, amphicyonids, and large cats. Second, continued aridification during the Miocene increasingly split up woodlands, with greater distances of open landscape stretching between the food sources of browsers such as Deinotherium. This also accounts for the morphological adaptations seen in the limbs of Deinotherium, better suited for long-distance travel. Furthermore, the appearance of Deinotherium coincided with falling temperatures during the middle Miocene. According to Bergmann's rule, these circumstances favor increased body mass for maintaining heat in cold temperatures. Despite the many key adaptations deinotheres developed for effective foraging, the continued aridification that progressed throughout the Miocene eventually led to the extinction of the group, which failed to survive without readily available food sources matching their diet. Populations in Western Europe were the first to disappear, followed later by those in Eastern Europe.
While European lineages of Deinotherium had gone extinct with the onset of the Pliocene, the genus managed to survive notably longer in its southern range in Africa. The last known Deinotherium remains, assigned to D. bozasi, were found in sediments dating to the Pleistocene, approximately 1 million years ago.
Paleoecology
Several key adaptations suggest that Deinotherium was a folivorous, browsing proboscidean that preferred open woodland habitats and fed on the leaves of the tree canopy. In Asia, D. indicum has been associated with warm, wet, low-energy woodland, and in Portugal deinotheriid remains were found in regions corresponding to moist, tropical to subtropical woodland conditions likened to modern Senegal. A browsing lifestyle is supported by the inclination of the occiput, which gives Deinotherium a slightly more raised head posture, and by their teeth, which strongly resemble those of modern tapirs, animals that predominantly feed on fruits, flowers, bark, and leaves. Their limbs show some notable differences from those of Prodeinotherium, allowing for a more agile mode of locomotion and easier travel across open landscapes in the search of food, which coincides with the widespread breakup of forests and expansion of grasslands during the time Deinotherium lived in Europe. Fossil finds from the Austrian Gratkorn locality and the Mainz Basin in Germany indicate that Deinotherium was not a permanent resident in some areas it inhabited. In Austria it has been suggested that they traversed areas on a regular basis, while in Germany there is evidence for the animal's range shifting with changing climatic conditions, being present during subtropical climate conditions and absent in subboreal conditions.
One of the most enigmatic features of Deinotherium is its downturned tusks and their function. Research conducted on Deinotherium suggests that these tusks were likely not used for digging, nor are they sexually dimorphic, leaving use in feeding as their most likely function. These tusks exhibit patterns of wear, in particular on their medial and caudal sides. In a 2001 paper, Markov and colleagues argue that Deinotherium could have used its tusks to remove branches that would have gotten in the way of feeding, while using the proboscis to transport leaf material into its mouth. From there Deinotherium would have used a powerful tongue (inferred from a notable trough at the front of the symphysis) to further manipulate its food. Different tusk anatomy in young individuals would suggest altered feeding strategies in juveniles.
Gangrene
Gangrene is a type of tissue death caused by a lack of blood supply. Symptoms may include a change in skin color to red or black, numbness, swelling, pain, skin breakdown, and coolness. The feet and hands are most commonly affected. If the gangrene is caused by an infectious agent, it may present with a fever or sepsis.
Risk factors include diabetes, peripheral arterial disease, smoking, major trauma, alcoholism, HIV/AIDS, frostbite, influenza, dengue fever, malaria, chickenpox, plague, hypernatremia, radiation injuries, meningococcal disease, Group B streptococcal infection and Raynaud's syndrome. It can be classified as dry gangrene, wet gangrene, gas gangrene, internal gangrene, and necrotizing fasciitis. The diagnosis of gangrene is based on symptoms and supported by tests such as medical imaging.
Treatment may involve surgery to remove the dead tissue, antibiotics to treat any infection, and efforts to address the underlying cause. Surgical efforts may include debridement, amputation, or the use of maggot therapy. Efforts to treat the underlying cause may include bypass surgery or angioplasty. In certain cases, hyperbaric oxygen therapy may be useful. How commonly the condition occurs is unknown.
Etymology
The word gangrene derives from the Latin gangraena and the Greek gangraina (γάγγραινα), which means "putrefaction of tissues".
Signs and symptoms
Symptoms may include a change in skin color to red or black, numbness, pain, skin breakdown, and coolness. The feet and hands are most commonly involved.
Causes
Gangrene is caused by a critically insufficient blood supply (e.g., peripheral vascular disease) or infection. It is associated with diabetes and long-term tobacco smoking.
Dry gangrene
Dry gangrene is a form of coagulative necrosis that develops in ischemic tissue, where the blood supply is inadequate to keep the tissue viable. It is not a disease itself, but a symptom of other diseases. The term dry is used only when referring to a limb or to the gut (in other locations, this same type of necrosis is called an infarction, such as myocardial infarction). Dry gangrene is often due to peripheral artery disease, but can be due to acute limb ischemia. As a result, people with atherosclerosis, high cholesterol, or diabetes, and smokers, commonly develop dry gangrene. The limited oxygen in the ischemic limb limits putrefaction, and bacteria fail to survive. The affected part is dry, shrunken, and dark reddish-black. The line of separation usually brings about complete separation, with eventual falling off of the gangrenous tissue if it is not removed surgically, a process called autoamputation.
Dry gangrene is the result of chronic ischemia without infection. If ischemia is detected early, when ischemic wounds rather than gangrene are present, the process can be treated by revascularization (via vascular bypass or angioplasty). However, once gangrene has developed, the affected tissues are not salvageable. Because dry gangrene is not accompanied by infection, it is not as emergent as gas gangrene or wet gangrene, both of which have a risk of sepsis. Over time, dry gangrene may develop into wet gangrene if an infection develops in the dead tissues.
Diabetes mellitus is a risk factor for peripheral vascular disease, thus for dry gangrene, but also a risk factor for wet gangrene, particularly in patients with poorly controlled blood sugar levels, as elevated serum glucose creates a favorable environment for bacterial infection.
Wet gangrene
Wet, or infected, gangrene is characterized by thriving bacteria and has a poor prognosis (compared to dry gangrene) due to sepsis resulting from the free communication between infected fluid and circulatory fluid. In wet gangrene, the tissue is infected by saprogenic microorganisms (Clostridium perfringens or Bacillus fusiformis, for example), which cause tissue to swell and emit a foul odor. Wet gangrene usually develops rapidly due to blockage of venous (mainly) or arterial blood flow. The affected part is saturated with stagnant blood, which promotes the rapid growth of bacteria. The toxic products formed by bacteria are absorbed, causing systemic manifestation of sepsis and finally death. The affected part is edematous, soft, putrid, rotten, and dark.
Because of the high mortality associated with infected gangrene (about 80% without treatment and 20% with treatment), an emergency salvage amputation, such as a guillotine amputation, is often needed to limit systemic effects of the infection. Such an amputation can be converted to a formal amputation, such as a below- or above-knee amputation.
Gas gangrene
Gas gangrene is a bacterial infection that produces gas within tissues. It can be caused by Clostridium, most commonly alpha toxin-producing C. perfringens, or various nonclostridial species. Infection spreads rapidly as the gases produced by the bacteria expand and infiltrate healthy tissue in the vicinity. Because of its ability to quickly spread to surrounding tissues, gas gangrene should be treated as a medical emergency.
Gas gangrene is caused by bacterial exotoxin-producing clostridial species, which are mostly found in soil, and other anaerobes such as Bacteroides and anaerobic streptococci. These environmental bacteria may enter the muscle through a wound and subsequently proliferate in necrotic tissue and secrete powerful toxins that destroy nearby tissue, generating gas at the same time. A gas composition of 5.9% hydrogen, 3.4% carbon dioxide, 74.5% nitrogen, and 16.1% oxygen was reported in one clinical case.
Gas gangrene can cause necrosis, gas production, and sepsis. Progression to toxemia and shock is often very rapid.
Other types
Necrotizing fasciitis is a rare infection that spreads deep into the body along tissue planes. It is categorized into four subtypes, with the first two being the most common. Type 1 requires an infection with an anaerobe and a species in the Enterobacteriaceae family, while type 2 is characterized by infection with Streptococcus pyogenes, a Gram-positive coccus, and thus is also known as hemolytic streptococcal gangrene.
Noma is a gangrene of the face most often found in Africa, Southeast Asia and South America.
Fournier gangrene is a type of necrotizing fasciitis that usually affects the genitals and groin.
Venous limb gangrene may be caused by heparin-induced thrombocytopenia and thrombosis.
Severe mesenteric ischemia may result in gangrene of the small intestine.
Severe ischemic colitis may result in gangrene of the large intestine.
Treatment
Treatment varies based on the severity and type of gangrene.
Lifestyle
Exercises such as walking and massage therapy may be tried. Discontinuing cigarette smoking is recommended to prevent further damage to blood vessel walls and to maintain blood flow. Exercise training programs of increasing intensity are encouraged for patients who experience claudication, to promote blood flow to the lower extremities. Additional supportive measures to prevent ischemic gangrene in relation to critical limb ischemia include, but are not limited to, attention to foot care, such as the use of properly fitting and protective shoes, and avoidance of tight-fitting clothing that decreases blood flow.
Medication
Medications may include pain management, medications that promote circulation, and antibiotics. Since gangrene is associated with periodic pain caused by too little blood flow, pain management is important so patients can continue doing exercises that promote circulation. Pain management medications can include opioids and opioid-like analgesics. Since gangrene is a result of ischemia, management of the circulatory system is important. These medications can include antiplatelet drugs, anticoagulants, and fibrinolytics. Prevention and management of ischemic gangrene also include maintaining normal blood pressure through the use of antihypertensive medications such as beta-blockers, angiotensin-converting enzyme inhibitors, and angiotensin receptor blockers. To prevent further blockage of blood vessels, treatment of comorbid hypercholesterolemia with lipid-lowering medications such as statins is recommended. Pentoxifylline is a medication described as improving blood flow and tissue oxygenation; although its overall efficacy is uncertain, it has been shown to increase exercise duration.
As infection is often associated with gangrene, antibiotics are often a critical component of its treatment. The life-threatening nature of gangrene requires treatment with intravenous antibiotics in an inpatient setting. Antibiotics alone are not effective because they may not penetrate infected tissues sufficiently. Antibiotic treatment of gas gangrene is typically penicillin and clindamycin for about two weeks, except for C. tertium infections, which are treated with intravenous vancomycin or metronidazole. Notably, clindamycin resistance has been reported in C. perfringens infections in different parts of the world. To provide the most effective treatment plan, microbiological susceptibility testing can give the clinician additional information about which antibiotics would work best.
Surgery
Surgical removal of all dead tissue is the mainstay of treatment for gangrene. Surgical inspection and blood cultures (to rule out bacteremia) are indicated in any patient suspected of having gas gangrene, regardless of cause. Often, gangrene is associated with underlying infection, so the gangrenous tissue must be debrided to hinder the spread of the associated infection. Delayed closure of wounds after debridement helps ensure that the site is clear of any infection. The extent of surgical debridement needed depends on the extent of the gangrene, and may be limited to the removal of a finger, toe, or ear, but in severe cases may involve a limb amputation. Ischemic disease of the legs is the most common reason for amputations. In about a quarter of these cases, the other side requires amputation in the next three years.
Dead tissue alone does not require debridement, and in some cases, such as dry gangrene, the affected part falls off (autoamputates), making surgical removal unnecessary. Waiting for autoamputation, however, may cause health complications as well as decreased quality of life.
After the gangrene is treated with debridement and antibiotics, the underlying cause can be treated. In the case of gangrene due to critical limb ischemia, revascularization can be performed to treat the underlying peripheral artery disease. To prevent complications of gangrene from critical limb ischemia, revascularization procedures can be offered to severely symptomatic patients who are refractory to medications. Angioplasty should be considered if severe blockage in the lower leg vessels (tibial and peroneal arteries) leads to gangrene. Additional revascularization procedures performed endovascularly include stenting and atherectomy. Surgical interventions available include different types of femoral bypass (such as femoral-popliteal, femoral-femoral, axillo-femoral, etc.) or aortoiliac endarterectomy, both aimed at restoring blood flow to the tissues of the lower extremities.
Other
Hyperbaric oxygen therapy is used to treat gas gangrene. It increases pressure and oxygen content, allowing blood to carry more oxygen, which inhibits the growth and reproduction of anaerobic organisms.
Regenerative medical treatments and stem-cell therapies have shown success in improving the prognosis of gangrene and ulcers.
Prognosis
Gas gangrene
A good prognosis in such a rapidly progressive disease requires timely diagnosis, prompt surgical debridement, and administration of antibiotics. Gas gangrene involving the trunk or visceral organs is typically harder to treat than disease of the extremities, because its location makes debridement difficult. If left untreated, gas gangrene can progress to bacteremia and death. Mortality rates are particularly high for patients who present with shock and for those with spontaneous gas gangrene caused by C. septicum.
History
As early as 1028, flies and maggots were commonly used to treat chronic wounds or ulcers to prevent or arrest necrotic spread, as some species of maggots consume only dead flesh, leaving nearby living tissue unaffected. This practice largely died out after the introduction of antibiotics to the range of treatments for wounds. In recent times, however, maggot therapy has regained some credibility and is sometimes employed with great efficacy in cases of chronic tissue necrosis.
The French Baroque composer Jean-Baptiste Lully contracted gangrene in January 1687 when, while conducting a performance of his Te Deum, he stabbed his own toe with his pointed staff (which was used as a baton). The disease spread to his leg, but the composer refused to have his toe amputated, which eventually led to his death in March of that year.
French King Louis XIV died of gangrene in his leg on 1 September 1715, four days prior to his 77th birthday.
Sebald Justinus Brugmans, Professor at Leyden University, from 1795 on Director of the Medical Bureau of the Batavian Republic, and inspector-general of the French Imperial Military Health-Service in 1811, became a leading expert in the fight against hospital-gangrene and its prevention. He wrote a treatise on gangrene in 1814 in which he meticulously analyzed and explained the causes of this dreadful disease, which he was convinced was contagious. He completed his entry with a thorough evaluation of all possible and well experienced sanitary regulations. His work was very well received and was instrumental in convincing most later authors that gangrene was a contagious disease.
John M. Trombold wrote: "Middleton Goldsmith, a surgeon in the Union Army during the American Civil War, meticulously studied hospital gangrene and developed a revolutionary treatment regimen. The cumulative Civil War hospital gangrene mortality was 45%. Goldsmith's method, which he applied to over 330 cases, yielded a mortality under 3%." Goldsmith advocated the use of debridement and topical and injected bromide solutions on infected wounds to reduce the incidence and virulence of "poisoned miasma". Copies of his book were issued to Union surgeons to encourage the use of his methods.
Snowshoe hare

The snowshoe hare (Lepus americanus), also called the varying hare or snowshoe rabbit, is a species of hare found in North America. It has the name "snowshoe" because of the large size of its hind feet. The animal's feet prevent it from sinking into the snow when it hops and walks. Its feet also have fur on the soles to protect it from freezing temperatures.
For camouflage, its fur turns white during the winter and rusty brown during the summer. Its flanks are white year-round. The snowshoe hare is also distinguishable by the black tufts of fur on the edge of its ears. Its ears are shorter than those of most other hares.
In summer, it feeds on plants such as grass, ferns, and leaves; in winter, it eats twigs, the bark from trees, and plants and, similar to the Arctic hare, has been known to occasionally eat dead animals. It can sometimes be seen feeding in small groups. This animal is mainly active at night and does not hibernate. The snowshoe hare may have up to four litters in a year, which average three to eight young. Males compete for females, and females may breed with several males.
A major predator of the snowshoe hare is the Canada lynx. Historical records of animals caught by fur hunters over hundreds of years show the lynx and hare numbers rising and falling in a cycle, which has made the hare known to biology students worldwide as a case study of the relationship between numbers of predators and their prey.
Taxonomy and distribution
Snowshoe hares occur from Newfoundland to Alaska; south in the Sierra Nevada to central California; in the Rocky Mountains to southern Utah and northern New Mexico; and in the Appalachian Mountains to West Virginia. Populations in its southern range, such as in Ohio, Maryland, North Carolina, New Jersey, Tennessee, and Virginia have been extirpated. Locations of subspecies are as follows:
Lepus americanus americanus (Erxleben) – Ontario, Manitoba, Saskatchewan, Alberta, Montana, and North Dakota
L. a. cascadensis (Nelson) – British Columbia and Washington
L. a. columbiensis (Rhoads) – British Columbia, Alberta, and Washington
L. a. dalli (Merriam) – Mackenzie District, British Columbia, Alaska, Yukon
L. a. klamathensis (Merriam) – Oregon and California
L. a. oregonus (Orr) – Oregon
L. a. pallidus (Cowan) – British Columbia
L. a. phaeonotus (J. A. Allen) – Ontario, Manitoba, Saskatchewan, Michigan, Wisconsin, and Minnesota
L. a. pineus (Dalquest) – British Columbia, Idaho, and Washington
L. a. seclusus (Baker and Hankins) – Wyoming
L. a. struthopus (Bangs) – Newfoundland, Nova Scotia, New Brunswick, Prince Edward Island, Quebec, and Maine
L. a. tahoensis (Orr) – California, western Nevada
L. a. virginianus (Harlan) – Ontario, Quebec, Maine, New Hampshire, Vermont, Massachusetts, New York, Pennsylvania
L. a. washingtonii (Baird) – British Columbia, Washington, and Oregon
Description
The snowshoe hare's fur is rusty brown in the spring and summer, and turns a bright white in the winter to blend in with the snow. It has a gray underbelly year-round, and black on the tips and edges of its ears and tail. It has very large hind feet with dense fur on the soles. The snowshoe hare's ears are shorter than those of most other hares.
Snowshoe hares range in length from 413 to 518 mm (16.3 to 20.4 in), of which 39 to 52 mm (1.5 to 2.0 in) are tail. The hind foot, long and broad, measures 117 to 147 mm (4.6 to 5.8 in) in length. The ears are 62 to 70 mm (2.4 to 2.8 in) from notch to tip. Snowshoe hares usually weigh between 1.43 and 1.55 kg (3.15 to 3.42 lb). Males are slightly smaller than females, as is typical for leporids. In the summer, the coat is a grizzled rusty or grayish brown, with a blackish middorsal line, buffy flanks and a white belly. The face and legs are cinnamon brown. The ears are brownish with black tips and white or creamy borders. During the winter, the fur is almost entirely white, except for black eyelids and the blackened tips on the ears. The soles of the feet are densely furred, with stiff hairs (forming the snowshoe) on the hind feet.
Habitat
Snowshoe hares are primarily found in areas with dense plant coverage such as boreal forests, upper montane forests and wetlands, though are occasionally seen in more open areas like agricultural land.
In Utah, snowshoe hares used Gambel oak (Quercus gambelii) in the northern portion of the Gambel oak range. In the Southwest, the southernmost populations of snowshoe hares occur in the Sangre de Cristo Mountains, New Mexico, in subalpine scrub: narrow bands of shrubby and prostrate conifers at and just below timberline that are usually composed of Engelmann spruce, bristlecone pine, limber pine, and juniper.
In Minnesota, snowshoe hares are found in uplands and wetlands. In New England, snowshoe hares favor second-growth forests.
Reproduction and leverets
Like all leporids, snowshoe hares are crepuscular and nocturnal. They are shy and secretive and spend most of the day in shallow depressions, called forms, scraped out under clumps of ferns, brush thickets, and downed piles of timber. They occasionally use the large burrows of mountain beavers (Aplodontia rufa) as forms. Diurnal activity level increases during the breeding season. Juveniles are usually more active and less cautious than adults.
Snowshoe hares are active year-round. The breeding season for hares is stimulated by new vegetation and varies with latitude, location, and yearly events (such as weather conditions and phase of the snowshoe hare population cycle). Breeding generally begins in late December to January and lasts until July or August. In northwestern Oregon, male peak breeding activity (as determined by testes weight) occurs in May and is at the minimum in November. In Ontario, the peak is in May and in Newfoundland, the peak is in June. Female estrus begins in March in Newfoundland, Alberta, and Maine, and in early April in Michigan and Colorado. First litters of the year are born from mid-April to May.
The gestation period is 35 to 40 days; most studies report 37 days as the average length of gestation. Litters average three to five leverets depending on latitude, elevation, and phase of population cycle, ranging from one to seven. Deep snow-pack increases the amount of upper-branch browse available to snowshoe hares in winter, and therefore has a positive relationship with the nutritional status of breeding adults. Litters are usually smaller in the southern sections of their range since there is less snow. Newborns are fully furred, open-eyed, and mobile. They leave the natal form within a short time after birth, often within 24 hours. After leaving the birthplace, siblings stay near each other during the day, gathering once each evening to nurse. Weaning occurs at 25 to 28 days except for the last litter of the season, which may nurse for two months or longer.
Female snowshoe hares can become pregnant anytime after the 35th day of gestation. The second litter can therefore be conceived before the first litter is born (snowshoe hares have twin uteri). Pregnancy rates ranged from 78 to 100% for females during the period of first litter production, 82 to 100% for second litters, and for the periods of third and fourth litters pregnancy rates vary with population cycle. In Newfoundland, the average number of litters per female per year ranged from 2.9 to 3.5, and in Alberta the range was from 2.7 to 3.3. The number of litters per year varies with phase of population cycle (see below). In Alberta the average number of litters per year was almost 3 just after a population peak and 4 just after the population low. Females normally first breed as 1-year-olds. Juvenile breeding is rare and has only been observed in females from the first litter of the year and only in years immediately following a low point in the population cycle.
In Yukon, 30-day survival of radio-tagged leverets was 46%, 15%, and 43% for the first, second, and third litters of the year, respectively. There were no differences in mortality in plots with food added. The main proximate cause of mortality was predation by small mammals, including red squirrels (Tamiasciurus hudsonicus) and Arctic ground squirrels (Spermophilus parryii). Littermates tended to live or die together more often than by chance. Individual survival was negatively related to litter size and positively related to body size at birth. Litter size is negatively correlated with body size at birth.
Population cycles
Northern populations of snowshoe hares undergo cycles that range from seven to 17 years between population peaks. The average time between peaks is approximately 10 years. The period of abundance usually lasts for two to five years, followed by a population decline to lower numbers or local scarcity. Areas of great abundance tend to be scattered. Populations do not peak simultaneously in all areas, although a great deal of synchronicity occurs in northern latitudes. From 1931 to 1948, the cycle was synchronized within one or two years over most of Canada and Alaska, despite differences in predators and food supplies. In central Alberta, low snowshoe hare density occurred in 1965, with 42 to 74 snowshoe hares per 100 acres (40 ha). The population peak occurred in November 1970 with 2,830 to 5,660 snowshoe hares per 100 acres (40 ha). In the southern parts of its range, snowshoe hare populations do not fluctuate radically.
Exclosure experiments in Alberta indicated browsing by snowshoe hares during population peaks has the greatest impact on palatable species, thus further reducing the amount of available foods. In this study, insufficient nutritious young browse was available to sustain the number of snowshoe hares present in the peak years (1971 and 1972) in winter.
The hare's fluctuating numbers are modelled by the Lotka–Volterra equations.
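The Lotka–Volterra model can be sketched numerically. The sketch below is purely illustrative — the parameter values and starting populations are hypothetical, not fitted to the historical fur-trade records — but it reproduces the qualitative boom-and-bust cycling of hare and predator numbers.

```python
# Illustrative Lotka-Volterra predator-prey simulation (hares H, lynx L).
# dH/dt = alpha*H - beta*H*L   (hares grow, are eaten by lynx)
# dL/dt = delta*H*L - gamma*L  (lynx grow by predation, die off otherwise)
# All parameter values are hypothetical, chosen only to show the cycle.

def lotka_volterra(hares, lynx, alpha, beta, delta, gamma, dt, steps):
    """Integrate the system with a fixed-step RK4 scheme; returns trajectories."""
    def deriv(h, l):
        return alpha * h - beta * h * l, delta * h * l - gamma * l

    H, L = [hares], [lynx]
    for _ in range(steps):
        h, l = H[-1], L[-1]
        k1h, k1l = deriv(h, l)
        k2h, k2l = deriv(h + dt / 2 * k1h, l + dt / 2 * k1l)
        k3h, k3l = deriv(h + dt / 2 * k2h, l + dt / 2 * k2l)
        k4h, k4l = deriv(h + dt * k3h, l + dt * k3l)
        H.append(h + dt / 6 * (k1h + 2 * k2h + 2 * k3h + k4h))
        L.append(l + dt / 6 * (k1l + 2 * k2l + 2 * k3l + k4l))
    return H, L

# Equilibrium sits at H* = gamma/delta = 45, L* = alpha/beta = 11; starting
# slightly off-equilibrium produces a closed orbit: repeating population cycles
# in which the predator peak lags the prey peak.
H, L = lotka_volterra(hares=40, lynx=9, alpha=1.1, beta=0.1,
                      delta=0.02, gamma=0.9, dt=0.01, steps=5000)
```

The model's limitation is worth noting: it produces perfectly regular, undamped cycles, whereas real hare populations also respond to food supply and multiple predators, as the surrounding sections describe.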
Preferred habitat
Major variables in habitat quality include average visual obstruction and browse biomass. Snowshoe hares prefer young forests with abundant understories. The presence of cover is the primary determinant of habitat quality, and is more significant than food availability or species composition. Species composition does, however, influence population density; dense softwood understories support greater snowshoe hare density than hardwoods because of cover quality. In Maine, female snowshoe hares were observed to be more common on sites with less cover but more nutritious forage; males tended to be found on sites with heavier cover.
Winter browse availability depends on height of understory brush and winter snow depth; saplings with narrow stem diameters are required for winter browse in heavy snow.
In northern regions, snowshoe hares occupy conifer and mixed forests in all stages of succession, but early successional forests foster peak abundance. Deciduous forests are usually occupied only in early stages of succession. In New England, snowshoe hares preferred second-growth deciduous, coniferous, and mixed woods with dense brushy understories; they appear to prefer shrubby old-field areas, early- to mid-successional burns, shrub-swamps, bogs, and upper montane krumholz vegetation. In Maine, snowshoe hares were more active in clearcut areas than in partially cut or uncut areas. Sapling densities were highest on 12- to 15-year-old plots; these plots were used more than younger stands. In northern Utah, they occupied all the later stages of succession on quaking aspen and spruce-fir, but were not observed in meadows. In Alberta, snowshoe hares use upland shrub-sapling stages of regenerating aspens (either postfire or postharvest). In British Columbia overstocked juvenile lodgepole pine (Pinus contorta) stands formed optimal snowshoe hare habitat.
In western Washington, most unburned, burned, or scarified clearcuts will normally be fully occupied by snowshoe hares within four to five years, as vegetation becomes dense. In older stands (more than 25 years), stem density begins to decline and cover for snowshoe hares decreases. However, in north-central Washington, they may not colonize clearcuts until six or seven years, and it may take 20 to 25 years for their density to reach maximum. Winter snowshoe hare pellet counts were highest in 20-year-old lodgepole pine stands, lower in older lodgepole stands, and lowest in spruce-dominated stands. In western Oregon, snowshoe hares were abundant only in early successional stages, including stable brushfields. In west-central Oregon, an old-growth Douglas-fir forest was clearcut and monitored through 10 years of succession. A few snowshoe hares were noted in adjacent virgin forest plots; they represented widely scattered, sparse populations. One snowshoe hare was observed on the disturbed plot 2.5 years after it had been clearcut and burned; at this stage, ground cover was similar to that of the uncut forest. By 9 years after disturbance, snowshoe hare density had increased markedly.
In western Washington, snowshoe hares routinely used steep slopes where cover was adequate; most studies, however, suggest they tend to prefer gentle slopes. Moonlight increases snowshoe hare vulnerability to predation, particularly in winter. They tend to avoid open areas during bright phases of the moon and during bright periods of a single night. Their activity usually shifts from coniferous understories in winter to hardwood understories in summer.
Vegetative structure plays an important role in the size of snowshoe hare home ranges. Snowshoe hares wander up to 5 miles (8 km) when food is scarce. In Montana home ranges are smaller in brushy woods than in open woods. In Colorado and Utah, the average home range of both sexes was 20 acres (8.1 ha). On the Island of Montreal in Quebec, the average daily range for both sexes was 4 acres (1.6 ha) in old-field mixed woods. In Montana, the home range averaged 25 acres (10 ha) for males and 19 acres (7.6 ha) for females. In Oregon the average snowshoe hare home range was 14.6 acres (5.9 ha).
Cover requirements
Snowshoe hares require dense, brushy, usually coniferous cover; thermal and escape cover are especially important for young hares. Low brush provides hiding, escape, and thermal cover. Heavy cover 10 feet (3 m) above ground provides protection from avian predators, and heavy cover 3.3 feet (1 m) tall provides cover from terrestrial predators. Overwinter survival increases with increased cover. A wide variety of habitat types are used if cover is available. Base visibility in good snowshoe hare habitat ranges from 2% at 16.5 feet (5 m) distance to 0% at 66 feet (20 m). Travel cover is slightly more open, ranging from 14.7% visibility at 16.5 feet (5 m) to 2.6% at 66 feet (20 m). Areas with horizontal vegetation density of 40 to 100% at 50 feet (15 m) are adequate snowshoe hare habitat in Utah.
Food habits
Snowshoe hares eat a variety of plant materials. Forage type varies with season. Succulent green vegetation is consumed when available from spring to fall; after the first frost, buds, twigs, evergreen needles, and bark form the bulk of snowshoe hare diets until spring greenup. Snowshoe hares typically feed at night and follow well-worn forest paths to feed on various plants and trees.
Winter
Snowshoe hares prefer branches, twigs, and small stems up to 0.25 inch (6.3 mm) diameter; larger stems are sometimes used in winter. In Yukon, they normally eat fast-growing birches and willows, and avoid spruce. At high densities, however, the apical shoots of small spruce are eaten. The snowshoe hare winter diet is dominated by bog birch (Betula glandulosa), which is preferred but not always available. Greyleaf willow (Salix glauca) is eaten most often when bog birch is not available. Buffaloberry (Shepherdia canadensis) is the fourth most common diet item. White spruce (Picea glauca) is eaten, but not preferred. In Alaska, spruce, willows, and alders comprise 75% of snowshoe hare diets; spruce needles make up nearly 40% of the diet. In northwestern Oregon, winter foods include needles and tender bark of Sitka spruce, Douglas-fir, and western hemlock (Tsuga heterophylla); leaves and green twigs of salal; buds, twigs, and bark of willows; and green herbs. In north-central Washington, willows and birches are not plentiful; snowshoe hares browse the tips of lodgepole pine seedlings. In Utah, winter foods include Douglas-fir, willows, snowberry (Symphoricarpos spp.), maples, and serviceberry (Amelanchier spp.). In Minnesota, aspens, willows, hazelnut (Corylus spp.), ferns (Pteridophyta spp.), birches, alders, sumacs (Rhus spp.), and strawberries (Fragaria spp.) are winter foods. Winter foods in New York include eastern white pine, red pine (Pinus resinosa), white spruce, paper birch, and aspens. In Ontario, sugar maple (Acer saccharum), striped maple (A. pensylvanicum), red maple, other deciduous species, northern white-cedar (T. occidentalis), balsam fir, beaked hazelnut (C. cornuta), and buffaloberry were heavily barked. In New Brunswick, snowshoe hares consumed northern white-cedar, spruces, American beech (Fagus grandifolia), balsam fir, mountain maple (A. spicatum), and many other species of browse. In Newfoundland, paper birch is preferred. 
Further details on regional food preferences are summarized in Snowshoe hare and allies.
Recent studies show that snowshoe hares also eat meat, including flesh from their own species.
Spring, summer and autumn
In Alaska, snowshoe hares consume new leaves of blueberries (Vaccinium spp.), new shoots of field horsetails (Equisetum arvense), and fireweed (Epilobium angustifolium) in spring.
Grasses are not a major item due to low availability associated with sites that have adequate cover. In summer, leaves of willows, black spruce, birches, and bog Labrador tea (Ledum groenlandicum) are also consumed. Black spruce is the most heavily used and the most common species in the area. Pen trials suggest black spruce is not actually preferred. Roses (Rosa spp.) were preferred, but a minor dietary item, as they were not common in the study area. In northwest Oregon, summer foods include grasses, clovers (Trifolium spp.), other forbs, and some woody plants, including Sitka spruce, Douglas-fir, and young leaves and twigs of salal. In Minnesota, aspens, willows, grasses, birches, alders, sumacs, and strawberries are consumed when green. In Ontario, summer diets consist of clovers, grasses, and forbs.
Predators
The snowshoe hare is a major prey item for a number of predators. Its foremost predator is the Canada lynx (Lynx canadensis), but other predators include bobcats (L. rufus), fishers (Pekania pennanti), American martens (Martes americana), Pacific martens (M. caurina), long-tailed weasels (Neogale frenata), minks (N. vison), foxes (Vulpes and Urocyon spp.), coyotes (Canis latrans), domestic dogs (C. familiaris), domestic cats (Felis catus), wolves (Canis lupus), cougars (Puma concolor), great horned owls (Bubo virginianus), barred owls (Strix varia), spotted owls (S. occidentalis), other owls, red-tailed hawks (Buteo jamaicensis), northern goshawks (Accipiter gentilis), other hawks (Buteonidae), golden eagles (Aquila chrysaetos), as well as corvids. Other predators include black bears (Ursus americanus). In Glacier National Park snowshoe hares are a prey item of Rocky Mountain wolves (Canis lupus irremotus).
Vulnerability to climate change
The habitat for some snowshoe hares has changed dramatically, leaving some habitats without snow for longer periods than previously. Some hares have adapted and stay brown all winter. Others, however, continue to turn white in winter. These hares are at an increased risk of being hunted and killed because they are no longer camouflaged. Many scientists believe that snowshoe hare populations are at risk of crashing unless interbreeding speeds up the evolution of year-round brown coats. Other species that rely on the hare as part of their diet are also at risk.
Warfarin

Warfarin, sold under the brand name Coumadin among others, is an anticoagulant medication. While the drug is described as a "blood thinner", it does not reduce viscosity but rather prevents blood clots (thrombus) from forming (coagulating). Accordingly, it is commonly used to prevent deep vein thrombosis and pulmonary embolism, and to protect against stroke in people who have atrial fibrillation, valvular heart disease, or artificial heart valves. Warfarin may sometimes be prescribed following ST-segment elevation myocardial infarctions (STEMI) and orthopedic surgery. It is usually taken by mouth, but may also be administered intravenously. It is a vitamin K antagonist.
The common side effect, a natural consequence of reduced clotting, is bleeding. Less common side effects may include areas of tissue damage, and purple toes syndrome. Use is not recommended during pregnancy. The effects of warfarin are typically monitored by checking prothrombin time (INR) every one to four weeks. Many other medications and dietary factors can interact with warfarin, either increasing or decreasing its effectiveness. The effects of warfarin may be reversed with phytomenadione (vitamin K1), fresh frozen plasma, or prothrombin complex concentrate.
Warfarin decreases blood clotting by blocking vitamin K epoxide reductase, an enzyme that reactivates vitamin K1. Without sufficient active vitamin K1, the plasma concentrations of clotting factors II, VII, IX, and X are reduced and thus have decreased clotting ability. The anticlotting proteins C and S are also inhibited, but to a lesser degree.
Despite being labeled a vitamin K antagonist, warfarin does not antagonize the action of vitamin K1, but rather antagonizes vitamin K1 recycling, depleting active vitamin K1.
A few days are required for full effect to occur, and these effects can last for up to five days. Because the mechanism involves enzymes such as VKORC1, patients on warfarin with polymorphisms of the enzymes may require adjustments in therapy if the genetic variant that they have is more readily inhibited by warfarin, thus requiring lower doses.
Warfarin first came into large-scale commercial use in 1948 as a rat poison. It was formally approved as a medication to treat blood clots in humans by the U.S. Food and Drug Administration in 1954. In 1955, warfarin's reputation as a safe and acceptable treatment for coronary artery disease, arterial plaques, and ischemic strokes was bolstered when President Dwight D. Eisenhower was treated with warfarin following a highly publicized heart attack. It is on the World Health Organization's List of Essential Medicines. Warfarin is available as a generic medication and is sold under many brand names. In 2022, it was the 85th most commonly prescribed medication in the United States, with more than 8 million prescriptions.
Medical uses
Warfarin is indicated for the prophylaxis and treatment of venous thrombosis and its extension, pulmonary embolism; prophylaxis and treatment of thromboembolic complications associated with atrial fibrillation and/or cardiac valve replacement; and reduction in the risk of death, recurrent myocardial infarction, and thromboembolic events such as stroke or systemic embolization after myocardial infarction.
Warfarin is used to decrease the tendency for thrombosis, or as secondary prophylaxis (prevention of further episodes) in those individuals who have already formed a blood clot (thrombus). Warfarin treatment can help prevent formation of future blood clots and help reduce the risk of embolism (migration of a thrombus to a spot where it blocks blood supply to a vital organ).
Warfarin is best suited for anticoagulation (clot formation inhibition) in areas of slowly running blood (such as in veins and the pooled blood behind artificial and natural valves), and in blood pooled in dysfunctional cardiac atria. Thus, common clinical indications for warfarin use are atrial fibrillation, the presence of artificial heart valves, deep venous thrombosis, and pulmonary embolism (where the embolized clots first form in veins). Warfarin is also used in antiphospholipid syndrome. It has been used occasionally after heart attacks (myocardial infarctions), but is far less effective at preventing new thromboses in coronary arteries. Prevention of clotting in arteries is usually undertaken with antiplatelet drugs, which act by a different mechanism from warfarin (which normally has no effect on platelet function). It can be used to treat people following ischemic strokes due to atrial fibrillation, though direct oral anticoagulants (DOACs) may offer greater benefits.
Dosing
Dosing of warfarin is complicated because it is known to interact with many commonly used medications and certain foods. These interactions may enhance or reduce warfarin's anticoagulation effect. To optimize the therapeutic effect without risking dangerous side effects such as bleeding, close monitoring of the degree of anticoagulation is required, using a blood test that measures the prothrombin time, reported as the international normalized ratio (INR). During the initial stage of treatment, the INR is checked daily; intervals between tests can be lengthened if the patient maintains stable therapeutic INR levels on an unchanged warfarin dose. Newer point-of-care testing is available and has increased the ease of INR testing in the outpatient setting. Instead of a blood draw, the point-of-care test involves a simple finger prick.
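The INR itself follows a standard formula: the measured prothrombin time (PT) is divided by the laboratory's mean normal PT and raised to the power of the reagent's international sensitivity index (ISI). A minimal sketch of the calculation — the numeric values are hypothetical examples, not clinical guidance:

```python
# INR = (PT_patient / PT_normal) ** ISI -- the standard formula relating a
# measured prothrombin time (in seconds) to the lab's mean normal PT,
# corrected for the thromboplastin reagent's sensitivity (ISI).
# All numeric values below are illustrative, not clinical guidance.

def inr(pt_patient: float, pt_normal: float, isi: float) -> float:
    """Return the international normalized ratio for a measured PT."""
    return (pt_patient / pt_normal) ** isi

# A PT of 24 s against a 12 s lab normal with an ISI of 1.0 gives INR 2.0,
# at the lower edge of the 2.0-3.0 range commonly targeted on warfarin.
print(round(inr(24.0, 12.0, 1.0), 2))  # → 2.0
```

The ISI exponent is what makes results comparable between laboratories using thromboplastin reagents of different sensitivities; a PT ratio alone would not be.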
Maintenance dose
Recommendations by many national bodies, including the American College of Chest Physicians, have been distilled to help manage dose adjustments.
The maintenance dose of warfarin can fluctuate significantly depending on the amount of vitamin K1 in the diet. Keeping vitamin K1 intake at a stable level can prevent these fluctuations. Leafy green vegetables tend to contain higher amounts of vitamin K1. Green parts of members of the family Apiaceae, such as parsley, cilantro, and dill are extremely rich sources of vitamin K; cruciferous vegetables such as cabbage and broccoli, as well as the darker varieties of lettuces and other leafy greens, are also relatively high in vitamin K1. Green vegetables such as peas and green beans do not have such high amounts of vitamin K1 as leafy greens. Certain vegetable oils have high amounts of vitamin K1. Foods low in vitamin K1 include roots, bulbs, tubers, and most fruits and fruit juices. Cereals, grains, and other milled products are also low in vitamin K1.
Several studies reported that the maintenance dose can be predicted based on various clinical data.
Self-testing
Anticoagulation with warfarin can also be monitored by patients at home. International guidelines on home testing were published in 2005. The guidelines stated:
A 2006 systematic review and meta-analysis of 14 randomized trials showed home testing led to a reduced incidence of complications (thrombosis and major bleeding), and improved the time in the therapeutic range.
Alternative anticoagulants
In some countries, other coumarins are used instead of warfarin, such as acenocoumarol and phenprocoumon. These have a shorter (acenocoumarol) or longer (phenprocoumon) half-life, and are not completely interchangeable with warfarin. Several types of anticoagulant drugs offering the efficacy of warfarin without a need for monitoring, such as dabigatran, apixaban, edoxaban, and rivaroxaban, have been approved in a number of countries for classical warfarin uses. Complementing these drugs are reversal agents available for dabigatran (idarucizumab), and for apixaban, and rivaroxaban (andexanet alfa). Andexanet alfa is suggested for edoxaban, but use of it is considered off label due to limited evidence. A reversal agent for dabigatran, apixaban, edoxaban, and rivaroxaban is in development (ciraparantag).
Contraindications
All anticoagulants are generally contraindicated in situations in which the reduction in clotting that they cause might lead to serious and potentially life-threatening bleeds. This includes people with active bleeding conditions (such as gastrointestinal ulcers), or disease states with increased risk of bleeding (e.g., low platelets, severe liver disease, uncontrolled hypertension). For patients undergoing surgery, treatment with anticoagulants is generally suspended. Similarly, spinal and lumbar puncture (e.g., spinal injections, epidurals, etc.) carry increased risk, so treatment is suspended prior to these procedures.
Warfarin should not be given to people with heparin-induced thrombocytopenia until platelet count has improved or normalised. Warfarin is usually best avoided in people with protein C or protein S deficiency, as these thrombophilic conditions increase the risk of skin necrosis, which is a rare but serious side effect associated with warfarin.
Pregnancy
Warfarin is contraindicated in pregnancy, as it passes through the placental barrier and may cause bleeding in the fetus; warfarin use during pregnancy is commonly associated with spontaneous abortion, stillbirth, neonatal death, and preterm birth. Coumarins (such as warfarin) are also teratogens, that is, they cause birth defects; the incidence of birth defects in infants exposed to warfarin in utero appears to be around 5%, although higher figures (up to 30%) have been reported in some studies. Depending on when exposure occurs during pregnancy, two distinct combinations of congenital abnormalities can arise.
First trimester of pregnancy
Usually, warfarin is avoided in the first trimester, and a low-molecular-weight heparin such as enoxaparin is substituted. With heparin, risks of maternal haemorrhage and other complications are still increased, but heparins do not cross the placental barrier, so do not cause birth defects. Various solutions exist for the time around delivery.
When warfarin (or another 4-hydroxycoumarin derivative) is given during the first trimester—particularly between the sixth and ninth weeks of pregnancy—a constellation of birth defects known variously as fetal warfarin syndrome (FWS), warfarin embryopathy, or coumarin embryopathy can occur. FWS is characterized mainly by skeletal abnormalities, which include nasal hypoplasia, a depressed or narrowed nasal bridge, scoliosis, and calcifications in the vertebral column, femur, and heel bone, which show a peculiar stippled appearance on X-rays. Limb abnormalities, such as brachydactyly (unusually short fingers and toes) or underdeveloped extremities, can also occur. Common nonskeletal features of FWS include low birth weight and developmental disabilities.
Second trimester and later
Warfarin administration in the second and third trimesters is much less commonly associated with birth defects, and when they do occur, are considerably different from FWS. The most common congenital abnormalities associated with warfarin use in late pregnancy are central nervous system disorders, including spasticity and seizures, and eye defects. Because of such later pregnancy birth defects, anticoagulation with warfarin poses a problem in pregnant women requiring warfarin for vital indications, such as stroke prevention in those with artificial heart valves.
Warfarin may be used in lactating women who wish to breastfeed their infants. Available data do not suggest that warfarin crosses into the breast milk. As a precaution, INR levels should still be checked to avoid adverse effects.
Adverse effects
Bleeding
The only common side effect of warfarin is hemorrhage. The risk of severe bleeding is small but definite (a typical yearly rate of 1–3% has been reported), and any benefit needs to outweigh this risk when warfarin is considered. All types of bleeding occur more commonly, but the most severe ones are those involving the brain (intracerebral hemorrhage/hemorrhagic stroke) and the spinal cord. Risk of bleeding is increased if the INR is out of range (due to accidental or deliberate overdose or due to interactions). This risk increases greatly once the INR exceeds 4.5.
Several risk scores exist to predict bleeding in people using warfarin and similar anticoagulants. A commonly used score (HAS-BLED) includes known predictors of warfarin-related bleeding: uncontrolled high blood pressure (H), abnormal kidney function (A), previous stroke (S), known previous bleeding condition (B), previous labile INR when on anticoagulation (L), elderly as defined by age over 65 (E), and drugs associated with bleeding (e.g., aspirin) or alcohol misuse (D). While their use is recommended in clinical practice guidelines, they are only moderately effective in predicting bleeding risk and do not perform well in predicting hemorrhagic stroke. Bleeding risk may be increased in people on hemodialysis. Another score used to assess bleeding risk on anticoagulation, specifically Warfarin or Coumadin, is the ATRIA score, which uses a weighted additive scale of clinical findings to determine bleeding risk stratification. The risks of bleeding are increased further when warfarin is combined with antiplatelet drugs such as clopidogrel, aspirin, or nonsteroidal anti-inflammatory drugs.
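The HAS-BLED score described above is a simple additive tally. As an illustrative sketch only (assuming the common convention of one point per factor present, with renal and liver dysfunction, and interacting drugs and alcohol misuse, counted separately for a maximum of 9 points), it could be computed as:

```python
# Hedged sketch of an additive HAS-BLED tally; one point per factor present.
# This is for illustration, not a clinical tool.
def has_bled_score(hypertension, abnormal_renal, abnormal_liver,
                   stroke, bleeding_history, labile_inr,
                   age_over_65, interacting_drugs, alcohol_misuse):
    """Return a HAS-BLED-style bleeding-risk score (0-9)."""
    factors = [hypertension, abnormal_renal, abnormal_liver, stroke,
               bleeding_history, labile_inr, age_over_65,
               interacting_drugs, alcohol_misuse]
    return sum(bool(f) for f in factors)

# Example: a 70-year-old with uncontrolled hypertension taking aspirin
score = has_bled_score(hypertension=True, abnormal_renal=False,
                       abnormal_liver=False, stroke=False,
                       bleeding_history=False, labile_inr=False,
                       age_over_65=True, interacting_drugs=True,
                       alcohol_misuse=False)
print(score)  # 3
```

Higher totals indicate higher bleeding risk, which is weighed against the thrombotic risk the anticoagulant is meant to reduce.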
Warfarin necrosis
A rare but serious complication resulting from treatment with warfarin is warfarin necrosis, which occurs more frequently shortly after commencing treatment in patients with a deficiency of protein C, an innate anticoagulant that, like the procoagulant factors whose synthesis warfarin inhibits, requires vitamin K-dependent carboxylation for its activity. Since warfarin initially decreases protein C levels faster than the coagulation factors, it can paradoxically increase the blood's tendency to coagulate when treatment is first begun (many patients when starting on warfarin are given heparin in parallel to combat this), leading to massive thrombosis with skin necrosis and gangrene of limbs. Its natural counterpart, purpura fulminans, occurs in children who are homozygous for certain protein C mutations.
Osteoporosis
After initial reports that warfarin could reduce bone mineral density, several studies demonstrated a link between warfarin use and osteoporosis-related fracture. In a 1999 study of 572 women taking warfarin for deep venous thrombosis, the risk of vertebral fracture and rib fracture was increased; other fracture types did not occur more commonly. A 2002 study looking at a randomly selected sample of 1,523 patients with osteoporotic fracture found no increased exposure to anticoagulants compared to controls, nor did stratification by the duration of anticoagulation reveal a trend towards fracture.
A 2006 retrospective study of 14,564 Medicare recipients showed that warfarin use for more than one year was linked with a 60% increased risk of osteoporosis-related fracture in men, but no association in women was seen. The mechanism was thought to be a combination of reduced intake of vitamin K (a vitamin necessary for bone health) and inhibition by warfarin of vitamin K-mediated carboxylation of certain bone proteins, rendering them nonfunctional.
Purple toe syndrome
Another rare complication that may occur early during warfarin treatment (usually within 3 to 8 weeks of commencement) is purple toe syndrome. This condition is thought to result from small deposits of cholesterol breaking loose and causing embolisms in blood vessels in the skin of the feet, which causes a blueish-purple colour and may be painful.
It is typically thought to affect the big toe, but it affects other parts of the feet, as well, including the bottom of the foot (plantar surface). The occurrence of purple toe syndrome may require discontinuation of warfarin.
Calcification
Several studies have also implicated warfarin use in valvular and vascular calcification. No specific treatment is available, but some modalities are under investigation.
Overdose
The major side effect of warfarin use is bleeding. Risk of bleeding is increased if the INR is out of range (due to accidental or deliberate overdose or due to interactions). Many drug interactions can increase the effect of warfarin, also causing an overdose.
In patients with a supratherapeutic INR below 10 and no bleeding, it is enough to lower or omit a dose, monitor the INR, and resume warfarin at an adjusted lower dose once the target INR is reached. For people who need rapid reversal of warfarin – such as due to serious bleeding – or who need emergency surgery, the effects of warfarin can be reversed with vitamin K, prothrombin complex concentrate (PCC), or fresh frozen plasma (FFP). Generally, four-factor PCC can be given more quickly than FFP, requires a smaller volume of fluid, and does not require ABO blood typing. Administration of PCCs results in rapid hemostasis similar to that of FFP, with comparable rates of thromboembolic events but reduced rates of volume overload. Blood products should not be routinely used to reverse warfarin overdose when vitamin K could work alone. While PCC has been found in lab tests to be better than FFP when rapid reversal is needed, as of 2018, whether a difference in outcomes such as death or disability exists is unclear.
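The management steps above can be summarised as a branching decision. The sketch below is a deliberately simplified illustration of that logic, not clinical guidance; the function name, threshold, and action strings are invented for exposition:

```python
# Illustrative simplification of the overdose-management logic described
# above. Not clinical guidance; thresholds and actions are schematic.
def warfarin_overdose_action(inr, serious_bleeding, urgent_surgery=False):
    """Return a schematic action for a given INR and bleeding status."""
    if serious_bleeding or urgent_surgery:
        # Rapid reversal: vitamin K plus PCC (or FFP)
        return "vitamin K + prothrombin complex concentrate (or FFP)"
    if inr < 10:
        # Supratherapeutic but not bleeding: adjust and monitor
        return "omit or lower dose, monitor INR, resume at lower dose"
    # Very high INR without bleeding: vitamin K alone may suffice
    return "vitamin K, recheck INR"

print(warfarin_overdose_action(6.0, serious_bleeding=False))
```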
When warfarin is being given and INR is in therapeutic range, simple discontinuation of the drug for five days is usually enough to reverse the effect and cause INR to drop below 1.5.
Interactions
Warfarin interacts with many commonly used drugs, and the metabolism of warfarin varies greatly between patients. Some foods have also been reported to interact with warfarin. Apart from the metabolic interactions, highly protein bound drugs can displace warfarin from serum albumin and cause an increase in the INR. This makes finding the correct dosage difficult, and accentuates the need of monitoring; when initiating a medication that is known to interact with warfarin (e.g., simvastatin), INR checks are increased or dosages adjusted until a new ideal dosage is found.
When taken with nonsteroidal anti-inflammatory drugs (NSAIDs), warfarin increases the risk for gastrointestinal bleeding. This increased risk is due to the antiplatelet effect of NSAIDs and possible damage to the gastrointestinal mucosa.
Many commonly used antibiotics, such as metronidazole or the macrolides, greatly increase the effect of warfarin by reducing the metabolism of warfarin in the body. Other broad-spectrum antibiotics can reduce the amount of the normal bacterial flora in the bowel, which make significant quantities of vitamin K1, thus potentiating the effect of warfarin. In addition, food that contains large quantities of vitamin K1 will reduce the warfarin effect. Thyroid activity also appears to influence warfarin dosing requirements; hypothyroidism (decreased thyroid function) makes people less responsive to warfarin treatment, while hyperthyroidism (overactive thyroid) boosts the anticoagulant effect. Several mechanisms have been proposed for this effect, including changes in the rate of breakdown of clotting factors and changes in the metabolism of warfarin.
Excessive use of alcohol is also known to affect the metabolism of warfarin and can elevate the INR, and thus increase the risk of bleeding. The U.S. Food and Drug Administration (FDA) product insert on warfarin states that alcohol should be avoided. The Cleveland Clinic suggests that when taking warfarin one should not drink more than "one beer, 6 oz of wine, or one shot of alcohol per day".
Warfarin also interacts with many herbs and spices, some used in food (such as ginger and garlic) and others used purely for medicinal purposes (such as ginseng and Ginkgo biloba). All may increase bleeding and bruising in people taking warfarin; similar effects have been reported with borage (starflower) oil. St. John's wort, sometimes recommended to help with mild to moderate depression, reduces the effectiveness of a given dose of warfarin; it induces the enzymes that break down warfarin in the body, causing a reduced anticoagulant effect.
Between 2003 and 2004, the UK Committee on Safety of Medicines received several reports of increased INR and risk of haemorrhage in people taking warfarin and cranberry juice. Data establishing a causal relationship are still lacking, and a 2006 review found no cases of this interaction reported to the USFDA; nevertheless, several authors have recommended that both doctors and patients be made aware of its possibility. The mechanism behind the interaction is still unclear.
Chemistry
X-ray crystallographic studies of warfarin show that it exists in tautomeric form, as the cyclic hemiketal, which is formed from the 4-hydroxycoumarin and the ketone in the 3-position substituent. However, the existence of many 4-hydroxycoumarin anticoagulants (for example phenprocoumon) that possess no ketone group in the 3-substituent to form such a structure, suggests that the hemiketal must tautomerise to the 4-hydroxy form in order for warfarin to be active.
Stereochemistry
Warfarin contains a stereocenter and consists of two enantiomers. It is a racemate, i.e., a 1:1 mixture of the (R)- and (S)-forms.
Pharmacology
Pharmacokinetics
Warfarin consists of a racemic mixture of two active enantiomers—R- and S- forms—each of which is cleared by different pathways. S-warfarin is two to five times more potent than the R-isomer in producing an anticoagulant response. Both enantiomers of warfarin undergo CYP-mediated metabolism by many different CYPs to form 3'-, 4'-, 6-, 7-, 8-, and 10-hydroxywarfarin metabolites, the major ones being 7-OH warfarin, formed from S-warfarin by CYP2C9, and 10-OH warfarin, formed from R-warfarin by CYP3A4.
Warfarin is slower-acting than the common anticoagulant heparin, though it has a number of advantages. Heparin must be given by injection, whereas warfarin is available orally. Warfarin has a long half-life and need only be given once a day. Heparin can also cause a prothrombotic condition, heparin-induced thrombocytopenia (an antibody-mediated decrease in platelet levels), which increases the risk for thrombosis. It takes several days for warfarin to reach the therapeutic effect, since the circulating coagulation factors are not affected by the drug (thrombin has a half-life time of days). Warfarin's long half-life means that it remains effective for several days after it is stopped. Furthermore, if given initially without additional anticoagulant cover, it can increase thrombosis risk (see below).
Mechanism of action
Warfarin is one of several drugs often referred to as a "blood thinner"; this is not technically correct, as these drugs reduce coagulation of blood, increasing the clotting time, without affecting the viscosity ("thickness") as such of blood.
Warfarin inhibits the vitamin K-dependent synthesis of biologically active forms of the clotting factors II, VII, IX and X, as well as the regulatory factors protein C, protein S, and protein Z. Other proteins not involved in blood clotting, such as osteocalcin, or matrix Gla protein, may also be affected.
The precursors of these factors require gamma carboxylation of their glutamic acid residues to allow the coagulation factors to bind to phospholipid surfaces inside blood vessels, on the vascular endothelium. The enzyme that carries out the carboxylation of glutamic acid is gamma-glutamyl carboxylase. The carboxylation reaction proceeds only if the carboxylase enzyme is able to convert a reduced form of vitamin K (vitamin K hydroquinone) to vitamin K epoxide at the same time. The vitamin K epoxide is, in turn, recycled back to vitamin K and vitamin K hydroquinone by another enzyme, the vitamin K epoxide reductase (VKOR). Warfarin inhibits VKOR (specifically the VKORC1 subunit), thereby diminishing available vitamin K and vitamin K hydroquinone in the tissues, which decreases the carboxylation activity of the glutamyl carboxylase. When this occurs, the coagulation factors are no longer carboxylated at certain glutamic acid residues, and are incapable of binding to the endothelial surface of blood vessels, and are thus biologically inactive. As the body's stores of previously produced active factors degrade (over several days) and are replaced by inactive factors, the anticoagulation effect becomes apparent. The coagulation factors are produced, but have decreased functionality due to undercarboxylation; they are collectively referred to as PIVKAs (proteins induced [by] vitamin K absence), and individual coagulation factors as PIVKA-number (e.g., PIVKA-II).
When warfarin is newly started, it may promote clot formation temporarily, because the levels of proteins C and S are also dependent on vitamin K activity. Warfarin causes a decline in protein C levels within the first 36 hours. In addition, reduced levels of protein S lead to a reduction in the activity of protein C (for which it is the cofactor), and so reduce degradation of factor Va and factor VIIIa. Although loading doses of warfarin over 5 mg also produce a precipitous decline in factor VII, resulting in an initial prolongation of the INR, full antithrombotic effect does not take place until significant reduction in factor II occurs days later. The haemostasis system becomes temporarily biased towards thrombus formation, leading to a prothrombotic state. Thus, when warfarin is loaded rapidly at greater than 5 mg per day, co-administering heparin, an anticoagulant that acts upon antithrombin and helps reduce the risk of thrombosis, alongside warfarin therapy for four to five days is beneficial, so that the patient is anticoagulated by heparin until the full effect of warfarin has been achieved.
Pharmacogenomics
Warfarin activity is determined partially by genetic factors. Polymorphisms in two genes (VKORC1 and CYP2C9) play a particularly large role in response to warfarin.
VKORC1 polymorphisms explain 30% of the dose variation between patients: particular mutations make VKORC1 less susceptible to suppression by warfarin. There are two main haplotypes that explain 25% of variation: low-dose haplotype group (A) and a high-dose haplotype group (B). VKORC1 polymorphisms explain why African Americans are on average relatively resistant to warfarin (higher proportion of group B haplotypes), while Asian Americans are generally more sensitive (higher proportion of group A haplotypes). Group A VKORC1 polymorphisms lead to a more rapid achievement of a therapeutic INR, but also a shorter time to reach an INR over 4, which is associated with bleeding.
CYP2C9 polymorphisms explain 10% of the dose variation between patients, mainly among Caucasian patients as these variants are rare in African American and most Asian populations. These CYP2C9 polymorphisms do not influence time to effective INR as opposed to VKORC1, but do shorten the time to INR > 4.
Despite the promise of pharmacogenomic testing in warfarin dosing, its use in clinical practice is controversial. In August 2009, the Centers for Medicare and Medicaid Services concluded, "the available evidence does not demonstrate that pharmacogenomic testing of CYP2C9 or VKORC1 alleles to predict warfarin responsiveness improves health outcomes in Medicare beneficiaries." A 2014 meta-analysis showed that using genotype-based dosing did not confer benefit in terms of time within therapeutic range, excessive anticoagulation (as defined by INR greater than 4), or a reduction in either major bleeding or thromboembolic events.
History
In the early 1920s, an outbreak of a previously unrecognized cattle disease occurred in the northern United States and Canada. Cattle were haemorrhaging after minor procedures, and on some occasions spontaneously. For example, 21 of 22 cows died after dehorning, and 12 of 25 bulls died after castration. All of these animals had bled to death.
In 1921, Frank Schofield, a Canadian veterinary pathologist, determined that the cattle were ingesting moldy silage made from sweet clover, and that this was functioning as a potent anticoagulant. Only spoiled hay made from sweet clover (grown in northern states of the US and in Canada since the turn of the century) produced the disease. Schofield separated good clover stalks and damaged clover stalks from the same hay mow, and fed each to a different rabbit. The rabbit that had ingested the good stalks remained well, but the rabbit that had ingested the damaged stalks died from a haemorrhagic illness. A duplicate experiment with a different sample of clover hay produced the same result. In 1929, North Dakota veterinarian Lee M. Roderick demonstrated that the condition was due to a lack of functioning prothrombin.
The identity of the anticoagulant substance in spoiled sweet clover remained a mystery until 1940. In 1933, Karl Paul Link and his laboratory of chemists working at the University of Wisconsin set out to isolate and characterize the haemorrhagic agent from the spoiled hay. Five years were needed before Link's student, Harold A. Campbell, recovered 6 mg of crystalline anticoagulant. Next, Link's student, Mark A. Stahmann, took over the project and initiated a large-scale extraction, isolating 1.8 g of recrystallized anticoagulant in about 4 months. This was enough material for Stahmann and Charles F. Huebner to check their results against Campbell's, and to thoroughly characterize the compound. Through degradation experiments, they established that the anticoagulant was 3,3'-methylenebis-(4-hydroxycoumarin), which they later named dicoumarol. They confirmed their results by synthesizing dicoumarol and proving in 1940 that it was identical to the naturally occurring agent.
Dicoumarol was a product of the plant molecule coumarin (not to be confused with Coumadin, a later tradename for warfarin). Coumarin is now known to be present in many plants, and produces the notably sweet smell of freshly cut grass or hay and plants such as sweet grass; in fact, the plant's high content of coumarin is responsible for the original common name of "sweet clover", which is named for its sweet smell, not its bitter taste. Coumarins are present notably in woodruff (Galium odoratum, Rubiaceae), and at lower levels in licorice, lavender, and various other species. The name coumarin comes via the French coumarou from kumarú, the Tupi name for the tree of the tonka bean, which notably contains a high concentration of coumarin. However, coumarins themselves do not influence clotting or warfarin-like action, but must first be metabolized by various fungi into compounds such as 4-hydroxycoumarin, then further (in the presence of naturally occurring formaldehyde) into dicoumarol, to have any anticoagulant properties.
Over the next few years, numerous similar chemicals (specifically 4-hydroxycoumarins with a large aromatic substituent at the 3 position) were found to have the same anticoagulant properties. The first drug in the class to be widely commercialized was dicoumarol itself, patented in 1941 and later used as a pharmaceutical. Karl Link continued working on developing more potent coumarin-based anticoagulants for use as rodent poisons, resulting in warfarin in 1948. The name "warfarin" stems from the acronym WARF, for Wisconsin Alumni Research Foundation + the ending "-arin" indicating its link with coumarin. Warfarin was first registered for use as a rodenticide in the US in 1948, and was immediately popular. Although warfarin was developed by Link, the Wisconsin Alumni Research Foundation financially supported the research and was assigned the patent.
After an incident in 1951, in which an army inductee attempted suicide with multiple doses of warfarin in rodenticide, but recovered fully after presenting to a naval hospital and being treated with vitamin K (by then known as a specific antidote), studies began in the use of warfarin as a therapeutic anticoagulant. It was found to be generally superior to dicoumarol, and in 1954, was approved for medical use in humans. An early recipient of warfarin was US President Dwight Eisenhower, who was prescribed the drug after having a heart attack in 1955.
The exact mechanism of action remained unknown until it was demonstrated, in 1978, that warfarin inhibits the enzyme vitamin K epoxide reductase, and hence interferes with vitamin K metabolism.
Lavrenty Beria and I. V. Khrustalyov are thought to have conspired to use warfarin to poison Soviet leader Joseph Stalin. Warfarin is tasteless and colourless, and produces symptoms similar to those that Stalin exhibited.
Occupational safety
Warfarin used for pest control is a hazardous substance harmful to health. People can be exposed to warfarin in the workplace by breathing it in, swallowing it, skin absorption, and eye contact. The Occupational Safety and Health Administration has set the legal limit (permissible exposure limit) for warfarin exposure in the workplace as 0.1 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health has set a recommended exposure limit of 0.1 mg/m3 over an 8-hour workday. At levels of 100 mg/m3, warfarin is immediately dangerous to life and health.
It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
Society and culture
The name "warfarin" is derived from the acronym for "Wisconsin Alumni Research Foundation", plus "-arin", indicating its link with coumarin. Warfarin is a derivative of dicoumarol, an anticoagulant originally discovered in spoiled sweet clover. Dicoumarol, in turn, is from coumarin, a sweet-smelling but coagulation-inactive chemical found in "sweet" clover and tonka beans (also known as cumaru from which coumarin's name derives).
Brand names
Warfarin as a drug is marketed under many brand and generic names, including Aldocumar, Anasmol, Anticoag, Befarin, Cavamed, Cicoxil, Circuvit, Cofarin, Coumadin, Coumadine, Cumar, Farin, Foley, Haemofarin, Jantoven, Kovar, Lawarin, Maforan, Marevan, Marfarin, Marivanil, Martefarin, Morfarin, Orfarin, Panwarfin, Scheme, Simarc, Varfarin, Varfarins, Varfine, Waran, Warcok, Warf, Warfareks, Warfarin, Warfarina, Warfarine, Warfarinum, Warfen, Warfin, Warik, Warin, Warlin, and Zyfarin.
Veterinary use
Warfarin is used as a poison for rats and other pests.
Pest control
Warfarin was introduced as a rodenticide, only later finding medical uses; in both cases it was used as an anticoagulant. The use of warfarin itself as a rat poison is declining, because many rat populations have developed resistance to it, and poisons of considerably greater potency have become available. However, warfarin continued to be considered a valuable tool for rodent control which minimised risk to other species.
Rodents
Coumarins (4-hydroxycoumarin derivatives) are used as rodenticides for controlling rats and mice in residential, industrial, and agricultural areas. Warfarin is both odorless and tasteless, and is effective when mixed with food bait, because the rodents will return to the bait and continue to feed over a period of days until a lethal dose is accumulated (considered to be 1 mg/kg/day over about six days). It may also be mixed with talc and used as a tracking powder, which accumulates on the animal's skin and fur, and is subsequently consumed during grooming. The median lethal dose (LD50) for warfarin is 50–100 mg/kg for a single dose, with death after 5–7 days, or 1 mg/kg for repeated daily doses over 5 days, with death after 5–8 days. The IDLH value is 100 mg/m3 (warfarin; various species).
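The doses quoted above allow a quick arithmetic check of why bait feeding over several days is so effective. The calculation below is a back-of-the-envelope sketch using only the figures in the text:

```python
# Back-of-the-envelope sketch using the doses quoted above:
# repeated small daily doses reach a lethal cumulative exposure
# far below the single-dose LD50 of 50-100 mg/kg.
daily_dose_mg_per_kg = 1.0      # lethal repeated daily dose
days_of_feeding = 6             # approximate days of bait consumption
cumulative_mg_per_kg = daily_dose_mg_per_kg * days_of_feeding

single_dose_ld50_mg_per_kg = 50.0   # lower bound of the single-dose LD50

print(cumulative_mg_per_kg)                               # 6.0
print(single_dose_ld50_mg_per_kg / cumulative_mg_per_kg)  # ~8.3
```

In other words, a rat grazing on bait dies from a total exposure roughly an eighth of the lowest single dose that would kill it outright, which is why bait stations left out for days work so well.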
Resistance to warfarin as a poison has developed in many rat populations due to an autosomal dominant mutation on chromosome 1 in brown rats. This has arisen independently and become fixed several times around the world. Other 4-hydroxycoumarins used as rodenticides include coumatetralyl and brodifacoum, which is sometimes referred to as "super-warfarin", because it is more potent, longer-acting, and effective even in rat and mouse populations that are resistant to warfarin. Unlike warfarin, which is readily excreted, newer anticoagulant poisons also accumulate in the liver and kidneys after ingestion. However, such rodenticides may also accumulate in birds of prey and other animals that eat the poisoned rodents or baits.
Vampire bats
Warfarin is used to cull populations of vampire bats, in which rabies is often prevalent, in areas where human–wildlife conflict is a concern. Vampire bats are captured with mist nets and coated with a combination of petroleum jelly and warfarin. The bat returns to its roost and other members of the roost become poisoned as well by ingesting the warfarin after reciprocal grooming. Suspected vampire bat roosts may also be coated in the warfarin solution, though this kills other bat species and remains in the environment for years. The efficacy of killing vampire bats to reduce rabies transmission is questionable; a study in Peru showed that culling programs did not lead to lower transmission rates of rabies to livestock and humans.
Brand names
Warfarin as a pest control poison is marketed under many brand and generic names, including Cov-R-Tox, Co-Rax, d-Con, Dethmor, Killgerm Sewercide, Mar-Fin, Rattunal, Rax, Rodex, Rodex Blox, Rosex, Sakarat, Sewarin, Solfarin, Sorex Warfarin, Tox-Hid, Warf, warfarin, and Warfarat. Warfarin is called coumafene in France, zoocoumarin in the Netherlands and Russia, and coumarin in Japan.
Volcanology
Volcanology (also spelled vulcanology) is the study of volcanoes, lava, magma and related geological, geophysical and geochemical phenomena (volcanism). The term volcanology is derived from the name of Vulcan, the ancient Roman god of fire.
A volcanologist is a geologist who studies the eruptive activity and formation of volcanoes and their current and historic eruptions. Volcanologists frequently visit volcanoes, especially active ones, to observe volcanic eruptions, collect eruptive products including tephra (such as ash or pumice), rock and lava samples. One major focus of enquiry is the prediction of eruptions; there is currently no accurate way to do this, but predicting or forecasting eruptions, like predicting earthquakes, could save many lives.
Modern volcanology
In 1841, the first volcanological observatory, the Vesuvius Observatory, was founded in the Kingdom of the Two Sicilies. Advances in volcanology have required more than structured observation: the science relies on the understanding and integration of knowledge from many fields, including geology, tectonics, physics, chemistry and mathematics, and many of its advances became possible only after progress in another field of science. For example, the study of radioactivity only commenced in 1896, and its application to the theory of plate tectonics and to radiometric dating took about 50 years after this. Many other developments in fluid dynamics, experimental physics and chemistry, techniques of mathematical modelling, instrumentation and other sciences have been applied to volcanology since 1841.
Techniques
Seismic observations are made using seismographs deployed near volcanic areas, watching out for increased seismicity during volcanic events, in particular looking for long period harmonic tremors, which signal magma movement through volcanic conduits.
Surface deformation monitoring includes the use of geodetic techniques such as leveling, tilt, strain, angle and distance measurements through tiltmeters, total stations and EDMs. This also includes GNSS observations and InSAR. Surface deformation indicates magma upwelling: increased magma supply produces bulges in the volcanic center's surface.
Gas emissions may be monitored with equipment including portable ultra-violet spectrometers (COSPEC, now superseded by the miniDOAS), which analyzes the presence of volcanic gases such as sulfur dioxide; or by infra-red spectroscopy (FTIR). Increased gas emissions, and more particularly changes in gas compositions, may signal an impending volcanic eruption.
Temperature changes are monitored using thermometers and observing changes in thermal properties of volcanic lakes and vents, which may indicate upcoming activity.
Satellites are widely used to monitor volcanoes, as they allow a large area to be monitored easily. They can measure the spread of an ash plume, such as the one from Eyjafjallajökull's 2010 eruption, as well as SO2 emissions. InSAR and thermal imaging can monitor large, scarcely populated areas where it would be too expensive to maintain instruments on the ground.
Other geophysical techniques (electrical, gravity and magnetic observations) include monitoring fluctuations and sudden change in resistivity, gravity anomalies or magnetic anomaly patterns that may indicate volcano-induced faulting and magma upwelling.
Stratigraphic analysis includes analyzing tephra and lava deposits and dating them to establish a volcano's eruption patterns, with estimated cycles of intense activity and sizes of eruptions.
Compositional analysis has been very successful in grouping volcanoes by type and magma origin, including matching volcanoes to the mantle plume of a particular hotspot, estimating mantle plume melting depths, tracing the history of recycled subducted crust, matching tephra deposits to each other and to their volcanoes of origin, and understanding the formation and evolution of magma reservoirs, an approach that has now been validated by real-time sampling.
Forecasting
Some of the techniques mentioned above, combined with modelling, have proved useful and successful in forecasting some eruptions, such as the evacuation of the area around Mount Pinatubo in 1991, which may have saved 20,000 lives. Short-term forecasts tend to use seismic or multiple streams of monitoring data, while long-term forecasting involves the study of the previous history of local volcanism. Volcanological forecasting does not just involve predicting the onset time of the next eruption; it may also address the size of a future eruption and the evolution of an eruption once it has begun.
History
Volcanology has an extensive history. The earliest known record of a volcanic eruption may be a wall painting dated to about 7,000 BCE found at the Neolithic site of Çatal Höyük in Anatolia, Turkey. The painting has been interpreted as showing a twin-peaked volcano in eruption, with a cluster of houses at its base, though archaeologists now question this interpretation. The volcano may be either Hasan Dağ or its smaller neighbour, Melendiz Dağ.
Greco-Roman philosophy
The classical world of Greece and the early Roman Empire explained volcanoes as sites of various gods. Greeks considered that Hephaestus, the god of fire, sat below the volcano Etna, forging the weapons of Zeus. The Greek word used to describe volcanoes was etna, or hiera, after Heracles, the son of Zeus. The Roman poet Virgil, in interpreting the Greek mythos, held that the giant Enceladus was buried beneath Etna by the goddess Athena as punishment for rebellion against the gods; the mountain's rumblings were his tormented cries, the flames his breath and the tremors his railing against the bars of his prison. Enceladus' brother Mimas was buried beneath Vesuvius by Hephaestus, and the blood of other defeated giants welled up in the Phlegrean Fields surrounding Vesuvius.
The Greek philosopher Empedocles (c. 490-430 BCE) saw the world divided into four elemental forces, of Earth, Air, Fire and Water. Volcanoes, Empedocles maintained, were the manifestation of Elemental Fire. Plato contended that channels of hot and cold waters flow in inexhaustible quantities through subterranean rivers. In the depths of the earth snakes a vast river of fire, the Pyriphlegethon, which feeds all the world's volcanoes. Aristotle considered underground fire as the result of "the...friction of the wind when it plunges into narrow passages."
After Anaxagoras, in the fifth century BCE, proposed that eruptions were caused by a great wind, wind played a key role in explanations of volcanoes until the 16th century. Lucretius, a Roman philosopher, claimed Etna was completely hollow, its underground fires driven by a fierce wind circulating near sea level. Ovid believed that the flame was fed by "fatty foods" and that eruptions stopped when the food ran out. Vitruvius contended that sulfur, alum and bitumen fed the deep fires. Pliny the Elder noted that earthquakes preceded an eruption; he died in the eruption of Vesuvius in 79 CE while investigating it at Stabiae. His nephew, Pliny the Younger, gave detailed descriptions of the eruption in which his uncle died, attributing his death to the effects of toxic gases. Such eruptions have since been named Plinian in honour of the two authors.
Middle Ages
Thirteenth century Dominican scholar Restoro d'Arezzo devoted two entire chapters (11.6.4.6 and 11.6.4.7) of his seminal treatise La composizione del mondo colle sue cascioni to the origin of the endogenous energy of the Earth. Restoro maintained that the interior of the Earth was very hot and insisted, following Empedocles, that the Earth had a molten center and that volcanoes erupted through the rise of molten rock to the surface.
Renaissance observations
During the Renaissance, observers such as Bernard Palissy, Conrad Gessner, and Johannes Kentmann (1518–1568) showed a deep interest in the nature, behavior, origin and history of the terrestrial globe. Many theories of volcanic action were framed during the late sixteenth to mid-seventeenth centuries. Georgius Agricola argued that the rays of the sun, as later proposed by Descartes, had nothing to do with volcanoes; he believed vapor under pressure caused eruptions of 'mountain oil' and basalt. Johannes Kepler considered volcanoes to be conduits for the tears and excrement of the Earth, voiding bitumen, tar and sulfur. Descartes, pronouncing that God had created the Earth in an instant, declared he had done so in three layers: the fiery depths, a layer of water, and the air. Volcanoes, he said, were formed where the rays of the sun pierced the earth.
The volcanoes of southern Italy have attracted naturalists ever since the Renaissance rediscovery of Classical descriptions of them by writers such as Lucretius and Strabo. Vesuvius, Stromboli and Vulcano provided an opportunity to study the nature of volcanic phenomena. Italian natural philosophers living within reach of these volcanoes wrote long and learned books on the subject: Giovanni Alfonso Borelli's account of the eruption of Mount Etna in 1669 became a standard source of information, as did Giulio Cesare Recupito's account of the 1631 eruption of Mount Vesuvius (1632 and later editions) and Francesco Serao's account of the eruption of Vesuvius in 1737 (1737, with editions in French and English).
The Jesuit Athanasius Kircher (1602–1680) witnessed eruptions of Mount Etna and Stromboli, then visited the crater of Vesuvius. In Mundus Subterraneus he published his view of an Earth with a central fire connected to numerous others caused by the burning of sulfur, bitumen and coal, with volcanoes acting as a type of safety valve.
The causes of these phenomena were discussed in the large number of theories of the Earth that were published in the hundred years after 1650. The authors of these theories were not themselves observers, but combined the observations of others with Newtonian, Cartesian, Biblical or animistic science to produce a variety of all-embracing systems. Volcanic eruptions and earthquakes were generally linked in these systems to the existence of great open caverns under the Earth where inflammable vapours could accumulate until they were ignited. According to Thomas Burnet, much of the Earth itself was inflammable, with pitch, coal and brimstone all ready to burn. In William Whiston's theory the presence of underground air was necessary if ignition were to take place, while John Woodward stressed that water was essential. Athanasius Kircher maintained that the caverns and sources of the heat were deep, and reached down towards the centre of the Earth, while other writers, notably Georges Buffon, believed they were relatively superficial, and that volcanic fires were seated well up within the volcanic cone itself. A number of writers, most notably Thomas Robinson, believed that the Earth was an animal, and that its internal heat, earthquakes and eruptions were all signs of life. This animistic philosophy was waning by the end of the seventeenth century, but traces continued well into the eighteenth.
Science wrestled with the ideas of the combustion of pyrite with water, that rock was solidified bitumen, and with notions of rock being formed from water (Neptunism). Of the volcanoes then known, all were near the water, hence the action of the sea upon the land was used to explain volcanism.
Interaction with religion and mythology
Tribal legends of volcanoes abound around the Pacific Ring of Fire and in the Americas, usually invoking supernatural or divine forces to explain the violent outbursts of volcanoes. According to Māori mythology, Taranaki and Tongariro both fell in love with Pihanga, and a spiteful, jealous fight ensued. To this day, some Māori will not live on the direct line between Tongariro and Taranaki for fear of the dispute flaring up again.
In the Hawaiian religion, Pele is the goddess of volcanoes and a popular figure in Hawaiian mythology. Pele's name has been given to various volcanological terms, such as Pele's hair, Pele's tears, and Limu o Pele (Pele's seaweed). A volcano on the Jovian moon Io is also named Pele.
Saint Agatha is the patron saint of Catania, close to Mount Etna, and a highly venerated example of the virgin martyrs of Christian antiquity. In 253 CE, one year after her violent death, the stilling of an eruption of Mount Etna was attributed to her intercession. Catania was nevertheless nearly completely destroyed by the eruption of Mount Etna in 1169, and over 15,000 of its inhabitants died. The saint was invoked again for the 1669 Etna eruption, and for an outbreak that endangered the town of Nicolosi in 1886. The quid pro quo, or bargaining, manner in which the saint is invoked in Italian folk religion, an approach sometimes used in prayerful interactions with saints, has been related (in the tradition of James Frazer) to earlier pagan beliefs and practices.
In 1660 the eruption of Vesuvius rained twinned pyroxene crystals and ash upon the nearby villages. The crystals resembled the crucifix and this was interpreted as the work of Saint Januarius. In Naples, the relics of St Januarius are paraded through town at every major eruption of Vesuvius. The register of these processions and the 1779 and 1794 diary of Father Antonio Piaggio allowed British diplomat and amateur naturalist Sir William Hamilton to provide a detailed chronology and description of Vesuvius' eruptions.
Notable volcanologists
Plato (428–348 BC)
Pliny the Elder (23–79 AD)
Pliny the Younger (61 – )
Georges-Louis Leclerc, Comte de Buffon (1707–1788)
James Hutton (1726–1797)
Déodat Gratet de Dolomieu (1750–1801)
George Julius Poulett Scrope (1797–1876)
Giuseppe Mercalli (1850–1914)
Thomas Jaggar (1871–1953), founder of the Hawaiian Volcano Observatory
Haroun Tazieff (1914–1998), advisor to the French Government and Jacques Cousteau
George P. L. Walker (1926–2005), pioneering volcanologist who transformed the subject into a quantitative science
Haraldur Sigurdsson (born 1939), Icelandic volcanologist and geochemist
Katia and Maurice Krafft (1942–1991 and 1946–1991, respectively), died at Mount Unzen in Japan, 1991
David A. Johnston (1949–1980), killed during the 1980 eruption of Mount St. Helens
Harry Glicken (1958–1991), died at Mount Unzen in Japan, 1991
Gallery
| Physical sciences | Volcanology | Earth science |
238115 | https://en.wikipedia.org/wiki/Heparin | Heparin | Heparin, also known as unfractionated heparin (UFH), is a medication and naturally occurring glycosaminoglycan. Heparin is a blood anticoagulant that increases the activity of antithrombin. It is used in the treatment of heart attacks and unstable angina. It can be given intravenously or by injection under the skin. Its anticoagulant properties make it useful to prevent blood clotting in blood specimen test tubes and kidney dialysis machines.
Common side effects include bleeding, pain at the injection site, and low blood platelets. Serious side effects include heparin-induced thrombocytopenia. Greater care is needed in those with poor kidney function.
Heparin is contraindicated in suspected cases of vaccine-induced pro-thrombotic immune thrombocytopenia (VIPIT) secondary to SARS-CoV-2 vaccination, as heparin may further exacerbate the condition through an anti-PF4/heparin complex autoimmune mechanism; alternative anticoagulant medications (such as argatroban or danaparoid) are preferred.
Heparin appears to be relatively safe for use during pregnancy and breastfeeding. Heparin is produced by basophils and mast cells in all mammals.
The discovery of heparin was announced in 1916. It is on the World Health Organization's List of Essential Medicines. A fractionated version of heparin, known as low molecular weight heparin, is also available.
History
Heparin was discovered by Jay McLean and William Henry Howell in 1916, although it did not enter clinical trials until 1935. It was originally isolated from dog liver cells, hence its name (ἧπαρ hēpar is Greek for 'liver'; hepar + -in).
McLean was a second-year medical student at Johns Hopkins University, and was working under the guidance of Howell investigating pro-coagulant preparations when he isolated a fat-soluble phosphatide anticoagulant in canine liver tissue. In 1918, Howell coined the term 'heparin' for this type of fat-soluble anticoagulant. In the early 1920s, Howell isolated a water-soluble polysaccharide anticoagulant, which he also termed 'heparin', although it was different from the previously discovered phosphatide preparations. McLean's work as a surgeon probably changed the focus of the Howell group to look for anticoagulants, which eventually led to the polysaccharide discovery.
It had at first been accepted that it was Howell who discovered heparin. However, in the 1940s, Jay McLean became unhappy that he had not received appropriate recognition for what he saw as his discovery. Though relatively discreet about his claim and not wanting to upset his former chief, he gave lectures and wrote letters claiming that the discovery was his. This gradually became accepted as fact, and indeed after he died in 1959, his obituary credited him as being the true discoverer of heparin. This was elegantly restated in 1963 in a plaque unveiled at Johns Hopkins to commemorate the major contribution (of McLean) to the discovery of heparin in 1916 in collaboration with Professor William Henry Howell.
In the 1930s, several researchers were investigating heparin. Erik Jorpes at Karolinska Institutet published his research on the structure of heparin in 1935, which made it possible for the Swedish company Vitrum AB to launch the first heparin product for intravenous use in 1936. Between 1933 and 1936, Connaught Medical Research Laboratories, then a part of the University of Toronto, perfected a technique for producing safe, nontoxic heparin that could be administered to patients, in a saline solution. The first human trials of heparin began in May 1935, and, by 1937, it was clear that Connaught's heparin was safe, easily available, and effective as a blood anticoagulant. Before 1933, heparin was available in small amounts, was extremely expensive and toxic, and, as a consequence, of no medical value.
Heparin production was disrupted in the 1990s. Until then, heparin had mainly been obtained from cattle tissue as a by-product of the meat industry, especially in North America. With the rapid spread of BSE, more and more manufacturers abandoned this source of supply. As a result, global heparin production became increasingly concentrated in China, where the substance was procured from the expanding industry of breeding and slaughtering hogs. The dependence of medical care on the meat industry assumed threatening proportions in the wake of the COVID-19 pandemic. In 2020, several studies demonstrated the efficacy of heparin in mitigating severe disease progression, as its anticoagulant effect counteracted the formation of immunothrombosis. However, the availability of heparin on the world market decreased, because a concurrent swine fever epidemic had killed significant portions of the Chinese hog population. The situation was further exacerbated by mass slaughterhouses around the world becoming coronavirus hotspots themselves and being forced to close temporarily. In less affluent countries, the resulting heparin shortage also worsened health care beyond the treatment of COVID-19, for example through the cancellation of cardiac surgeries.
Medical use
Heparin acts as an anticoagulant, preventing the formation of clots and extension of existing clots within the blood. Heparin itself does not break down clots that have already formed; instead, it prevents clot formation by inhibiting thrombin and other procoagulant serine proteases. Heparin is generally used for anticoagulation for the following conditions:
Acute coronary syndrome, e.g., NSTEMI
Atrial fibrillation
Deep-vein thrombosis and pulmonary embolism (both prevention and treatment)
Other thrombotic states and conditions
Cardiopulmonary bypass for heart surgery
ECMO circuit for extracorporeal life support
Hemofiltration
Indwelling central or peripheral venous catheters
Heparin and its low-molecular-weight derivatives (e.g., enoxaparin, dalteparin, tinzaparin) are effective in preventing deep vein thromboses and pulmonary emboli in people at risk, but no evidence indicates any one is more effective than the other in preventing mortality.
In angiography, a saline flush containing 2 to 5 units/mL of unfractionated heparin is used as a locking solution to prevent blood clotting in guidewires, sheaths, and catheters, and thus to prevent thrombus from dislodging from these devices into the circulatory system.
Unfractionated heparin is used in hemodialysis. Compared to low-molecular-weight heparin, unfractionated heparin does not have a prolonged anticoagulant action after dialysis and is low cost. However, its short duration of action means a continuous infusion is required to maintain its effect, and unfractionated heparin carries a higher risk of heparin-induced thrombocytopenia.
Adverse effects
A serious side-effect of heparin is heparin-induced thrombocytopenia (HIT), caused by an immunological reaction that makes platelets the target of an immune response, resulting in the degradation of platelets and thus thrombocytopenia. This condition is usually reversed on discontinuation, and in general can be avoided with the use of synthetic heparins. Not all patients with heparin antibodies will develop thrombocytopenia. Also, a benign form of thrombocytopenia is associated with early heparin use, which resolves without stopping heparin. Approximately one-third of patients diagnosed with heparin-induced thrombocytopenia will ultimately develop thrombotic complications.
Two non-hemorrhagic side effects of heparin treatment are known. The first is an elevation of serum aminotransferase levels, which has been reported in as many as 80% of patients receiving heparin. This abnormality is not associated with liver dysfunction, and it disappears after the drug is discontinued. The other complication is hyperkalemia, which occurs in 5 to 10% of patients receiving heparin, and is the result of heparin-induced aldosterone suppression. The hyperkalemia can appear within a few days after the onset of heparin therapy. More rarely, the side-effects alopecia and osteoporosis can occur with chronic use.
As with many drugs, overdoses of heparin can be fatal. In September 2006, heparin received worldwide publicity when three prematurely born infants died after they were mistakenly given overdoses of heparin at an Indianapolis hospital.
Contraindications
Heparin is contraindicated in those with risk of bleeding (especially in people with uncontrolled blood pressure, liver disease, and stroke), severe liver disease, or severe hypertension.
Antidote to heparin
Protamine sulfate has been given to counteract the anticoagulant effect of heparin (1 mg per 100 units of heparin that had been given over the past 6 hours). It may be used in those who overdose on heparin or to reverse heparin's effect when it is no longer needed.
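The 1 mg per 100 units rule above can be expressed as a small calculator. This is a hypothetical illustration of the arithmetic only; the 50 mg ceiling is an assumed cap, and real dosing follows clinical protocols:

```python
def protamine_dose_mg(heparin_units_last_6h, max_dose_mg=50.0):
    """Protamine sulfate dose: 1 mg neutralises about 100 units of heparin
    given over the preceding 6 hours. The cap is an assumed safety limit,
    not a figure from the text."""
    return min(heparin_units_last_6h / 100.0, max_dose_mg)

print(protamine_dose_mg(2500))   # 25.0
print(protamine_dose_mg(10000))  # 50.0 (capped)
```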
Physiological function
Heparin's normal role in the body is unclear. Heparin is usually stored within the secretory granules of mast cells and released only into the vasculature at sites of tissue injury. It has been proposed that rather than anticoagulation, the main purpose of heparin is defense at such sites against invading bacteria and other foreign materials. In addition, it is observed across many widely different species, including some invertebrates that do not have a similar blood coagulation system. It is a highly sulfated glycosaminoglycan. It has the highest negative charge density of any known biological molecule.
Evolutionary conservation
In addition to the bovine and porcine tissue from which pharmaceutical-grade heparin is commonly extracted, it has also been extracted and characterized from:
1. Turkey
2. Whale
3. Dromedary camel
4. Mouse
5. Humans
6. Lobster
7. Freshwater mussel
8. Clam
9. Shrimp
10. Mangrove crab
11. Sand dollar
12. Atlantic salmon
13. Zebrafish
The biological activity of heparin within species 6–11 is unclear and further supports the idea that the main physiological role of heparin is not anticoagulation. These species do not possess any blood coagulation system similar to that present within the species listed 1–5. The above list also demonstrates how heparin has been highly evolutionarily conserved, with molecules of a similar structure being produced by a broad range of organisms belonging to many different phyla.
Pharmacology
In nature, heparin is a polymer of varying chain size. Unfractionated heparin (UFH) as a pharmaceutical is heparin that has not been fractionated to sequester the fraction of molecules with low molecular weight. In contrast, low-molecular-weight heparin (LMWH) has undergone fractionation to make its pharmacodynamics more predictable. Often either UFH or LMWH can be used; in some situations one or the other is preferable.
Mechanism of action
Heparin binds to the enzyme inhibitor antithrombin III (AT), causing a conformational change that results in its activation through an increase in the flexibility of its reactive site loop. The activated AT then inactivates thrombin, factor Xa and other proteases. The rate of inactivation of these proteases by AT can increase by up to 1000-fold due to the binding of heparin. Heparin binds to AT via a specific pentasaccharide sulfation sequence contained within the heparin polymer:
GlcNAc/NS(6S)-GlcA-GlcNS(3S,6S)-IdoA(2S)-GlcNS(6S)
The conformational change in AT on heparin-binding mediates its inhibition of factor Xa. For thrombin inhibition, however, thrombin must also bind to the heparin polymer at a site proximal to the pentasaccharide. The highly negative charge density of heparin contributes to its very strong electrostatic interaction with thrombin. The formation of a ternary complex between AT, thrombin, and heparin results in the inactivation of thrombin. For this reason, heparin's activity against thrombin is size-dependent, with the ternary complex requiring at least 18 saccharide units for efficient formation. In contrast, antifactor Xa activity via AT requires only the pentasaccharide-binding site.
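The size dependence described above can be summarised as a toy classifier (the function and label names are hypothetical, used only to restate the rule):

```python
def at_mediated_activities(saccharide_units, has_at_pentasaccharide=True):
    """Which AT-mediated anticoagulant activities a heparin chain supports:
    anti-factor-Xa needs only the AT-binding pentasaccharide, while thrombin
    inhibition additionally needs >= 18 saccharide units to bridge AT and
    thrombin in the ternary complex."""
    if not has_at_pentasaccharide:
        return []                       # no AT activation at all
    if saccharide_units >= 18:
        return ["anti-Xa", "anti-IIa"]  # long chains inhibit both proteases
    return ["anti-Xa"]                  # short chains (LMWH/fondaparinux-like)

print(at_mediated_activities(5))    # ['anti-Xa'] (pentasaccharide, fondaparinux-like)
print(at_mediated_activities(20))   # ['anti-Xa', 'anti-IIa']
```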
This size difference has led to the development of low-molecular-weight heparins (LMWHs) and fondaparinux as anticoagulants. Fondaparinux targets anti-factor Xa activity rather than inhibiting thrombin activity, to facilitate a more subtle regulation of coagulation and an improved therapeutic index. It is a synthetic pentasaccharide, whose chemical structure is almost identical to the AT binding pentasaccharide sequence that can be found within polymeric heparin and heparan sulfate.
With LMWH and fondaparinux, the risk of osteoporosis and heparin-induced thrombocytopenia (HIT) is reduced. Monitoring of the activated partial thromboplastin time is also not required and does not reflect the anticoagulant effect, as APTT is insensitive to alterations in factor Xa.
Danaparoid, a mixture of heparan sulfate, dermatan sulfate, and chondroitin sulfate can be used as an anticoagulant in patients having developed HIT. Because danaparoid does not contain heparin or heparin fragments, cross-reactivity of danaparoid with heparin-induced antibodies is reported as less than 10%.
The effects of heparin are measured in the lab by the partial thromboplastin time (aPTT), one of the measures of the time it takes the blood plasma to clot. Partial thromboplastin time should not be confused with prothrombin time, or PT, which measures blood clotting time through a different pathway of the coagulation cascade.
Administration
Heparin is given parenterally because it is not absorbed from the gut, due to its high negative charge and large size. It can be injected intravenously or subcutaneously (under the skin); intramuscular injections (into muscle) are avoided because of the potential for forming hematomas. Because of its short biologic half-life of about one hour, heparin must be given frequently or as a continuous infusion. Unfractionated heparin has a half-life of about one to two hours after infusion, whereas LMWH has a half-life of four to five hours. The use of LMWH has allowed once-daily dosing, thus not requiring a continuous infusion of the drug. If long-term anticoagulation is required, heparin is often used only to commence anticoagulation therapy until an oral anticoagulant e.g. warfarin takes effect.
The American College of Chest Physicians publishes clinical guidelines on heparin dosing.
Natural degradation or clearance
Unfractionated heparin has a half-life of about one to two hours after infusion, whereas low-molecular-weight heparin's half-life is about four times longer. Lower doses of heparin have a much shorter half-life than larger ones. Heparin bound by macrophages is internalized and depolymerized by them. It also rapidly binds to endothelial cells, which precludes the binding to antithrombin that produces the anticoagulant action. At higher doses of heparin, endothelial cell binding becomes saturated, so that clearance of heparin from the bloodstream by the kidneys is a slower process.
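As a rough illustration of why UFH needs frequent dosing or continuous infusion while LMWH allows once-daily dosing, here is a first-order decay sketch. It is an approximation: the text notes that heparin clearance is saturable and dose-dependent, so a single half-life is only indicative.

```python
def fraction_remaining(t_hours, half_life_hours):
    """Fraction of a dose still active t hours after infusion,
    assuming simple first-order elimination."""
    return 0.5 ** (t_hours / half_life_hours)

# Four hours after a dose, with approximate half-lives from the text:
print(round(fraction_remaining(4, 1.5), 2))  # 0.16 (UFH, t1/2 ~ 1-2 h)
print(round(fraction_remaining(4, 4.5), 2))  # 0.54 (LMWH, t1/2 ~ 4-5 h)
```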
Chemistry
Heparin structure
Native heparin is a polymer with a molecular weight ranging from 3 to 30 kDa, although the average molecular weight of most commercial heparin preparations is in the range of 12 to 15 kDa. Heparin is a member of the glycosaminoglycan family of carbohydrates (which includes the closely related molecule heparan sulfate) and consists of a variably sulfated repeating disaccharide unit.
The main disaccharide units that occur in heparin are shown below. The most common disaccharide unit is composed of a 2-O-sulfated iduronic acid and a 6-O-sulfated, N-sulfated glucosamine, IdoA(2S)-GlcNS(6S). For example, this makes up 85% of heparins from beef lung and about 75% of those from porcine intestinal mucosa.
Not shown below are the rare disaccharides containing a 3-O-sulfated glucosamine (GlcNS(3S,6S)) or a free amine group (GlcNH3+). Under physiological conditions, the ester and amide sulfate groups are deprotonated and attract positively charged counterions to form a heparin salt. Heparin is usually administered in this form as an anticoagulant.
GlcA = β-D-glucuronic acid, IdoA = α-L-iduronic acid, IdoA(2S) = 2-O-sulfo-α-L-iduronic acid, GlcNAc = 2-deoxy-2-acetamido-α-D-glucopyranosyl, GlcNS = 2-deoxy-2-sulfamido-α-D-glucopyranosyl, GlcNS(6S) = 2-deoxy-2-sulfamido-α-D-glucopyranosyl-6-O-sulfate
One unit of heparin (the "Howell unit") is an amount approximately equivalent to 0.002 mg of pure heparin, which is the quantity required to keep 1 ml of cat's blood fluid for 24 hours at 0 °C.
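The Howell-unit figure above implies a simple mass conversion (illustrative only; modern pharmaceutical heparin units are assayed biologically rather than by mass):

```python
def howell_units_to_mg(units):
    """Convert Howell units to approximate mass of pure heparin,
    using 1 unit ~ 0.002 mg as stated in the text."""
    return units * 0.002

print(howell_units_to_mg(1000))  # 2.0 (mg)
```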
Three-dimensional structure
The three-dimensional structure of heparin is complicated because iduronic acid may be present in either of two low-energy conformations when internally positioned within an oligosaccharide. The conformational equilibrium is influenced by the sulfation state of adjacent glucosamine sugars. Nevertheless, the solution structure of a heparin dodecasaccharide composed solely of six GlcNS(6S)-IdoA(2S) repeat units has been determined using a combination of NMR spectroscopy and molecular modeling techniques. Two models were constructed, one in which all IdoA(2S) were in the 2S0 conformation (A and B below), and one in which they are in the 1C4 conformation (C and D below). However, no evidence suggests that changes between these conformations occur in a concerted fashion. These models correspond to the protein data bank code 1HPN.
The models are labelled as follows:
A = 1HPN (all IdoA(2S) residues in 2S0 conformation)
B = van der Waals radius space-filling model of A
C = 1HPN (all IdoA(2S) residues in 1C4 conformation)
D = van der Waals radius space-filling model of C
In these models, heparin adopts a helical conformation, the rotation of which places clusters of sulfate groups at regular intervals of about 17 angstroms (1.7 nm) on either side of the helical axis.
Depolymerization techniques
Either chemical or enzymatic depolymerization techniques or a combination of the two underlie the vast majority of analyses carried out on the structure and function of heparin and heparan sulfate (HS).
Enzymatic
The enzymes traditionally used to digest heparin or HS are naturally produced by the soil bacterium Pedobacter heparinus (formerly named Flavobacterium heparinum). This bacterium is capable of using either heparin or HS as its sole carbon and nitrogen source. To do so, it produces a range of enzymes such as lyases, glucuronidases, sulfoesterases, and sulfamidases. The lyases have mainly been used in heparin/HS studies. The bacterium produces three lyases, heparinases I, II and III, each with distinct substrate specificities.
The lyases cleave heparin/HS by a beta elimination mechanism. This action generates an unsaturated double bond between C4 and C5 of the uronate residue. The C4-C5 unsaturated uronate is termed ΔUA or UA. It is a sensitive UV chromophore (max absorption at 232 nm) and allows the rate of an enzyme digest to be followed, as well as providing a convenient method for detecting the fragments produced by enzyme digestion.
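Because the ΔUA product absorbs at 232 nm, the progress of a digest can be quantified with the Beer–Lambert law, c = A/(ε·l). The sketch below illustrates the arithmetic; the molar absorptivity ε ≈ 5500 M⁻¹ cm⁻¹ is a commonly used value for ΔUA-containing fragments and is an assumption here, not a figure from the text:

```python
def delta_ua_concentration_molar(absorbance_232nm, epsilon=5500.0, path_cm=1.0):
    """Molar concentration of unsaturated (delta-UA-containing) digestion
    products from absorbance at 232 nm, via Beer-Lambert: c = A / (epsilon * l).
    epsilon (M^-1 cm^-1) is an assumed literature-style value."""
    return absorbance_232nm / (epsilon * path_cm)

print(round(delta_ua_concentration_molar(0.55), 6))  # 0.0001 (i.e. ~1e-4 M)
```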
Chemical
Nitrous acid can be used to chemically depolymerize heparin/HS, either at pH 1.5 or at a higher pH of 4. Under both conditions, nitrous acid effects deaminative cleavage of the chain. At both 'high' (pH 4) and 'low' (pH 1.5) pH, deaminative cleavage occurs between GlcNS-GlcA and GlcNS-IdoA, albeit at a slower rate at the higher pH. The deamination reaction, and therefore chain cleavage, occurs regardless of the O-sulfation carried by either monosaccharide unit.
At low pH, deaminative cleavage results in the release of inorganic sulfate (SO4) and the conversion of GlcNS into anhydromannose (aMan). Low-pH nitrous acid treatment is an excellent method for distinguishing N-sulfated polysaccharides such as heparin and HS from non-N-sulfated polysaccharides such as chondroitin sulfate and dermatan sulfate, which are not susceptible to nitrous acid cleavage.
Detection in body fluids
Current clinical laboratory assays for heparin rely on an indirect measurement of the effect of the drug, rather than on a direct measure of its chemical presence. These include activated partial thromboplastin time (APTT) and antifactor Xa activity. The specimen of choice is usually fresh, nonhemolyzed plasma from blood that has been anticoagulated with citrate, fluoride, or oxalate.
Other functions
Blood specimen test tubes, vacutainers, and capillary tubes that use the lithium salt of heparin (lithium heparin) as an anticoagulant are usually marked with green stickers and green tops. Heparin has the advantage over EDTA of not affecting levels of most ions. However, the concentration of ionized calcium may be decreased if the concentration of heparin in the blood specimen is too high. Heparin can also interfere with some immunoassays. As lithium heparin is usually used, a person's lithium levels cannot be obtained from these tubes; for this purpose, royal-blue-topped (and dark green-topped) vacutainers containing sodium heparin are used.
Heparin-coated blood oxygenators are available for use in heart-lung machines. Among other things, these specialized oxygenators are thought to improve overall biocompatibility and host homeostasis by providing characteristics similar to those of native endothelium.
The DNA binding sites on RNA polymerase can be occupied by heparin, preventing the polymerase from binding to promoter DNA. This property is exploited in a range of molecular biological assays.
Common diagnostic procedures require PCR amplification of a patient's DNA, which is easily extracted from white blood cells treated with heparin. This poses a potential problem, since heparin may be extracted along with the DNA, and it has been found to interfere with the PCR reaction at levels as low as 0.002 U in a 50 μL reaction mixture.
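As a back-of-envelope check of that inhibition threshold, the equivalent concentration can be worked out from the figures in the text above. This is only an illustrative calculation, not part of any cited assay protocol:

```python
# Heparin level reported to inhibit PCR (figures from the text above)
inhibitory_units = 0.002      # U of heparin
reaction_volume_ul = 50.0     # µL reaction mixture

# Equivalent concentration threshold in U/mL
threshold_u_per_ml = inhibitory_units / (reaction_volume_ul / 1000.0)
print(f"PCR inhibition threshold ≈ {threshold_u_per_ml:.3f} U/mL")  # 0.040 U/mL
```

That is, carry-over heparin at only 0.04 U/mL of reaction volume is already enough to interfere.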
Heparin has been used as a chromatography resin, acting as both an affinity ligand and an ion exchanger. Its polyanionic structure can mimic nucleic acids like DNA and RNA, making it useful for purification of nucleic acid-binding proteins including DNA and RNA polymerases and transcription factors. Heparin's specific affinity for VSV-G, a viral envelope glycoprotein often used to pseudotype retroviral and lentiviral vectors for gene therapy, allows it to be used for downstream purification of viral vectors.
Heparin is being trialled in nasal spray form as prophylaxis against COVID-19 infection. Trials have also reported that, owing to its antiviral, anti-inflammatory, and anticoagulant effects, inhaled heparin could improve outcomes in around 70% of patients with active COVID-19 infection.
Society and culture
Contamination recalls
Considering the animal source of pharmaceutical heparin, the number of potential impurities is relatively large compared with a wholly synthetic therapeutic agent. The range of possible biological contaminants includes viruses, bacterial endotoxins, transmissible spongiform encephalopathy (TSE) agents, lipids, proteins, and DNA. During the preparation of pharmaceutical-grade heparin from animal tissues, impurities such as solvents, heavy metals, and extraneous cations can be introduced. However, the methods employed to minimize the occurrence and to identify and/or eliminate these contaminants are well established and listed in guidelines and pharmacopeias. The major challenge in the analysis of heparin impurities is the detection and identification of structurally related impurities. The most prevalent impurity in heparin is dermatan sulfate (DS), also known as chondroitin sulfate B. The building block of DS is a disaccharide composed of 1,3-linked N-acetyl galactosamine (GalN) and a uronic acid residue, connected via 1,4 linkages to form the polymer. DS is composed of three possible uronic acids (GlcA, IdoA, or IdoA2S) and four possible hexosamine (GalNAc, GalNAc4S, GalNAc6S, or GalNAc4S6S) building blocks. The presence of iduronic acid in DS distinguishes it from chondroitin sulfate A and C and likens it to heparin and HS. DS has a lower negative charge density overall compared to heparin. A common natural contaminant, DS is present at levels of 1–7% in heparin API but has no proven biological activity that influences the anticoagulation effect of heparin.
In December 2007, the US Food and Drug Administration (FDA) recalled a shipment of heparin because of bacterial growth (Serratia marcescens) in several unopened syringes of this product. S. marcescens can lead to life-threatening injuries and/or death.
2008 recall due to adulteration in drug from China
In March 2008, major recalls of heparin were announced by the FDA due to contamination of the raw heparin stock imported from China. According to the FDA, the adulterated heparin killed nearly 80 people in the United States. The adulterant was identified as an "over-sulphated" derivative of chondroitin sulfate, a popular shellfish-derived supplement often used for arthritis, which was intended to substitute for actual heparin in potency tests.
According to the New York Times: "Problems with heparin reported to the agency include difficulty breathing, nausea, vomiting, excessive sweating and rapidly falling blood pressure that in some cases led to life-threatening shock".
Use in homicide
In 2006, Petr Zelenka, a nurse in the Czech Republic, deliberately administered large doses to patients, killing seven, and attempting to kill ten others.
Overdose issues
In 2007, a nurse at Cedars-Sinai Medical Center mistakenly gave the 12-day-old twins of actor Dennis Quaid a dose of heparin that was 1,000 times the recommended dose for infants. The overdose allegedly arose because the labeling and design of the adult and infant versions of the product were similar. The Quaid family subsequently sued the manufacturer, Baxter Healthcare Corp., and settled with the hospital for $750,000. Prior to the Quaid accident, six newborn babies at Methodist Hospital in Indianapolis, Indiana, were given an overdose. Three of the babies died after the mistake.
In July 2008, another set of twins born at Christus Spohn Hospital South, in Corpus Christi, Texas, died after an accidentally administered overdose of the drug. The overdose was due to a mixing error at the hospital pharmacy and was unrelated to the product's packaging or labeling. The exact cause of the twins' deaths was under investigation.
In March 2010, a two-year-old transplant patient from Texas was given a lethal dose of heparin at the University of Nebraska Medical Center. The exact circumstances surrounding her death are still under investigation.
Production
Pharmaceutical-grade heparin is derived from mucosal tissues of slaughtered meat animals such as porcine (pig) intestines or bovine (cattle) lungs. Advances toward producing heparin synthetically were made in 2003 and 2008. In 2011, a chemoenzymatic process for synthesizing low-molecular-weight heparins from simple disaccharides was reported.
Research
As detailed in the table below, the potential is great for the development of heparin-like structures as drugs to treat a wide range of diseases, in addition to their current use as anticoagulants.
As a result of heparin's effect on such a wide variety of disease states, a number of drugs are indeed in development whose molecular structures are identical or similar to those found within parts of the polymeric heparin chain.
| Biology and health sciences | Specific drugs | Health |
238150 | https://en.wikipedia.org/wiki/Furnace%20%28central%20heating%29 | Furnace (central heating) | A furnace (American English), referred to as a heater or boiler in British English, is an appliance used to generate heat for all or part of a building. Furnaces are mostly used as a major component of a central heating system. Furnaces are permanently installed to provide heat to an interior space through intermediary fluid movement, which may be air, steam, or hot water. Heating appliances that use steam or hot water as the fluid are normally referred to as residential steam boilers or residential hot water boilers. The most common fuel source for modern furnaces in North America and much of Europe is natural gas; other common fuel sources include LPG (liquefied petroleum gas), fuel oil, wood and in rare cases coal. In some areas electrical resistance heating is used, especially where the cost of electricity is low or the primary purpose is for air conditioning. Modern high-efficiency furnaces can be up to 98% efficient and operate without a chimney, with a typical gas furnace being about 80% efficient. Waste gas and heat are mechanically ventilated through either metal flue pipes or polyvinyl chloride (PVC) pipes that can be vented through the side or roof of the structure. Fuel efficiency in a gas furnace is measured in AFUE (Annual Fuel Utilization Efficiency).
Etymology
The name derives from the Latin word fornax, which means oven.
Categories
Furnaces can be classified into four general categories, based on efficiency and design: natural draft, forced-air, forced draft, and condensing.
Natural draft
The first category of furnaces is natural draft, atmospheric burner furnaces. These furnaces consisted of cast-iron or riveted-steel heat exchangers built within an outer shell of brick, masonry, or steel. The heat exchangers were vented through brick or masonry chimneys. Air circulation depended on large, upwardly pitched pipes constructed of wood or metal. The pipes would channel the warm air into floor or wall vents inside the home. This method of heating worked because warm air rises.
The system was simple, with few controls: a single automatic gas valve and no blower. These furnaces could be made to work with any fuel simply by adapting the burner area. They have been operated with wood, coke, coal, trash, paper, natural gas, fuel oil as well as whale oil for a brief period at the turn of the century. Furnaces that used solid fuels required daily maintenance to remove ash and "clinkers" that accumulated in the bottom of the burner area. In later years, these furnaces were adapted with electric blowers to aid air distribution and speed moving heat into the home. Gas and oil-fired systems were usually controlled by a thermostat inside the home, while most wood and coal-fired furnaces had no electrical connection and were controlled by the amount of fuel in the burner and position of the fresh-air damper on the burner access door.
Forced-air
The second category of furnace is the forced-air, atmospheric burner style with a cast-iron or sectional steel heat exchanger. Through the 1950s and 1960s, this style of furnace was used to replace the big, natural draft systems, and was sometimes installed on the existing gravity duct work. The heated air was moved by belt-driven blowers designed for a wide range of speeds. These furnaces were still big and bulky compared to modern furnaces, and had heavy-steel exteriors with bolt-on removable panels. Energy efficiency would range anywhere from just over 50% to upward of 65% AFUE. This style furnace still used large, masonry or brick chimneys for flues and was eventually designed to accommodate air-conditioning systems.
Forced draft
The third category of furnace is the forced draft, mid-efficiency furnace with a steel heat exchanger and multi-speed blower. These furnaces were physically much more compact than the previous styles. They were equipped with combustion air blowers that would pull air through the heat exchanger which greatly increased fuel efficiency while allowing the heat exchangers to become smaller. These furnaces may have multi-speed blowers and were designed to work with central air-conditioning systems.
Condensing
The fourth category of furnace is the high-efficiency condensing gas furnace. High efficiency condensing gas furnaces typically achieve between 90% and 98% AFUE. A condensing gas furnace includes a sealed combustion area, combustion draft inducer and a secondary heat exchanger. The primary gain in efficiency for a condensing gas furnace, as compared to a mid-efficiency forced-air or forced-draft furnace, is the capture of latent heat from the exhaust gases in the secondary heat exchanger. The secondary heat exchanger removes most of the heat energy from the exhaust gas, actually condensing water vapour and other chemicals (which form a mild acid) as it operates. The vent pipes, also known as the exhaust system, are often installed using PVC pipe instead of metal venting pipe to prevent corrosion, but this will vary based on geographical location of the installation and local regulations. The draft inducer allows for the exhaust piping to be routed vertically or horizontally as it exits the structure. A typical installation arrangement for high-efficiency furnaces includes a fresh air intake (supply) pipe that brings fresh air from outside the home to the furnace combustion unit. Normally the fresh combustion air is routed alongside the exhaust PVC during installation and the pipes exit through a sidewall of the home in the same location. High efficiency furnaces typically deliver a 25% to 35% fuel savings over a 60% AFUE furnace.
Types of furnace output control
Single-stage
A single-stage furnace has only one stage of operation: it is either on or off. This means that it is relatively noisy, always running at the highest speed and always pumping out the hottest air at the highest velocity.
One of the benefits of a single-stage furnace is typically the cost of installation. Single-stage furnaces are relatively inexpensive since the technology is rather simple. However, the simplicity of single-stage gas furnaces comes at the cost of blower motor noise and mechanical inefficiency. The blower motors on these single-stage furnaces consume more energy overall because, regardless of the heating requirements of the space, the fan and blower motors operate at a fixed speed. Due to its one-speed operation, a single-stage furnace is also called a single-speed furnace.
Two-stage
A two-stage furnace has two stages of operation: full speed and half (or reduced) speed. Depending on the demanded heat, it can run at the lower speed most of the time. Two-stage furnaces can be quieter, move the air at lower velocity, and better maintain the desired temperature in the house.
Modulating
A modulating furnace can modulate the heat output and air velocity nearly continuously, depending on the demanded heat and outside temperature. This means that it only works as much as necessary and therefore saves energy.
Heat distribution
The furnace transfers heat to the living space of the building through an intermediary distribution system. If the distribution is through hot water (or other fluid) or through steam, then the furnace is more commonly called a boiler. One advantage of a boiler is that the furnace can provide hot water for bathing and washing dishes, rather than requiring a separate water heater. One disadvantage to this type of application is when the boiler breaks down, neither heating nor domestic hot water are available.
Air convection heating systems have been in use for over a century. Older systems rely on a passive air circulation system where the greater density of cooler air causes it to sink into the furnace area below, through air return registers in the floor, and the lesser density of warmed air causes it to rise in the ductwork; the two forces acting together to drive air circulation in a system termed 'gravity-fed'. The layout of these 'octopus' furnaces and their duct systems is optimized with various diameters of large dampered ducts.
By comparison, most modern "warm air" furnaces typically use a fan to circulate air to the rooms of the house and pull cooler air back to the furnace for reheating; this is called forced-air heat. Because the fan easily overcomes the resistance of the ductwork, the arrangement of ducts can be far more flexible than the octopus of old. In American practice, separate ducts collect cool air to be returned to the furnace. At the furnace, cool air passes into the furnace, usually through an air filter, through the blower, then through the heat exchanger of the furnace, whence it is blown throughout the building. One major advantage of this type of system is that it also enables easy installation of central air conditioning, simply by adding a cooling coil at the outlet of the furnace.
Air is circulated through ductwork, which may be made of sheet metal or plastic "flex" duct, and is insulated or uninsulated. Unless the ducts and plenum have been sealed using mastic or foil duct tape, the ductwork is likely to have a high leakage of conditioned air, possibly into unconditioned spaces. Another cause of wasted energy is the installation of ductwork in unheated areas, such as attics and crawl spaces; or ductwork of air conditioning systems in attics in warm climates.
| Technology | Household appliances | null |
238181 | https://en.wikipedia.org/wiki/Gibbs%20free%20energy | Gibbs free energy | In thermodynamics, the Gibbs free energy (or Gibbs energy as the recommended name; symbol G) is a thermodynamic potential that can be used to calculate the maximum amount of work, other than pressure–volume work, that may be performed by a thermodynamically closed system at constant temperature and pressure. It also provides a necessary condition for processes such as chemical reactions that may occur under these conditions. The Gibbs free energy is expressed as
G(p, T) = U + pV − TS = H − TS,
where:
U is the internal energy of the system
H is the enthalpy of the system
S is the entropy of the system
T is the temperature of the system
V is the volume of the system
p is the pressure of the system (which must be equal to that of the surroundings for mechanical equilibrium).
The Gibbs free energy change (ΔG, measured in joules in SI) is the maximum amount of non-volume expansion work that can be extracted from a closed system (one that can exchange heat and work with its surroundings, but not matter) at fixed temperature and pressure. This maximum can be attained only in a completely reversible process. When a system transforms reversibly from an initial state to a final state under these conditions, the decrease in Gibbs free energy equals the work done by the system on its surroundings, minus the work of the pressure forces.
The Gibbs energy is the thermodynamic potential that is minimized when a system reaches chemical equilibrium at constant pressure and temperature when not driven by an applied electrolytic voltage. Its derivative with respect to the reaction coordinate of the system then vanishes at the equilibrium point. As such, a reduction in G is necessary for a reaction to be spontaneous under these conditions.
The concept of Gibbs free energy, originally called available energy, was developed in the 1870s by the American scientist Josiah Willard Gibbs. In 1873, Gibbs described this "available energy" as
The initial state of the body, according to Gibbs, is supposed to be such that "the body can be made to pass from it to states of dissipated energy by reversible processes". In his 1876 magnum opus On the Equilibrium of Heterogeneous Substances, a graphical analysis of multi-phase chemical systems, he engaged his thoughts on chemical-free energy in full.
If the reactants and products are all in their thermodynamic standard states, then the defining equation is written as ΔG° = ΔH° − TΔS°, where ΔH° is the enthalpy change, T is the absolute temperature, and ΔS° is the entropy change.
Overview
According to the second law of thermodynamics, for systems reacting at fixed temperature and pressure without input of non-pressure-volume (pV) work, there is a general natural tendency to achieve a minimum of the Gibbs free energy.
A quantitative measure of the favorability of a given reaction under these conditions is the change ΔG (sometimes written "delta G" or "dG") in Gibbs free energy that is (or would be) caused by the reaction. As a necessary condition for the reaction to occur at constant temperature and pressure, ΔG must be smaller than the non-pressure-volume (non-pV, e.g. electrical) work, which is often equal to zero (then ΔG must be negative). ΔG equals the maximum amount of non-pV work that can be performed as a result of the chemical reaction for the case of a reversible process. If analysis indicates a positive ΔG for a reaction, then energy — in the form of electrical or other non-pV work — would have to be added to the reacting system for ΔG to be smaller than the non-pV work and make it possible for the reaction to occur.
One can think of ∆G as the amount of "free" or "useful" energy available to do non-pV work at constant temperature and pressure. The equation can be also seen from the perspective of the system taken together with its surroundings (the rest of the universe). First, one assumes that the given reaction at constant temperature and pressure is the only one that is occurring. Then the entropy released or absorbed by the system equals the entropy that the environment must absorb or release, respectively. The reaction will only be allowed if the total entropy change of the universe is zero or positive. This is reflected in a negative ΔG, and the reaction is called an exergonic process.
If two chemical reactions are coupled, then an otherwise endergonic reaction (one with positive ΔG) can be made to happen. The input of heat into an inherently endergonic reaction, such as the elimination of cyclohexanol to cyclohexene, can be seen as coupling an unfavorable reaction (elimination) to a favorable one (burning of coal or other provision of heat) such that the total entropy change of the universe is greater than or equal to zero, making the total Gibbs free energy change of the coupled reactions negative.
In traditional use, the term "free" was included in "Gibbs free energy" to mean "available in the form of useful work". The characterization becomes more precise if we add the qualification that it is the energy available for non-pressure-volume work. (An analogous, but slightly different, meaning of "free" applies in conjunction with the Helmholtz free energy, for systems at constant temperature). However, an increasing number of books and journal articles do not include the attachment "free", referring to G as simply "Gibbs energy". This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the removal of the adjective "free" was recommended. This standard, however, has not yet been universally adopted.
The name "free enthalpy" was also used for G in the past.
History
The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in the earlier years of physical chemistry to describe the force that caused chemical reactions.
In 1873, Josiah Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he sketched the principles of his new equation that was able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies composed of part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes would ensue. Further, Gibbs stated:
In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body...
Thereafter, in 1882, the German scientist Hermann von Helmholtz characterized the affinity as the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at T = constant, P = constant or Helmholtz free energy F at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or F is the amount of energy "free" for work under the given conditions.
Until this point, the general view had been such that: "all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish". Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world.
Definitions
The Gibbs free energy is defined as
G(p, T) = U + pV − TS,
which is the same as
G(p, T) = H − TS,
where:
U is the internal energy (SI unit: joule),
p is pressure (SI unit: pascal),
V is volume (SI unit: m3),
T is the temperature (SI unit: kelvin),
S is the entropy (SI unit: joule per kelvin),
H is the enthalpy (SI unit: joule).
The expression for the infinitesimal reversible change in the Gibbs free energy as a function of its "natural variables" p and T, for an open system, subjected to the operation of external forces (for instance, electrical or magnetic) Xi, which cause the external parameters of the system ai to change by an amount dai, can be derived as follows from the first law for reversible processes:
dG = −S dT + V dp + Σi μi dNi + Σi Xi dai,
where:
μi is the chemical potential of the ith chemical component. (SI unit: joules per particle or joules per mole)
Ni is the number of particles (or number of moles) composing the ith chemical component.
This is one form of the Gibbs fundamental equation. In the infinitesimal expression, the term involving the chemical potential accounts for changes in Gibbs free energy resulting from an influx or outflux of particles. In other words, it holds for an open system or for a closed, chemically reacting system where the Ni are changing. For a closed, non-reacting system, this term may be dropped.
Any number of extra terms may be added, depending on the particular system being considered. Aside from mechanical work, a system may, in addition, perform numerous other types of work. For example, in the infinitesimal expression, the contractile work energy associated with a thermodynamic system that is a contractile fiber that shortens by an amount −dl under a force f would result in a term f dl being added. If a quantity of charge −de is acquired by a system at an electrical potential Ψ, the electrical work associated with this is −Ψ de, which would be included in the infinitesimal expression. Other work terms are added on per system requirements.
Each quantity in the equations above can be divided by the amount of substance, measured in moles, to form molar Gibbs free energy. The Gibbs free energy is one of the most important thermodynamic functions for the characterization of a system. It is a factor in determining outcomes such as the voltage of an electrochemical cell, and the equilibrium constant for a reversible reaction. In isothermal, isobaric systems, Gibbs free energy can be thought of as a "dynamic" quantity, in that it is a representative measure of the competing effects of the enthalpic and entropic driving forces involved in a thermodynamic process.
The temperature dependence of the Gibbs energy for an ideal gas is given by the Gibbs–Helmholtz equation, and its pressure dependence is given by
Gm = Gm° + RT ln(p/p°),
or more conveniently as its chemical potential:
μ = μ° + RT ln(p/p°).
In non-ideal systems, fugacity comes into play.
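For an ideal gas, the pressure dependence of the chemical potential takes the standard form μ = μ° + RT ln(p/p°); the short sketch below assumes ideal behaviour (i.e. it ignores the fugacity corrections needed for non-ideal systems):

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def mu_ideal(mu_standard: float, temperature: float,
             pressure: float, p_standard: float = 1.0e5) -> float:
    """Chemical potential of an ideal gas: μ = μ° + RT ln(p/p°)."""
    return mu_standard + R * temperature * math.log(pressure / p_standard)

# Doubling the pressure at 298 K raises μ by RT ln 2, independent of μ°
delta_mu = mu_ideal(0.0, 298.0, 2.0e5) - mu_ideal(0.0, 298.0, 1.0e5)
print(f"Δμ on doubling p at 298 K: {delta_mu:.0f} J/mol")  # ≈ 1717 J/mol
```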
Derivation
The Gibbs free energy total differential with respect to natural variables may be derived by Legendre transforms of the internal energy.
The definition of G from above is
G = U + pV − TS.
Taking the total differential, we have
dG = dU + p dV + V dp − T dS − S dT.
Replacing dU with the result from the first law, dU = T dS − p dV + Σi μi dNi, gives
dG = V dp − S dT + Σi μi dNi.
The natural variables of G are then p, T, and {Ni}.
Homogeneous systems
Because S, V, and Ni are extensive variables, an Euler relation allows easy integration of dU:
U = TS − pV + Σi μi Ni.
Because some of the natural variables of G are intensive, dG may not be integrated using Euler relations as is the case with internal energy. However, simply substituting the above integrated result for U into the definition of G gives a standard expression for G:
G = Σi μi Ni.
This result shows that the chemical potential of a substance is its (partial) mol(ecul)ar Gibbs free energy. It applies to homogeneous, macroscopic systems, but not to all thermodynamic systems.
Gibbs free energy of reactions
The system under consideration is held at constant temperature and pressure, and is closed (no matter can come in or out). The Gibbs energy of any system is G = U + pV − TS, and an infinitesimal change in G, at constant temperature and pressure, yields
dG = dU + p dV − T dS.
By the first law of thermodynamics, a change in the internal energy U is given by
dU = δQ + δW,
where δQ is energy added as heat, and δW is energy added as work. The work done on the system may be written as δW = −p dV + δW′, where −p dV is the mechanical work of compression/expansion done on or by the system and δW′ is all other forms of work, which may include electrical, magnetic, etc. Then
dU = δQ − p dV + δW′,
and the infinitesimal change in G is
dG = δQ − T dS + δW′.
The second law of thermodynamics states that for a closed system at constant temperature (in a heat bath), δQ ≤ T dS, and so it follows that
dG ≤ δW′,
where δW′ is the non-pV work. Assuming that only mechanical work is done (δW′ = 0), this simplifies to
dG ≤ 0.
This means that for such a system when not in equilibrium, the Gibbs energy will always be decreasing, and in equilibrium, the infinitesimal change dG will be zero. In particular, this will be true if the system is experiencing any number of internal chemical reactions on its path to equilibrium.
In electrochemical thermodynamics
When electric charge dQele is passed between the electrodes of an electrochemical cell generating an emf ℰ, an electrical work term appears in the expression for the change in Gibbs energy:
dG = −S dT + V dp + ℰ dQele,
where S is the entropy, V is the system volume, p is its pressure and T is its absolute temperature.
The combination (ℰ, Qele) is an example of a conjugate pair of variables. At constant pressure the above equation produces a Maxwell relation that links the change in open cell voltage with temperature T (a measurable quantity) to the change in entropy S when charge is passed isothermally and isobarically. The latter is closely related to the reaction entropy of the electrochemical reaction that lends the battery its power. This Maxwell relation is:
(∂ℰ/∂T)Q,p = −(∂S/∂Qele)T,p.
If a mole of ions goes into solution (for example, in a Daniell cell, as discussed below) the charge through the external circuit is
ΔQele = −n0F0,
where n0 is the number of electrons/ion, and F0 is the Faraday constant and the minus sign indicates discharge of the cell. Assuming constant pressure and volume, the thermodynamic properties of the cell are related strictly to the behavior of its emf by
ΔH = −n0F0(ℰ − T dℰ/dT),
where ΔH is the enthalpy of reaction. The quantities on the right are all directly measurable.
Useful identities to derive the Nernst equation
During a reversible electrochemical reaction at constant temperature and pressure, the following equations involving the Gibbs free energy hold:
ΔrG = ΔrG° + RT ln Qr (see chemical equilibrium),
ΔrG° = −RT ln Keq (for a system at chemical equilibrium),
ΔrG = wele,max = −nFEcell (for a reversible electrochemical process at constant temperature and pressure),
ΔrG° = −nFE°cell (definition of E°cell),
and rearranging gives
nFEcell = nFE°cell − RT ln Qr,
Ecell = E°cell − (RT/nF) ln Qr,
which relates the cell potential resulting from the reaction to the equilibrium constant and reaction quotient for that reaction (Nernst equation),
where
ΔrG, Gibbs free energy change per mole of reaction,
ΔrG°, Gibbs free energy change per mole of reaction for unmixed reactants and products at standard conditions (i.e. 298 K, 100 kPa, 1 M of each reactant and product),
R, gas constant,
T, absolute temperature,
ln, natural logarithm,
Qr, reaction quotient (unitless),
Keq, equilibrium constant (unitless),
wele,max, electrical work in a reversible process (chemistry sign convention),
n, number of moles of electrons transferred in the reaction,
F, Faraday constant (charge per mole of electrons),
Ecell, cell potential,
E°cell, standard cell potential.
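A minimal numerical sketch of the Nernst equation using the variables defined above (the Daniell-cell values are illustrative textbook figures):

```python
import math

R = 8.314    # gas constant, J/(mol·K)
F = 96485.0  # Faraday constant, C/mol

def nernst(e_standard: float, n: int, Q: float, T: float = 298.15) -> float:
    """Cell potential from the Nernst equation: E = E° - (RT/nF) ln Q."""
    return e_standard - (R * T / (n * F)) * math.log(Q)

# Daniell cell (n = 2, E° ≈ 1.10 V) with a ten-fold product excess (Q = 10):
# the potential drops slightly below E°.
print(f"E = {nernst(1.10, 2, 10.0):.3f} V")
```

At Q = 1 the logarithm vanishes and E = E°, while Q > 1 (product-rich mixture) lowers the potential, exactly as the rearranged identity above predicts.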
Moreover, we also have
Keq = e^(−ΔrG°/RT),
which relates the equilibrium constant with Gibbs free energy. This implies that at equilibrium
Qr = Keq
and
ΔrG = 0.
Standard Gibbs energy change of formation
The standard Gibbs free energy of formation of a compound is the change of Gibbs free energy that accompanies the formation of 1 mole of that substance from its component elements, in their standard states (the most stable form of the element at 25 °C and 100 kPa). Its symbol is ΔfG˚.
All elements in their standard states (diatomic oxygen gas, graphite, etc.) have standard Gibbs free energy change of formation equal to zero, as there is no change involved.
ΔfG = ΔfG˚ + RT ln Qf,
where Qf is the reaction quotient.
At equilibrium, ΔfG = 0, and Qf = K, so the equation becomes
ΔfG˚ = −RT ln K,
where K is the equilibrium constant of the formation reaction of the substance from the elements in their standard states.
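The exponential form of this relation, K = exp(−ΔfG˚/RT), shows how sensitive the equilibrium constant is to even modest free-energy changes. A short sketch with a hypothetical ΔfG˚ value (not from this article):

```python
import math

# Sketch: equilibrium constant from a standard Gibbs energy change,
# K = exp(-dG0 / (R*T)). The dG0 below is hypothetical.
R = 8.314        # gas constant, J/(mol K)
T = 298.15       # absolute temperature, K
dG0 = -20000.0   # standard Gibbs energy change, J/mol (hypothetical)

K = math.exp(-dG0 / (R * T))
print(K)
```

A change of only −20 kJ/mol already gives K in the thousands, and each further −5.7 kJ/mol (i.e. RT ln 10 at 298 K) multiplies K by ten.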
Graphical interpretation by Gibbs
Gibbs free energy was originally defined graphically. In 1873, American scientist Willard Gibbs published his first thermodynamics paper, "Graphical Methods in the Thermodynamics of Fluids", in which Gibbs used the two coordinates of the entropy and volume to represent the state of the body. In his second follow-up paper, "A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces", published later that year, Gibbs added in the third coordinate of the energy of the body, defined on three figures. In 1874, Scottish physicist James Clerk Maxwell used Gibbs' figures to make a 3D energy-entropy-volume thermodynamic surface of a fictitious water-like substance. Thus, in order to understand the concept of Gibbs free energy, it may help to understand its interpretation by Gibbs as section AB on his figure 3, and as Maxwell sculpted that section on his 3D surface figure.
Sulidae
The bird family Sulidae comprises the gannets and boobies. Collectively called sulids, they are medium-large coastal seabirds that plunge-dive for fish and similar prey. The 10 species in this family are often considered congeneric in older sources, placing all in the genus Sula. However, Sula (true boobies) and Morus (gannets) can be readily distinguished by morphological, behavioral, and DNA sequence characters. Abbott's booby (Papasula) is given its own genus, as it stands apart from both in these respects. It appears to be a distinct and ancient lineage, perhaps closer to the gannets than to the true boobies.
Description
Sulids measure about in length and have a wingspan around . They have long, narrow, and pointed wings, and a quite long, graduated, and rather lozenge-shaped tail whose outer feathers are shorter than the central ones. Their flight muscles are rather small to allow for the small cross-section required for plunge-diving, as an adaptive trade-off relative to some sacrifice in flight performance. Consequently, they are very streamlined, reducing drag, so their bodies are "torpedo-shaped" and somewhat flat.
They have stout legs and webbed feet, with the web connecting all four toes. In some species, the webs are brightly colored and used in courtship displays. The bill is usually conspicuously colored, long, deep at the base, and pointed, with saw-like edges. The upper mandible curves down slightly at the tip and can be moved upward to accept large prey. To keep water out during plunges, the nostrils enter into the bill rather than opening to the outside directly. The eyes are angled forward, and provide a wider field of binocular vision than in most other birds.
Their plumage is either all-white (or light brownish or greyish) with dark wingtips and (usually) tail, or at least some dark brown or black above with white underparts; gannets have a yellowish hue to their heads. The face usually has some sort of black markings, typically on the lores. Unlike their relatives (the darters and cormorants), sulids have a well-developed preen gland whose waxy secretions they spread on their feathers for waterproofing and pest control. They moult their tail feathers irregularly and the flight feathers of their wings in stages, so that starting at the first moult, they always have some old feathers, some new ones, and some partly grown ones. Moult as a response to periods of stress has been recorded.
Distribution and ecology
The sulids are distributed mainly in tropical and subtropical waters, but they, particularly gannets, are found in temperate regions, too. These birds are not truly pelagic seabirds like the related Procellariiformes, and usually stay rather close to the coasts, but the abundant colonies of sulids that exist on many Pacific islands suggest that they are not infrequently blown away from their home range by storms, and can wander for long distances in search of a safe place to land if need be.
All species feed entirely at sea, mostly on mid-sized fish and similarly sized marine invertebrates (e.g. cephalopods). Many species feed communally, and some species follow fishing boats to scavenge discarded bycatch and chum. The typical hunting behavior is a dive from midair, taking the bird 1–2 m under water. If prey manages to escape the diving birds at first, they may give chase, using their legs and wings for underwater swimming.
As noted above, the behavioral traits of gannets and boobies differ considerably, but the Sulidae as a whole are characterized by several behavioral synapomorphies: Before taking off, they point their bills upwards (gannets) or forward (boobies). After landing again, they point downwards with their bills. In response to a threat, they do not attack, but shake their heads and point their bills towards the intruders.
Reproduction
All sulids breed in colonies. Males examine the colony area in flight and then pick a nest site, which they defend by fighting and territorial displays. Males then advertise to females by a special display and call. Their display behavior is characteristic, though not as diverse as the numerous variations found among the cormorants; it typically includes the male shaking his head. Females search the colony in flight and on foot for a mate. Once they select males, pairs maintain their bonds by preening each other and by frequent copulation.
The clutch is typically two eggs. The eggs are unmarked (but may become stained by debris in the nest), whitish, pale blue, green, or pink, and have a coating that resembles lime. Egg weight ranges from 3.3 to 8.0% of the female's weight. Incubation lasts 42 to 55 days, depending on the species. Both sexes incubate; like their relatives, they do not have brood patches, but their feet become vascularized and hot, and the birds place the eggs under the webs. Eggs lost during the first half of incubation are replaced.
At hatching, parents move the eggs and then the hatchlings to the tops of their webs. The young hatch naked, but soon develop white down. They beg by touching the parent's bill and take regurgitated food straight from its gape. At first, at least one parent is always in attendance of the altricial young; after two weeks, both parents leave the nest unguarded at times while they go fishing. The times for the chicks to fledge and become independent of their parents depend greatly on the food supply. Rarely does more than one chick survive to maturity, except in the Peruvian booby (Sula variegata), which has the biggest clutch (two to four eggs), and less often in the blue-footed booby (S. nebouxii). Siblicide by the stronger of two chicks is frequent.
Systematics and evolution
Sulids are related to a number of other aquatic birds, which all lack external nostrils and a brood patch, but have all four toes webbed and a gular sac. The closest living relatives of the Sulidae are the Phalacrocoracidae (cormorants and shags) and the Anhingidae (darters). The latter are somewhat intermediate between sulids and cormorants, but (like many cormorants) they are freshwater birds in a clade that otherwise contains seabirds, and are also symplesiomorphic with sulids but synapomorphic with cormorants in some other respects. Thus, the Sulidae seem to be the oldest and most distinct lineage of those three, which are united in a suborder Sulae. Therein, the Sulidae are typically placed simply as a family; sometimes, a superfamily Suloidea is recognized, wherein some of the primitive prehistoric forms (e.g. Empheresula, Eostega, and Masillastega) are placed as basal lineages distinct from the living Sulidae. However, the proposed family Pseudosulidae (or Enkurosulidae) is almost certainly invalid.
The Sulae were traditionally included in the Pelecaniformes in its obsolete paraphyletic circumscription, but pelicans, the namesake family of the Pelecaniformes, are actually more closely related to herons, ibises and spoonbills, the hamerkop, and the shoebill than to the sulids and allies. In recognition of this, the Sulae have been proposed for separation in a new order Phalacrocoraciformes, which also includes the frigatebirds (Fregatidae), as well as one or more prehistoric lineages that are entirely extinct today. The IOC World Bird List uses Suliformes as the proposed order name.
Within the family itself, three living genera—Sula (boobies, six species), Papasula (Abbott's booby), and Morus (gannets, three species)—are recognized. A 2011 study of multiple genes found Abbott's booby to be basal to all other gannets and boobies, and likely to have diverged from them around 22 million years ago, and the ancestors of the gannets and remaining boobies split around 17 million years ago. The most recent common ancestor of all boobies lived in the late Miocene around 6 million years ago, after which time the boobies steadily diverged. The gannets split more recently, only around 2.5 million years ago.
The fossil record of sulids is quite extensive due to the many Miocene/Pliocene forms that have been recovered, but the lineage of sulids extends back to the Eocene, and all things (such as the Early Eocene frigatebird Limnofregata) considered, the sulids seem to have diverged from the lineage leading to cormorants and darters around 50 million years ago (Mya), perhaps a bit earlier. The initial evolutionary radiation formed a number of genera that are now completely extinct, such as the freshwater Masillastega (which, as noted above, might not have been a modern-type sulid) or the bizarre Rhamphastosula (which had a bill shaped like an aracari's). The modern genera evolved (like many other living genera of birds) around the Oligocene-Miocene boundary about 23 Mya. Microsula, which lived during that time, seems to have been a primitive booby that still had many symplesiomorphies with gannets. Like the other Phalacrocoraciformes, the sulids originated probably in the general region of the Atlantic or western Tethys Sea – probably the latter rather than the former, given that their earliest fossils are abundant in Europe, but absent from the well-studied contemporary American deposits.
Prehistoric sulids (or suloids) only known from fossils are:
Masillastega (Early Eocene of Messel, Germany) – may belong in Eostega
Eostega (Late Eocene of Cluj-Manastur, Romania) – may include Masillastega
Sulidae gen. et sp. indet. (Thalberg Late Oligocene of Germany) – Empheresula?
Sulidae gen. et sp. indet. (Late Oligocene of South Carolina, United States) – Microsula?
Empheresula (Late Oligocene of Gannat, France – Middle Miocene of Steinheimer Becken, Germany) – including "Sula" arvernensis, "Parasula"
Microsula (Late Oligocene of South Carolina, United States – Grund Middle Miocene of Austria) – may belong in Morus or Sula, includes "Sula" avita, "S." pygmaea, Enkurosula, "Pseudosula"
Sarmatosula (Middle Miocene of Credinţa, Romania)
Miosula (Late Miocene of California)
Paleosula (Early Pliocene? of California)
Rhamphastosula (Pisco Early Pliocene of SC Peru)
Bimbisula (middle Pliocene of South Carolina)
Sulidae gen. et sp. indet. (Late Pliocene of Valle di Fine, Italy) – Morus?
For prehistoric species of the extant genera, see the genus articles.
The Early Oligocene Prophalacrocorax ronzoni of Ronzon, France, was variously placed in the seaduck genus Mergus, in Sula, and after a distinct genus was established for it, in the Phalacrocoracidae. While it is quite likely to belong in the Sulae and may have been an ancient sulid (or suloid), of the three placements explicitly proposed, none seems to be correct.
Megatsunami
A megatsunami is a very large wave created by a large, sudden displacement of material into a body of water.
Megatsunamis have different features from ordinary tsunamis. Ordinary tsunamis are caused by underwater tectonic activity (movement of the earth's plates) and therefore occur along plate boundaries and as a result of earthquakes and the subsequent rise or fall in the sea floor that displaces a volume of water. Ordinary tsunamis exhibit shallow waves in the deep waters of the open ocean that increase dramatically in height upon approaching land to a maximum run-up height of around in the cases of the most powerful earthquakes. By contrast, megatsunamis occur when a large amount of material suddenly falls into water or anywhere near water (such as via a landslide, meteor impact, or volcanic eruption). They can have extremely large initial wave heights in the hundreds of metres, far beyond the height of any ordinary tsunami. These giant wave heights occur because the water is "splashed" upwards and outwards by the displacement.
Examples of modern megatsunamis include the one associated with the 1883 eruption of Krakatoa (volcanic eruption), the 1958 Lituya Bay megatsunami (a landslide which caused an initial wave of ), and the 1963 Vajont Dam landslide (caused by human activity destabilizing sides of valley). Prehistoric examples include the Storegga Slide (landslide), and the Chicxulub, Chesapeake Bay, and Eltanin meteor impacts.
Overview
A megatsunami is a tsunami with an initial wave amplitude (height) measured in many tens or hundreds of metres. The term "megatsunami" has been defined by media and has no precise definition, although it is commonly taken to refer to tsunamis over high. A megatsunami is a separate class of event from an ordinary tsunami and is caused by different physical mechanisms.
Normal tsunamis result from displacement of the sea floor due to movements in the Earth's crust (plate tectonics). Powerful earthquakes may cause the sea floor to displace vertically on the order of tens of metres, which in turn displaces the water column above and leads to the formation of a tsunami. Ordinary tsunamis have a small wave height offshore and generally pass unnoticed at sea, forming only a slight swell on the order of above the normal sea surface. In deep water it is possible that a tsunami could pass beneath a ship without the crew of the vessel noticing. As it approaches land, the wave height of an ordinary tsunami increases dramatically as the sea floor slopes upward and the base of the wave pushes the water column above it upwards. Ordinary tsunamis, even those associated with the most powerful megathrust earthquakes, typically do not reach heights in excess of .
By contrast, megatsunamis are caused by landslides and other impact events that displace large volumes of water, resulting in waves that may exceed the height of an ordinary tsunami by tens or even hundreds of metres. Underwater earthquakes or volcanic eruptions do not normally generate megatsunamis, but landslides next to bodies of water resulting from earthquakes or volcanic eruptions can, since they cause a much larger amount of water displacement. If the landslide or impact occurs in a limited body of water, as happened at the Vajont Dam (1963) and in Lituya Bay (1958) then the water may be unable to disperse and one or more exceedingly large waves may result.
Submarine landslides can pose a significant hazard when they cause a tsunami. Although a variety of different types of landslides can cause tsunamis, the resulting tsunamis all have similar features, such as large run-ups close to the source but quicker attenuation, compared to tsunamis caused by earthquakes. An example was the 17 July 1998 Papua New Guinea landslide tsunami, where waves up to 15 m high impacted a 20 km section of the coast, killing 2,200 people, yet at greater distances the tsunami was not a major hazard. This is due to the comparatively small source area of most landslide tsunamis (relative to the area affected by large earthquakes), which causes the generation of shorter-wavelength waves. These waves are greatly affected by coastal amplification (which amplifies the local effect) and radial damping (which reduces the distal effect).
The size of landslide-generated tsunamis depends both on the geological details of the landslide (such as its Froude number) and on assumptions about the hydrodynamics of the model used to simulate tsunami generation, so they have a large margin of uncertainty. Generally, landslide-induced tsunamis decay more quickly with distance than earthquake-induced tsunamis: the former, often having a dipole structure at the source, tend to spread out radially and have shorter wavelengths (the rate at which a wave loses energy is inversely proportional to its wavelength; in other words, the longer the wavelength, the more slowly it loses energy), while the latter disperse little as they propagate away perpendicularly to the source fault. Testing whether a given tsunami model is correct is complicated by the rarity of giant collapses.
Recent findings show that the nature of a tsunami depends upon the volume, velocity, initial acceleration, length, and thickness of the contributing landslide. Volume and initial acceleration are the key factors that determine whether a landslide will form a tsunami. A sudden deceleration of the landslide may also result in larger waves. The length of the slide influences both the wavelength and the maximum wave height. The travel time, or run-out distance, of the slide also influences the resulting tsunami's wavelength. In most cases submarine landslides are noticeably subcritical, that is, the Froude number (the ratio of slide speed to wave propagation speed) is significantly less than one. This suggests that the tsunami will move away from the wave-generating slide, preventing the buildup of the wave. Failures in shallow water tend to produce larger tsunamis because the wave is more critical there, as the speed of propagation is lower. Furthermore, shallower waters are generally closer to the coast, meaning there is less radial damping by the time the tsunami reaches the shore. Conversely, tsunamis triggered by earthquakes are more critical when the seabed displacement occurs in the deep ocean, as the first wave (which is less affected by depth) has a shorter wavelength and is enlarged when travelling from deeper to shallower waters.
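The subcritical-versus-critical distinction above can be made concrete with the shallow-water wave speed c = √(g·depth). The slide speed and depths below are assumed round numbers chosen purely for illustration, not values from any particular event:

```python
import math

# Illustration (assumed values): Froude number of a submarine slide,
# Fr = slide speed / shallow-water wave speed, with c = sqrt(g * depth).
g = 9.81  # gravitational acceleration, m/s^2

def froude(slide_speed_m_s, depth_m):
    """Ratio of slide speed to the local shallow-water wave speed."""
    return slide_speed_m_s / math.sqrt(g * depth_m)

# The same 30 m/s slide is strongly subcritical in deep water but
# near-critical in shallow water, favouring wave build-up near shore:
print(froude(30.0, 2000.0))  # deep water: wave easily outruns the slide
print(froude(30.0, 100.0))   # shallow water: slide nearly keeps pace
```

Because the wave speed scales with the square root of depth, a slide of fixed speed moves closer to criticality (Fr → 1) as the water shallows, which is one reason shallow-water failures tend to produce larger tsunamis.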
Determining a height range typical of megatsunamis is a complex and scientifically debated topic. It is complicated further by the fact that two different heights are often reported for tsunamis: the height of the wave itself in open water, and the height to which it surges when it encounters land. Depending upon the locale, this second or so-called "run-up height" can be several times larger than the wave's height just before reaching shore. While there is currently no minimum or average height classification for megatsunamis that is broadly accepted by the scientific community, the limited number of observed megatsunami events in recent history have all had run-up heights that exceeded . The megatsunami in Spirit Lake, Washington, USA, that was caused by the 1980 eruption of Mount St. Helens reached , while the tallest megatsunami ever recorded (Lituya Bay in 1958) reached a run-up height of . It is also possible that much larger megatsunamis occurred in prehistory; researchers analyzing the geological structures left behind by prehistoric asteroid impacts have suggested that these events could have resulted in megatsunamis that exceeded in height.
Recognition of the concept of megatsunami
Before the 1950s, scientists had theorized that tsunamis orders of magnitude larger than those observed with earthquakes could have occurred as a result of ancient geological processes, but no concrete evidence of the existence of these "monster waves" had yet been gathered. Geologists searching for oil in Alaska in 1953 observed that in Lituya Bay, mature tree growth did not extend to the shoreline as it did in many other bays in the region. Rather, there was a band of younger trees closer to the shore. Forestry workers, glaciologists, and geographers call the boundary between these bands a trim line. Trees just above the trim line showed severe scarring on their seaward side, while those from below the trim line did not. This indicated that a large force had impacted all of the elder trees above the trim line, and presumably had killed off all the trees below it. Based on this evidence, the scientists hypothesized that there had been an unusually large wave or waves in the deep inlet. Because this is a recently deglaciated fjord with steep slopes and crossed by a major fault (the Fairweather Fault), one possibility was that this wave was a landslide-generated tsunami.
On 9 July 1958, a 7.8 strike-slip earthquake in southeast Alaska caused of rock and ice to drop into the deep water at the head of Lituya Bay. The block fell almost vertically and hit the water with sufficient force to create a wave that surged up the opposite side of the head of the bay to a height of , and was still many tens of metres high further down the bay when it carried eyewitnesses Howard Ulrich and his son Howard Jr. over the trees in their fishing boat. They were washed back into the bay and both survived.
Analysis of mechanism
The mechanism giving rise to megatsunamis was analysed for the Lituya Bay event in a study presented at the Tsunami Society in 1999; this model was considerably developed and modified by a second study in 2010.
Although the earthquake which caused the megatsunami was considered very energetic, it was determined that it could not have been the sole contributor based on the measured height of the wave. Neither water drainage from a lake, nor a landslide, nor the force of the earthquake itself were sufficient to create a megatsunami of the size observed, although all of these may have been contributing factors.
Instead, the megatsunami was caused by a combination of events in quick succession. The primary event occurred in the form of a large and sudden impulsive impact when about 40 million cubic yards of rock several hundred metres above the bay was fractured by the earthquake, and fell "practically as a monolithic unit" down the almost-vertical slope and into the bay. The rockfall also caused air to be "dragged along" due to viscosity effects, which added to the volume of displacement, and further impacted the sediment on the floor of the bay, creating a large crater. The study concluded that this combination of an impulsive rockfall, entrained air, and cratering of the bay floor accounted for the observed size of the wave.
A 2010 model examined the amount of infill on the floor of the bay, which was many times larger than that of the rockfall alone, as well as the energy and height of the waves and the accounts given by eyewitnesses. It concluded that there had been a "dual slide": a rockfall which also triggered the release of 5 to 10 times its volume of sediment trapped by the adjacent Lituya Glacier, as an almost immediate and many times larger second slide, a ratio comparable with other events where this "dual slide" effect is known to have happened.
Examples
Prehistoric
An astronomical object between wide traveling at per second struck the Earth 3.26 billion years ago east of what is now Johannesburg, South Africa, near South Africa's border with Eswatini, in what was then an Archean ocean that covered most of the planet, creating a crater about wide. The impact generated a megatsunami that probably extended to a depth of thousands of meters beneath the surface of the ocean and rose to the height of a skyscraper when it reached shorelines. The resultant event created the Barberton Greenstone Belt.
The asteroid linked to the extinction of the dinosaurs, which created the Chicxulub crater in the Yucatán Peninsula approximately 66 million years ago, would have caused a megatsunami over tall. The height of the tsunami was limited by the relatively shallow sea in the area of the impact; had the asteroid struck in the deep sea, the megatsunami would have been tall. The mechanisms triggering megatsunamis included the direct impact, shockwaves, water returning to the crater and surging outward again, and seismic waves with a magnitude up to ~11. A more recent simulation of the global effects of the Chicxulub megatsunami showed an initial wave height of , with later waves up to in height in the Gulf of Mexico, and up to in the North Atlantic and South Pacific; the discovery of mega-ripples in Louisiana via seismic imaging data, with average wavelengths of and average wave heights of , appears to confirm this. David Shonting and Cathy Ezrailson propose an "Edgerton effect" mechanism for generating the megatsunami, similar to a milk drop falling on water and throwing up a crown-shaped water column, with a height comparable to the Chicxulub impactor's, for the initial seawater forced outward by the explosion and blast waves; the collapse of this column then triggered megatsunamis whose height varied with the water depth. Furthermore, the initial shock wave of the impact triggered seismic waves that produced giant landslides and slumping around the region (the largest known event deposits on Earth), with subsequent megatsunamis of various sizes, and seiches of in Tanis, away, part of a vast inland sea at the time, triggered directly by seismic shaking from the impact within a few minutes.
During the Messinian (ca. 7.25–ca. 5.3 million years ago) various megatsunamis likely struck the coast of northern Chile.
Reservoir-induced seismicity at the end of or shortly after the Zanclean Flood (ca. 5.33 million years ago), which rapidly filled the Mediterranean Basin with water from the Atlantic Ocean, created a megatsunami with a height of nearly which struck the coast of Spain near what is now Algeciras.
A megatsunami affected the coast of south–central Chile in the Pliocene as evidenced by the sedimentary record of the Ranquil Formation.
The Eltanin impact in the southeast Pacific Ocean 2.5 million years ago caused a megatsunami that was over high in southern Chile and the Antarctic Peninsula; the wave swept across much of the Pacific Ocean.
The northern half of the East Molokai Volcano on Molokai in Hawaii suffered a catastrophic collapse about 1.5 million years ago, generating a megatsunami, and now lies as a debris field scattered northward across the ocean bottom, while what remains on the island are the highest sea cliffs in the world. The megatsunami may have reached a height of near its origin and reached California and Mexico.
The existence of large scattered boulders in only one of the four marine terraces of Herradura Bay south of the Chilean city of Coquimbo has been interpreted by Roland Paskoff as the result of a mega-tsunami that occurred in the Middle Pleistocene.
In Hawaii, a megatsunami at least in height deposited marine sediments at a modern-day elevation of – above sea level at the time the wave struck – on Lanai about 105,000 years ago. The tsunami also deposited such sediments at an elevation of on Oahu, Molokai, Maui, and the island of Hawaii.
The collapse of the ancestral Mount Amarelo on Fogo in the Cape Verde Islands about 73,000 years ago triggered a megatsunami which struck Santiago, away, with a height of at least and a run-up height of over .
A major collapse of the western edge of the Lake Tahoe basin, a landslide with a volume of which formed McKinney Bay between 21,000 and 12,000 years ago, generated megatsunamis/seiche waves with an initial height of probably about and caused the lake's water to slosh back and forth for days. Much of the water in the megatsunamis washed over the lake's outlet at what is now Tahoe City, California, and flooded down the Truckee River, carrying house-sized boulders as far downstream as the California-Nevada border at what is now Verdi, California.
In the North Sea, the Storegga Slide caused a megatsunami approximately 8,200 years ago. It is estimated to have completely flooded the remainder of Doggerland.
Around 6370 BCE, a landslide on the eastern slope of Mount Etna in Sicily into the Mediterranean Sea triggered a megatsunami in the Eastern Mediterranean with an initial wave height along the eastern coast of Sicily of . It struck the Neolithic village of Atlit Yam off the coast of Israel with a height of , prompting the village's abandonment.
Around 5650 B.C., a landslide in Greenland created a megatsunami with a run-up height on Alluttoq Island of .
Around 5350 B.C., a landslide in Greenland created a megatsunami with a run-up height on Alluttoq Island of .
Historic
c. 2000 BC: Réunion
A landslide on Réunion island, to the east of Madagascar, may have caused a megatsunami.
c. 1600 BC: Santorini
The Thera volcano erupted, the force of the eruption causing megatsunamis which affected the whole Aegean Sea and the eastern Mediterranean Sea.
c. 1100 BC: Lake Crescent
An earthquake generated the Sledgehammer Point Rockslide, which fell from Mount Storm King in what is now Washington in the United States and entered waters at least deep in Lake Crescent, generating a megatsunami with an estimated maximum run-up height of .
Modern
1674: Ambon Island, Banda Sea
On 17 February 1674, between 19:30 and 20:00 local time, an earthquake struck the Maluku Islands. Ambon Island received run-up heights of , making the wave far too large to be caused by the quake itself. Instead, it was probably the result of an underwater landslide triggered by the earthquake. The quake and tsunami killed 2,347 people.
1731: Storfjorden, Norway
At 10:00 p.m. on 8 January 1731, a landslide with a volume of possibly fell from the mountain Skafjell from a height of into the Storfjorden opposite Stranda, Norway. The slide generated a megatsunami in height that struck Stranda, flooding the area for inland and destroying the church and all but two boathouses, as well as many boats. Damaging waves struck as far away as Ørskog. The waves killed 17 people.
1741: Oshima-Ōshima, Sea of Japan
An eruption of Oshima-Ōshima occurred that lasted from 18 August 1741 to 1 May 1742. On 29 August 1741, a devastating tsunami occurred. It killed at least 1,467 people along a section of the coast, excluding native residents whose deaths were not recorded. Wave heights for Gankakezawa have been estimated at based on oral histories, while an estimate of is derived from written records. At Sado Island, over away, a wave height of has been estimated based on descriptions of the damage, while oral records suggest a height of . Wave heights have been estimated at even as far away as the Korean Peninsula. There is still no consensus in the debate as to what caused it but much evidence points to a landslide and debris avalanche along the flank of the volcano. An alternative hypothesis holds that an earthquake caused the tsunami. The event reduced the elevation of the peak of Hishiyama from . An estimated section of the volcano collapsed onto the seafloor north of the island; the collapse was similar in size to the collapse which occurred during the 1980 eruption of Mount St. Helens.
1756: Langfjorden, Norway
Just before 8:00 p.m. on 22 February 1756, a landslide with a volume of travelled at high speed from a height of on the side of the mountain Tjellafjellet into the Langfjorden about west of Tjelle, Norway, between Tjelle and Gramsgrø. The slide generated three megatsunamis in the Langfjorden and the Eresfjorden with heights of . The waves flooded the shore for inland in some areas, destroying farms and other inhabited areas. Damaging waves struck as far away as Veøy, from the landslide – where they washed inland above normal flood levels – and Gjermundnes, from the slide. The waves killed 32 people and destroyed 168 buildings, 196 boats, large amounts of forest, and roads and boat landings.
1792: Mount Unzen, Japan
On 21 May 1792, a flank of the Mayuyama dome of Mount Unzen collapsed after two large earthquakes. This had been preceded by a series of earthquakes coming from the mountain, beginning near the end of 1791. Initial wave heights were , but when they hit the other side of Ariake Bay, they were only in height, though one location received waves due to seafloor topography. The waves bounced back and struck Shimabara, accounting for about half of the tsunami's victims. According to estimates, 10,000 people were killed by the tsunami, and a further 5,000 were killed by the landslide. As of 2011, it was the deadliest known volcanic event in Japan.
1853–1854: Lituya Bay, Alaska
Sometime between August 1853 and May 1854, a megatsunami occurred in Lituya Bay in what was then Russian America. Studies of Lituya Bay between 1948 and 1953 first identified the event, which probably occurred because of a large landslide on the south shore of the bay near Mudslide Creek. The wave had a maximum run-up height of , flooding the coast of the bay up to inland.
1874: Lituya Bay, Alaska
A study of Lituya Bay in 1953 concluded that sometime around 1874, perhaps in May 1874, another megatsunami occurred in Lituya Bay in Alaska. Probably occurring because of a large landslide on the south shore of the bay in the Mudslide Creek Valley, the wave had a maximum run-up height of , flooding the coast of the bay up to inland.
1883: Krakatoa, Sunda Strait
The massive explosion of Krakatoa created pyroclastic flows which generated megatsunamis when they hit the waters of the Sunda Strait on 27 August 1883. The waves reached heights of up to 24 metres (79 feet) along the south coast of Sumatra and up to 42 metres (138 feet) along the west coast of Java. The tsunamis were powerful enough to kill over 30,000 people, and their effect was such that the human settlements in one area of Banten were wiped out and never repopulated. (This area rewilded and was later declared a national park.) The steamship Berouw, a colonial gunboat, was flung over a mile (1.6 km) inland on Sumatra by the wave, killing its entire crew. Two thirds of the island collapsed into the sea after the event. Groups of human skeletons were found floating on pumice numerous times, up to a year after the event. The eruption also generated what is often called the loudest sound in history, which was heard away on Rodrigues in the Indian Ocean.
1905: Lovatnet, Norway
On 15 January 1905, a landslide on the slope of the mountain Ramnefjellet with a volume of fell from a height of into the southern end of the lake Lovatnet in Norway, generating three megatsunamis of up to in height. The waves destroyed the villages of Bødal and Nesdal near the southern end of the lake, killing 61 people – half their combined population – and 261 farm animals and destroying 60 houses, all the local boathouses, and 70 to 80 boats, one of which – the tourist boat Lodalen – was thrown inland by the last wave and wrecked. At the northern end of the long lake, a wave measured at almost destroyed a bridge.
1905: Disenchantment Bay, Alaska
On 4 July 1905, an overhanging glacier – since known as the Fallen Glacier – broke loose, slid out of its valley, and fell down a steep slope into Disenchantment Bay in Alaska, clearing vegetation along a path wide. When it entered the water, it generated a megatsunami which broke tree branches above ground level away. The wave killed vegetation to a height of at a distance of from the landslide, and it reached heights of at different locations on the coast of Haenke Island. At a distance of from the slide, observers at Russell Fjord reported a series of large waves that caused the water level to rise and fall for a half-hour.
1934: Tafjorden, Norway
On 7 April 1934, a landslide on the slope of the mountain Langhamaren with a volume of fell from a height of about into the Tafjorden in Norway, generating three megatsunamis, the last and largest of which reached a height of between on the opposite shore. Large waves struck Tafjord and Fjørå. At Tafjord, the last and largest wave was tall and struck at an estimated speed of , flooding the town for inland and killing 23 people. At Fjørå, waves reached , destroyed buildings, removed all soil, and killed 17 people. Damaging waves struck as far as away, and waves were detected at a distance of from the landslide. One survivor suffered serious injuries requiring hospitalization.
1936: Lovatnet, Norway
On 13 September 1936, a landslide on the slope of the mountain Ramnefjellet with a volume of fell from a height of into the southern end of the lake Lovatnet in Norway, generating three megatsunamis, the largest of which reached a height of . The waves destroyed all farms at Bødal and most farms at Nesdal – completely washing away 16 farms – as well as 100 houses, bridges, a power station, a workshop, a sawmill, several grain mills, a restaurant, a schoolhouse, and all boats on the lake. A wave struck the southern end of the long lake and caused damaging flooding in the Loelva River, the lake's northern outlet. The waves killed 74 people and severely injured 11.
1936: Lituya Bay, Alaska
On 27 October 1936, a megatsunami occurred in Lituya Bay in Alaska with a maximum run-up height of in Crillon Inlet at the head of the bay. The four eyewitnesses to the wave in Lituya Bay itself all survived and described it as between high. The maximum inundation distance was inland along the north shore of the bay. The cause of the megatsunami remains unclear, but may have been a submarine landslide.
1958: Lituya Bay, Alaska, US
On 9 July 1958, a giant landslide at the head of Lituya Bay in Alaska, caused by an earthquake, generated a wave that washed out trees to a maximum elevation of at the entrance of Gilbert Inlet. The wave surged over the headland, stripping trees and soil down to bedrock, and surged along the fjord which forms Lituya Bay, destroying two fishing boats anchored there and killing two people. This was the highest wave of any kind ever recorded. The subsequent study of this event led to the establishment of the term "megatsunami," to distinguish it from ordinary tsunamis.
1963: Vajont Dam, Italy
On 9 October 1963, a landslide above Vajont Dam in Italy produced a surge that overtopped the dam and destroyed the villages of Longarone, Pirago, Rivalta, Villanova, and Faè, killing nearly 2,000 people. This is currently the only known example of a megatsunami that was indirectly caused by human activities.
1964: Valdez Arm, Alaska
On 27 March 1964, the 1964 Alaska earthquake triggered a landslide that generated a megatsunami which reached a height of in the Valdez Arm of Prince William Sound in Southcentral Alaska.
1980: Spirit Lake, Washington, US
On 18 May 1980, the upper of Mount St. Helens collapsed, creating a landslide. This released the pressure on the magma trapped beneath the summit bulge which exploded as a lateral blast, which then released the pressure on the magma chamber and resulted in a plinian eruption.
One lobe of the avalanche surged onto Spirit Lake, causing a megatsunami which pushed the lake waters in a series of surges, which reached a maximum height of above the pre-eruption water level (about ASL). Above the upper limit of the tsunami, trees lie where they were knocked down by the pyroclastic surge; below the limit, the fallen trees and the surge deposits were removed by the megatsunami and deposited in Spirit Lake.
2000: Paatuut, Greenland
On 21 November 2000, a landslide composed of of rock with a mass of 260,000,000 tons fell from an elevation of at Paatuut on the Nuussuaq Peninsula on the west coast of Greenland, reaching a speed of . About of material with a mass of 87,000,000 tons entered Sullorsuaq Strait (known in Danish as Vaigat Strait), generating a megatsunami. The wave had a run-up height of near the landslide and at Qullissat, the site of an abandoned settlement across the strait on Disko Island, away, where it inundated the coast as far as inland. Refracted energy from the tsunami created a wave that destroyed boats at the closest populated village, Saqqaq, on the southwestern coast of the Nuussuaq Peninsula from the landslide.
2007: Chehalis Lake, British Columbia, Canada
On 4 December 2007, a landslide composed of of rock and debris fell from an elevation of on the slope of Mount Orrock on the western shore of Chehalis Lake. The landslide entered the deep lake, generating a megatsunami with a run-up height of on the opposite shore and at the lake's exit point away to the south. The wave then continued down the Chehalis River for about .
2015: Taan Fiord, Alaska, US
At 8:19 p.m. Alaska Daylight Time on 17 October 2015, the side of a mountain collapsed at the head of Taan Fiord, a finger of Icy Bay in Alaska. Some of the resulting landslide came to rest on the toe of Tyndall Glacier, but about of rock with a volume of about fell into the fjord. The landslide generated a megatsunami with an initial height of about that struck the opposite shore of the fjord, with a run-up height there of .
Over the next 12 minutes, the wave travelled down the fjord at a speed of up to , with run-up heights of over in the upper fjord to between or more in its middle section, and or more at its mouth. Still probably tall when it entered Icy Bay, the tsunami inundated parts of Icy Bay's shoreline with run-ups of before dissipating into insignificance at distances of from the mouth of Taan Fiord, although the wave was detected away.
Occurring in an uninhabited area, the event was unwitnessed, and several hours passed before the signature of the landslide was noticed on seismographs at Columbia University in New York City.
2017: Karrat Fjord, Greenland
On 17 June 2017, of rock on the mountain Ummiammakku fell from an elevation of roughly into the waters of the Karrat Fjord. The event was thought to be caused by melting ice that destabilised the rock. It registered as a magnitude 4.1 earthquake and created a wave. The settlement of Nuugaatsiaq, away, saw run-up heights of . Eleven buildings were swept into the sea, four people died, and 170 residents of Nuugaatsiaq and Illorsuit were evacuated because of a danger of additional landslides and waves. The tsunami was noted at settlements as far as away.
2020: Elliot Creek, British Columbia, Canada
On 28 November 2020, unseasonably heavy rainfall triggered a landslide of into a glacial lake at the head of Elliot Creek. The sudden displacement of water generated a high megatsunami that cascaded down Elliot Creek and the Southgate River to the head of Bute Inlet, covering a total distance of over . The event generated a magnitude 5.0 earthquake and destroyed over of salmon habitat along Elliot Creek.
2023: Dickson Fjord, Greenland
On 16 September 2023 a large landslide originating above sea level entered Dickson Fjord, triggering a tsunami exceeding in run-up. Run-up of was observed along a stretch of coast. There was no major damage and there were no casualties. The tsunami was followed by a seiche that produced a nine-day oscillation recorded by seismic instruments globally.
Potential future megatsunamis
In a BBC television documentary broadcast in 2000, experts said that they thought that a landslide on a volcanic ocean island is the most likely future cause of a megatsunami. The size and power of a wave generated by such means could produce devastating effects, travelling across oceans and inundating up to inland from the coast. This research was later found to be flawed. The documentary was produced before the experts' scientific paper was published and before responses were given by other geologists. There have been megatsunamis in the past, and future megatsunamis are possible but current geological consensus is that these are only local. A megatsunami in the Canary Islands would diminish to a normal tsunami by the time it reached the continents. Also, the current consensus for La Palma is that the region conjectured to collapse is too small and too geologically stable to do so in the next 10,000 years, although there is evidence for past megatsunamis local to the Canary Islands thousands of years ago. Similar remarks apply to the suggestion of a megatsunami in Hawaii.
British Columbia
Some geologists consider an unstable rock face at Mount Breakenridge, above the north end of the giant fresh-water fjord of Harrison Lake in the Fraser Valley of southwestern British Columbia, Canada, to be unstable enough to collapse into the lake, generating a megatsunami that might destroy the town of Harrison Hot Springs (located at its south end).
Canary Islands
Geologists Dr. Simon Day and Dr. Steven Neal Ward consider that a megatsunami could be generated during an eruption of Cumbre Vieja on the volcanic ocean island of La Palma, in the Canary Islands, Spain. Day and Ward hypothesize that if such an eruption causes the western flank to fail, a megatsunami could be generated.
In 1949, an eruption occurred at three of the volcano's vents: Duraznero, Hoyo Negro, and Llano del Banco. A local geologist, Juan Bonelli-Rubio, witnessed the eruption and recorded details on various phenomena related to the eruption. Bonelli-Rubio visited the summit area of the volcano and found that a fissure about long had opened on the east side of the summit. As a result, the western half of the volcano, which is the volcanically active arm of a triple-armed rift, had slipped approximately downwards and westwards towards the Atlantic Ocean.
In 1971, an eruption occurred at the Teneguía vent at the southern end of the sub-aerial section of the volcano without any movement. The section affected by the 1949 eruption is currently stationary and does not appear to have moved since the initial rupture.
Cumbre Vieja remained dormant until an eruption began on 19 September 2021.
It is likely that several eruptions would be required before failure would occur on Cumbre Vieja. The western half of the volcano has an approximate volume of and an estimated mass of . If it were to catastrophically slide into the ocean, it could generate a wave with an initial height of about at the island, and a likely height of around along the Caribbean and the eastern North American seaboard when it runs ashore eight or more hours later. Tens of millions of lives could be lost in the cities and towns of St. John's, Halifax, Boston, New York, Baltimore, Washington, D.C., Miami, Havana and the rest of the eastern coasts of the United States and Canada, as well as many other cities on the Atlantic coast in Europe, South America and Africa. The likelihood of this happening is a matter of vigorous debate.
Geologists and volcanologists are in general agreement that the initial study was flawed. The current geology does not suggest that a collapse is imminent. Indeed, it seems to be geologically impossible right now: the region conjectured as prone to collapse is too small and too stable to collapse within the next 10,000 years. A closer study of deposits left in the ocean from previous landslides suggests that a landslide would likely occur as a series of smaller collapses rather than a single landslide. A megatsunami does seem possible locally in the distant future as there is geological evidence from past deposits suggesting that a megatsunami occurred with marine material deposited above sea level between 32,000 and 1.75 million years ago. This seems to have been local to Gran Canaria.
Day and Ward have admitted that their original analysis of the danger was based on several worst case assumptions. A 2008 study examined this scenario and concluded that while it could cause a megatsunami, it would be local to the Canary Islands and would diminish in height, becoming a smaller tsunami by the time it reached the continents as the waves interfered and spread across the oceans.
Hawaii
Sharp cliffs and associated ocean debris at the Kohala Volcano, Lanai and Molokai indicate that landslides from the flank of the Kilauea and Mauna Loa volcanoes in Hawaii may have triggered past megatsunamis, most recently at 120,000 BP. A tsunami event is also possible, with the tsunami potentially reaching up to about in height. According to the documentary National Geographic's Ultimate Disaster: Tsunami, if a big landslide occurred at Mauna Loa or the Hilina Slump, a tsunami would take only thirty minutes to reach Honolulu. There, hundreds of thousands of people could be killed as the tsunami could level Honolulu and travel inland. Also, the West Coast of America and the entire Pacific Rim could potentially be affected.
Other research suggests that such a single large landslide is not likely. Instead, it would collapse as a series of smaller landslides.
In 2018, shortly after the beginning of the 2018 lower Puna eruption, a National Geographic article responded to such claims with "Will a monstrous landslide off the side of Kilauea trigger a monster tsunami bound for California? Short answer: No."
In the same article, geologist Mika McKinnon stated:
Another volcanologist, Janine Krippner, added:
Despite this, evidence suggests that catastrophic collapses do occur on Hawaiian volcanoes and generate local tsunamis.
Norway
Although known earlier to the local population, a crack wide and in length in the side of the mountain Åkerneset in Norway was rediscovered in 1983 and attracted scientific attention. It has since widened at a rate of per year. Geological analysis has revealed that a slab of rock thick and at an elevation stretching from is in motion. Geologists assess that an eventual catastrophic collapse of of rock into Sunnylvsfjorden is inevitable and could generate megatsunamis of in height on the fjord's opposite shore. The waves are expected to strike Hellesylt with a height of , Geiranger with a height of , Tafjord with a height of , and many other communities in Norway's Sunnmøre district with a height of several metres, and to be noticeable even at Ålesund. The predicted disaster is depicted in the 2015 Norwegian film The Wave.
| Physical sciences | Oceanography | Earth science |
238281 | https://en.wikipedia.org/wiki/Gnetophyta | Gnetophyta | Gnetophyta () is a division of plants (alternatively considered the subclass Gnetidae or order Gnetales), grouped within the gymnosperms (which also includes conifers, cycads, and ginkgos), that consists of some 70 species across the three relict genera: Gnetum (family Gnetaceae), Welwitschia (family Welwitschiaceae), and Ephedra (family Ephedraceae). The earliest unambiguous records of the group date to the Jurassic, and they achieved their highest diversity during the Early Cretaceous. The primary difference between gnetophytes and other gymnosperms is the presence of vessel elements, a system of small tubes (xylem) that transport water within the plant, similar to those found in flowering plants. Because of this, gnetophytes were once thought to be the closest gymnosperm relatives to flowering plants, but more recent molecular studies have brought this hypothesis into question, with many recent phylogenies finding them to be nested within the conifers.
Though it is clear they are all related, the exact evolutionary inter-relationships between gnetophytes are unclear. Some classifications hold that all three genera should be placed in a single order (Gnetales), while other classifications say they should be distributed among three separate orders, each containing a single family and genus. Most morphological and molecular studies confirm that the genera Gnetum and Welwitschia diverged from each other more recently than they did from Ephedra.
Ecology and morphology
Unlike most biological groupings, it is difficult to find many common characteristics between all of the members of the gnetophytes. The two characteristics most commonly used are the presence of enveloping bracts around both the ovules and microsporangia as well as a micropylar projection of the outer membrane of the ovule that produces a pollination droplet, though these are highly specific compared to the similarities between most other plant divisions. L. M. Bowe refers to the gnetophyte genera as a "bizarre and enigmatic" trio because the gnetophytes' specialization to their respective environments is so complete that they hardly resemble each other at all. Gnetum species are mostly woody vines in tropical forests, though the best-known member of this group, Gnetum gnemon, is a tree native to western Malesia. The one remaining species of Welwitschia, Welwitschia mirabilis, native only to the dry deserts of Namibia and Angola, is a ground-hugging species with only two large strap-like leaves that grow continuously from the base throughout the plant's life. Ephedra species, known as "jointfirs" in the United States, have long slender branches which bear tiny scale-like leaves at their nodes. Infusions from these plants have been traditionally used as a stimulant, but ephedrine is a controlled substance today in many places because of the risk of harmful or even fatal overdosing.
Classification
With just three well-defined genera within an entire division, there still is understandable difficulty in establishing an unambiguous interrelationship among them; in earlier times matters were even more difficult, with Pearson in the early 20th century discussing the class Gnetales, rather than the order. G.H.M. Lawrence referred to them as an order, but remarked that the three families were distinct enough to deserve recognition as separate orders. Foster & Gifford accepted this principle, and placed the three orders together in a common class for convenience, which they called Gnetopsida. In general the evolutionary relationships among the seed plants still are unresolved, and the Gnetophyta have played an important role in the formation of phylogenetic hypotheses. Molecular phylogenies of extant gymnosperms have conflicted with morphological characters with regard to whether the gymnosperms as a whole (including gnetophytes) comprise a monophyletic group or a paraphyletic one that gave rise to angiosperms. At issue is whether the Gnetophyta are the sister group of angiosperms, or whether they are sister to, or nested within, other extant gymnosperms. Numerous fossil gymnosperm clades once existed that are morphologically at least as distinctive as the four living gymnosperm groups, such as Bennettitales, Caytonia and the glossopterids. When these gymnosperm fossils are considered, the question of gnetophyte relationships to other seed plants becomes even more complicated. Several hypotheses, illustrated below, have been presented to explain seed plant evolution. Some morphological studies have supported a close relationship between Gnetophyta, Bennettitales and the Erdtmanithecales.
Recent research by Lee, Cibrian-Jaramillo, et al. (2011) suggests that the Gnetophyta are a sister group to the rest of the gymnosperms, contradicting the anthophyte hypothesis, which held that gnetophytes were sister to the flowering plants.
Gnetifer hypothesis
In the gnetifer hypothesis, the gnetophytes are sister to the conifers, and the gymnosperms are a monophyletic group, sister to the angiosperms. The gnetifer hypothesis first emerged formally in the mid-twentieth century, when vessel elements in the gnetophytes were interpreted as being derived from tracheids with circular bordered pits, as in conifers. It gained strong support, however, only with the emergence of molecular data in the late 1990s. Although the most salient morphological evidence still largely supports the anthophyte hypothesis, some more obscure morphological commonalities between the gnetophytes and conifers lend support to the gnetifer hypothesis. These shared traits include: tracheids with scalariform pits with tori interspersed with annular thickenings, absence of scalariform pitting in primary xylem, scale-like and strap-shaped leaves of Ephedra and Welwitschia, and reduced sporophylls.
Anthophyte hypothesis
From the early twentieth century, the anthophyte hypothesis was the prevailing explanation for seed plant evolution, based on shared morphological characters between the gnetophytes and angiosperms. In this hypothesis, the gnetophytes, along with the extinct order Bennettitales, are sister to the angiosperms, forming the "anthophytes". Some morphological characters that were suggested to unite the anthophytes include vessels in wood, net-veined leaves (in Gnetum only), lignin chemistry, the layering of cells in the apical meristem, pollen and megaspore features (including thin megaspore wall), short cambial initials, and lignin syringal groups. However, most genetic studies, as well as more recent morphological analyses, have rejected the anthophyte hypothesis.
Several of these studies have suggested that the gnetophytes and angiosperms have independently derived characters, including flower-like reproductive structures and tracheid vessel elements, that appear shared but are actually the result of parallel evolution.
Gnepine hypothesis
The gnepine hypothesis is a modification of the gnetifer hypothesis, and suggests that the gnetophytes belong within the conifers as a sister group to the Pinaceae. According to this hypothesis, the conifers as currently defined are not a monophyletic group, in contrast with molecular findings that support its monophyly. All existing evidence for this hypothesis comes from molecular studies since 1999. A 2018 phylogenomic study estimated the divergence between Gnetales and Pinaceae at around 241 million years ago, in the Early Triassic, while a 2021 study placed it earlier, in the Carboniferous.
However, the morphological evidence remains difficult to reconcile with the gnepine hypothesis. If the gnetophytes are nested within conifers, they must have lost several shared derived characters of the conifers (or these characters must have evolved in parallel in the other two conifer lineages): narrowly triangular leaves (gnetophytes have diverse leaf shapes), resin canals, a tiered proembryo, and flat woody ovuliferous cone scales. These kinds of major morphological changes are not without precedent in the Pinales, however: the Taxaceae, for example, have lost the classical cone of the conifers in favor of a single-terminal ovule, surrounded by a fleshy aril.
Gnetophyte-sister hypothesis
Some partitions of the genetic data suggest that the gnetophytes are sister to all of the other extant seed plant groups. However, there is no morphological evidence nor examples from the fossil record to support the gnetophyte-sister hypothesis.
Fossil gnetophytes
Knowledge of gnetophyte history through fossil discovery has increased greatly since the 1980s. Although some fossils that have been proposed to be gnetophytes have been found as far back as the Permian, their affinities to the group are equivocal. The oldest fossils that are definitely assignable to the group date to the Late Jurassic. Overall, the fossil record of the group is richest during the Early Cretaceous, exhibiting a substantial decline during the Late Cretaceous.
Ephedraceae
Leongathia V.A. Krassilov, D.L. Dilcher & J.G. Douglas 1998 Koonwarra fossil bed, Australia, Early Cretaceous (Aptian)
Jianchangia Yang, Wang and Ferguson, 2020 Jiufotang Formation, China, Early Cretaceous (Aptian)
Eamesia Yang, Lin and Ferguson, 2018 Yixian Formation, China, Early Cretaceous (Aptian)
Prognetella Krassilov et Bugdaeva, 1999 Yixian Formation, China, Early Cretaceous (Aptian) (initially interpreted as an angiosperm)
Chengia Yang, Lin & Wang, 2013, Yixian Formation, China, Early Cretaceous (Aptian)
Chaoyangia Duan, 1998 Yixian Formation, China, Early Cretaceous (Aptian)
Eragrosites Yixian Formation, China, Early Cretaceous (Aptian)
Gurvanella China, Mongolia, Early Cretaceous
Alloephedra China, Early Cretaceous
Amphiephedra China, Early Cretaceous
Beipiaoa China, Early Cretaceous
Ephedrispermum Portugal, Early Cretaceous (Aptian-Albian)
Ephedrites China, Early Cretaceous
Erenia China, Mongolia, Early Cretaceous
Liaoxia China, Early Cretaceous
Dichoephedra China, Early Cretaceous
Laiyangia P.H. Jin, 2024 China, Early Cretaceous
Gnetaceae
Khitania Guo et al. 2009 Yixian Formation, China, Early Cretaceous (Aptian)
Welwitschiaceae
Priscowelwitschia Dilcher et al., 2005 Crato Formation, Brazil, Early Cretaceous (Aptian)
Cratonia Rydin et al., 2003 Crato Formation, Brazil, Early Cretaceous (Aptian)
Welwitschiostrobus Dilcher et al., 2005 Crato Formation, Brazil, Early Cretaceous (Aptian)
Incertae sedis:
Archangelskyoxylon Brea, Gnaedinger & Martínez, 2023 Roca Blanca Formation, Argentina, Sinemurian–Toarcian (closely related to Welwitschia and Gnetum).
Drewria Crane & Upchurch, 1987 Potomac Group, USA, Albian (possible affinities to Welwitschiaceae)
Bicatia Friis, Pedersen and Crane, 2014 Figueira da Foz Formation, Portugal, Early Cretaceous (late Aptian–early Albian), Potomac Group, USA, Albian (possible affinities to Welwitschiaceae)
Liaoningia Yang et al., 2017 Yixian Formation, China, Early Cretaceous (Aptian)
Protognetum Y. Yang, L. Xie et D.K. Ferguson, 2017 Daohugou Bed, China, Middle Jurassic (Callovian)
Itajuba Ricardi-Branco et al., 2013, Crato Formation, Brazil, Early Cretaceous (Aptian)
Protoephedrites Rothwell et Stockey, 2013 Canada, Valanginian (possible ephedroid affinities)
Siphonospermum Rydin et Friis, 2010 Yixian Formation, China, Early Cretaceous (Aptian)
Welwitschiophyllum Dilcher et al., 2005 Crato Formation, Brazil, Early Cretaceous (Aptian), Akrabou Formation, Morocco, Late Cretaceous (Cenomanian-Turonian) (Initially interpreted as a member of Welwitschiaceae, later considered uncertain).
Dayvaultia Manchester et al. 2021 Morrison Formation, USA, Late Jurassic (Tithonian)
Daohugoucladus Yang et al. 2023 Daohugou Bed, China, Middle Jurassic (Callovian)
Possible gnetophytes (not confirmed as members of the group)
Archaestrobilus Trujillo Formation, Texas, United States, Upper Triassic
Dechellyia-Masculostrobus Mongolia, Early Cretaceous (Aptian-Albian)
Dinophyton Chinle Formation, United States, Upper Triassic
Nataligma Molteno Formation, South Africa, Upper Triassic (Carnian)
Palaeognetaleana Wang, 2004, China, Upper Permian
Sanmiguelia United States, Late Triassic-Early Jurassic
Eoantha Russia, Early Cretaceous
Bassitheca Morrison Formation, USA, Late Jurassic (Tithonian)
| Biology and health sciences | Gymnosperms (except conifers) | Plants |
238293 | https://en.wikipedia.org/wiki/Piciformes | Piciformes | Nine families of largely arboreal birds make up the order Piciformes , the best-known of them being the Picidae, which includes the woodpeckers and close relatives. The Piciformes contain about 71 living genera with a little over 450 species, of which the Picidae make up about half.
In general, the Piciformes are insectivorous, although the barbets and toucans mostly eat fruit and the honeyguides are unique among birds in being able to digest beeswax (although insects make up the bulk of their diet). Nearly all Piciformes have parrot-like zygodactyl feet—two toes forward and two back, an arrangement that has obvious advantages for birds that spend much of their time on tree trunks. Exceptions are a few species of three-toed woodpeckers. The jacamars aside, Piciformes do not have down feathers at any age, only true feathers. They range in size from the rufous piculet, at 8 centimetres in length and weighing 7 grams, to the toco toucan, at 63 centimetres long and weighing 680 grams. All nest in cavities and have altricial young.
Taxonomy
The Galbulidae and Bucconidae are often separated into a distinct Galbuliformes order. Analysis of nuclear genes confirms that they form a lineage of their own, but suggests that they are better treated as a suborder. The other families form another monophyletic group of suborder rank, but the barbets were determined to be paraphyletic with regard to the toucans and hence, the formerly all-encompassing Capitonidae have been split up. The woodpeckers and honeyguides are each other's closest relatives. According to some researchers, the entire order Piciformes should be included as a subgroup in Coraciiformes.
The phylogenetic relationship between the nine families that make up the order Piciformes is shown in the cladogram below. The number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
Evolution
Reconstruction of the evolutionary history of the Piciformes has been hampered by poor understanding of the evolution of the zygodactyl foot. A number of prehistoric families and genera, from the Early Eocene Neanis and Hassiavis, the Zygodactylidae/Primoscenidae, Gracilitarsidae, Sylphornithidae, and "Homalopus", to the Miocene "Picus" gaudryi and the Pliocene Bathoceleus are sometimes tentatively assigned to this order. There are some extinct ancestral Piciformes known from fossils which have been difficult to place but at least in part probably belong to the Pici. The modern families are known to exist since the mid-late Oligocene to early Miocene; accordingly, the older forms appear to be more basal. A large part of piciform evolution seems to have occurred in Europe, where only Picidae occur today; perhaps even some now exclusively Neotropical families have their origin in the Old World.
Classification
Order: PICIFORMES
Unassigned (all fossil)
Piciformes gen. et sp. indet. IRScNB Av 65 (Early Oligocene of Boutersem, Belgium)
Piciformes gen. et sp. indet. SMF Av 429 (Late Oligocene of Herrlingen, Germany)
Suborder Galbuli
Family Galbulidae – jacamars (18 species)
Family Bucconidae – puffbirds, nunbirds and nunlets (some 38 species)
Suborder Pici
Unresolved and basal taxa (all fossil)
Genus Rupelramphastoides (Early Oligocene of Frauenweiler, Germany)
Pici gen. et sp. indet. (Middle Miocene of Grive-Saint-Alban, France)
Family Miopiconidae (fossil)
Family Picavidae (fossil)
Infraorder Ramphastides
Family Megalaimidae – Asian barbets (about 35 species)
Family Lybiidae – African barbets (about 43 species)
Family Capitonidae – New World barbets (about 15 species)
Family Semnornithidae – toucan barbets (2 species)
Family Ramphastidae – toucans (about 43 species)
Infraorder Picides
Family Indicatoridae – honeyguides (16 species)
Family Picidae – woodpeckers, piculets and wrynecks (around 240 species)
| Biology and health sciences | Piciformes | Animals |
238347 | https://en.wikipedia.org/wiki/Mousebird | Mousebird | The mousebirds are birds in the order Coliiformes. They are the sister group to the clade Cavitaves, which includes the Leptosomiformes (the cuckoo roller), Trogoniformes (trogons), Bucerotiformes (hornbills and hoopoes), Piciformes (woodpeckers, toucans, and barbets) and Coraciiformes (kingfishers, bee-eaters, rollers, motmots, and todies). This group is now confined to sub-Saharan Africa, and it is the only bird order confined entirely to that continent, with the possible exception of turacos which are considered by some as the distinct order Musophagiformes, and the cuckoo roller, which is the only member of the order Leptosomiformes, and which is found in Madagascar but not mainland Africa. Mousebirds had a wider range in the Paleogene, with a widespread distribution in Europe and North America during the Paleocene.
Description
Mousebirds are slender greyish or brown birds with soft, hairlike body feathers. They are typically about in body length, with a long, thin tail a further in length, and weigh . They are arboreal and scurry through the leaves like rodents, in search of berries, fruit and buds. This habit, and their legs, give rise to the group's English name. They are acrobatic, and can feed upside down. All species have strong claws and reversible outer toes (pamprodactyl feet). They also have crests and stubby bills.
Behaviour and ecology
Mousebirds are gregarious, again reinforcing the analogy with mice, and are found in bands of about 20 in lightly wooded country. These birds build cup-shaped twig nests in trees, which are lined with grasses. Clutches of two to three eggs are typically laid.
Systematics and evolution
The mousebirds could be considered "living fossils" as the six species extant today are merely the survivors of a lineage that was massively more diverse in the early Paleogene and Miocene. There are comparatively abundant fossils of Coliiformes, but it has not been easy to assemble a robust phylogeny. The family is documented to exist from the Early Paleocene onwards; by at least the Late Eocene, two families are known to have existed, the extant Coliidae and the longer-billed prehistorically extinct Sandcoleidae.
The latter were previously a separate order, but eventually it was realized that they had come to group ancestral Coraciiformes, the actual sandcoleids and forms like Neanis together in a paraphyletic assemblage. Even though the sandcoleids are now assumed to be monophyletic following the removal of these taxa, many forms cannot be conclusively assigned to one family or the other. The genus Selmes, for example, is probably a coliid, but only distantly related to the modern genera.
Extinct Coliiformes occupied a wide range of ecologies. Sandcoleids in particular often preserve uncrushed seeds in their stomachs, while bearing talons similar to those of modern birds of prey.
Taxonomy
Order COLIIFORMES
Genus †Botauroides Shufeldt 1915 (Eocene of Wyoming, US)
†B. parvus Shufeldt 1915
Genus †Eobucco Feduccia & Martin 1976 - sandcoleid?
†E. brodkorbi Feduccia & Martin 1976
Genus †Eocolius Dyke & Waterhouse 2001 (London Clay Early Eocene of Walton-on-the-Naze, England) - sandcoleid or coliid
†E. walkeri Dyke & Waterhouse 2001
Genus †Limnatornis Milne-Edwards 1871 [Palaeopicus Lambrecht 1933 ex Brodkorb 1952] (Early Miocene of Saint-Gérand-le-Puy, France) - coliid? (Urocolius?)
†L. consobrinus (Milne-Edwards 1871) [Picus consobrinus Milne-Edwards 1871; Palaeopicus consobrinus (Milne-Edwards 1871) Lambrecht 1933 nomen nudum; Urocolius consobrinus (Milne-Edwards 1871)]
†L. paludicola Milne-Edwards 1871 [Colius paludicola (Milne-Edwards 1871) Ballmann 1969a; Urocolius paludicola (Milne-Edwards 1871)]
†L. archiaci (Milne-Edwards 1871) [Picus archiaci Milne-Edwards 1871; Colius archiaci (Milne-Edwards 1871) Ballmann 1969a; Urocolius archiaci (Milne-Edwards 1871) Mlíkovský 2002] (Early Miocene of Saint-Gérand-le-Puy, France)
Coliiformes gen. et sp. indet. (Late Miocene of Kohfidisch, Austria)
Genus †Uintornis Marsh 1872 - sandcoleid?
†U. lucaris Brodkorb 1971
†U. marionae Feduccia & Martin 1976
Family †Chascacocoliidae Zelenkov & Dyke 2008
Genus †Chascacocolius Houde & Olson 1992 (Late Paleocene ?- Early Eocene) - basal? sandcoleid?
†C. oscitans Houde & Olson 1992
†C. cacicirostris Mayr 2005
Family †Selmeidae Zelenkov & Dyke 2008
Genus †Selmes Mayr 1998 ex Peters 1999 (Middle Eocene ?-Late Oligocene of C Europe) - coliid? (synonym of Primocolius?)
†S. absurdipes Mayr 1998 ex Peters 1999
Family †Sandcoleidae Houde & Olson 1992 sensu Mayr & Mourer-Chauviré 2004
Genus †Sandcoleus Houde & Olson 1992 (Paleocene)
†S. copiosus Houde & Olson 1992
Genus †Anneavis Houde & Olson 1992
†A. anneae Houde & Olson 1992
Genus †Eoglaucidium Fischer 1987
†E. pallas Fischer 1987
Genus †Tsidiiyazhi Ksepka, Stidham & Williamson 2017 (Paleocene of New Mexico)
†T. abini Ksepka, Stidham & Williamson 2017
Family Coliidae Swainson 1837 sensu Mayr & Mourer-Chauviré 2004
Genus †Celericolius Ksepka & Clarke 2010
†C. acriala Ksepka & Clarke 2010
Genus †Masillacolius Mayr & Peters 1998 (middle Eocene of Messel, Germany)
†M. brevidactylus Mayr & Peters 1998
Genus †Oligocolius Mayr 2000 (Early Oligocene of Frauenweiler, Germany)
†O. brevitarsus Mayr 2000
†O. psittacocephalon Mayr 2013
Genus †Palaeospiza Allen 1878
†Palaeospiza bella Allen 1878
Genus †Primocolius Mourer-Chauviré 1988 (Late Eocene/Oligocene of Quercy, France)
†P. sigei Mourer-Chauviré 1988
†P. minor Mourer-Chauviré 1988
Subfamily Coliinae
Genus Urocolius (2 species)
U. indicus (Latham 1790) (Red-faced mousebird)
U. macrourus (Linnaeus 1766) (Blue-naped mousebird)
Genus Colius [Necrornis Milne-Edwards 1871] (4 species)
†C. hendeyi Vickers-Rich & Haarhoff 1985
†C. palustris (Milne-Edwards 1871) Ballmann 1969 [Necrornis palustris Milne-Edwards 1871]
C. castanotus Verreaux & Verreaux 1855 (Red-backed mousebird)
C. colius (Linnaeus 1766) (White-backed mousebird)
C. leucocephalus Reichenow 1879 (White-headed mousebird)
C. striatus Gmelin 1789 (Speckled mousebird)
| Biology and health sciences | Basics | Animals |
238361 | https://en.wikipedia.org/wiki/Kinorhyncha | Kinorhyncha | Kinorhyncha ("snout") is a phylum of small marine invertebrates that are widespread in mud or sand at all depths as part of the meiobenthos. They are commonly called mud dragons. Modern species are or less, but Cambrian forms could reach .
Anatomy
Kinorhynchs are limbless animals, with a body consisting of a head, neck, and a trunk of eleven segments. They are the only members of Ecdysozoa, apart from Panarthropoda, with a segmented body. Juveniles have eight or nine segments, depending on genus, with the last two or three being added later during growth. A Cambrian species, Eokinorhynchus rarus, had about twice as many segments as present forms. Like other ecdysozoans they do not have external cilia, but instead have a number of spines along the body, plus up to seven circles of spines around the head. These spines are used for locomotion, withdrawing the head and pushing forward, then gripping the substrate with the spines while drawing up the body.
The body wall consists of a thin syncytial layer, which secretes a tough cuticle; this is molted several times while growing to adulthood. The spines are essentially movable extensions of the body wall, and are hollow and covered by cuticle. The head is completely retractable, and is covered by a set of neck plates called placids when retracted.
Kinorhynchs eat either diatoms or organic material found in the mud, depending on species. The mouth is located in a conical structure at the apex of the head, and opens into a pharynx and then an oesophagus, both of which are lined by cuticle. Two pairs of salivary glands and one or more pairs of "pancreatic glands" connect to the oesophagus and presumably secrete digestive enzymes. Beyond the oesophagus lies a midgut that combines the functions of a stomach and intestine, and lacks a cuticle, enabling it to absorb nutrients. The short hind-gut is lined by cuticle, and empties into an anus at the posterior end of the trunk.
There is no circulatory system, although the body cavity (pseudocoelom) is well developed, and includes amoebocytes. The excretory system consists of two protonephridia emptying through pores in the final segment.
The nervous system consists of a ventral nerve cord, with one ganglion in each segment, and an anterior nerve ring surrounding the pharynx. Smaller ganglia are also located in the lateral and dorsal portions of each segment, but do not form distinct cords. Some species have simple ocelli on the head, and all species have tiny bristles on the body to provide a sense of touch.
Reproduction
There are two sexes that look alike, although some sexual dimorphism in allometry has been reported. A pair of gonads are located in the mid-region of the trunk, and open to pores in the final segment. In most species, the sperm duct includes two or three spiny structures that presumably aid in copulation, although the details are unknown. Individual spermatozoa can reach a quarter of the total body length. The larvae are free-living, but little else is known of their reproductive process. After having laid an egg, the female packs it into a protective envelope of mud and organic material.
Classification
Their closest relatives are thought to be the phyla Loricifera and Priapulida. Together they constitute the Scalidophora.
Taxonomy
The two groups of Kinorhynchs are generally characterized as classes in Sørensen et al. (2015). 270 species have been described and this number is expected to increase substantially. Morphological data has been collected for systematic phylogeny from dozens of species, and its integration with molecular data has led to a new systematic paradigm featuring the order Allomalorhagida (with Homalorhagida being retired). Phylogenomic data has shown Allomalorhagida and Cyclorhagida to be divided in three and two major clades respectively.
The oldest known species is Eokinorhynchus from the Fortunian of China.
Phylum Kinorhyncha
Eokinorhynchus Zhang et al., 2015
Class Cyclorhagida (Zelinka, 1896) Chitwood, 1951
Order Echinorhagata Sørensen et al., 2015
Echinoderidae Zelinka, 1894
Order Kentrorhagata Sørensen et al., 2015
Antygomonidae Adrianov & Malakhov, 1994
Cateriidae? Gerlach, 1956 (following Sørensen et al.)
Centroderidae Zelinka, 1896
Semnoderidae Remane, 1929
Zelinkaderidae Higgins, 1990
Order Xenosomata Zelinka, 1907
Campyloderidae Remane, 1929
Class Allomalorhagida Sørensen et al., 2015
Pycnophyidae Zelinka, 1896
Order Anomoirhaga Herranz et al., 2022
Cateriidae? Gerlach, 1956 (following Herranz et al.)
Dracoderidae Higgins & Shirayama, 1990
Franciscideridae Sørensen et al., 2015
Neocentrophyidae Higgins, 1969
| Biology and health sciences | Ecdysozoa | Animals |
238377 | https://en.wikipedia.org/wiki/Chimney | Chimney | A chimney is an architectural ventilation structure made of masonry, clay or metal that isolates hot toxic exhaust gases or smoke produced by a boiler, stove, furnace, incinerator, or fireplace from human living areas. Chimneys are typically vertical, or as near as possible to vertical, to ensure that the gases flow smoothly, drawing air into the combustion in what is known as the stack, or chimney effect. The space inside a chimney is called the flue. Chimneys are adjacent to large industrial refineries, fossil fuel combustion facilities or part of buildings, steam locomotives and ships.
In the United States, the term smokestack industry refers to the environmental impacts of burning fossil fuels by industrial society, including the electric industry during its earliest history. The term smokestack (colloquially, stack) is also used when referring to locomotive chimneys or ship chimneys, and the term funnel can also be used.
The height of a chimney influences its ability to transfer flue gases to the external environment via stack effect. Additionally, the dispersion of pollutants at higher altitudes can reduce their impact on the immediate surroundings. The dispersion of pollutants over a greater area can reduce their concentrations and facilitate compliance with regulatory limits.
History
Industrial chimney use dates to the Romans, who drew smoke from their bakeries with tubes embedded in the walls. However, domestic chimneys first appeared in large dwellings in northern Europe in the 12th century. The earliest surviving example of an English chimney is at the keep of Conisbrough Castle in Yorkshire, which dates from 1185 AD, but they did not become common in houses until the 16th and 17th centuries. Smoke hoods were an early method of collecting the smoke into a chimney. These were typically much wider than modern chimneys and started relatively high above the fire, meaning more heat could escape into the room. Because the air going up the shaft was cooler, these could be made of less fireproof materials. Another step in the development of chimneys was the use of built-in ovens which allowed the household to bake at home. Industrial chimneys became common in the late 18th century.
Chimneys in ordinary dwellings were first built of wood and plaster or mud. Since then chimneys have traditionally been built of brick or stone, both in small and large buildings. Early chimneys were of simple brick construction. Later chimneys were constructed by placing the bricks around tile liners. To control downdrafts, venting caps (often called chimney pots) with a variety of designs are sometimes placed on the top of chimneys.
In the 18th and 19th centuries, the methods used to extract lead from its ore produced large amounts of toxic fumes. In the north of England, long near-horizontal chimneys were built, often more than 3 km (2 mi) long, which typically terminated in a short vertical chimney in a remote location where the fumes would cause less harm. Lead and silver deposits formed on the inside of these long chimneys, and periodically workers would be sent along the chimneys to scrape off these valuable deposits.
Construction
As a result of the limited ability to handle transverse loads with brick, chimneys in houses were often built in a "stack", with a fireplace on each floor of the house sharing a single chimney, often with such a stack at the front and back of the house. Today's central heating systems have made chimney placement less critical, and the use of non-structural gas vent pipe allows a flue gas conduit to be installed around obstructions and through walls.
Most modern high-efficiency heating appliances do not require a chimney. Such appliances are generally installed near an external wall, and a noncombustible wall thimble allows a vent pipe to run directly through the external wall.
On a pitched roof where a chimney penetrates a roof, flashing is used to seal up the joints. The down-slope piece is called an apron, the sides receive step flashing and a cricket is used to divert water around the upper side of the chimney underneath the flashing.
Industrial chimneys are commonly referred to as flue-gas stacks and are generally external structures, as opposed to those built into the wall of a building. They are generally located adjacent to a steam-generating boiler or industrial furnace and the gases are carried to them with ductwork. Today the use of reinforced concrete has almost entirely replaced brick as a structural element in the construction of industrial chimneys. Refractory bricks are often used as a lining, particularly if the type of fuel being burned generates flue gases containing acids. Modern industrial chimneys sometimes consist of a concrete windshield with a number of flues on the inside.
The high steam plant chimney at the Secunda CTL's synthetic fuel plant in Secunda, South Africa consists of a 26 m (85 ft) diameter windshield with four 4.6 metre diameter concrete flues which are lined with refractory bricks built on rings of corbels spaced at 10 metre intervals. The reinforced concrete can be cast by conventional formwork or sliding formwork. The height is to ensure the pollutants are dispersed over a wider area to meet legal or other safety requirements.
Residential flue liners
A flue liner is a secondary barrier in a chimney that protects the masonry from the acidic products of combustion, helps prevent flue gas from entering the house, and reduces the size of an oversized flue. Since the 1950s, building codes in many locations require newly built chimneys to have a flue liner. Chimneys built without a liner can usually have a liner added, but the type of liner needs to match the type of appliance it services. Flue liners may be clay or concrete tile, metal, or poured in place concrete.
Clay tile flue liners are very common in the United States, although it is the only liner that does not meet Underwriters Laboratories 1777 approval and frequently they have problems such as cracked tiles and improper installation. Clay tiles are usually about long, available in various sizes and shapes, and are installed in new construction as the chimney is built. A refractory cement is used between each tile.
Metal liners may be stainless steel, aluminum, or galvanized iron and may be flexible or rigid pipes. Stainless steel is made in several types and thicknesses. Type 304 is used with firewood, wood pellet fuel, and non-condensing oil appliances, types 316 and 321 with coal, and type AL 29-4C is used with high efficiency condensing gas appliances. Stainless steel liners must have a cap and be insulated if they service solid fuel appliances, and must be installed following the manufacturer's instructions carefully. Aluminum and galvanized steel chimneys are known as class A and class B chimneys. Class A are either an insulated, double wall stainless steel pipe or triple wall, air-insulated pipe often known by its genericized trade name Metalbestos. Class B are uninsulated double wall pipes often called B-vent, and are only used to vent non-condensing gas appliances. These may have an aluminum inside layer and galvanized steel outside layer.
Concrete flue liners are like clay liners but are made of a refractory cement and are more durable than the clay liners.
Poured in place concrete liners are made by pouring special concrete into the existing chimney with a form. These liners are highly durable, work with any heating appliance, and can reinforce a weak chimney, but they are irreversible.
Chimney pots, caps, and tops
A chimney pot is placed on top of the chimney to expand the length of the chimney inexpensively, and to improve the chimney's draft. A chimney with more than one pot on it indicates that multiple fireplaces on different floors share the chimney.
A cowl is placed on top of the chimney to prevent birds and other animals from nesting in the chimney. They often feature a rain guard to prevent rain or snow from going down the chimney. A metal wire mesh is often used as a spark arrestor to minimize burning debris from rising out of the chimney and making it onto the roof. Although the masonry inside the chimney can absorb a large amount of moisture which later evaporates, rainwater can collect at the base of the chimney. Sometimes weep holes are placed at the bottom of the chimney to drain out collected water.
A chimney cowl or wind directional cap is a helmet-shaped chimney cap that rotates to align with the wind and prevent a downdraft of smoke and wind down the chimney.
An H-style cap is a chimney top constructed from chimney pipes shaped like the letter H. It is an age-old method of regulating draft in situations where prevailing winds or turbulences cause downdraft and back-puffing. Although the H cap has a distinct advantage over most other downdraft caps, it fell out of favor because of its bulky design. It is found mostly in marine use but has been regaining popularity due to its energy-saving functionality. The H-cap stabilizes the draft rather than increasing it. Other downdraft caps are based on the Venturi effect, solving downdraft problems by increasing the updraft constantly, resulting in much higher fuel consumption.
A chimney damper is a metal plate that can be positioned to close off the chimney when not in use and prevent outside air from entering the interior space, and can be opened to permit hot gases to exhaust when a fire is burning. A top damper or cap damper is a metal spring door placed at the top of the chimney with a long metal chain that allows one to open and close the damper from the fireplace. A throat damper is a metal plate at the base of the chimney, just above the firebox, that can be opened and closed by a lever, gear, or chain to seal off the fireplace from the chimney. The advantage of a top damper is the tight weatherproof seal that it provides when closed, which prevents cold outside air from flowing down the chimney and into the living space—a feature that can rarely be matched by the metal-on-metal seal afforded by a throat damper. Additionally, because the throat damper is subjected to intense heat from the fire directly below, it is common for the metal to become warped over time, thus further degrading the ability of the throat damper to seal. However, the advantage of a throat damper is that it seals off the living space from the air mass in the chimney, which, especially for chimneys positioned on an outside wall of the home, is generally very cold. It is possible in practice to use both a top damper and a throat damper to obtain the benefits of both. The two top damper designs currently on the market are the Lyemance (pivoting door) and the Lock Top (translating door).
In the late Middle Ages in Western Europe the design of stepped gables arose to allow maintenance access to the chimney top, especially for tall structures such as castles and great manor houses.
Chimney draught or draft
When coal, oil, natural gas, wood, or any other fuel is combusted in a stove, oven, fireplace, hot water boiler, or industrial furnace, the hot combustion product gases that are formed are called flue gases. Those gases are generally exhausted to the ambient outside air through chimneys or industrial flue-gas stacks (sometimes referred to as smokestacks).
The combustion flue gases inside the chimneys or stacks are much hotter than the ambient outside air and therefore less dense than the ambient air. That causes the bottom of the vertical column of hot flue gas to have a lower pressure than the pressure at the bottom of a corresponding column of outside air. That higher pressure outside the chimney is the driving force that moves the required combustion air into the combustion zone and also moves the flue gas up and out of the chimney. That movement or flow of combustion air and flue gas is called "natural draught/draft", "natural ventilation", "chimney effect", or "stack effect". The taller the stack, the more draught or draft is created. There can be cases of diminishing returns: if a stack is overly tall in relation to the heat being sent out of the stack, the flue gases may cool before reaching the top of the chimney. This condition can result in poor drafting, and in the case of wood burning appliances, the cooling of the gases before emission can cause creosote to condense near the top of the chimney. The creosote can restrict the exit of flue gases and may pose a fire hazard.
Designing chimneys and stacks to provide the correct amount of natural draft involves a number of design factors, many of which require iterative trial-and-error methods.
As a "first guess" approximation, the following equation can be used to estimate the natural draught/draft flow rate by assuming that the molecular mass (i.e., molecular weight) of the flue gas and the external air are equal and that the frictional pressure and heat losses are negligible:

Q = C A √(2 g H (Ti − Te) / Ti)

where:
Q = chimney draught/draft flow rate, m3/s
A = cross-sectional area of chimney, m2 (assuming it has a constant cross-section)
C = discharge coefficient (usually taken to be from 0.65 to 0.70)
g = gravitational acceleration, 9.807 m/s2
H = height of chimney, m
Ti = average temperature inside the chimney, K
Te = external air temperature, K.
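This first-guess estimate can be sketched in a few lines of Python. The function and all numeric inputs in the usage example below are hypothetical illustrations, not values from the article:

```python
import math

def draught_flow_rate(A, C, H, Ti, Te, g=9.807):
    """Estimate natural chimney draught flow rate Q in m^3/s.

    First-guess approximation only: assumes the flue gas and the
    external air have equal molecular mass, and that frictional
    pressure losses and heat losses are negligible.

    A  - cross-sectional area of the chimney, m^2 (constant cross-section)
    C  - discharge coefficient (usually 0.65 to 0.70)
    H  - height of the chimney, m
    Ti - average temperature inside the chimney, K
    Te - external air temperature, K
    g  - gravitational acceleration, m/s^2
    """
    return C * A * math.sqrt(2.0 * g * H * (Ti - Te) / Ti)

# Hypothetical example: a 0.25 m^2 flue on a 10 m chimney,
# flue gas at 400 K, outside air at 290 K.
q = draught_flow_rate(A=0.25, C=0.65, H=10.0, Ti=400.0, Te=290.0)
```

Note that when Ti equals Te the flow rate is zero, matching the physical picture above: without a temperature difference there is no density difference and hence no stack effect.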
Combining two flows into a chimney requires At + Af < A, where At = 7.1 in² is the minimum required flow area from a water heater tank and Af = 19.6 in² is the minimum flow area from a furnace of a central heating system.
Draft hood
Gas-fired appliances must have a draft hood to cool combustion products entering the chimney and to prevent updrafts or downdrafts.
Maintenance and problems
A characteristic problem of chimneys is that they develop deposits of creosote on the walls of the structure when used with wood as a fuel. Deposits of this substance can interfere with the airflow and, more importantly, they are combustible and can cause dangerous chimney fires if the deposits ignite in the chimney.
Heaters that burn natural gas drastically reduce the amount of creosote buildup due to natural gas burning much cleaner and more efficiently than traditional solid fuels. While in most cases there is no need to clean a gas chimney on an annual basis, that does not mean that other parts of the chimney cannot fall into disrepair. Disconnected or loose chimney fittings caused by corrosion over time can pose serious dangers for residents due to leakage of carbon monoxide into the home. Thus, it is recommended—and in some countries even mandatory—that chimneys be inspected annually and cleaned on a regular basis to prevent these problems. The workers who perform this task are called chimney sweeps or steeplejacks. This work used to be done largely by child labour and, as such, features in Victorian literature. In the Middle Ages in some parts of Europe, a stepped gable design was developed, partly to provide access to chimneys without use of ladders.
Masonry (brick) chimneys have also proven to be particularly prone to crumbling during earthquakes. Government housing authorities in cities prone to earthquakes such as San Francisco, Los Angeles, and San Diego now recommend building new homes with stud-framed chimneys around a metal flue. Bracing or strapping old masonry chimneys has not proven to be very effective in preventing damage or injury from earthquakes. It is now possible to buy "faux-brick" facades to cover these modern chimney structures.
Other potential problems include:
"spalling" brick, in which moisture seeps into the brick and then freezes, cracking and flaking the brick and loosening mortar seals.
shifting foundations, which may degrade integrity of chimney masonry
nesting or infestation by unwanted animals such as squirrels, raccoons, or chimney swifts
chimney leaks
drafting issues, which may allow smoke inside building
issues with fireplace or heating appliance may cause unwanted degradation or hazards to chimney
Chimneys of special interest
Chimneys with observation decks
Several chimneys with observation decks have been built.
Chimneys used as electricity pylon
At several thermal power stations, at least one smokestack is used as an electricity pylon.
Nearly all of these structures are in areas that were once part of the Soviet Union. Although this use has the disadvantage that conductor ropes may corrode faster due to the exhaust gases, such structures can occasionally be found in countries not influenced by the former Soviet Union; an example is a chimney of the Scholven Power Plant in Gelsenkirchen, which carries one circuit of an outgoing 220 kV line.
Chimneys used as water tower
Chimneys can also carry a water tank on their structure. This combination has the advantage that the warm smoke running through the chimney prevents the water in the tank from freezing. Before World War II such structures were not uncommon, especially in countries influenced by Germany.
Chimneys used as radio tower
Chimneys can carry antennas for radio relay services, cell phone transmissions, FM radio and TV on their structure. Long-wire antennas for mediumwave transmission can also be attached to chimneys.
In all cases, it must be considered that such installations corrode easily, especially when placed near the exhaust.
Sometimes chimneys have been converted into radio towers and are no longer usable as ventilation structures.
Chimneys used for advertising
As chimneys are often the tallest part of a factory, they can serve as advertising billboards, either by painting the name of the company to which they belong on the shaft or by installing advertisement boards on their structure.
Cooling tower used as an industrial chimney
At some power stations that are equipped with plants for the removal of sulfur dioxide and nitrogen oxides, it is possible to use the cooling tower as a chimney. Such cooling towers can be seen in Germany at the Großkrotzenburg Power Station and at the Rostock Power Station. At power stations that are not equipped for removing sulfur dioxide, such usage of cooling towers could result in serious corrosion problems which are not easy to prevent.
| Technology | Heating and cooling | null |
238446 | https://en.wikipedia.org/wiki/Lily%20of%20the%20valley | Lily of the valley | Lily of the valley (Convallaria majalis ), sometimes written lily-of-the-valley, is a woodland flowering plant with sweetly scented, pendent, bell-shaped white flowers borne in sprays in spring. It is native throughout the cool temperate Northern Hemisphere in Asia and Europe. Convallaria majalis var. montana, also known as the American lily of the valley, is native to North America.
Due to the concentration of cardiac glycosides (cardenolides), it is highly poisonous if consumed by humans or other animals.
Other names include May bells, Our Lady's tears, and Mary's tears. Its French name, muguet, sometimes appears in the names of perfumes imitating the flower's scent. In pre-modern England, the plant was known as glovewort (as it was a wort used to create a salve for sore hands), or Apollinaris (according to a legend that it was discovered by Apollo).
Description
Convallaria majalis is a herbaceous perennial plant that often forms extensive colonies by spreading underground stems called rhizomes. New upright shoots are formed at the ends of stolons in summer; these upright dormant stems are often called pips. These grow in the spring into new leafy shoots that still remain connected to the other shoots underground. The stems grow to tall, with one or two leaves long; flowering stems have two leaves and a raceme of five to fifteen flowers at the stem apex.
The flowers have six white tepals (rarely pink), fused at the base to form a bell shape, diameter, and sweetly scented; flowering is in late spring, or in early March during mild winters in the Northern Hemisphere. The fruit is a small orange-red berry diameter that contains a few large whitish to brownish seeds that dry to a clear translucent round bead wide. Plants are self-incompatible, and colonies consisting of a single clone do not set seed.
Taxonomy
In the APG III system, the genus is placed in the family Asparagaceae, subfamily Nolinoideae (formerly the family Ruscaceae). It was formerly placed in its own family Convallariaceae, and, like many lilioid monocots, before that in the lily family Liliaceae.
There are three varieties that have sometimes been separated out as distinct species or subspecies by some botanists.
Convallaria majalis var. keiskei – from China and Japan, with red fruit and bowl-shaped flowers (now widely cited as Convallaria keiskei)
C. majalis var. majalis – from Eurasia, with white midribs on the flowers
C. majalis var. montana – from the United States, maybe with green-tinted midribs on the flowers
Convallaria transcaucasica is recognised as a distinct species by some authorities, while the species formerly called Convallaria japonica is now classified as Ophiopogon japonicus.
Distribution
Convallaria majalis is a native of Europe, where it largely avoids the Mediterranean and Atlantic margins. An eastern variety, C. majalis var. keiskei, occurs in Japan and parts of eastern Asia. A limited native population of C. majalis var. montana (synonym C. majuscula) occurs in the Eastern United States. There is, however, some debate as to the native status of the American variety.
Like many perennial flowering plants, C. majalis exhibits dual reproductive modes by producing offspring asexually by vegetative means and sexually by seed, produced via the fusion of gametes.
Ecology
Convallaria majalis is a plant of partial shade, and a mesophile type that prefers warm summers. It likes soils that are silty or sandy and acid to moderately alkaline, with preferably a plentiful amount of humus. The Royal Horticultural Society states that slightly alkaline soils are the most favored. It is a Euroasiatic and suboceanic species that lives in mountains up to in elevation.
Convallaria majalis is used as a food plant by the larvae of some moth and butterfly (Lepidoptera) species including the grey chi. Adults and larvae of the leaf beetle Lilioceris merdigera are also able to tolerate the cardenolides and thus feed on the leaves.
Cultivars
Convallaria majalis is widely grown in gardens for its scented flowers and ground-covering abilities in shady locations. It has gained the Royal Horticultural Society's Award of Garden Merit. In favourable conditions it can form large colonies.
Various kinds and cultivars are grown, including those with double flowers, rose-colored flowers, variegated foliage and ones that grow larger than the typical species.
C. majalis 'Albostriata' has white-striped leaves
C. majalis 'Green Tapestry', 'Haldon Grange', 'Hardwick Hall', 'Hofheim', 'Marcel', 'Variegata' and 'Vic Pawlowski's Gold' are other variegated cultivars
C. majalis 'Berlin Giant' and C. majalis 'Géant de Fortin' (syn. 'Fortin's Giant') are larger-growing cultivars
C. majalis 'Flore Pleno' has double flowers.
C. majalis 'Rosea' sometimes found under the name C. majalis var. rosea, has pink flowers.
Traditionally, Convallaria majalis has been grown in pots and winter forced to provide flowers during the winter months, both in potted plants and as cut flowers.
Chemistry
Roughly 38 different cardiac glycosides (cardenolides), which are highly toxic if consumed by humans or other animals, occur in the plant.
The odor of lily of the valley, specifically the ligand bourgeonal, was once thought to attract mammalian sperm. The 2003 discovery of this phenomenon prompted research into odor reception, but a 2012 study demonstrated instead that at high concentrations, bourgeonal imitated the role of progesterone in stimulating sperm to swim (chemotaxis), a process unrelated to odor reception.
Toxicology
All parts of the plant are potentially poisonous, including the red berries which may be attractive to children. If ingested, the plant can cause abdominal pain, nausea, vomiting, and irregular heartbeats.
Uses
Perfume
In 1956, the French firm Dior produced a fragrance simulating lily of the valley, which was Christian Dior's favorite flower. Diorissimo was designed by Edmond Roudnitska. Although it has since been reformulated, it is considered a classic. Because no natural aromatic extract can be produced from lily of the valley, its scent must be recreated synthetically; while Diorissimo originally achieved this with hydroxycitronellal, the European Chemicals Agency now considers it a skin sensitizer and its use has been restricted.
Other perfumes imitating or based on the flower include Henri Robert's Muguet de Bois (1936), Penhaligon's Lily of the Valley (1976), and Olivia Giacobetti's En Passant (2000).
Weddings and other celebrations
Lily of the valley has been used in weddings and off-season can be very expensive. Lily of the valley was featured in the bridal bouquet at the wedding of Prince William and Catherine Middleton. Lily of the valley was also the flower chosen by Princess Grace of Monaco to be featured in her bridal bouquet.
At the beginning of the 20th century, it became tradition in France to sell lily of the valley on international Labour Day, 1 May (also called La Fête du Muguet or Lily of the Valley Day) by labour organisations and private persons without paying sales tax (on that day only) as a symbol of spring.
Lily of the valley is worn in Helston (Cornwall, UK) on Flora Day (8 May each year, see Furry Dance) representing the coming of "the May-o" and the summer. There is also a song sung in pubs around Cornwall (and on Flora Day in Cadgwith, near Helston) called "Lily of the Valley"; the song, strangely, came from the Jubilee Singers from Fisk University in Nashville, Tennessee.
Folk medicine
The plant has been used in folk medicine for centuries. There is a reference to "Lilly of the valley water" in Robert Louis Stevenson's 1886 novel Kidnapped, where it is said to be "good against the Gout", and that it "comforts the heart and strengthens the memory" and "restores speech to those that have the dumb palsey". There is no scientific evidence that lily of the valley has any effective medicinal uses for treating human diseases.
Cultural symbolism
The lily of the valley was the national flower of Yugoslavia, and it also became the national flower of Finland in 1967.
In the "language of flowers", the lily of the valley signifies the return of happiness.
Myths and religion
The name "lily of the valley", like its correspondences in some other European languages, is apparently a reference to the phrase "lily of the valleys" (sometimes also translated as "lily of the valley") in Song of Songs 2:1 (). European herbalists' use of the phrase to refer to a specific plant species seems to have appeared relatively late in the 16th or 15th century. The Neo-Latin term convallaria (coined by Carl Linnaeus) and, for example, the Swedish name derives from the corresponding phrase lilium convallium in the Vulgate.
In culture
It is widely represented in the decorative arts.
The flower is the theme of a poem by Paul Laurence Dunbar.
Tchaikovsky wrote the poem "Lilies of the Valley" (Ландыши) in December 1878 while in Florence.
In Anton Chekhov's 1898 short story "A Doctor's Visit", drops of convallaria are mentioned as medicine.
"Lilies-of-the-Valley" is a 1916 Marc Chagall painting,
The eponymous song by English rock band Queen.
The 46th episode of the television series Breaking Bad includes lily of the valley's use as a poison.
In the third episode of Outlander, children are revealed to have been dying after mistaking lily of the valley for garlic and eating it.
In 2022, lily of the valley, reputedly Queen Elizabeth II's favourite flower, was the theme of the poem "Floral Tribute" by the Poet Laureate Simon Armitage, written in memory of the Queen and published in the week after her death.
Gallery
Lead(II) nitrate

Lead(II) nitrate is an inorganic compound with the chemical formula Pb(NO3)2. It commonly occurs as a colourless crystal or white powder and, unlike most other lead(II) salts, is soluble in water.
Known since the Middle Ages by the name plumbum dulce, the production of lead(II) nitrate from either metallic lead or lead oxide in nitric acid was small-scale, for direct use in making other lead compounds. In the nineteenth century lead(II) nitrate began to be produced commercially in Europe and the United States. Historically, the main use was as a raw material in the production of pigments for lead paints, but such paints have been superseded by less toxic paints based on titanium dioxide. Other industrial uses included heat stabilization in nylon and polyesters, and in coatings of photothermographic paper. Since around the year 2000, lead(II) nitrate has begun to be used in gold cyanidation.
Lead(II) nitrate is toxic and must be handled with care to prevent inhalation, ingestion and skin contact. Due to its hazardous nature, the limited applications of lead(II) nitrate are under constant scrutiny.
History
Lead nitrate was first identified in 1597 by the alchemist Andreas Libavius, who called the substance plumbum dulce, meaning "sweet lead", because of its taste. It is produced commercially by reaction of metallic lead with concentrated nitric acid in which it is sparingly soluble. It has been produced as a raw material for making pigments such as chrome yellow (lead(II) chromate, PbCrO4) and chrome orange (basic lead(II) chromate, Pb2CrO5) and Naples yellow. These pigments were used for dyeing and printing calico and other textiles. It has been used as an oxidizer in black powder and together with lead azide in special explosives.
Production
Lead nitrate is produced by reaction of lead(II) oxide with concentrated nitric acid:
PbO + 2 HNO3(concentrated) → Pb(NO3)2↓ + H2O
It may also be obtained by evaporation of the solution obtained by reacting metallic lead with dilute nitric acid:
Pb + 4 HNO3 → Pb(NO3)2 + 2 NO2 + 2 H2O
Solutions and crystals of lead(II) nitrate are formed in the processing of lead–bismuth wastes from lead refineries.
Structure
The crystal structure of solid lead(II) nitrate has been determined by neutron diffraction. The compound crystallizes in the cubic system, with the lead atoms arranged in a face-centred cubic lattice. Its space group is Pa3 with Z = 4, and each side of the cubic unit cell has a length of 784 picometres.
In the diagram of the structure, the black dots represent the lead atoms, the white dots the nitrate groups 27 picometres above the plane of the lead atoms, and the blue dots the nitrate groups the same distance below this plane. In this configuration, every lead atom is bonded to twelve oxygen atoms (bond length: 281 pm). All N–O bond lengths are identical, at 127 picometres.
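As a consistency check on these cell parameters, the crystal density can be estimated from the cube edge a = 784 pm and Z = 4 formula units per cell. A rough sketch (the atomic masses are standard values, not given in the text):

```python
# Estimate the crystal density of Pb(NO3)2 from the unit-cell data above.
N_A = 6.02214076e23                       # Avogadro constant, 1/mol
M = 207.2 + 2 * (14.007 + 3 * 15.999)     # molar mass of Pb(NO3)2, g/mol
a_cm = 784e-12 * 100                      # cell edge: 784 pm in cm
Z = 4                                     # formula units per cell

density = Z * M / (N_A * a_cm ** 3)       # g/cm^3
print(round(density, 2))                  # ≈ 4.57
```

The result, about 4.6 g/cm3, is close to the tabulated density of lead(II) nitrate, which supports the quoted cell parameters.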
Research interest in the crystal structure of lead(II) nitrate was partly based on the possibility of free internal rotation of the nitrate groups within the crystal lattice at elevated temperatures, but this did not materialise.
Chemical properties and reactions
Lead nitrate decomposes on heating, a property that has been used in pyrotechnics. It is soluble in water and dilute nitric acid.
Basic nitrates are formed when alkali is added to a solution. Pb2(OH)2(NO3)2 is the predominant species formed at low pH. At higher pH, Pb6(OH)5(NO3) is formed. The cation [Pb6O(OH)6]4+ is unusual in having an oxide ion inside a cluster of 3 face-sharing PbO4 tetrahedra.
There is no evidence for the formation of the hydroxide, Pb(OH)2, in aqueous solution below pH 12.
Solutions of lead nitrate can be used to form co-ordination complexes. Lead(II) is a hard acceptor; it forms stronger complexes with nitrogen and oxygen electron-donating ligands. For example, combining lead nitrate and pentaethylene glycol (EO5) in a solution of acetonitrile and methanol followed by slow evaporation produced the compound [Pb(NO3)2(EO5)]. In the crystal structure for this compound, the EO5 chain is wrapped around the lead ion in an equatorial plane similar to that of a crown ether. The two bidentate nitrate ligands are in trans configuration. The total coordination number is 10, with the lead ion in a bicapped square antiprism molecular geometry.
The complex formed by lead nitrate with a bithiazole bidentate N-donor ligand is binuclear. The crystal structure shows that the nitrate group forms a bridge between two lead atoms. One interesting aspect of this type of complexes is the presence of a physical gap in the coordination sphere; i.e., the ligands are not placed symmetrically around the metal ion. This is potentially due to a lead lone pair of electrons, also found in lead complexes with an imidazole ligand.
Applications
Lead nitrate has been used as a heat stabiliser in nylon and polyesters, as a coating for photothermographic paper, and in rodenticides.
Heating lead nitrate is a convenient means of making nitrogen dioxide:
2 Pb(NO3)2 → 2 PbO + 4 NO2 + O2
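The decomposition stoichiometry fixes the mass of NO2 obtainable per gram of nitrate; a short sketch (standard atomic masses, not from the text):

```python
# Stoichiometry of 2 Pb(NO3)2 -> 2 PbO + 4 NO2 + O2.
M_Pb, M_N, M_O = 207.2, 14.007, 15.999    # standard atomic masses, g/mol

M_lead_nitrate = M_Pb + 2 * (M_N + 3 * M_O)   # ≈ 331.2 g/mol
M_NO2 = M_N + 2 * M_O                         # ≈ 46.0 g/mol

# 2 mol Pb(NO3)2 liberate 4 mol NO2, i.e. 2 mol NO2 per mol of nitrate.
g_NO2_per_g = 2 * M_NO2 / M_lead_nitrate
print(round(g_NO2_per_g, 3))                  # ≈ 0.278
```

So roughly 0.28 g of NO2 is released per gram of lead nitrate decomposed.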
In the gold cyanidation process, addition of lead(II) nitrate solution improves the leaching process. Only limited amounts (10 to 100 milligrams lead nitrate per kilogram gold) are required.
In organic chemistry, it may be used in the preparation of isothiocyanates from dithiocarbamates. Its use as a bromide scavenger during SN1 substitution has been reported.
Safety
Lead(II) nitrate is toxic, and ingestion may lead to acute lead poisoning, as is applicable for all soluble lead compounds. All inorganic lead compounds are classified by the International Agency for Research on Cancer (IARC) as probably carcinogenic to humans (Category 2A). They have been linked to renal cancer and glioma in experimental animals and to renal cancer, brain cancer and lung cancer in humans, although studies of workers exposed to lead are often complicated by concurrent exposure to arsenic. Lead is known to substitute for zinc in a number of enzymes, including δ-aminolevulinic acid dehydratase (porphobilinogen synthase) in the haem biosynthetic pathway and pyrimidine-5′-nucleotidase, important for the correct metabolism of DNA and can therefore cause fetal damage.
Arsine

Arsine (IUPAC name: arsane) is an inorganic compound with the formula AsH3. This flammable, pyrophoric, and highly toxic pnictogen hydride gas is one of the simplest compounds of arsenic. Despite its lethality, it finds some applications in the semiconductor industry and for the synthesis of organoarsenic compounds. The term arsine is commonly used to describe a class of organoarsenic compounds of the formula AsH3−xRx, where R = aryl or alkyl. For example, As(C6H5)3, called triphenylarsine, is referred to as "an arsine".
General properties
In its standard state, arsine is a colorless, denser-than-air gas that is slightly soluble in water (2% at 20 °C) and in many organic solvents as well. Arsine itself is odorless, but it oxidizes in air, creating a slight garlic- or fish-like scent when the compound is present above 0.5 ppm. This compound is kinetically stable: at room temperature it decomposes only slowly. At temperatures of ca. 230 °C, decomposition to arsenic and hydrogen is sufficiently rapid to be the basis of the Marsh test for the presence of arsenic. As with stibine, the decomposition of arsine is autocatalytic: the arsenic freed during the reaction acts as a catalyst for the same reaction. Several other factors, such as humidity, the presence of light and certain catalysts (namely alumina), accelerate the decomposition.
AsH3 is a trigonal pyramidal molecule with H–As–H angles of 91.8° and three equivalent As–H bonds, each of 1.519 Å length.
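From the quoted As–H bond length and H–As–H angle, the non-bonded H···H separation follows from the law of cosines. A small illustrative calculation:

```python
import math

# Geometry from the text: As-H bond length 1.519 Å, H-As-H angle 91.8°.
r = 1.519                        # As-H bond length, Å
theta = math.radians(91.8)       # H-As-H angle, radians

# Law of cosines for the triangle H-As-H gives the H···H distance.
d_HH = math.sqrt(2 * r**2 * (1 - math.cos(theta)))
print(round(d_HH, 2))            # ≈ 2.18 Å
```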
Discovery and synthesis
AsH3 is generally prepared by the reaction of As3+ sources with H− equivalents.
4 AsCl3 + 3 NaBH4 → 4 AsH3 + 3 NaCl + 3 BCl3
As reported in 1775, Carl Scheele reduced arsenic(III) oxide with zinc in the presence of acid. This reaction is a prelude to the Marsh test.
Alternatively, sources of As3− react with protonic reagents to also produce this gas. Zinc arsenide and sodium arsenide are suitable precursors:
Zn3As2 + 6 H+ → 2 AsH3 + 3 Zn2+
Na3As + 3 HBr → AsH3 + 3 NaBr
Reactions
The understanding of the chemical properties of AsH3 is well developed and can be anticipated based on an average of the behavior of pnictogen counterparts, such as PH3 and SbH3.
Thermal decomposition
As is typical for heavy hydrides, AsH3 is unstable with respect to its elements; in other words, it is stable kinetically but not thermodynamically:
2 AsH3 → 2 As + 3 H2
This decomposition reaction is the basis of the Marsh test, which detects elemental As.
Oxidation
Continuing the analogy to SbH3, AsH3 is readily oxidized by O2, even at the dilute concentration present in air:
2 AsH3 + 3 O2 → As2O3 + 3 H2O
Arsine will react violently in presence of strong oxidizing agents, such as potassium permanganate, sodium hypochlorite, or nitric acid.
Precursor to metallic derivatives
AsH3 is used as a precursor to metal complexes of "naked" (or "nearly naked") arsenic. An example is the dimanganese species [(C5H5)Mn(CO)2]2AsH, wherein the Mn2AsH core is planar.
Gutzeit test
A characteristic test for arsenic involves the reaction of AsH3 with Ag+, called the Gutzeit test for arsenic. Although this test has become obsolete in analytical chemistry, the underlying reactions further illustrate the affinity of AsH3 for "soft" metal cations. In the Gutzeit test, AsH3 is generated by reduction of aqueous arsenic compounds, typically arsenites, with Zn in the presence of H2SO4. The evolved gaseous AsH3 is then exposed to AgNO3 either as powder or as a solution. With solid AgNO3, AsH3 reacts to produce yellow Ag4AsNO3, whereas AsH3 reacts with a solution of AgNO3 to give black Ag3As.
Acid-base reactions
The acidic properties of the As–H bond are often exploited. Thus, AsH3 can be deprotonated:
AsH3 + NaNH2 → NaAsH2 + NH3
Upon reaction with the aluminium trialkyls, AsH3 gives the trimeric [R2AlAsH2]3, where R = (CH3)3C. This reaction is relevant to the mechanism by which GaAs forms from AsH3 (see below).
AsH3 is generally considered non-basic, but it can be protonated by superacids to give isolable salts of the tetrahedral species [AsH4]+.
Reaction with halogen compounds
Reactions of arsine with the halogens (fluorine and chlorine) or some of their compounds, such as nitrogen trichloride, are extremely dangerous and can result in explosions.
Catenation
In contrast to the behavior of PH3, AsH3 does not form stable chains, although diarsine (or diarsane) H2As–AsH2, and even triarsane H2As–As(H)–AsH2 have been detected. The diarsine is unstable above −100 °C.
Applications
Microelectronics applications
AsH3 is used in the synthesis of semiconducting materials related to microelectronics and solid-state lasers. Related to phosphorus, arsenic is an n-dopant for silicon and germanium. More importantly, AsH3 is used to make the semiconductor GaAs by chemical vapor deposition (CVD) at 700–900 °C:
Ga(CH3)3 + AsH3 → GaAs + 3 CH4
For microelectronic applications, arsine can be provided by a sub-atmospheric gas source (a source that supplies less than atmospheric pressure). In this type of gas package, the arsine is adsorbed on a solid microporous adsorbent inside a gas cylinder. This method allows the gas to be stored without pressure, significantly reducing the risk of an arsine gas leak from the cylinder. With this apparatus, arsine is obtained by applying vacuum to the gas cylinder valve outlet. For semiconductor manufacturing, this method is feasible, as processes such as ion implantation operate under high vacuum.
Chemical warfare
Since before World War II, AsH3 has been proposed as a possible chemical warfare weapon. The gas is colorless, almost odorless, and 2.5 times denser than air, as required for the blanketing effect sought in chemical warfare. It is also lethal in concentrations far lower than those required to smell its garlic-like scent. In spite of these characteristics, arsine was never officially used as a weapon, because of its high flammability and its lower efficacy when compared to the non-flammable alternative phosgene. On the other hand, several organic compounds based on arsine, such as lewisite (β-chlorovinyldichloroarsine), adamsite (diphenylaminechloroarsine), Clark 1 (diphenylchloroarsine) and Clark 2 (diphenylcyanoarsine) have been effectively developed for use in chemical warfare.
Forensic science and the Marsh test
AsH3 is well known in forensic science because it is a chemical intermediate in the detection of arsenic poisoning. The old (but extremely sensitive) Marsh test generates AsH3 in the presence of arsenic. This procedure, published in 1836 by James Marsh, is based upon treating an As-containing sample of a victim's body (typically the stomach contents) with As-free zinc and dilute sulfuric acid: if the sample contains arsenic, gaseous arsine will form. The gas is swept into a glass tube and decomposed by means of heating around 250–300 °C. The presence of As is indicated by formation of a deposit in the heated part of the equipment. On the other hand, the appearance of a black mirror deposit in the cool part of the equipment indicates the presence of antimony (the highly unstable SbH3 decomposes even at low temperatures).
The Marsh test was widely used by the end of the 19th century and the start of the 20th; nowadays more sophisticated techniques such as atomic spectroscopy, inductively coupled plasma, and x-ray fluorescence analysis are employed in the forensic field. Though neutron activation analysis was used to detect trace levels of arsenic in the mid 20th century, it has since fallen out of use in modern forensics.
Toxicology
The toxicity of arsine is distinct from that of other arsenic compounds. The main route of exposure is by inhalation, although poisoning after skin contact has also been described. Arsine attacks hemoglobin in the red blood cells, causing them to be destroyed by the body.
The first signs of exposure, which can take several hours to become apparent, are headaches, vertigo, and nausea, followed by the symptoms of haemolytic anaemia (high levels of unconjugated bilirubin), haemoglobinuria and nephropathy. In severe cases, the damage to the kidneys can be long-lasting.
Exposure to arsine concentrations of 250 ppm is rapidly fatal: concentrations of 25–30 ppm are fatal for 30 min exposure, and concentrations of 10 ppm can be fatal at longer exposure times. Symptoms of poisoning appear after exposure to concentrations of 0.5 ppm. There is little information on the chronic toxicity of arsine, although it is reasonable to assume that, in common with other arsenic compounds, a long-term exposure could lead to arsenicosis.
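The thresholds quoted above can be collected into a small lookup. This is a toy sketch for illustration only, not occupational-safety guidance; the function name and return strings are invented:

```python
# Illustrative classification of arsine exposure, encoding only the
# concentration thresholds stated in the text above.
def arsine_hazard(ppm: float) -> str:
    if ppm >= 250:
        return "rapidly fatal"
    if ppm >= 25:
        return "fatal within ~30 min"
    if ppm >= 10:
        return "fatal on prolonged exposure"
    if ppm >= 0.5:
        return "symptoms of poisoning"
    return "below reported symptom threshold"

print(arsine_hazard(30))   # fatal within ~30 min
```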
Arsine can cause pneumonia in two different ways, either of which can result in death. In the first, the "extensive edema of the acute stage may become diffusely infiltrated with polymorphonuclear leucocytes, and the edema may change to ringed with leucocytes, their epithelium degenerated, their walls infiltrated, and each bronchiole the center of a small focus or nodule of pneumonic consolidation". In the second, "the areas involved are practically always the anterior tips of the middle and upper lobes, while the posterior portions of these lobes and the whole of the lower lobes present an air-containing and emphysematous condition, sometimes with slight congestion, sometimes with none".
It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
Occupational exposure limits
Elimination reaction

An elimination reaction is a type of organic reaction in which two substituents are removed from a molecule in either a one- or two-step mechanism. The one-step mechanism is known as the E2 reaction, and the two-step mechanism is known as the E1 reaction. The numbers refer not to the number of steps in the mechanism, but rather to the kinetics of the reaction: E2 is bimolecular (second-order) while E1 is unimolecular (first-order). In cases where the molecule is able to stabilize an anion but possesses a poor leaving group, a third type of reaction, E1CB, exists. Finally, the pyrolysis of xanthate and acetate esters proceeds through an "internal" elimination mechanism, the Ei mechanism.
E2 mechanism
The E2 mechanism, where E2 stands for bimolecular elimination, involves a one-step mechanism in which carbon-hydrogen and carbon-halogen bonds break to form a double bond (C=C Pi bond).
The specifics of the reaction are as follows:
E2 is a single step elimination, with a single transition state.
It is typically undergone by primary substituted alkyl halides, but is possible with some secondary alkyl halides and other compounds.
The reaction rate is second order, because it is influenced by both the alkyl halide and the base (bimolecular).
Because the E2 mechanism results in the formation of a pi bond, the two leaving groups (often a hydrogen and a halogen) need to be antiperiplanar. An antiperiplanar transition state has staggered conformation with lower energy than a synperiplanar transition state which is in eclipsed conformation with higher energy. The reaction mechanism involving staggered conformation is more favorable for E2 reactions (unlike E1 reactions).
E2 typically uses a strong base. It must be strong enough to remove a weakly acidic hydrogen.
In order for the pi bond to be created, the hybridization of the carbons needs to change from sp3 to sp2.
The C-H bond is weakened in the rate determining step and therefore a primary deuterium isotope effect much larger than 1 (commonly 2-6) is observed.
E2 competes with the SN2 reaction mechanism if the base can also act as a nucleophile (true for many common bases).
An example of this type of reaction in scheme 1 is the reaction of isobutyl bromide with potassium ethoxide in ethanol. The reaction products are isobutene, ethanol and potassium bromide.
E1 mechanism
E1 is a model to explain a particular type of chemical elimination reaction. E1 stands for unimolecular elimination and has the following specifications
It is a two-step process of elimination: ionization and deprotonation.
Ionization: the carbon-halogen bond breaks to give a carbocation intermediate.
Deprotonation: a proton is removed from the carbocation.
E1 typically takes place with tertiary alkyl halides, but is possible with some secondary alkyl halides.
The reaction rate is influenced only by the concentration of the alkyl halide because carbocation formation is the slowest step, also known as the rate-determining step. Therefore, first-order kinetics apply (unimolecular).
The reaction usually occurs in the complete absence of a base or the presence of only a weak base (acidic conditions and high temperature).
E1 reactions are in competition with SN1 reactions because they share a common carbocationic intermediate.
A secondary deuterium isotope effect of slightly larger than 1 (commonly 1 - 1.5) is observed.
There is no antiperiplanar requirement. An example is the pyrolysis of a certain sulfonate ester of menthol:
Only reaction product A results from antiperiplanar elimination. The presence of product B is an indication that an E1 mechanism is occurring.
It is accompanied by carbocationic rearrangement reactions
An example in scheme 2 is the reaction of tert-butyl bromide with potassium ethoxide in ethanol.
E1 eliminations happen with highly substituted alkyl halides for two main reasons.
Highly substituted alkyl halides are bulky, limiting the room for the E2 one-step mechanism; therefore, the two-step E1 mechanism is favored.
Highly substituted carbocations are more stable than methyl or primary substituted cations. Such stability gives time for the two-step E1 mechanism to occur.
If SN1 and E1 pathways are competing, the E1 pathway can be favored by increasing the heat.
Specific features:
Rearrangement possible
Independent of concentration and basicity of base
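The kinetic contrast between the two mechanisms can be sketched numerically. In this toy example (hypothetical rate constants and concentrations), an E2 rate is first order in both substrate and base, while an E1 rate depends on the substrate alone:

```python
# Rate laws for the two elimination mechanisms described above.
def rate_E2(k, substrate, base):
    # Bimolecular: rate = k [RX][base], second order overall.
    return k * substrate * base

def rate_E1(k, substrate, base):
    # Unimolecular: rate = k [RX]; the base does not appear in the rate law.
    return k * substrate

# Doubling the base concentration doubles the E2 rate but leaves E1 unchanged.
print(rate_E2(1.0, 0.1, 0.2) / rate_E2(1.0, 0.1, 0.1))  # 2.0
print(rate_E1(1.0, 0.1, 0.2) / rate_E1(1.0, 0.1, 0.1))  # 1.0
```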
Competition among mechanisms
The reaction rate is influenced by the reactivity of halogens, iodide and bromide being favored. Fluoride is not a good leaving group, so eliminations with fluoride as the leaving group are slower than with other halogens.
There is a certain level of competition between the elimination reaction and nucleophilic substitution. More precisely, there are competitions between E2 and SN2 and also between E1 and SN1. Generally, elimination is favored over substitution when
steric hindrance around the α-carbon increases.
a stronger base is used.
temperature increases (entropy increases).
the base is a poor nucleophile. Bases with steric bulk, (such as in potassium tert-butoxide), are often poor nucleophiles.
For example, when a 3° haloalkane reacts with an alkoxide, due to the strong basic character of the alkoxide and the unreactivity of the 3° group towards SN2, only alkene formation by E2 elimination is observed. Thus, elimination by E2 limits the scope of the Williamson ether synthesis (an SN2 reaction) to essentially only 1° haloalkanes; 2° haloalkanes generally do not give synthetically useful yields, while 3° haloalkanes fail completely.
With strong base, 3° haloalkanes give elimination by E2. With weak bases, mixtures of elimination and substitution products form by competing SN1 and E1 pathways.
The case of 2° haloalkanes is relatively complex. For strongly basic nucleophiles (pKaH > 11, e.g., hydroxide, alkoxide, acetylide), the result is generally elimination by E2, while weaker bases that are still good nucleophiles (e.g., acetate, azide, cyanide, iodide) will give primarily SN2. Finally, weakly nucleophilic species (e.g., water, alcohols, carboxylic acids) will give a mixture of SN1 and E1.
For 1° haloalkanes with β-branching, E2 elimination is still generally preferred over SN2 for strongly basic nucleophiles. Unhindered 1° haloalkanes favor SN2 when the nucleophile is also unhindered. However, strongly basic and hindered nucleophiles favor E2.
In general, with the exception of reactions in which E2 is impossible because β hydrogens are unavailable (e.g. methyl, allyl, and benzyl halides), clean SN2 substitution is hard to achieve when strong bases are used, as alkene products arising from elimination are almost always observed to some degree. On the other hand, clean E2 can be achieved by simply selecting a sterically hindered base (e.g., potassium tert-butoxide). Similarly, attempts to effect substitution by SN1 almost always result in a product mixture contaminated by some E1 product (again, with the exception of cases where the lack of β hydrogens makes elimination impossible).
In one study, the kinetic isotope effect (KIE) was determined for the gas-phase reaction of several alkyl halides with the chlorate ion. In accordance with an E2 elimination, the reaction with t-butyl chloride results in a KIE of 2.3. The methyl chloride reaction (only SN2 possible), on the other hand, has a KIE of 0.85, consistent with an SN2 reaction because in this reaction type the C–H bonds tighten in the transition state. The KIEs for the ethyl (0.99) and isopropyl (1.72) analogues suggest competition between the two reaction modes.
Elimination reactions other than β-elimination
β-Elimination, with loss of electrofuge and nucleofuge on vicinal carbon atoms, is by far the most common type of elimination. The ability to form a stable product containing a C=C or C=X bond, as well as orbital alignment considerations, strongly favors β-elimination over other elimination processes. However, other types are known, generally for systems where β-elimination cannot occur.
The next most common type of elimination reaction is α-elimination. For a carbon center, the result of α-elimination is the formation of a carbene, which includes "stable carbenes" such as carbon monoxide or isocyanides. For instance, α-elimination of the elements of HCl from chloroform (CHCl3) in the presence of strong base is a classic approach for the generation of dichlorocarbene, :CCl2, as a reactive intermediate. On the other hand, formic acid undergoes α-elimination to afford the stable products water and carbon monoxide under acidic conditions. α-Elimination may also occur on a metal center, one particularly common result of which is lowering of both the metal oxidation state and coordination number by 2 units in a process known as reductive elimination. (Confusingly, in organometallic terminology, the terms α-elimination and α-abstraction refer to processes that result in formation of a metal-carbene complex. In these reactions, it is the carbon adjacent to the metal that undergoes α-elimination.)
In certain special cases, γ- and higher eliminations to form three-membered or larger rings are also possible in both organic and organometallic processes. For instance, certain Pt(II) complexes undergo γ- and δ-elimination to give metallacycles. More recently, γ-silyl elimination of a silylcyclobutyl tosylate has been used to prepare strained bicyclic systems.
History
Many of the concepts and terminology related to elimination reactions were proposed by Christopher Kelk Ingold in the 1920s.
Nitrogen dioxide
Nitrogen dioxide is a chemical compound with the formula NO2. One of several nitrogen oxides, nitrogen dioxide is a reddish-brown gas. It is a paramagnetic, bent molecule with C2v point group symmetry. Industrially, NO2 is an intermediate in the synthesis of nitric acid, millions of tons of which are produced each year, primarily for the production of fertilizers.
Nitrogen dioxide is poisonous and can be fatal if inhaled in large quantities. Cooking with a gas stove produces nitrogen dioxide, which degrades indoor air quality. Combustion of gas can lead to increased concentrations of nitrogen dioxide throughout the home environment, which is linked to respiratory issues and diseases. The LC50 (median lethal concentration) for humans has been estimated to be 174 ppm for a 1-hour exposure. It is also included in the NOx family of atmospheric pollutants.
Properties
Nitrogen dioxide is a reddish-brown gas with a pungent, acrid odor above 21.15 °C and becomes a yellowish-brown liquid below that temperature. It forms an equilibrium with its dimer, dinitrogen tetroxide (N2O4), and converts almost entirely to N2O4 below −11.2 °C.
The bond length between the nitrogen atom and the oxygen atom is 119.7 pm. This bond length is consistent with a bond order between one and two.
Unlike ozone (O3), the ground electronic state of nitrogen dioxide is a doublet state, since nitrogen has one unpaired electron, which decreases the alpha effect compared with nitrite and creates a weak bonding interaction with the oxygen lone pairs. The lone electron in NO2 also means that this compound is a free radical, so the formula for nitrogen dioxide is often written as •NO2.
The reddish-brown color is a consequence of preferential absorption of light in the blue region of the spectrum (400–500 nm), although the absorption extends throughout the visible (at shorter wavelengths) and into the infrared (at longer wavelengths). Absorption of light at wavelengths shorter than about 400 nm results in photolysis (to form NO and atomic oxygen); in the atmosphere, the addition of the oxygen atom so formed to O2 results in ozone.
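The ~400 nm photolysis threshold follows from the photon energy E = hc/λ, which must exceed the O–NO bond dissociation energy. The ~305 kJ/mol bond energy used below is an assumed round value for illustration, not a figure from this article:

```python
# Photon energy vs. an assumed O-NO bond dissociation energy (~305 kJ/mol),
# showing why only wavelengths shorter than ~400 nm photolyze NO2.

h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
N_A = 6.022e23   # Avogadro's number, 1/mol

def photon_energy_kj_per_mol(wavelength_nm):
    """Energy of one mole of photons at the given wavelength, in kJ/mol."""
    E_single = h * c / (wavelength_nm * 1e-9)  # joules per photon
    return E_single * N_A / 1000.0

bond_energy = 305.0  # assumed O-NO dissociation energy, kJ/mol
for wl in (350, 400, 450):
    E = photon_energy_kj_per_mol(wl)
    print(f"{wl} nm -> {E:.0f} kJ/mol, photolysis possible: {E > bond_energy}")
```

With these constants, 350 nm photons carry about 342 kJ/mol (enough to break the bond) while 450 nm photons carry only about 266 kJ/mol, placing the threshold just below 400 nm, consistent with the text.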
Preparation
Industrially, nitrogen dioxide is produced and transported as its cryogenic liquid dimer, dinitrogen tetroxide. It is produced by the oxidation of ammonia (the Ostwald process). This reaction is the first step in the production of nitric acid:
4 NH3 + 5 O2 → 4 NO + 6 H2O
2 NO + O2 → 2 NO2
It can also be produced by the oxidation of nitrosyl chloride:
Instead, most laboratory syntheses rely on thermal decomposition. For example, the thermal decomposition of some metal nitrates generates NO2:
2 Pb(NO3)2 → 2 PbO + 4 NO2 + O2
Alternatively, dehydration of nitric acid produces nitronium nitrate:
2 HNO3 → [NO2]+[NO3]− + H2O
which subsequently undergoes thermal decomposition:
2 [NO2]+[NO3]− → 4 NO2 + O2
NO2 is also generated by the reduction of concentrated nitric acid with a metal (such as copper):
Cu + 4 HNO3 → Cu(NO3)2 + 2 NO2 + 2 H2O
Selected reactions
Nitric acid decomposes slowly to nitrogen dioxide by the overall reaction:
4 HNO3 → 4 NO2 + 2 H2O + O2
The nitrogen dioxide so formed confers the characteristic yellow color often exhibited by this acid. However, the reaction is too slow to be a practical source of NO2.
Thermal properties
At low temperatures, NO2 reversibly converts to the colourless gas dinitrogen tetroxide (N2O4):
2 NO2 ⇌ N2O4
The exothermic equilibrium has an enthalpy change of ΔH = −57.23 kJ/mol.
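Because the dimerization is exothermic, cooling shifts the equilibrium strongly toward N2O4, which the integrated van 't Hoff equation makes quantitative. The sketch below assumes a commonly quoted dimerization enthalpy of about −57 kJ/mol and treats it as temperature-independent:

```python
# Van 't Hoff estimate of how K for 2 NO2 <-> N2O4 shifts with temperature,
# assuming dH ~ -57.2 kJ/mol (treated as constant over the range).

import math

R = 8.314     # gas constant, J/(mol*K)
dH = -57.2e3  # assumed enthalpy of dimerization, J/mol

def k_ratio(T1, T2):
    """K(T2)/K(T1) from the integrated van 't Hoff equation."""
    return math.exp(-dH / R * (1.0 / T2 - 1.0 / T1))

# Cooling from 25 C (298 K) to about -11 C (262 K) favors the dimer:
print(f"K(262 K)/K(298 K) = {k_ratio(298.0, 262.0):.0f}")
```

The ratio comes out around 24, i.e. the equilibrium constant for dimerization grows by more than an order of magnitude on cooling to the liquefaction point, consistent with the near-complete conversion to N2O4 described above.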
At 150 °C, NO2 decomposes with release of oxygen via an endothermic process (ΔH = 114 kJ/mol):
2 NO2 → 2 NO + O2
As an oxidizer
As suggested by the weakness of the N–O bond, NO2 is a good oxidizer. Consequently, it will combust, sometimes explosively, in the presence of hydrocarbons.
Hydrolysis
NO2 reacts with water to give nitric acid and nitrous acid:
2 NO2 + H2O → HNO3 + HNO2
This reaction is one of the steps in the Ostwald process for the industrial production of nitric acid from ammonia. This reaction is negligibly slow at low concentrations of NO2 characteristic of the ambient atmosphere, although it does proceed upon NO2 uptake to surfaces. Such surface reaction is thought to produce gaseous HNO2 (often written as HONO) in outdoor and indoor environments.
Conversion to nitrates
NO2 is used to generate anhydrous metal nitrates from the oxides:
Alkyl and metal iodides give the corresponding nitrates:
With organic compounds
The reactivity of nitrogen dioxide toward organic compounds has long been known. For example, it reacts with amides to give N-nitroso derivatives. It is used for nitrations under anhydrous conditions.
Uses
NO2 is used as an intermediate in the manufacturing of nitric acid, as a nitrating agent in the manufacturing of chemical explosives, as a polymerization inhibitor for acrylates, as a flour bleaching agent, and as a room temperature sterilization agent. It is also used as an oxidizer in rocket fuel, for example in red fuming nitric acid; it was used in the Titan rockets, to launch Project Gemini, in the maneuvering thrusters of the Space Shuttle, and in uncrewed space probes sent to various planets.
Environmental presence
Nitrogen dioxide typically arises via the oxidation of nitric oxide by oxygen in air (e.g. as result of corona discharge):
2 NO + O2 → 2 NO2
NO2 is introduced into the environment by natural causes, including entry from the stratosphere, bacterial respiration, volcanoes, and lightning. These sources make NO2 a trace gas in the atmosphere of Earth, where it plays a role in absorbing sunlight and regulating the chemistry of the troposphere, especially in determining ozone concentrations.
Anthropogenic sources
Nitrogen dioxide also forms in most combustion processes. At elevated temperatures, nitrogen combines with oxygen to form nitrogen dioxide:
N2 + 2 O2 → 2 NO2
For the general public, the most prominent sources of NO2 are internal combustion engines, as combustion temperatures are high enough to thermally combine some of the nitrogen and oxygen in the air to form NO2.
Outdoors, NO2 can be a result of traffic from motor vehicles. Indoors, exposure arises from cigarette smoke, and butane and kerosene heaters and stoves. Indoor exposure levels of NO2 are, on average, at least three times higher in homes with gas stoves than in homes with electric stoves. Workers in industries where NO2 is used are also exposed and are at risk for occupational lung diseases, and NIOSH has set exposure limits and safety standards. Workers in high-voltage areas, especially those with spark or plasma creation, are at risk. Agricultural workers can be exposed to NO2 arising from grain decomposing in silos; chronic exposure can lead to lung damage in a condition called "silo-filler's disease".
Toxicity
NO2 diffuses into the epithelial lining fluid (ELF) of the respiratory epithelium and dissolves. There, it chemically reacts with antioxidant and lipid molecules in the ELF. The health effects of NO2 are caused by the reaction products or their metabolites, which are reactive nitrogen species and reactive oxygen species that can drive bronchoconstriction, inflammation, reduced immune response, and may have effects on the heart.
Acute exposure
Acute harm due to NO2 exposure is rare. 100–200 ppm can cause mild irritation of the nose and throat, 250–500 ppm can cause edema, leading to bronchitis or pneumonia, and levels above 1000 ppm can cause death due to asphyxiation from fluid in the lungs. There are often no symptoms at the time of exposure other than transient cough, fatigue or nausea, but over hours inflammation in the lungs causes edema.
For skin or eye exposure, the affected area is flushed with saline. For inhalation, oxygen is administered, bronchodilators may be administered, and if there are signs of methemoglobinemia, a condition that arises when nitrogen-based compounds affect the hemoglobin in red blood cells, methylene blue may be administered.
It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and it is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
Long-term
Exposure to low levels of NO2 over time can cause changes in lung function. Cooking with a gas stove is associated with poorer indoor air quality. Combustion of gas can lead to increased concentrations of nitrogen dioxide throughout the home environment, which is linked to respiratory issues and diseases. Children exposed to NO2 are more likely to be admitted to hospital with asthma.
Environmental effects
Interaction of NO2 and other NOx with water, oxygen and other chemicals in the atmosphere can form acid rain, which harms sensitive ecosystems such as lakes and forests. Elevated levels of NO2 can also harm vegetation, decreasing growth and reducing crop yields.
Crane fly
A crane fly is any member of the dipteran superfamily Tipuloidea, which contains the living families Cylindrotomidae, Limoniidae, Pediciidae and Tipulidae, as well as several extinct families. "Winter crane flies", members of the family Trichoceridae, are sufficiently different from the typical crane flies of Tipuloidea to be excluded from that superfamily, and are placed as their sister group within Tipulomorpha.
The classification of crane flies has varied in the past, with some or all of these families treated as subfamilies, but the following classification is currently accepted. (Species counts are approximate and vary over time.)
Infraorder Tipulomorpha
Superfamily Tipuloidea (Typical Crane Flies)
Family Cylindrotomidae (Cylindrotomid or Long-Bodied Crane Flies, 67 species)
Family Limoniidae (Limoniid Crane Flies, 10,786 species, possibly paraphyletic)
Family Pediciidae (Hairy-eyed Crane Flies, 498 species)
Family Tipulidae (Large Crane Flies, 4,351 species)
Family Trichoceridae (Winter Crane Flies)
In colloquial speech, crane flies are known as mosquito hawks or "skeeter-eaters", though they do not actually prey on adult mosquitoes or other insects. They are also sometimes called "daddy longlegs", a name which is also used for arachnids of the family Pholcidae and the order Opiliones. The larvae of crane flies are known commonly as leatherjackets.
Crane flies first appeared during the Middle Triassic, around 245 million years ago, making them one of the oldest known groups of flies, and are found worldwide, though individual species usually have limited ranges. They are most diverse in the tropics but are also common in northern latitudes and high elevations.
More than 15,500 species and over 500 genera of crane flies have been described, the majority by Charles Paul Alexander, who published descriptions of 10,890 new species and subspecies, and 256 new genera and subgenera over a period of 71 years, from 1910 to 1981.
Description
Summary
An adult crane fly, resembling an oversized male mosquito, typically has a slender body and long, stilt-like legs that are deciduous, easily coming off the body. Like those of other insects, crane fly wings are marked with wing interference patterns, which vary among species and are thus useful for species identification. They occur in moist, temperate environments such as vegetation near lakes and streams. They generally do not feed, but some species consume nectar, pollen and/or water.
The wingspan is generally about , though some species of Holorusia can reach . The antennae have up to 19 segments. It is also characterized by a V-shaped suture or groove on the back of the thorax (mesonotum) and by its wing venation. The rostrum is long and in some species as long as the head and thorax together.
Larvae occur in various habitats including marshes, springs, decaying wood, moist soil, leaf litter, fungi, vertebrate nests and vegetation. They usually feed on decaying plant matter and microbes associated with this, but some species instead feed on living plants, fungi or other invertebrates.
Formal
Tipuloidea are medium to large-sized flies () with elongated legs, wings, and abdomen. Their colour is yellow, brown or grey. Ocelli are absent. The rostrum (a snout) is short to very short with a beak-like point called the nasus (rarely absent). The apical segment of the maxillary palpi is flagelliform (whip-like) and much longer than the subapical segment. The antennae have 13 segments (exceptionally 14–19). These are whorled, serrate, or ctenidial (comb-like). There is a distinct V-shaped suture between the mesonotal prescutum and scutum (near the level of the wing bases). The wings are monochromatic, longitudinally striped or marbled. In females the wings are sometimes rudimentary. The sub-costal vein (Sc) joins through Sc2 with the radial vein, Sc1 is at most a short stump. There are four, rarely (when R2 is reduced) three branches of the radial vein merging into the alar margin. The discoidal wing cell is usually present. The wing has two anal veins. Sternite 9 of the male genitalia has, with few exceptions, two pairs of appendages. Sometimes appendages are also present on sternite 8. The female ovipositor has sclerotized valves and the cerci have a smooth or dentate lower margin. The valves are sometimes modified into thick bristles or short teeth.
The larva is elongated, usually cylindrical. The posterior two-thirds of the head capsule is enclosed or retracted within the prothoracic segment. The larva is metapneustic (with only one pair of spiracles, these on the anal segment of the abdomen), but often with vestigial lateral spiracles (rarely apneustic). The head capsule is sclerotized anteriorly and deeply incised ventrally and often dorsolaterally. The mandibles are opposed and move in the horizontal or oblique plane. The abdominal segments have transverse creeping welts. The terminal segments of the abdomen are glabrous, often partially sclerotized and bearing posterior spiracles. The spiracular disc is usually surrounded by lobe-like projections and anal papillae or lobes.
Biology
Adults have a lifespan of 10 to 15 days. The adult female usually contains mature eggs as she emerges from her pupa, and often mates immediately if a male is available. Males also search for females by walking or flying. Copulation takes a few minutes to hours and may be accomplished in flight. The female immediately oviposits, usually in wet soil or mats of algae. Some lay eggs on the surface of a water body or in dry soils, and some reportedly simply drop them in flight. Most crane fly eggs are black in color. They often have a filament, which may help anchor the egg in wet or aquatic environments.
Crane fly larvae (leatherjackets) have been observed in many habitat types on dry land and in water, including marine, brackish, and fresh water. They are cylindrical in shape, but taper toward the front end, and the head capsule is often retracted into the thorax. The abdomen may be smooth, lined with hairs, or studded with projections or welt-like spots. Projections may occur around the spiracles. Larvae may eat algae, microflora, and living or decomposing plant matter, including wood. Some are predatory.
Ecology
Larval habitats include all kinds of freshwater, semiaquatic environments. Some Tipuloidea, including Dolichopeza, are found in moist to wet cushions of mosses or liverworts. Ctenophora species are found in decaying wood or sodden logs. Nephrotoma and Tipula larvae are found in dry soils of pasturelands, lawns, and steppe. Tipuloidea larvae are also found in rich organic earth and mud, in wet spots in woods where the humus is saturated, in leaf litter or mud, decaying plant materials, or fruits in various stages of putrefaction.
Larvae can be important in the soil ecosystem, because they process organic material and increase microbial activity. Larvae and adults are also valuable prey items for many animals, including insects, spiders, fish, amphibians, birds, and mammals.
Adult crane flies may be used for transport by aquatic species of the mite family Ascidae. This is known as phoresis.
Pest status
Some members of the tipulid genus Tipula, such as the European crane fly, Tipula paludosa and the marsh crane fly T. oleracea are agricultural pests in Europe. The larvae of these species live in the top layers of soil where they feed on the roots, root hairs, crown, and sometimes the leaves of crops, stunting their growth or killing the plants. They are pests on a variety of commodities. Since the late 1900s, T. paludosa and T. oleracea have become invasive in the United States. The larvae have been observed on many crops, including vegetables, fruits, cereals, pasture, lawn grasses, and ornamental plants.
In 1935, Lord's Cricket Ground in London was among venues affected by leatherjackets. Several thousand were collected by ground staff and burned, because they caused bald patches on the pitch and the pitch took unaccustomed spin for much of the season.
Phylogenetics
The phylogenetic position of the Tipuloidea remains uncertain. The classical viewpoint that they are an early branch of Diptera—perhaps (with the Trichoceridae) the sister group of all other Diptera—is giving way to modern views that they are more highly derived. This is thanks to evidence from molecular studies, which is consistent with the more derived larval characters similar to those of 'higher' Diptera. The Pediciidae and Tipulidae are sister groups (the "limoniids" form a paraphyletic grade). Specifically, Limoniidae has recently been treated by numerous authors at the rank of family, but subsequent phylogenetic analyses revealed that the remaining tipulid groups render it paraphyletic. The Cylindrotomidae appear to be a relict group that was much better represented in the Tertiary. Tipulidae probably evolved from ancestors in the Upper Jurassic, the Architipulidae, and representatives of the Limoniidae are known from the Upper Triassic.
Common names
Numerous common names have been applied to the crane fly. Many of the names are more or less regional in the U.S., including mosquito hawk, mosquito eater, gallinipper, and gollywhopper. They are also known as "daddy longlegs" in English-speaking countries outside the U.S., not to be confused with the U.S. usages of "daddy long legs" that refer to either arachnids of the order Opiliones or the family Pholcidae. The larvae of crane flies are known commonly as leatherjackets.
They are also known as Jenny long legs in Scotland. In Ireland, they are generally called 'daddy long legs' in English, whereas in Irish they are commonly known as Pilib an Gheataire, which means Skinny Philip.
Misconceptions
There is an enduring urban legend that crane flies are the most venomous insects in the world; however, they have neither venom nor the ability to bite. The myth probably arose due to their being confused with the cellar spider as they are also informally called "daddy longlegs", and although the arachnid does possess venom, it is not especially potent.
Despite widely held beliefs that adult crane flies (or "mosquito hawks") prey on mosquito populations, the adult crane fly is anatomically incapable of killing or consuming other insects. Although the adults of some species may feed on nectar, the adults of many species have such short lifespans that they do not eat at all.
Protostar
A protostar is a very young star that is still gathering mass from its parent molecular cloud. It is the earliest phase in the process of stellar evolution. For a low-mass star (i.e. one of the Sun's mass or lower), it lasts about 500,000 years. The phase begins when a molecular cloud fragment first collapses under the force of self-gravity and an opaque, pressure-supported core forms inside the collapsing fragment. It ends when the infalling gas is depleted, leaving a pre-main-sequence star, which contracts to later become a main-sequence star at the onset of hydrogen fusion producing helium.
History
The modern picture of protostars, summarized above, was first suggested by Chushiro Hayashi in 1966. In the first models, the size of protostars was greatly overestimated. Subsequent numerical calculations clarified the issue, and showed that protostars are only modestly larger than main-sequence stars of the same mass. This basic theoretical result has been confirmed by observations, which find that the largest pre-main-sequence stars are also of modest size.
Protostellar evolution
Star formation begins in relatively small molecular clouds called dense cores. Each dense core is initially in balance between self-gravity, which tends to compress the object, and both gas pressure and magnetic pressure, which tend to inflate it. As the dense core accrues mass from its larger, surrounding cloud, self-gravity begins to overwhelm pressure, and collapse begins. Theoretical modeling of an idealized spherical cloud initially supported only by gas pressure indicates that the collapse process spreads from the inside toward the outside. Spectroscopic observations of dense cores that do not yet contain stars indicate that contraction indeed occurs. So far, however, the predicted outward spread of the collapse region has not been observed.
The gas that collapses toward the center of the dense core first builds up a low-mass protostar, and then a protoplanetary disk orbiting the object. As the collapse continues, an increasing amount of gas impacts the disk rather than the star, a consequence of angular momentum conservation. Exactly how material in the disk spirals inward onto the protostar is not yet understood, despite a great deal of theoretical effort. This problem is illustrative of the larger issue of accretion disk theory, which plays a role in much of astrophysics.
Regardless of the details, the outer surface of a protostar consists at least partially of shocked gas that has fallen from the inner edge of the disk. The surface is thus very different from the relatively quiescent photosphere of a pre-main-sequence or main-sequence star. Within its deep interior, the protostar has a lower temperature than an ordinary star. At its center, hydrogen-1 is not yet fusing with itself. Theory predicts, however, that the hydrogen isotope deuterium (hydrogen-2) fuses with hydrogen-1, creating helium-3. The heat from this fusion reaction tends to inflate the protostar, and thereby helps determine the size of the youngest observed pre-main-sequence stars.
The energy generated from ordinary stars comes from the nuclear fusion occurring at their centers. Protostars also generate energy, but it comes from the radiation liberated at the shocks on its surface and on the surface of its surrounding disk. The radiation thus created must traverse the interstellar dust in the surrounding dense core. The dust absorbs all impinging photons and reradiates them at longer wavelengths. Consequently, a protostar is not detectable at optical wavelengths, and cannot be placed in the Hertzsprung–Russell diagram, unlike the more evolved pre-main-sequence stars.
The actual radiation emanating from a protostar is predicted to be in the infrared and millimeter regimes. Point-like sources of such long-wavelength radiation are commonly seen in regions that are obscured by molecular clouds. It is commonly believed that those conventionally labeled as Class 0 or Class I sources are protostars. However, there is still no definitive evidence for this identification.
Observed classes of young stars
Gallery
Binary number
A binary number is a number expressed in the base-2 numeral system or binary numeral system, a method for representing numbers that uses only two symbols for the natural numbers: typically "0" (zero) and "1" (one). A binary number may also refer to a rational number that has a finite representation in the binary numeral system, that is, the quotient of an integer by a power of two.
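The second sense above (a rational with a finite binary representation) is easy to test mechanically: reduced to lowest terms, the denominator must be a power of two. A small sketch:

```python
# A rational p/q has a finite binary representation exactly when,
# in lowest terms, its denominator is a power of two.

from fractions import Fraction

def finite_in_binary(p, q):
    d = Fraction(p, q).denominator  # reduces the fraction to lowest terms
    return d & (d - 1) == 0         # bit trick: power-of-two test

print(finite_in_binary(3, 8))   # True:  3/8 = 0.011 in binary
print(finite_in_binary(1, 10))  # False: 1/10 has a repeating binary expansion
```

Note that integers pass the test too (denominator 1 = 2^0), matching the definition above.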
The base-2 numeral system is a positional notation with a radix of 2. Each digit is referred to as a bit, or binary digit. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used internally by almost all modern computers and computer-based devices, preferred over other systems because of its simplicity and its noise immunity in physical implementation.
History
The modern binary number system was studied in Europe in the 16th and 17th centuries by Thomas Harriot, and Gottfried Leibniz. However, systems related to binary numbers have appeared earlier in multiple cultures including ancient Egypt, China, Europe and India.
Egypt
The scribes of ancient Egypt used two different systems for their fractions, Egyptian fractions (not related to the binary number system) and Horus-Eye fractions (so called because many historians of mathematics believe that the symbols used for this system could be arranged to form the eye of Horus, although this has been disputed). Horus-Eye fractions are a binary numbering system for fractional quantities of grain, liquids, or other measures, in which a fraction of a hekat is expressed as a sum of the binary fractions 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64. Early forms of this system can be found in documents from the Fifth Dynasty of Egypt, approximately 2400 BC, and its fully developed hieroglyphic form dates to the Nineteenth Dynasty of Egypt, approximately 1200 BC.
The method used for ancient Egyptian multiplication is also closely related to binary numbers. In this method, multiplying one number by a second is performed by a sequence of steps in which a value (initially the first of the two numbers) is either doubled or has the first number added back into it; the order in which these steps are to be performed is given by the binary representation of the second number. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus, which dates to around 1650 BC.
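The doubling procedure above maps directly onto the binary digits of the multiplier, which a short sketch makes concrete (function name and loop structure are illustrative, not from the source):

```python
# Ancient Egyptian multiplication: the doubling-and-adding steps
# are selected by the binary digits of the multiplier b.

def egyptian_multiply(a, b):
    result = 0
    while b > 0:
        if b & 1:      # this binary digit of b is 1: add the current doubling
            result += a
        a <<= 1        # double the first number
        b >>= 1        # move to the next binary digit of the second number
    return result

print(egyptian_multiply(13, 21))  # 273, the same as 13 * 21
```

Each pass of the loop corresponds to one doubling step of the Egyptian method; the binary representation of the second number dictates which doublings are added back in, exactly as described above.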
China
The I Ching dates from the 9th century BC in China. The binary notation in the I Ching is used to interpret its quaternary divination technique.
It is based on taoistic duality of yin and yang. Eight trigrams (Bagua) and a set of 64 hexagrams ("sixty-four" gua), analogous to the three-bit and six-bit binary numerals, were in use at least as early as the Zhou dynasty of ancient China.
The Song dynasty scholar Shao Yong (1011–1077) rearranged the hexagrams in a format that resembles modern binary numbers, although he did not intend his arrangement to be used mathematically. Viewing the least significant bit on top of single hexagrams in Shao Yong's square, and reading along rows either from bottom right to top left with solid lines as 0 and broken lines as 1, or from top left to bottom right with solid lines as 1 and broken lines as 0, the hexagrams can be interpreted as a sequence from 0 to 63.
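One of the two reading conventions described above (solid lines as 0, broken lines as 1, least significant line first) can be sketched as a six-bit decoder; the encoding of a hexagram as a list of strings is an illustrative choice, not a standard representation:

```python
# Interpret a hexagram as a 6-bit number: solid = 0, broken = 1,
# with the lines ordered from least significant to most significant.

def hexagram_value(lines):
    """lines: sequence of six strings, each 'solid' or 'broken'."""
    value = 0
    for i, line in enumerate(lines):
        if line == 'broken':
            value |= 1 << i  # set bit i for a broken line
    return value

print(hexagram_value(['solid'] * 6))   # 0  (all solid lines)
print(hexagram_value(['broken'] * 6))  # 63 (all broken lines)
```

The 64 hexagrams thus map one-to-one onto the integers 0 through 63, which is the sense in which Shao Yong's square resembles a binary counting sequence.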
Classical antiquity
Etruscans divided the outer edge of divination livers into sixteen parts, each inscribed with the name of a divinity and its region of the sky. Each liver region produced a binary reading which was combined into a final binary for divination.
Divination at Ancient Greek Dodona oracle worked by drawing from separate jars, questions tablets and "yes" and "no" pellets. The result was then combined to make a final prophecy.
India
The Indian scholar Pingala (c. 2nd century BC) developed a binary system for describing prosody. He described meters in the form of short and long syllables (the latter equal in length to two short syllables). They were known as laghu (light) and guru (heavy) syllables.
Pingala's Hindu classic titled Chandaḥśāstra (8.23) describes the formation of a matrix in order to give a unique value to each meter. "Chandaḥśāstra" literally translates to science of meters in Sanskrit. The binary representations in Pingala's system increase towards the right, not to the left as in the binary numbers of modern positional notation. In Pingala's system, the numbers start from number one, not zero. Four short syllables "0000" form the first pattern and correspond to the value one. The numerical value is obtained by adding one to the sum of place values.
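Following the description above, a Pingala-style value can be computed by reading the pattern with place values increasing to the right and adding one to the sum. Mapping short (laghu) syllables to 0 and long (guru) syllables to 1 is an assumption made here for illustration:

```python
# Pingala-style value of a syllable pattern: place values increase to
# the right (leftmost digit is least significant), and one is added to
# the sum of place values, so "0000" -> 1. Short = '0', long = '1'
# is an assumed mapping for this sketch.

def pingala_value(pattern):
    """pattern: string of '0' (short syllable) and '1' (long syllable)."""
    total = sum(int(d) << i for i, d in enumerate(pattern))
    return total + 1

print(pingala_value('0000'))  # 1, the first pattern
print(pingala_value('1000'))  # 2
print(pingala_value('0001'))  # 9
```

Note how the roles of left and right are reversed relative to modern notation: '1000' here is the second pattern, not the eighth.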
Africa
The Ifá is an African divination system. It is similar to the I Ching but has up to 256 binary signs, unlike the I Ching, which has 64. The Ifá originated in 15th-century West Africa among the Yoruba people. In 2008, UNESCO added Ifá to its list of the "Masterpieces of the Oral and Intangible Heritage of Humanity".
Other cultures
The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450. Slit drums with binary tones are used to encode messages across Africa and Asia.
Sets of binary combinations similar to the I Ching have also been used in traditional African divination systems, such as Ifá among others, as well as in medieval Western geomancy. The majority of Indigenous Australian languages use a base-2 system.
Western predecessors to Leibniz
In the late 13th century Ramon Llull had the ambition to account for all wisdom in every branch of human knowledge of the time. For that purpose he developed a general method or "Ars generalis" based on binary combinations of a number of simple basic principles or categories, for which he has been considered a predecessor of computing science and artificial intelligence.
In 1605, Francis Bacon discussed a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text. Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of a twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature". (See Bacon's cipher.)
In 1617, John Napier described a system he called location arithmetic for doing binary calculations using a non-positional representation by letters.
Thomas Harriot investigated several positional numbering systems, including binary, but did not publish his results; they were found later among his papers.
Possibly the first publication of the system in Europe was by Juan Caramuel y Lobkowitz, in 1700.
Leibniz
Leibniz wrote in excess of a hundred manuscripts on binary, most of them remaining unpublished. Before his first dedicated work in 1679, numerous manuscripts feature early attempts to explore binary concepts, including tables of numbers and basic calculations, often scribbled in the margins of works unrelated to mathematics.
In his first known work on binary, "On the Binary Progression" (1679), Leibniz introduced conversion between decimal and binary, along with algorithms for performing basic arithmetic operations such as addition, subtraction, multiplication, and division on binary numbers. He also developed a form of binary algebra to calculate the square of a six-digit number and to extract square roots.
His most well known work appears in his article Explication de l'Arithmétique Binaire (published in 1703).
The full title of Leibniz's article is translated into English as the "Explanation of Binary Arithmetic, which uses only the characters 1 and 0, with some remarks on its usefulness, and on the light it throws on the ancient Chinese figures of Fu Xi". Leibniz's system uses 0 and 1, like the modern binary numeral system. An example of Leibniz's binary numeral system is as follows:
0 0 0 1   numerical value 2⁰
0 0 1 0   numerical value 2¹
0 1 0 0   numerical value 2²
1 0 0 0   numerical value 2³
While corresponding with the Jesuit priest Joachim Bouvet in 1700, who had made himself an expert on the I Ching while a missionary in China, Leibniz explained his binary notation, and Bouvet demonstrated in his 1701 letters that the I Ching was an independent, parallel invention of binary notation.
Leibniz and Bouvet concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired. Of this parallel invention, Leibniz wrote in his "Explanation Of Binary Arithmetic" that "this restitution of their meaning, after such a great interval of time, will seem all the more curious."
The relation was a central idea to his universal concept of a language or characteristica universalis, a popular idea that would be followed closely by his successors such as Gottlob Frege and George Boole in forming modern symbolic logic.
Leibniz was first introduced to the I Ching through his contact with the French Jesuit Joachim Bouvet, who visited China in 1685 as a missionary. Leibniz saw the I Ching hexagrams as an affirmation of the universality of his own religious beliefs as a Christian. Binary numerals were central to Leibniz's theology. He believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo or creation out of nothing.
Later developments
In 1854, British mathematician George Boole published a landmark paper detailing an algebraic system of logic that would become known as Boolean algebra. His logical calculus was to become instrumental in the design of digital electronic circuitry.
In 1937, Claude Shannon produced his master's thesis at MIT that implemented Boolean algebra and binary arithmetic using electronic relays and switches for the first time in history. Entitled A Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practical digital circuit design.
In November 1937, George Stibitz, then working at Bell Labs, completed a relay-based computer he dubbed the "Model K" (for "Kitchen", where he had assembled it), which calculated using binary addition. Bell Labs authorized a full research program in late 1938 with Stibitz at the helm. Their Complex Number Computer, completed 8 January 1940, was able to calculate complex numbers. In a demonstration to the American Mathematical Society conference at Dartmouth College on 11 September 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines by a teletype. It was the first computing machine ever used remotely over a phone line. Some participants of the conference who witnessed the demonstration were John von Neumann, John Mauchly and Norbert Wiener, who wrote about it in his memoirs.
The Z1 computer, which was designed and built by Konrad Zuse between 1935 and 1938, used Boolean logic and binary floating-point numbers.
Representation
Any number can be represented by a sequence of bits (binary digits), which in turn may be represented by any mechanism capable of being in two mutually exclusive states. Any of the following rows of symbols can be interpreted as the binary numeric value of 667:

1 0 1 0 0 1 1 0 1 1
| − | − − | | − | |
x o x o o x x o x x
y n y n n y y n y y
The numeric value represented in each case depends on the value assigned to each symbol. In the earlier days of computing, switches, punched holes, and punched paper tapes were used to represent binary values. In a modern computer, the numeric values may be represented by two different voltages; on a magnetic disk, magnetic polarities may be used. A "positive", "yes", or "on" state is not necessarily equivalent to the numerical value of one; it depends on the architecture in use.
In keeping with the customary representation of numerals using Arabic numerals, binary numbers are commonly written using the symbols 0 and 1. When written, binary numerals are often subscripted, prefixed, or suffixed to indicate their base, or radix. The following notations are equivalent:
100101 binary (explicit statement of format)
100101b (a suffix indicating binary format; also known as Intel convention)
100101B (a suffix indicating binary format)
bin 100101 (a prefix indicating binary format)
100101₂ (a subscript indicating base-2 (binary) notation)
%100101 (a prefix indicating binary format; also known as Motorola convention)
0b100101 (a prefix indicating binary format, common in programming languages)
6b100101 (a prefix indicating number of bits in binary format, common in programming languages)
#b100101 (a prefix indicating binary format, common in Lisp programming languages)
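Several of these notations are directly machine-readable. In Python, used here purely for illustration, the 0b prefix works as a literal in source code, and int() parses a digit string in an explicit base:

```python
n = 0b100101                        # prefix notation, as in many languages
assert n == 37

assert int("100101", 2) == 37       # explicit statement of the base
assert format(37, "b") == "100101"  # back to a binary digit string
assert bin(37) == "0b100101"        # Python's own prefixed rendering
```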
When spoken, binary numerals are usually read digit-by-digit, to distinguish them from decimal numerals. For example, the binary numeral 100 is pronounced one zero zero, rather than one hundred, to make its binary nature explicit and for purposes of correctness. Since the binary numeral 100 represents the value four, it would be confusing to refer to the numeral as one hundred (a word that represents a completely different value, or amount). Alternatively, the binary numeral 100 can be read out as "four" (the correct value), but this does not make its binary nature explicit.
Counting in binary
Counting in binary is similar to counting in any other number system. Beginning with a single digit, counting proceeds through each symbol, in increasing order. Before examining binary counting, it is useful to briefly discuss the more familiar decimal counting system as a frame of reference.
Decimal counting
Decimal counting uses the ten symbols 0 through 9. Counting begins with the incremental substitution of the least significant digit (rightmost digit) which is often called the first digit. When the available symbols for this position are exhausted, the least significant digit is reset to 0, and the next digit of higher significance (one position to the left) is incremented (overflow), and incremental substitution of the low-order digit resumes. This method of reset and overflow is repeated for each digit of significance. Counting progresses as follows:
000, 001, 002, ... 007, 008, 009, (rightmost digit is reset to zero, and the digit to its left is incremented)
010, 011, 012, ...
...
090, 091, 092, ... 097, 098, 099, (rightmost two digits are reset to zeroes, and next digit is incremented)
100, 101, 102, ...
Binary counting
Binary counting follows the exact same procedure, and again the incremental substitution begins with the least significant binary digit, or bit (the rightmost one, also called the first bit), except that only the two symbols 0 and 1 are available. Thus, after a bit reaches 1 in binary, an increment resets it to 0 but also causes an increment of the next bit to the left:
0000,
0001, (rightmost bit starts over, and the next bit is incremented)
0010, 0011, (rightmost two bits start over, and the next bit is incremented)
0100, 0101, 0110, 0111, (rightmost three bits start over, and the next bit is incremented)
1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111 ...
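The 4-bit counting sequence above can be generated mechanically; a short Python sketch, where format(i, "04b") zero-pads each number to four binary digits:

```python
# Reproduce the 4-bit binary counting sequence: 0000, 0001, 0010, ...
sequence = [format(i, "04b") for i in range(16)]
print(", ".join(sequence))
```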
In the binary system, each bit represents an increasing power of 2, with the rightmost bit representing 2⁰, the next representing 2¹, then 2², and so on. The value of a binary number is the sum of the powers of 2 represented by each "1" bit. For example, the binary number 100101 is converted to decimal form as follows:
100101₂ = [ ( 1 ) × 2⁵ ] + [ ( 0 ) × 2⁴ ] + [ ( 0 ) × 2³ ] + [ ( 1 ) × 2² ] + [ ( 0 ) × 2¹ ] + [ ( 1 ) × 2⁰ ]
100101₂ = [ 1 × 32 ] + [ 0 × 16 ] + [ 0 × 8 ] + [ 1 × 4 ] + [ 0 × 2 ] + [ 1 × 1 ]
100101₂ = 37₁₀
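The same digit-by-digit conversion can be sketched in code (binary_to_decimal is an illustrative helper name, not a library function). Doubling the running value per digit is Horner's rule, equivalent to the expansion above:

```python
def binary_to_decimal(bits: str) -> int:
    """Sum the powers of two selected by each '1' bit, in Horner form."""
    value = 0
    for bit in bits:             # most significant digit first
        value = value * 2 + int(bit)
    return value

print(binary_to_decimal("100101"))  # → 37
```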
Fractions
Fractions in binary arithmetic terminate only if the denominator is a power of 2. As a result, 1/10 does not have a finite binary representation (10 has the prime factors 2 and 5). This causes 10 × 1/10 not to equal 1 precisely in binary floating-point arithmetic. As an example, interpreting the binary expression for 1/3 = 0.010101... means: 1/3 = 0 × 2⁻¹ + 1 × 2⁻² + 0 × 2⁻³ + 1 × 2⁻⁴ + ... = 0.3125 + ... An exact value cannot be found with a sum of a finite number of inverse powers of two; the zeros and ones in the binary representation of 1/3 alternate forever.
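Both claims in this paragraph are easy to check numerically; a small sketch using Python's exact Fraction type for the 1/3 expansion:

```python
from fractions import Fraction

# Ten copies of the float 0.1 do not sum exactly to 1, because 1/10 has
# no finite binary expansion:
assert sum([0.1] * 10) != 1.0

# The first two '1' bits of 1/3 = 0.010101...₂ give 1/4 + 1/16 = 0.3125,
# and every finite partial sum stays strictly below 1/3:
partial = Fraction(1, 4) + Fraction(1, 16)
assert float(partial) == 0.3125

for k in range(6, 20, 2):            # exponents of the remaining '1' bits
    partial += Fraction(1, 2 ** k)
    assert partial < Fraction(1, 3)
```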
Binary arithmetic
Arithmetic in binary is much like arithmetic in other positional notation numeral systems. Addition, subtraction, multiplication, and division can be performed on binary numerals.
Addition
The simplest arithmetic operation in binary is addition. Adding two single-digit binary numbers is relatively simple, using a form of carrying:
0 + 0 → 0
0 + 1 → 1
1 + 0 → 1
1 + 1 → 0, carry 1 (since 1 + 1 = 2 = 0 + (1 × 2¹) )
Adding two "1" digits produces a digit "0", while 1 will have to be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented:
5 + 5 → 0, carry 1 (since 5 + 5 = 10 = 0 + (1 × 10¹) )
7 + 9 → 6, carry 1 (since 7 + 9 = 16 = 6 + (1 × 10¹) )
This is known as carrying. When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary:
  1 1 1 1 1     (carried digits)
    0 1 1 0 1
+   1 0 1 1 1
-------------
= 1 0 0 1 0 0 = 36
In this example, two numerals are being added together: 01101₂ (13₁₀) and 10111₂ (23₁₀). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 10₂. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100₂ (36₁₀).
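The column-by-column procedure with carrying can be sketched directly (add_binary is an illustrative helper, not a standard-library function):

```python
def add_binary(a: str, b: str) -> str:
    """Column-by-column binary addition with explicit carrying."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry
        digits.append(str(total % 2))   # digit written in this column
        carry = total // 2              # carry into the next column
    if carry:
        digits.append("1")
    return "".join(reversed(digits))

print(add_binary("01101", "10111"))  # → 100100  (13 + 23 = 36)
```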
When computers must add two numbers, the rule that:
x xor y = (x + y) mod 2
for any two bits x and y allows for very fast calculation, as well.
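This XOR rule is one half of a hardware full adder; the other half, the carry-out, is the majority function of the three input bits. A sketch:

```python
def full_adder(x: int, y: int, carry_in: int) -> tuple[int, int]:
    """One column of a binary adder: XOR gives the sum bit, and the
    majority function of the three inputs gives the carry-out."""
    sum_bit = x ^ y ^ carry_in
    carry_out = (x & y) | (x & carry_in) | (y & carry_in)
    return sum_bit, carry_out

print(full_adder(1, 1, 0))  # → (0, 1): 1 + 1 = 10 in binary
```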
Long carry method
A simplification for many binary addition problems is the "long carry method" or "Brookhouse Method of Binary Addition". This method is particularly useful when one of the numbers contains a long stretch of ones. It is based on the simple premise that under the binary system, when given a stretch of n digits composed entirely of ones (where n is any integer length), adding 1 will result in the number 1 followed by a string of n zeros. That concept follows, logically, just as in the decimal system, where adding 1 to a string of n 9s will result in the number 1 followed by a string of n 0s:
Binary Decimal
1 1 1 1 1 likewise 9 9 9 9 9
+ 1 + 1
——————————— ———————————
1 0 0 0 0 0 1 0 0 0 0 0
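This premise is easy to verify numerically: a string of n ones equals 2ⁿ − 1, so adding 1 gives a single 1 followed by n zeros. A short check:

```python
# A string of n ones is 2**n − 1, so adding 1 yields "1" followed by
# n zeros, in binary exactly as with a string of 9s in decimal.
for n in range(1, 8):
    ones = int("1" * n, 2)
    assert ones == 2 ** n - 1
    assert format(ones + 1, "b") == "1" + "0" * n
```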
Such long strings are quite common in the binary system. From that one finds that large binary numbers can be added using two simple steps, without excessive carry operations. In the following example, two numerals are being added together: 1 1 1 0 1 1 1 1 1 02 (95810) and 1 0 1 0 1 1 0 0 1 12 (69110), using the traditional carry method on the left, and the long carry method on the right:
Traditional carry method:

  1 1 1 0 1 1 1 1 1        (carry bits)
    1 1 1 0 1 1 1 1 1 0
+   1 0 1 0 1 1 0 0 1 1
———————————————————————
= 1 1 0 0 1 1 1 0 0 0 1

Long carry method (carry the 1 until it is one digit past the "string" of ones below, then cross out the string and the digit that was added to it):

    1 1 1 0 1 1 1 1 1 0
+   1 0 1 0 1 1 0 0 1 1
———————————————————————
= 1 1 0 0 1 1 1 0 0 0 1
The top row shows the carry bits used. Instead of the standard carry from one column to the next, the lowest-ordered "1" with a "1" in the corresponding place value beneath it may be added and a "1" may be carried to one digit past the end of the series. The "used" numbers must be crossed off, since they are already added. Other long strings may likewise be cancelled using the same technique. Then, simply add together any remaining digits normally. Proceeding in this manner gives the final answer of 1 1 0 0 1 1 1 0 0 0 12 (164910). In our simple example using small numbers, the traditional carry method required eight carry operations, yet the long carry method required only two, representing a substantial reduction of effort.
Addition table
The binary addition table is similar to, but not the same as, the truth table of the logical disjunction operation ∨. The difference is that 1 ∨ 1 = 1, while 1 + 1 = 10.
Subtraction
Subtraction works in much the same way:
0 − 0 → 0
0 − 1 → 1, borrow 1
1 − 0 → 1
1 − 1 → 0
Subtracting a "1" digit from a "0" digit produces the digit "1", while 1 will have to be subtracted from the next column. This is known as borrowing. The principle is the same as for carrying. When the result of a subtraction is less than 0, the least possible value of a digit, the procedure is to "borrow" the deficit divided by the radix (that is, 10/10) from the left, subtracting it from the next positional value.
* * * * (starred columns are borrowed from)
1 1 0 1 1 1 0
− 1 0 1 1 1
----------------
= 1 0 1 0 1 1 1
* (starred columns are borrowed from)
1 0 1 1 1 1 1
– 1 0 1 0 1 1
----------------
= 0 1 1 0 1 0 0
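The borrowing procedure in the examples above can be sketched as follows (subtract_binary is an illustrative helper; as in the worked examples, the minuend is assumed to be at least the subtrahend):

```python
def subtract_binary(a: str, b: str) -> str:
    """Column-by-column binary subtraction with borrowing; assumes a >= b."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    borrow, digits = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        diff = int(x) - int(y) - borrow
        borrow = 1 if diff < 0 else 0   # borrow from the next column
        digits.append(str(diff % 2))    # Python's % maps −1 to 1
    return "".join(reversed(digits))

print(subtract_binary("1101110", "10111"))  # → 1010111  (110 − 23 = 87)
```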
Subtracting a positive number is equivalent to adding a negative number of equal absolute value. Computers use signed number representations to handle negative numbers—most commonly the two's complement notation. Such representations eliminate the need for a separate "subtract" operation. Using two's complement notation, subtraction can be summarized by the following formula:

A − B = A + ¬B + 1
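The two's complement identity A − B = A + ¬B + 1 (taken modulo the word size) can be demonstrated directly; the 8-bit word width below is an arbitrary choice for the demonstration:

```python
BITS = 8                     # word width chosen arbitrarily for the demo
MASK = (1 << BITS) - 1

def subtract_via_complement(a: int, b: int) -> int:
    """Compute a − b as a + NOT(b) + 1, modulo 2**BITS."""
    return (a + (~b & MASK) + 1) & MASK

print(subtract_via_complement(23, 13))   # → 10
print(subtract_via_complement(13, 23))   # → 246, i.e. −10 modulo 256
```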
Multiplication
Multiplication in binary is similar to its decimal counterpart. Two numbers A and B can be multiplied by partial products: for each digit in B, the product of that digit and A is calculated and written on a new line, shifted leftward so that its rightmost digit lines up with the digit in B that was used. The sum of all these partial products gives the final result.
Since there are only two digits in binary, there are only two possible outcomes of each partial multiplication:
If the digit in B is 0, the partial product is also 0
If the digit in B is 1, the partial product is equal to A
For example, the binary numbers 1011 and 1010 are multiplied as follows:
        1 0 1 1   (A, 11 in decimal)
      × 1 0 1 0   (B, 10 in decimal)
      ---------
        0 0 0 0   ← corresponds to the rightmost 'zero' in B
+     1 0 1 1     ← corresponds to the next 'one' in B
+   0 0 0 0
+ 1 0 1 1
---------------
= 1 1 0 1 1 1 0
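The partial-product procedure is the classic shift-and-add algorithm; a sketch (multiply_binary is an illustrative helper name):

```python
def multiply_binary(a: str, b: str) -> str:
    """Shift-and-add multiplication: one partial product per '1' digit
    of b, shifted left to line up with that digit's position."""
    value_a = int(a, 2)
    result = 0
    for shift, digit in enumerate(reversed(b)):
        if digit == "1":
            result += value_a << shift   # partial product of a, shifted
    return format(result, "b")

print(multiply_binary("1011", "1010"))  # → 1101110  (11 × 10 = 110)
```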
Binary numbers can also be multiplied with bits after a binary point:
1 0 1 . 1 0 1 (5.625 in decimal)
× 1 1 0 . 0 1 (6.25 in decimal)
-------------------
1 . 0 1 1 0 1 ← corresponds to a 'one' in B
+ 0 0 . 0 0 0 0 ← corresponds to a 'zero' in B
+ 0 0 0 . 0 0 0
+ 1 0 1 1 . 0 1
+ 1 0 1 1 0 . 1
---------------------------
= 1 0 0 0 1 1 . 0 0 1 0 1 (35.15625 in decimal)
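A common way to compute this by machine (a sketch, not the only approach) is to multiply the digit strings as plain integers and then reinsert the binary point at the sum of the operands' fractional-digit counts, here 3 + 2 = 5 places from the right:

```python
a = int("101101", 2)   # 101.101₂ = 5.625, scaled up by 2**3
b = int("11001", 2)    # 110.01₂  = 6.25,  scaled up by 2**2
product = a * b        # scaled up by 2**5

assert format(product, "b") == "10001100101"   # read as 100011.00101₂
assert product / 2 ** 5 == 35.15625            # exact: 5/32 is dyadic
```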
Eosinophil

Eosinophils, also called acidophils, are a variety of white blood cells and one of the immune system components responsible for combating multicellular parasites and certain infections in vertebrates. Along with mast cells and basophils, they also control mechanisms associated with allergy and asthma. They are granulocytes that develop during hematopoiesis in the bone marrow before migrating into blood, after which they are terminally differentiated and do not multiply.
These cells are eosinophilic or "acid-loving" due to their large acidophilic cytoplasmic granules, which show their affinity for acids by their affinity for coal tar dyes: normally transparent, it is this affinity that causes them to appear brick-red after staining with eosin, a red dye, using the Romanowsky method. The staining is concentrated in small granules within the cellular cytoplasm, which contain many chemical mediators, such as eosinophil peroxidase, ribonuclease (RNase), deoxyribonucleases (DNase), lipase, plasminogen, and major basic protein. These mediators are released by a process called degranulation following activation of the eosinophil, and are toxic to both parasite and host tissues.
In normal individuals, eosinophils make up about 1–3% of white blood cells, and are about 12–17 micrometres in size with bilobed nuclei. While eosinophils are released into the bloodstream, they reside in tissue. They are found in the medulla and the junction between the cortex and medulla of the thymus, and, in the lower gastrointestinal tract, ovaries, uterus, spleen, prostate, and lymph nodes, but not in the lungs, skin, esophagus, or some other internal organs under normal conditions. The presence of eosinophils in these latter organs is associated with disease. For instance, patients with eosinophilic asthma have high levels of eosinophils that lead to inflammation and tissue damage, making it more difficult for patients to breathe. Eosinophils persist in the circulation for 8–12 hours, and can survive in tissue for an additional 8–12 days in the absence of stimulation. Pioneering work in the 1980s elucidated that eosinophils were unique granulocytes, having the capacity to survive for extended periods of time after their maturation as demonstrated by ex-vivo culture experiments.
Development
TH2 and ILC2 cells both express the transcription factor GATA-3, which promotes the production of TH2 cytokines, including the interleukins (ILs). IL-5 controls the development of eosinophils in the bone marrow, as they differentiate from myeloid precursor cells. Their lineage fate is determined by transcription factors, including GATA and C/EBP. Eosinophils produce and store many secondary granule proteins prior to their exit from the bone marrow. After maturation, eosinophils circulate in blood and migrate to inflammatory sites in tissues, or to sites of helminth infection in response to chemokines like CCL11 (eotaxin-1), CCL24 (eotaxin-2), CCL5 (RANTES), 5-hydroxyicosatetraenoic acid and 5-oxo-eicosatetraenoic acid, and certain leukotrienes like leukotriene B4 (LTB4) and MCP1/4. Interleukin-13, another TH2 cytokine, primes eosinophilic exit from the bone marrow by lining vessel walls with adhesion molecules such as VCAM-1 and ICAM-1.
When eosinophils are activated, they undergo cytolysis, where the breaking of the cell releases eosinophilic granules found in extracellular DNA traps. High concentrations of these DNA traps are known to cause cellular damage, as the granules they contain are responsible for the ligand-induced secretion of eosinophilic toxins which cause structural damage. There is evidence to suggest that eosinophil granule protein expression is regulated by the non-coding RNA EGOT.
Function
Following activation, eosinophils effector functions include production of the following:
Cationic granule proteins and their release by degranulation
Reactive oxygen species such as hypobromite, superoxide, and peroxide (hypobromous acid, which is preferentially produced by eosinophil peroxidase)
Lipid mediators like the eicosanoids from the leukotriene (e.g., LTC4, LTD4, LTE4) and prostaglandin (e.g., PGE2) families
Enzymes, such as elastase
Growth factors such as TGF beta, VEGF, and PDGF
Cytokines such as IL-1, IL-2, IL-4, IL-5, IL-6, IL-8, IL-9, IL-13, and TNF alpha
There are also eosinophils that play a role in fighting viral infections, which is evident from the abundance of RNases they contain within their granules, and in fibrin removal during inflammation. Eosinophils, along with basophils and mast cells, are important mediators of allergic responses and asthma pathogenesis and are associated with disease severity. They also fight helminth (worm) colonization and may be slightly elevated in the presence of certain parasites. Eosinophils are also involved in many other biological processes, including postpubertal mammary gland development, oestrus cycling, allograft rejection and neoplasia. They have also been implicated in antigen presentation to T cells.
Eosinophils are responsible for tissue damage and inflammation in many diseases, including asthma. High levels of interleukin-5 have been observed to upregulate the expression of adhesion molecules, which then facilitate the adhesion of eosinophils to endothelial cells, thereby causing inflammation and tissue damage.
An accumulation of eosinophils in the nasal mucosa is considered a major diagnostic criterion for allergic rhinitis (nasal allergies).
Granule proteins
Following activation by an immune stimulus, eosinophils degranulate to release an array of cytotoxic granule cationic proteins that are capable of inducing tissue damage and dysfunction. These include:
major basic protein (MBP)
eosinophil cationic protein (ECP)
eosinophil peroxidase (EPX)
eosinophil-derived neurotoxin (EDN)
Major basic protein, eosinophil peroxidase, and eosinophil cationic protein are toxic to many tissues. Eosinophil cationic protein and eosinophil-derived neurotoxin are ribonucleases with antiviral activity. Major basic protein induces mast cell and basophil degranulation, and is implicated in peripheral nerve remodelling. Eosinophil cationic protein creates toxic pores in the membranes of target cells, allowing potential entry of other cytotoxic molecules to the cell, can inhibit proliferation of T cells, suppress antibody production by B cells, induce degranulation by mast cells, and stimulate fibroblast cells to secrete mucus and glycosaminoglycans. Eosinophil peroxidase forms reactive oxygen species and reactive nitrogen intermediates that promote oxidative stress in the target, causing cell death by apoptosis and necrosis.
Clinical significance
Blood count
Strong evidence indicates that blood eosinophil counts can predict the effectiveness of specific anti-inflammatory drugs. Despite their increasing use in clinical practice, data on "normal" blood eosinophil counts remain insufficient. Due to the right-skewed distribution of these counts, median values are more informative than mean values for determining normal levels. Few large-scale studies have reported median blood eosinophil counts, with the median for healthy individuals being 100 cells/μL and the 95th percentile at 420 cells/μL. Thus, it is now evident that the normal median blood eosinophil count in healthy adults is around 100 cells/μL, with counts above 400 cells/μL considered outside the normal range. Current cutoffs such as 150 or 300 cells/μL used in asthma or COPD management fall within the normal range.
Eosinophilia
An increase in eosinophils, i.e., the presence of more than 500 eosinophils per microlitre of blood, is called eosinophilia, and is typically seen in people with a parasitic infestation of the intestines; autoimmune and collagen vascular disease (such as rheumatoid arthritis and systemic lupus erythematosus); malignant diseases such as eosinophilic leukemia, clonal hypereosinophilia, and Hodgkin lymphoma; lymphocyte-variant hypereosinophilia; extensive skin diseases (such as exfoliative dermatitis); Addison's disease and other causes of low corticosteroid production (corticosteroids suppress blood eosinophil levels); reflux esophagitis (in which eosinophils will be found in the squamous epithelium of the esophagus) and eosinophilic esophagitis; and with the use of certain drugs such as penicillin. Perhaps the most common cause of eosinophilia, however, is an allergic condition such as asthma. In 1989, contaminated L-tryptophan supplements caused a deadly form of eosinophilia known as eosinophilia-myalgia syndrome, which was reminiscent of the toxic oil syndrome in Spain in 1981.
Eosinophils play an important role in asthma, as the number of accumulated eosinophils corresponds to the severity of the asthmatic reaction. Eosinophilia in mouse models is associated with high interleukin-5 levels. Furthermore, mucosal bronchial biopsies conducted on patients with diseases such as asthma have been found to have higher levels of interleukin-5, leading to higher levels of eosinophils. The infiltration of eosinophils at these high concentrations causes an inflammatory reaction. This ultimately leads to airway remodelling and difficulty breathing.
Eosinophils can also cause tissue damage in the lungs of asthmatic patients. High concentrations of eosinophil major basic protein and eosinophil-derived neurotoxin that approach cytotoxic levels are observed at degranulation sites in the lungs as well as in the asthmatic sputum.
Treatment
Treatments used to combat autoimmune diseases and conditions caused by eosinophils include:
corticosteroids – promote apoptosis. Numbers of eosinophils in blood are rapidly reduced
monoclonal antibody therapy – e.g., mepolizumab or reslizumab against IL-5, prevents eosinophilopoiesis, or benralizumab against IL-5 receptor, which eliminates eosinophils through ADCC
antagonists of leukotriene synthesis or receptors
imatinib (STI571) – inhibits PDGF-BB in hypereosinophilic leukemia
Monoclonal antibodies such as dupilumab and lebrikizumab target IL-13 and its receptor, which reduces eosinophilic inflammation in patients with asthma due to lowering the number of adhesion molecules present for eosinophils to bind to, thereby decreasing inflammation. Mepolizumab and benralizumab are other treatment options that target the alpha subunit of the IL-5 receptor, thereby inhibiting its function and reducing the number of developing eosinophils as well as the number of eosinophils leading to inflammation through antibody-dependent cell-mediated cytotoxicity and eosinophilic apoptosis. Lysosomotropic agents are an efficient means to target the lysosome-like eosinophil granules inducing eosinophil apoptosis.
Animal studies
Within the fat (adipose) tissue of CCR2 deficient mice, there is an increased number of eosinophils, greater alternative macrophage activation, and a propensity towards type 2 cytokine expression. Furthermore, this effect was exaggerated when the mice became obese from a high fat diet.
Mouse models of eosinophilia from mice infected with T. canis showed an increase in IL-5 mRNA in mice spleen. Mouse models of asthma from OVA show a higher TH2 response. When mice are administered IL-12 to induce the TH1 response, the TH2 response becomes suppressed, showing that mice without TH2 cytokines are significantly less likely to express asthma symptoms.
Methylene blue

Methylthioninium chloride, commonly called methylene blue, is a salt used as a dye and as a medication. As a medication, it is mainly used to treat methemoglobinemia by chemically reducing the ferric iron in hemoglobin to ferrous iron. Specifically, it is used to treat methemoglobin levels that are greater than 30% or in which there are symptoms despite oxygen therapy. It has previously been used for treating cyanide poisoning and urinary tract infections, but this use is no longer recommended.
Methylene blue is typically given by injection into a vein. Common side effects include headache, nausea, and vomiting. While use during pregnancy may harm the fetus, not using it in methemoglobinemia is likely more dangerous.
Methylene blue was first prepared in 1876, by Heinrich Caro. It is on the World Health Organization's List of Essential Medicines.
Uses
Methemoglobinemia
Methylene blue is employed as a medication for the treatment of methemoglobinemia, which can arise from ingestion of certain pharmaceuticals, toxins, or broad beans in those susceptible. Normally, through the NADH- or NADPH-dependent methemoglobin reductase enzymes, methemoglobin is reduced back to hemoglobin. When large amounts of methemoglobin occur secondary to toxins, methemoglobin reductases are overwhelmed. Methylene blue, when injected intravenously as an antidote, is itself first reduced to leucomethylene blue, which then reduces the heme group from methemoglobin to hemoglobin. Methylene blue can reduce the half-life of methemoglobin from hours to minutes. At high doses, however, methylene blue actually induces methemoglobinemia, reversing this pathway.
Cyanide poisoning
Since its reduction potential is similar to that of oxygen and can be reduced by components of the electron transport chain, large doses of methylene blue are sometimes used as an antidote to potassium cyanide poisoning, a method first successfully tested in 1933 by Matilda Moldenhauer Brooks in San Francisco, although first demonstrated by Bo Sahlin of Lund University, in 1926.
Dye or stain
Methylene blue is used in endoscopic polypectomy as an adjunct to saline or epinephrine, and is used for injection into the submucosa around the polyp to be removed. This allows the submucosal tissue plane to be identified after the polyp is removed, which is useful in determining if more tissue needs to be removed, or if there has been a high risk for perforation. Methylene blue is also used as a dye in chromoendoscopy, and is sprayed onto the mucosa of the gastrointestinal tract in order to identify dysplasia, or pre-cancerous lesions. Intravenously injected methylene blue is readily released into the urine and thus can be used to test the urinary tract for leaks or fistulas.
In surgeries such as sentinel lymph node dissections, methylene blue can be used to visually trace the lymphatic drainage of tested tissues. Similarly, methylene blue is added to bone cement in orthopedic operations to provide easy discrimination between native bone and cement. Additionally, methylene blue accelerates the hardening of bone cement, increasing the speed at which bone cement can be effectively applied. Methylene blue is used as an aid to visualisation/orientation in a number of medical devices, including a Surgical sealant film, TissuePatch. In fistulas and pilonidal sinuses it is used to identify the tract for complete excision. It can also be used during gastrointestinal surgeries (such as bowel resection or gastric bypass) to test for leaks.
It is sometimes used in cytopathology, in mixtures including Wright-Giemsa and Diff-Quik. It confers a blue color to both nuclei and cytoplasm, and makes the nuclei more visible. When methylene blue is "polychromed" (oxidized in solution or "ripened" by fungal metabolism, as originally noted in the thesis of Dr. D. L. Romanowsky in the 1890s), it gets serially demethylated and forms all the tri-, di-, mono- and non-methyl intermediates, which are Azure B, Azure A, Azure C, and thionine, respectively. This is the basis of the basophilic part of the spectrum of the Romanowsky-Giemsa effect. If only synthetic Azure B and Eosin Y are used, the mixture may serve as a standardized Giemsa stain; but, without methylene blue, the normal neutrophilic granules tend to overstain and look like toxic granules. On the other hand, if methylene blue is used it might help to give the normal look of neutrophil granules and may also enhance the staining of nucleoli and polychromatophilic RBCs (reticulocytes).
A traditional application of methylene blue is the intravital or supravital staining of nerve fibers, an effect first described by Paul Ehrlich in 1887. A dilute solution of the dye is either injected into tissue or applied to small freshly removed pieces. The selective blue coloration develops with exposure to air (oxygen) and can be fixed by immersion of the stained specimen in an aqueous solution of ammonium molybdate. Vital methylene blue was formerly much used for examining the innervation of muscle, skin and internal organs. The mechanism of selective dye uptake is incompletely understood; vital staining of nerve fibers in skin is prevented by ouabain, a drug that inhibits the Na/K-ATPase of cell membranes.
Placebo
Methylene blue has been used as a placebo; physicians would tell their patients to expect their urine to change color and view this as a sign that their condition had improved. This same side effect makes methylene blue difficult to use in traditional placebo-controlled clinical studies, including those testing for its efficacy as a treatment.
Isobutyl nitrite toxicity
Isobutyl nitrite is one of the compounds used as poppers, an inhalant drug that induces a brief euphoria.
Isobutyl nitrite is known to cause methemoglobinemia. Severe methemoglobinemia may be treated with methylene blue.
Ifosfamide toxicity
Another use of methylene blue is to treat ifosfamide neurotoxicity. Methylene blue was first reported for treatment and prophylaxis of ifosfamide neuropsychiatric toxicity in 1994. A toxic metabolite of ifosfamide, chloroacetaldehyde (CAA), disrupts the mitochondrial respiratory chain, leading to an accumulation of nicotinamide adenine dinucleotide hydrogen (NADH). Methylene blue acts as an alternative electron acceptor and reverses the NADH inhibition of hepatic gluconeogenesis; it also inhibits the transformation of chloroethylamine into chloroacetaldehyde and inhibits multiple amine oxidase activities, preventing the formation of CAA. The dosing of methylene blue for treatment of ifosfamide neurotoxicity varies, depending upon whether it is used simultaneously as an adjuvant during ifosfamide infusion or to reverse psychiatric symptoms that manifest after the infusion is complete. Reports suggest that up to six doses of methylene blue a day have resulted in improvement of symptoms within 10 minutes to several days. Alternatively, it has been suggested that intravenous methylene blue be given every six hours for prophylaxis during ifosfamide treatment in people with a history of ifosfamide neuropsychiatric toxicity. Prophylactic administration of methylene blue the day before initiation of ifosfamide, and three times daily during ifosfamide chemotherapy, has been recommended to lower the occurrence of ifosfamide neurotoxicity.
Shock
It has also been used in septic shock and anaphylaxis.
Methylene blue consistently increases blood pressure in people with vasoplegic syndrome (redistributive shock), but has not been shown to improve delivery of oxygen to tissues or to decrease mortality.
Methylene blue has been used in calcium channel blocker toxicity as a rescue therapy for distributive shock unresponsive to first-line agents. Evidence for its use in this circumstance is very poor and limited to a handful of case reports.
Side effects
Methylene blue is a monoamine oxidase inhibitor (MAOI) and, if infused intravenously at doses exceeding 5 mg/kg, may result in serotonin syndrome if combined with any selective serotonin reuptake inhibitors (SSRIs) or other serotonergic drugs (e.g., duloxetine, sibutramine, venlafaxine, clomipramine, imipramine).
It causes hemolytic anemia in carriers of the G6PD enzymatic deficiency (favism).
Chemistry
Methylene blue is a formal derivative of phenothiazine. It is a dark green powder that yields a blue solution in water. The hydrated form has three molecules of water per formula unit of methylene blue.
Preparation
This compound is prepared by oxidation of 4-aminodimethylaniline in the presence of sodium thiosulfate to give the quinonediiminothiosulfonic acid, reaction with dimethylaniline, oxidation to the indamine, and cyclization to give the thiazine.
A green electrochemical procedure, using only dimethyl-4-phenylenediamine and sulfide ions, has been proposed.
Light absorption properties
The maximum absorption of light is near 670 nm. The specifics of absorption depend on a number of factors, including protonation, adsorption to other materials, and metachromasy, the formation of dimers and higher-order aggregates depending on concentration and other interactions.
Other uses
Redox indicator
Methylene blue is widely used as a redox indicator in analytical chemistry. Solutions of this substance are blue when in an oxidizing environment, but will turn colorless if exposed to a reducing agent. The redox properties can be seen in a classical demonstration of chemical kinetics in general chemistry, the "blue bottle" experiment. Typically, a solution is made of glucose (dextrose), methylene blue, and sodium hydroxide. Upon shaking the bottle, oxygen dissolves and oxidizes the methylene blue, turning the solution blue. On standing, the dextrose gradually reduces the methylene blue to its colorless, reduced form; once the dissolved oxygen is consumed, the solution turns colorless, and shaking the bottle again restores the blue color. The redox midpoint potential E°' is +0.01 V.
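As a rough illustration of how the midpoint potential quoted above relates to the color of the solution, the Nernst equation gives the fraction of dye in the (blue) oxidized form at a given solution potential. The sketch below assumes the two-electron couple of methylene blue and 25 °C; it is an illustration, not a quantitative model of the blue-bottle experiment.

```python
import math

# Nernst-equation sketch: fraction of methylene blue in the oxidized (blue)
# form as a function of solution potential E (volts), assuming n = 2 and
# the midpoint potential E°' = +0.01 V quoted in the text.
R, T, F = 8.314, 298.15, 96485.0  # gas constant, 25 °C in kelvin, Faraday constant

def fraction_oxidized(e_volts, e_mid=0.01, n=2):
    ratio = math.exp(n * F * (e_volts - e_mid) / (R * T))  # [ox]/[red]
    return ratio / (1.0 + ratio)

# At the midpoint potential the dye is exactly half oxidized,
# while a modestly reducing solution leaves it almost colorless.
print(fraction_oxidized(0.01))    # 0.5
print(fraction_oxidized(-0.10))   # tiny: essentially all reduced (colorless)
```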
Peroxide generator
Methylene blue is also a photosensitizer used to create singlet oxygen when exposed to both oxygen and light. It is used in this regard to make organic peroxides by a Diels–Alder reaction, which is spin-forbidden with normal atmospheric triplet oxygen.
Sulfide analysis
The formation of methylene blue after the reaction of hydrogen sulfide with dimethyl-p-phenylenediamine and iron(III) at pH 0.4–0.7 is used to determine the sulfide concentration photometrically in the range 0.020 to 1.50 mg/L (20 ppb to 1.5 ppm). The test is very sensitive, and the blue coloration that develops upon contact of the reagents with dissolved H2S is stable for 60 minutes. Ready-to-use kits such as the Spectroquant sulfide test facilitate routine analyses. The methylene blue sulfide test is a convenient method often used in soil microbiology to quickly detect in water the metabolic activity of sulfate-reducing bacteria (SRB). In this colorimetric test, methylene blue is a product formed by the reaction, not a reagent added to the system.
The addition of a strong reducing agent, such as ascorbic acid, to a sulfide-containing solution is sometimes used to prevent sulfide oxidation by atmospheric oxygen. Although this is certainly a sound precaution for the determination of sulfide with an ion-selective electrode, it may hamper the development of the blue color if the freshly formed methylene blue is also reduced, as described above in the paragraph on the redox indicator.
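In a photometric determination like the one above, the absorbance reading is converted to a concentration with a linear calibration. The sketch below is a minimal illustration of that step only; the slope value is a made-up illustrative number, not a property of any real kit (real kits supply their own calibration).

```python
# Hypothetical linear calibration: absorbance = slope * concentration.
# CAL_SLOPE is illustrative only, not taken from any real instrument or kit.
CAL_SLOPE = 0.80  # absorbance units per mg/L of sulfide (assumed)

def sulfide_mg_per_l(absorbance, slope=CAL_SLOPE):
    c = absorbance / slope
    # reject readings outside the calibrated range quoted in the text
    if not (0.020 <= c <= 1.50):
        raise ValueError("outside calibrated range 0.020-1.50 mg/L")
    return c

print(sulfide_mg_per_l(0.40))  # 0.5 mg/L with the assumed slope
```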
Test for milk freshness
Methylene blue is a dye behaving as a redox indicator that is commonly used in the food industry to test the freshness of milk and dairy products. A few drops of methylene blue solution added to a sample of milk should remain blue, since the oxidized form persists in the presence of enough dissolved oxygen. Discoloration, caused by the reduction of methylene blue into its colorless form, indicates that the dissolved oxygen concentration in the milk sample is low: either the milk is not fresh (its oxygen already consumed by abiotic oxidation) or it is contaminated by bacteria that also consume the dissolved atmospheric oxygen. In other words, aerobic conditions should prevail in fresh milk, and methylene blue is simply used as an indicator of the dissolved oxygen remaining in the milk.
Water testing
The adsorption of methylene blue serves as an indicator of the adsorptive capacity of granular activated carbon in water filters. Because methylene blue adsorbs onto activated carbon much as pesticides do, it serves as a good predictor of a carbon's ability to filter pesticides from water. It is also a quick method of comparing different batches of activated carbon of the same quality.
A color reaction in an acidified, aqueous methylene blue solution containing chloroform can detect anionic surfactants in a water sample. Such a test is known as an MBAS assay (methylene blue active substances assay).
The MBAS assay cannot distinguish between specific surfactants, however. Some examples of anionic surfactants are carboxylates, phosphates, sulfates, and sulfonates.
Methylene blue value of fine aggregate
The methylene blue value is defined as the number of milliliters of standard methylene blue solution decolorized by 0.1 g of activated carbon (dry basis).
The methylene blue value reflects the amount of clay minerals in aggregate samples. In materials science, methylene blue solution is successively added to fine aggregate being agitated in water. The presence of free dye in solution can be checked with a stain test on filter paper.
Biological staining
In biology, methylene blue is used as a dye for a number of different staining procedures, such as Wright's stain and Jenner's stain. Since it is a temporary staining technique, methylene blue can also be used to examine RNA or DNA under the microscope or in a gel: as an example, a solution of methylene blue can be used to stain RNA on hybridization membranes in northern blotting to verify the amount of nucleic acid present. While methylene blue is not as sensitive as ethidium bromide, it is less toxic and it does not intercalate in nucleic acid chains, thus avoiding interference with nucleic acid retention on hybridization membranes or with the hybridization process itself.
It can also be used as an indicator to determine whether eukaryotic cells such as yeast are alive or dead. Methylene blue is reduced in viable cells, leaving them unstained. However, dead cells are unable to reduce the oxidized methylene blue, so they are stained blue. Methylene blue can interfere with the respiration of the yeast, as it picks up hydrogen ions made during the process.
Aquaculture
Methylene blue is used in aquaculture and by tropical fish hobbyists as a treatment for fungal infections. It can also be effective in treating fish infected with ich, although a combination of malachite green and formaldehyde is far more effective against the parasitic protozoan Ichthyophthirius multifiliis. It is usually used to protect newly laid fish eggs from infection by fungus or bacteria, which is useful when the hobbyist wants to hatch the eggs artificially.
Methylene blue is also very effective when used as part of a "medicated fish bath" for treatment of ammonia, nitrite, and cyanide poisoning as well as for topical and internal treatment of injured or sick fish as a "first response".
History
Methylene blue has been described as "the first fully synthetic drug used in medicine." Methylene blue was first prepared in 1876 by German chemist Heinrich Caro.
Its use in the treatment of malaria was pioneered by Paul Guttmann and Paul Ehrlich in 1891. During this period before the First World War, researchers like Ehrlich believed that drugs and dyes worked in the same way, by preferentially staining pathogens and possibly harming them. Changing the cell membrane of pathogens is in fact how various drugs work, so the theory was partially correct although far from complete. Methylene blue continued to be used in the Second World War, where it was not well liked by soldiers, who observed, "Even at the loo, we see, we pee, navy blue." Antimalarial use of the drug has recently been revived. It was discovered to be an antidote to carbon monoxide poisoning and cyanide poisoning in 1933 by Matilda Brooks.
| Physical sciences | Chemical methods | Chemistry |
238841 | https://en.wikipedia.org/wiki/Trifid%20Nebula | Trifid Nebula | The Trifid Nebula (catalogued as Messier 20 or M20 and as NGC 6514) is an H II region in the north-west of Sagittarius in a star-forming region in the Milky Way's Scutum–Centaurus Arm. It was discovered by Charles Messier on June 5, 1764. Its name means 'three-lobe'. The object is an unusual combination of an open cluster of stars, an emission nebula (the relatively dense, reddish-pink portion), a reflection nebula (the mainly NNE blue portion), and a dark nebula (the apparent 'gaps' in the former that cause the trifurcated appearance, also designated Barnard 85). Viewed through a small telescope, the Trifid Nebula is a bright and peculiar object, and is thus a perennial favorite of amateur astronomers.
The most massive star that has formed in this region is HD 164492A, an O7.5III star with a mass more than 20 times the mass of the Sun.
This star is surrounded by a cluster of approximately 3100 young stars.
Characteristics
The Trifid Nebula was the subject of an investigation by astronomers using the Hubble Space Telescope in 1997, using filters that isolate emission from hydrogen atoms, ionized sulfur atoms, and doubly ionized oxygen atoms. The images were combined into a false-color composite picture to suggest how the nebula might look to the eye.
The close-up images show a dense cloud of dust and gas, which is a stellar nursery full of embryonic stars. This cloud is about away from the nebula's central star. A stellar jet protrudes from the head of the cloud and is about long. The jet's source is a young stellar object deep within the cloud. Jets are the exhaust gases of star formation, and radiation from the nebula's central star makes the jet glow.
The images also showed a finger-like stalk to the right of the jet. It points from the head of the dense cloud directly toward the star that powers the Trifid nebula. This stalk is a prominent example of evaporating gaseous globules, or 'EGGs'. The stalk has survived because its tip is a knot of gas that is dense enough to resist being eaten away by the powerful radiation from the star.
In January 2005, NASA's Spitzer Space Telescope discovered 30 embryonic stars and 120 newborn stars not seen in visible light images.
It is centered about from Earth. Its apparent magnitude is 6.3.
Details and features
| Physical sciences | Notable nebulae | Astronomy |
238881 | https://en.wikipedia.org/wiki/Desmidiales | Desmidiales | Desmidiales, commonly called the desmids (Gr. desmos, bond or chain), are an order in the Charophyta, a division of green algae in which the land plants (Embryophyta) emerged. Desmids consist of single-celled (sometimes filamentous or colonial) microscopic green algae. Because desmids are highly symmetrical, attractive, and come in a diversity of forms, they are popular subjects for microscopists, both amateur and professional.
The desmids belong to the class Zygnematophyceae. Although they are sometimes grouped together as a single family Desmidiaceae, most classifications recognize three to five families, usually within their own order, Desmidiales.
The Desmidiales comprise around 40 genera and 5,000 to 6,000 species, found mostly but not exclusively in fresh water. In general, desmids prefer acidic waters (pH between 4.8 and 7.0), so many species may be found in the fissures between patches of sphagnum moss in marshes. As desmids are sensitive to changes in their environments, they are useful as bioindicators for water and habitat quality.
Nomenclature
The term "desmid" typically refers to a group of microscopic, mostly single-celled algae in the class Zygnematophyceae. Within the desmids, a distinction is typically made between "saccoderm" and "placoderm" desmids. Saccoderm desmids, corresponding to the family Mesotaeniaceae in the order Zygnematales, consist of cells that are unconstricted at the middle, lack median suture lines, and do not have mucilage-secreting pores in the cell wall. Meanwhile, placoderm desmids, corresponding to the order Desmidiales, consist of cells with two symmetrical halves, and mucilage-secreting pores in the cell wall. Here, the term "desmids" and "placoderm desmids" will be used interchangeably to refer to the Desmidiales.
Morphology
The structure of these algae is unicellular, and lacks flagella. Although most desmid species are unicellular, some genera form chains of cells, called filaments. A few genera form non-filamentous colonies, with individual cells connected by threads or remnants of parent cell walls.
The cell of a desmid is often divided into two symmetrical compartments separated by a narrow bridge or isthmus, wherein the spherical nucleus is located. Each semi-cell houses a large, often folded chloroplast for photosynthesis. One or more pyrenoids can be found; these form carbohydrates for energy storage. The cell wall consists of two halves (termed semicells), which in a few species of Closterium and Penium are of more than one piece. It has two distinct layers: the inner is composed mainly of cellulose, while the outer is stronger and thicker, often furnished with spines, granules, warts, et cetera. The wall is made up of a base of cellulose impregnated with other substances, including iron compounds that are especially prominent in some species of Closterium and Penium, and is not soluble in an ammoniacal solution of copper oxide.
Desmids assume a variety of highly symmetrical and generally attractive shapes, among those elongated, star-shaped and rotund configurations, which provide the basis for their classification. The largest among them may be visible to the unaided eye.
Desmids possess characteristic crystals of barium sulphate at either end of the cell which exhibit continuous Brownian motion. The function of these crystals is completely unknown.
Many desmids also secrete translucent, gelatinous mucilage from pores in the cell wall that acts as a protecting agent. These pores are either uniformly distributed across the cell wall (as in Micrasterias), though always apparently absent in the region of the isthmus, or, in highly ornamented forms such as many genera of Cosmarium, grouped symmetrically around the bases of the spines, warts, and so on with which the cell is provided.
In the inner layer of the wall the pore is a simple canal, but in the outer, except in Closterium, the canal is surrounded by a specially differentiated cylindrical zone, not composed of cellulose, through which the canal passes. This is termed the pore-organ. The canals are no doubt in all cases occupied by threads of mucilage in process of excretion. At the inner surface of the wall they terminate in lens- or button-shaped swellings, while from the outer end of the pore-organ there sometimes arise delicate radiating or club-shaped masses of mucilage through which the canal passes and which appear to be more or less permanent in character. In most cases, however, these are absent or only represented by small perforated buttons.
Reproduction
Desmids most commonly reproduce by asexual fission. During cell division, the two halves of a cell separate, and each half develops into a new cell. After division, a cell may be asymmetric since the recently formed half is smaller than the original half.
In adverse conditions, desmids may reproduce sexually through a process of conjugation, which is also found among other closely related taxa in the Zygnematophyceae. Sexual reproduction is rare, and many species have never been observed reproducing sexually.
Classification
Classification of the families and genera in the Desmidiales:
Closteriaceae
Closterium
Spinoclosterium
Gonatozygaceae
Genicularina
Gonatozygon
Leptocystinema
Peniaceae
Penium
Desmidiaceae
Actinodontum
Actinotaenium
Allorgeia
Amscottia
Bambusina
Bourrellyodesmus
Brachytheca
Calocylindrus
Cosmaridium
Cosmarium
Cosmocladium
Croasdalea
Cruciangulum
Desmidium
Docidium
Euastridium
Euastrum
Groenbladia
Haplotaenium
Heimansia
Hyalotheca
Ichthyocercus
Ichthyodontum
Mateola
Micrasterias
Onychonema
Oocardium
Pachyphorium
Phymatodocis
Pleurotaeniopsis
Pleurotaenium
Prescottiella
Pseudomicrasterias
Sphaerozosma
Spinocosmarium
Spondylosium
Staurastrum
Staurodesmus
Streptonema
Teilingia
Tetmemorus
Trapezodesmus
Triplastrum
Triploceras
Vincularia
Xanthidium
The family Gonatozygaceae is sometimes included within the Peniaceae, reducing the number of families from four to three. A fifth family Mesotaeniaceae was formerly included in the Desmidiales, but analysis of cell wall structure and DNA sequences show that the group is more closely related to the Zygnemataceae, and so is now placed together with that family in the order Zygnematales. However, the Zygnemataceae may have emerged in the Mesotaeniaceae.
Habitat and distribution
Desmids are found in freshwater habitats all over the world, but strongly prefer bogs, mires, and other nutrient-poor wetlands. They generally have strict ecological requirements: most species prefer waters with low amounts of dissolved calcium and magnesium, low salinity levels, and somewhat acidic pH. In waters with higher amounts of nutrients, desmids rapidly become outcompeted. Desmid species are generally found attached to aquatic vegetation, such as Utricularia, or tychoplanktonic; that is, free-floating in the water column after being disturbed.
Although the Desmidiales are cosmopolitan, a number of species appear to be restricted to continents or biogeographical realms; this is likely because desmids have strict ecological requirements and do not produce resting spores, making successful dispersal less likely. Therefore, they can be grouped into several regions each with their own characteristic desmid floras. The Indo-Malayan to North Australian realm, for example, is characterized by species such as Micrasterias ceratofera, while equatorial Africa is characterized by species such as Allorgeia incredibilis.
Ecology
Although desmids are incredibly diverse, with up to hundreds of them being found in a single site, their interactions with the environment are relatively unknown.
Desmids are host to a wide array of parasites, particularly fungal parasites called chytrids. They are also grazed by microscopic aquatic heterotrophs, such as crustaceans, rotifers, and ciliates.
| Biology and health sciences | Green algae | Plants |
238901 | https://en.wikipedia.org/wiki/Echo | Echo | In audio signal processing and acoustics, an echo is a reflection of sound that arrives at the listener with a delay after the direct sound. The delay is directly proportional to the distance of the reflecting surface from the source and the listener. Typical examples are the echo produced by the bottom of a well, a building, or the walls of enclosed and empty rooms.
Etymology
The word echo derives from the Greek ἠχώ (ēchō), itself from ἦχος (ēchos), 'sound'. Echo in Greek mythology was a mountain nymph whose ability to speak was cursed, leaving her able only to repeat the last words spoken to her.
Nature
Some animals, such as cetaceans (dolphins and whales) and bats, use echo for location sensing and navigation, a process known as echolocation. Echoes are also the basis of sonar technology.
Acoustic phenomenon
Walls or other hard surfaces, such as mountains and privacy fences, reflect acoustic waves. The reason for reflection may be explained as a discontinuity in the propagation medium. This can be heard when the reflection returns with sufficient magnitude and delay to be perceived distinctly. When sound, or the echo itself, is reflected multiple times from multiple surfaces, it is characterized as a reverberation.
The human ear cannot distinguish an echo from the original direct sound if the delay is less than 1/10 of a second. The speed of sound in dry air is approximately 343 m/s at a temperature of 20 °C. Therefore, the reflecting object must be more than 17.2 m from the sound source for the echo to be perceived by a person at the source. When a sound produces an echo in two seconds, the reflecting object is 343 m away. In nature, canyon walls or rock cliffs facing water are the most common natural settings for hearing echoes. The echo strength is frequently measured in sound pressure level (SPL) relative to the directly transmitted wave. Echoes may be desirable (as in sonar).
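The round-trip arithmetic behind these figures is simple: the sound travels to the reflector and back, so the reflector sits at half of speed times delay. A minimal sketch, assuming a nominal speed of sound of 343 m/s in dry air near 20 °C:

```python
# Echo geometry: the sound travels out and back, so
# distance = speed * delay / 2.
SPEED_OF_SOUND = 343.0  # m/s in dry air, assumed nominal value

def reflector_distance(delay_s, speed=SPEED_OF_SOUND):
    return speed * delay_s / 2.0

# ~17 m is the closest a wall can be for a distinct (0.1 s) echo;
# a two-second echo places the wall 343 m away.
print(reflector_distance(0.1))
print(reflector_distance(2.0))
```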
Use of echo
In sonar, ultrasonic waves are used because they are more energetic than audible sounds: they can travel long distances without deviating, can be confined to a narrow beam, and are not easily absorbed in the medium. Hence, sound ranging and echo depth sounding use ultrasonic waves. Ultrasonic waves are sent in all directions from the ship and are received after reflection from an obstacle (an enemy ship, iceberg, or sunken ship). The distance to the obstacle is found using the formula d = (V*t)/2. Echo depth sounding is the process of finding the depth of the sea by this method. In the medical field, ultrasonic sound waves are used in ultrasonography and echocardiography.
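The same d = (V*t)/2 relation drives echo depth sounding. A minimal sketch, assuming a nominal 1500 m/s for the speed of sound in seawater (a typical round figure, not taken from the text):

```python
# Echo depth sounding with d = (V * t) / 2.
SPEED_IN_SEAWATER = 1500.0  # m/s, assumed typical value

def depth(echo_time_s, speed=SPEED_IN_SEAWATER):
    # the pulse travels down to the seabed and back, hence the factor of 2
    return speed * echo_time_s / 2.0

print(depth(0.4))  # a 0.4 s round trip implies a depth of 300 m
```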
Echo in music
Electric echo effects have been used since the 1950s in music performance and recording. The Echoplex is a tape delay effect, first made in 1959, that recreates the sound of an acoustic echo. Designed by Mike Battle, the Echoplex set a standard for the effect in the 1960s and was used by most of the notable guitar players of the era; original Echoplexes are highly sought after. While Echoplexes were used heavily by guitar players (and the occasional bass player, such as Chuck Rainey, or trumpeter, such as Don Ellis), many recording studios also used the Echoplex. Beginning in the 1970s, Market built the solid-state Echoplex for Maestro. In the 2000s, most echo effects units used electronic or digital circuitry to recreate the echo effect.
| Physical sciences | Waves | Physics |
238911 | https://en.wikipedia.org/wiki/Crimson | Crimson | Crimson is a rich, deep red color, inclining to purple.
It originally meant the color of the kermes dye produced from a scale insect, Kermes vermilio, but the name is now sometimes also used as a generic term for slightly bluish-red colors that are between red and rose. It is the national color of Nepal.
History
Crimson (NR4) is produced using the dried bodies of a scale insect, Kermes, which were gathered commercially in Mediterranean countries, where they live on the kermes oak, and sold throughout Europe. Kermes dyes have been found in burial wrappings in Anglo-Scandinavian York. They fell out of use with the introduction of cochineal, also made from scale insects, because although the dyes were comparable in quality and color intensity, ten to twelve times as much kermes is needed to produce the same effect as cochineal.
Carmine is the name given to the dye made from the dried bodies of the female cochineal, although the name crimson is sometimes applied to these dyes too. Cochineal appears to have been brought to Europe by the Spaniard Hernán Cortés during the conquest of the Aztec Empire and the name 'carmine' is derived from the French carmin. It was first described by Pietro Andrea Mattioli in 1549. The pigment is also called cochineal after the insect from which it is made.
Alizarin (PR83) is a pigment that was first synthesized in 1868 by the German chemists Carl Gräbe and Carl Liebermann and replaced the natural pigment madder lake. Alizarin crimson is a dye bonded onto alum which is then used as a pigment and mixed with ochre, sienna and umber. It is not totally colorfast.
Etymology
The word crimson has been recorded in English since 1400, and its earlier forms include cremesin, crymysyn and cramoysin (cf. cramoisy, a crimson cloth). These were adapted via Old Spanish from the Medieval Latin cremesinus (also kermesinus or carmesinus), the dye produced from Kermes scale insects, and can be traced back to Arabic qirmizi (قرمزي) ("red"), also borrowed in Turkic languages as kırmız and many other languages, e.g. German Karmesin, Italian cremisi, French cramoisi, Portuguese carmesim, Dutch karmozijn, etc. (via Latin). The ultimate source may be Sanskrit कृमिज kṛmi-jā meaning "worm-made".
A shortened form of carmesinus also gave the Latin carminus, from which comes carmine.
Other cognates include the Persian ghermez "red" derived from "kermest" the red worm, Old Church Slavonic чрьвл҄ѥнъ (črьvl'enъ), archaic Russian чермный (čermnyj), Bulgarian червен (cherven), and Serbo-Croatian crven "red". Cf. also vermilion.
Dyes
Carmine dyes, which give crimson and related red and purple colors, are based on an aluminium and calcium salt of carminic acid. Carmine lake is an aluminium or aluminium-tin lake of cochineal extract, and crimson lake is prepared by striking down an infusion of cochineal with a 5 percent solution of alum and cream of tartar. Purple lake is prepared like carmine lake with the addition of lime to produce the deep purple tone. Carmine dyes tend to fade quickly.
Carmine dyes were once widely prized in both the Americas and in Europe. They were used in paints by Michelangelo and for the crimson fabrics of the Hussars, the Turks, the British Redcoats, and the Royal Canadian Mounted Police.
Nowadays carmine dyes are used for coloring foodstuffs, medicines and cosmetics. As a food additive in the European Union, carmine dyes are designated E120, and are also called cochineal and Natural Red 4. Carmine dyes are also used in some oil paints and watercolors used by artists.
In nature
The crimson tide which sometimes occurs on beaches is caused by a type of algae known as Karenia brevis.
Crimson rosellas are a species of parrot that is common in Australia.
The crimson sunbird is the national bird of Singapore.
The crimson-breasted gonolek is an African bushshrike with a bright crimson breast.
Crimson clover (Trifolium incarnatum) is a clover species native to Europe
Crimson glory vine (Vitis coignetiae) is a vine species native to Asia
Hind's Crimson Star is an alternative name of the deep orange-red variable star R Leporis
In culture
Literature
In George R.R. Martin's series A Song of Ice and Fire, crimson is the family color of House Lannister.
There is a Space Marine chapter in Warhammer 40,000 called the "Crimson Fists", who also paint the left glove of every warrior a deep red.
In The Dark Tower VII: The Dark Tower by Stephen King, the principal antagonist is the Crimson King.
The Flash (Barry Allen), a DC Comics superhero, wears a red costume and runs at super-speed. He is sometimes called The Crimson Comet.
Music
"Crimson and Clover" (1968 song)
King Crimson (band)
In the Court of the Crimson King (1969)
"The Court of the Crimson King"
W.A.S.P. - The Crimson Idol (album)
Crimson (band)
Edge of Sanity - Crimson (album)
Crimson, white and indigo is how Jerry Garcia describes the American flag in "Standing on the Moon".
Sentenced - Crimson (album)
"Crimson Red", 2023 song by Jeffrey White, Raf Sandou, and MASON HOME
Film
In Guillermo del Toro's 2015 gothic romance film Crimson Peak, the Sharpes' dilapidated mansion Allerdale Hall, which is steadily sinking into the red clay, is referred to as "Crimson Peak" due to the warm red clay seeping through the snow.
The 1952 film The Crimson Pirate starred Burt Lancaster and Nick Cravat. It is set late in the 18th century on the fictional Caribbean islands of San Pero and Cobra, where a rebellion led by the mysterious "El Libre" is underway. Pirate Captain Vallo captures the King's ship carrying His Majesty's envoy.
Nobility
In Polish, karmazyn (crimson) is a synonym for a magnate, i.e., a member of the rich, high nobility, as only they could wear robes dyed with the scale-insect dye.
Religion
In scriptures of the Baháʼí Faith, crimson stands for tests and sacrifice, among other things.
Food
Rhubarb is sometimes poetically referred to as crimson stalks.
Military
The Danish hussar regiment's ceremonial uniform for enlisted members has a crimson pelisse.
A regiment of the British Army, The King's Royal Hussars still wears crimson trousers as successors to the 11th Hussars (the "Cherrypickers")
In the United States Army, crimson is the color of the Ordnance Corps.
School colors
Some Greek letter organizations use crimson as one of their official colors: Delta Sigma Theta (ΔΣΘ), Kappa Alpha Psi (ΚΑΨ), and Kappa Alpha Order (ΚΑ).
Crimson is the school color of several universities, including Korea University, University of Belgrano, and University of Talca.
In the United States, these include Harvard University, University of Kansas, Indiana University, New Mexico State University, Saint Joseph's University, Tuskegee University, University of Alabama, University of Denver, University of Mississippi, University of Nebraska, University of Oklahoma, University of Utah, Washington State University, and Worcester Polytechnic Institute.
The daily newspaper at Harvard is The Harvard Crimson.
The daily newspaper at Alabama is called The Crimson White.
Harvard's athletic teams are the Crimson, and those of the University of Alabama are the Crimson Tide.
Vexillology
Crimson is the national color of Nepal and forms the background of the country's flag. It also appears on the flag of Poland.
Viaduct
A viaduct is a specific type of bridge that consists of a series of arches, piers or columns supporting a long elevated railway or road. Typically a viaduct connects two points of roughly equal elevation, allowing direct overpass across a wide valley, road, river, or other low-lying terrain features and obstacles. The term viaduct is derived from the Latin via meaning "road", and ducere meaning "to lead". It is a 19th-century derivation from an analogy with ancient Roman aqueducts. Like the Roman aqueducts, many early viaducts comprised a series of arches of roughly equal length.
Over land
The longest viaduct in antiquity may have been the Pont Serme, which crossed wide marshes in southern France. It measured 2,679 meters in length and 22 meters in width.
Viaducts are commonly used in many cities that are railroad hubs, such as Chicago, Birmingham, London and Manchester. These viaducts cross the large railroad yards that are needed for freight trains there, and also cross the multi-track railroad lines that are needed for heavy rail traffic. These viaducts provide grade separation and keep highway and city street traffic from having to be continually interrupted by the train traffic. Likewise, some viaducts carry railroads over large valleys, or they carry railroads over cities with many cross-streets and avenues.
Many viaducts over land connect points of similar height in a landscape, usually by bridging a river valley or other eroded opening in an otherwise flat area. Often such valleys had roads descending either side (with a small bridge over the river, where necessary) that become inadequate for the traffic load, necessitating a viaduct for "through" traffic. Such bridges also lend themselves for use by rail traffic, which requires straighter and flatter routes. Some viaducts have more than one deck, such that one deck has vehicular traffic and another deck carries rail traffic. One example of this is the Prince Edward Viaduct in Toronto, Canada, that carries motor traffic on the top deck as Bloor Street, and metro as the Bloor-Danforth subway line on the lower deck, over the steep Don River valley. Others were built to span settled areas, crossing over roads beneath—the reason for many viaducts in London.
Over water
Viaducts over water make use of islands or successive arches. They are often combined with other types of bridges or tunnels to cross navigable waters, because viaduct sections, while less expensive to design and build than tunnels or bridges with larger spans, typically lack sufficient horizontal and vertical clearance for large ships. See the Chesapeake Bay Bridge-Tunnel.
The Millau Viaduct is a cable-stayed road bridge that spans the valley of the river Tarn near Millau in southern France. It opened in 2004 and is the tallest vehicular bridge in the world, with one pier's summit at 343 metres (1,125 ft). The Danyang–Kunshan Grand Bridge in China, a viaduct, was the longest bridge in the world.
Land use below viaducts
Where a viaduct is built across land rather than water, the space below the arches may be used for businesses such as car parking, vehicle repairs, light industry, bars and nightclubs. In the United Kingdom, many railway lines in urban areas have been constructed on viaducts, and so the infrastructure owner Network Rail has an extensive property portfolio in arches under viaducts. In Berlin the space under the arches of elevated subway lines (S-Bahn) is used for several different purposes, including small eateries or bars.
Past and future
Elevated expressways were built in major cities such as Boston (Central Artery), Los Angeles, San Francisco, Seoul, Tokyo and Toronto (Gardiner Expressway). Some were demolished because they were unappealing and divided the city. In other cases, viaducts were demolished because they were structurally unsafe, such as the Embarcadero Freeway in San Francisco, which was damaged by an earthquake in 1989. However, in developing nations such as Thailand (Bang Na Expressway, the world's longest road bridge), India (Delhi-Gurgaon Expressway), China, Bangladesh, Pakistan, and Nicaragua, elevated expressways have been built and more are under construction to improve traffic flow, particularly as a workaround of land shortage when built atop surface roads.
Other uses have been found for some viaducts. In Paris, France, a repurposed rail viaduct provides a garden promenade on top and workspace for artisans below. The garden promenade is called the Coulée verte René-Dumont while the workspaces in the arches below are the Viaduc des Arts. The project was inaugurated in 1993. Manhattan's High Line, inaugurated in 2009, also uses an elevated train line as a linear urban park.
In Indonesia viaducts are used for railways in Java and also for highways such as the Jakarta Inner Ring Road. In January 2019, the Alaskan Way Viaduct in Seattle was closed and replaced with a tunnel after several decades of use because it was seismically unsafe.
Badlands
Badlands are a type of dry terrain where softer sedimentary rocks and clay-rich soils have been extensively eroded. They are characterized by steep slopes, minimal vegetation, lack of a substantial regolith, and high drainage density. Ravines, gullies, buttes, hoodoos and other such geologic forms are common in badlands.
Badlands are found on every continent except Antarctica, being most common where there are unconsolidated sediments. They are often difficult to navigate by foot, and are unsuitable for agriculture. Most are a result of natural processes, but destruction of vegetation by overgrazing or pollution can produce anthropogenic badlands.
Badlands topography
Badlands are characterized by a distinctive badlands topography. This is terrain in which water erosion has cut a very large number of deep drainage channels, separated by short, steep ridges (interfluves). Such a drainage system is said to have a very fine drainage texture, as measured by its drainage density. Drainage density is defined as the total length of drainage channels per unit area of land surface. Badlands have a very high drainage density. The numerous deep drainage channels and high interfluves create a stark landscape of hills, gullies, and ravines.
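The definition of drainage density above amounts to a single division, illustrated below with a short calculation (a hypothetical sketch — the figures and function name are invented for illustration, not measured values):

```python
def drainage_density(total_channel_length_km, area_km2):
    """Total length of drainage channels per unit area (km per square km)."""
    return total_channel_length_km / area_km2

# Hypothetical badlands plot: a dense network of 180 km of channels in 1.5 km^2
badlands = drainage_density(180.0, 1.5)   # 120.0 km/km^2
# Hypothetical ordinary terrain for comparison: only 6 km of channels
ordinary = drainage_density(6.0, 1.5)     # 4.0 km/km^2
print(badlands, ordinary)
```

The contrast between the two numbers is what "very fine drainage texture" means in practice: far more channel length packed into the same area.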
In addition to a dense system of drainages and interfluves, badlands often contain buttes and hoodoos. These are formed by resistant beds of sandstone, which form the caprock of the buttes and hoodoos.
Origin
Badlands arise from a combination of an impermeable but easily eroded ground surface, sparse vegetation, and infrequent but heavy rainfall. The surface bedrock is typically mudrock, sometimes with evaporites, with only occasional beds of more resistant sandstone. Infrequent heavy rains lead to heavy erosional dissection. Where sudden precipitation cannot penetrate impermeable clays, it is channeled into a very dense system of streamlets that erode a dense system of ever-enlarging, coalescing gulleys and ravines. Erosion is enhanced by pelting raindrops that dislodge soft sediments. The presence of bentonite clay further increases erodibility, as can rejuvenation of the drainage system from regional uplift, as occurred at Badlands National Park.
In addition to surface erosion, badlands sometimes have well-developed piping, which is a system of pipes, joints, caverns, and other connected void spaces in the subsurface through which water can drain. However, this is not a universal feature of badlands. For example, the Henry Mountains badlands show very little piping.
The precise processes by which the erosion responses take place vary depending on the precise interbedding of the sedimentary material. However, it has been estimated that the badlands of Badlands National Park erode at a relatively high annual rate. The White River draining Badlands National Park was so named for its heavy load of bentonite clay eroded from the badlands.
Regolith
Badlands are partially characterized by their thin to nonexistent regolith layers. The regolith profiles of badlands in arid climates are likely to resemble one another. In these regions, the upper layer is typically composed of silt, shale, and sand (a byproduct of the weathered shale). This layer can form either a compact crust or a looser, more irregular aggregation of "popcorn" fragments. Beneath the top layer is a sublayer, below which can be found a transitional shard layer, formed largely of loose disaggregated shale chips, which in turn eventually gives way to a layer of unweathered shale. Badlands such as those found in the Mancos Shale, the Brule Formation, the Chadron Formation, and Dinosaur Provincial Park can generally be said to fit this profile.
In less arid regions, the regolith profile can vary considerably. Some badlands have no regolith layer whatsoever, capping instead in bare rock such as sandstone. Others have a regolith with a clay veneer, and still others have a biological crust of algae or lichens.
In addition to lacking significant regolith, badlands also lack much vegetation. The lack of vegetation may well be a result of the lack of a substantial regolith.
Anthropogenic badlands
Although most badland topography is natural, badlands have been produced artificially by destruction of vegetation cover, through overgrazing, acid rain, or acid mine drainage. The Cheltenham Badlands in Caledon, Ontario are an example of badlands produced by poor farming practices. In the early 1900s, the area was used for agricultural purposes, predominantly cattle grazing. Agricultural use ceased by 1931 and natural recovery of the majority of the property began. Once established, however, this type of erosion can continue rapidly, if land clearing, overgrazing, and increased foot traffic by humans persists, as the shale is highly susceptible to erosion.
An example of badlands created by mining is the Roman gold mine of Las Médulas in northern Spain.
Etymology
The word badlands is a calque of a Canadian French phrase: the early French fur traders called the White River badlands 'bad lands to traverse', perhaps influenced by the Lakota people who moved there in the late 1700s and whose name for the terrain meant 'bad land' or 'eroded land'.
The term malpaís means 'badlands' in Spanish, but refers to a terrain of lava flows that is unlike the eroded badlands of the White River.
Human impact
Badlands are generally unsuitable for agriculture, but attempts have been made to remediate badlands. For example, reforestation is being attempted in the Garbeta badlands of Eastern India. Revegetation and reforestation have been studied in the black marl badlands of the French Alps. Austrian black pine can become established and then be gradually replaced by native deciduous species. However, the time scale for this process is many decades.
Locations
Badlands are found on all the continents except Antarctica. The presence of unconsolidated sediments is a strong control on their locations.
Argentina
The Valle de la Luna ("Valley of the Moon") is one of many examples of badland formations in midwestern Argentina.
Canada
The Cheltenham Badlands are in Caledon, Ontario, not far from Canada's largest city Toronto.
The Big Muddy Badlands in Saskatchewan gained notoriety as a hideout for outlaws.
There is a large badland area in Alberta, particularly in the valley of the Red Deer River, where Dinosaur Provincial Park is located, as well as in Drumheller, where the Royal Tyrrell Museum of Palaeontology is located.
China
Zhangye National Geopark is a badlands area known for its colorful rock formations. It was voted by Chinese media outlets as one of the most beautiful landforms in China and became a UNESCO Global Geopark in 2019.
India
Garbeta in Eastern India is a badlands area in a monsoon climate. The Chambal badlands, spread across the northern parts of Madhya Pradesh, southeastern Rajasthan, and southern Uttar Pradesh and known for lawlessness and dacoity, are another example. A small strip of badlands is also found in western Uttar Pradesh and Haryana.
Italy
In Italy, badlands are called "calanchi". Some examples are Aliano (Basilicata), Crete Senesi (Tuscany) and Civita di Bagnoregio (Lazio).
New Zealand
A well-known badlands formation in New Zealand – the Pūtangirua Pinnacles, formed by the erosion of the conglomerate of an old alluvial fan – is located at the head of a small valley near the southern tip of the North Island.
Spain
The Bardenas Reales near Tudela, Navarre, the Tabernas Desert in Tabernas, Almería, parts of the Granada Altiplano near Guadix and possibly Los Monegros in Aragon are examples of Spanish badlands.
Turkey
Turkey has extensive badlands, including Göreme National Park.
United States
In the U.S., Makoshika State Park in Montana and Badlands National Park in South Dakota are examples of extensive badland formations. Also located in this region is Theodore Roosevelt National Park, a United States National Park composed of three geographically separated areas of badlands in western North Dakota, named after former U.S. President Theodore Roosevelt. Petrified Forest National Park in Arizona, part of Navajo County, encompasses numerous badlands; it abuts the Navajo Indian Reservation and lies directly north of Joseph City, Arizona. Many dinosaurs are believed to be buried in the immediate area, and exploration has been ongoing since the early 20th century.
In the Henry Mountains area of Utah, Cretaceous- and Jurassic-aged shales are exposed at high elevation. Another popular area of badland formations is Toadstool Geologic Park in the Oglala National Grassland in northwestern Nebraska. Dinosaur National Monument in Colorado and Utah is also a badlands setting, along with several other areas in southern Utah, such as the Chinle Badlands in Grand Staircase–Escalante National Monument. A small badland called Hell's Half-Acre is present in Natrona County, Wyoming. Additional badlands exist in various places throughout southwest Wyoming, such as near Pinedale and in the Bridger Valley near the towns of Lyman and Mountain View, near the high Uintah Mountains. Pinnacles National Park in California also has areas of badlands, as does the Mojave Desert in eastern California.
Culture and media
Badlands have become a popular trope inside various media, particularly westerns.
Image gallery
Parking
Parking is the act of stopping and disengaging a vehicle and usually leaving it unoccupied. Parking on one or both sides of a road is often permitted, though sometimes with restrictions. Some buildings have parking facilities for use of the buildings' users. Countries and local governments have rules for design and use of parking spaces.
Car parking is essential to car-based travel. Cars are typically stationary around 95 per cent of the time. The availability and price of car parking may support car dependency. Significant amounts of urban land are devoted to car parking; in many North American city centers, half or more of all land is devoted to car parking.
Parking facilities
Parking facilities can be divided into public parking and private parking.
Public parking is managed by local government authorities and available for all members of the public to drive to and park in.
Private parking is owned by a private entity. It may be available for use by the public or restricted to customers, employees or residents.
Such facilities may be on-street parking, located on the street, or off-street parking, located in a parking lot or parking garage.
On-street parking
On-street parking can come in the form of curbside or central parking.
Curbside parking may be parallel, angled or perpendicular parking. Parallel parking is often considered a complicated maneuver for drivers, but it uses the least road width.
On-street parking can act as inexpensive traffic calming by reducing the effective width of the street.
On-street parking may be restricted for a number of reasons. Restrictions can include waiting prohibitions, which ban parking in certain areas; time restrictions; requirements to pay, e.g. at a parking meter or via a pay-by-phone facility; or a permit zone, restricting parking to permit holders (often residents) only. Parking restrictions may be applied across a whole zone using a controlled parking zone or similar.
On-street parking is often criticised as a bad use of high-value public space, especially where parking is free. In some cities, authorities have replaced parking spaces with parklets.
Parking lots and garages
Parking lots (or car parks) generally come in either a structured or surface regime.
Structured regimes are buildings in which vehicles can be parked, including multi-storey parking garages, underground parking or a hybrid of the two. Such structures may be incorporated into a wider structure.
In the U.S., after the first public parking garage for motor vehicles was opened in Boston, May 24, 1898, livery stables in urban centers began to be converted into garages. In cities of the Eastern US, many former livery stables, with lifts for carriages, continue to operate as garages today.
Surface regimes involve using a clear lot to provide a single level of parking. This may be a stand-alone car park or located around a building.
There is a wide international vocabulary for multi-storey parking garages. In the Midwestern United States, they are known as parking ramps. In the United Kingdom, they are known as multi-storey car parks. In the Western US, they are called parking structures. In New Zealand, they are known as parking buildings. In Canada and South Africa, they are known as parkades.
Fringe parking
Fringe parking is an area for parking usually located outside the central business district and most often used by suburban residents who work or shop downtown.
Park and ride
Park and ride is a concept of parking whereby people drive or cycle to a car park away from their destination and use public transport or another form of transport, such as bicycle hire schemes, to complete their journey. This is done to reduce traffic congestion and the need for parking in city centres, and to connect people to public transport networks who might not otherwise use them.
Bicycle parking
Parking lots specifically for bicycles are becoming more prevalent in many countries. These may include bicycle parking racks and locks, as well as more modern technologies for security and convenience. For instance, one bicycle parking lot in Tokyo has an automated parking system.
Certain parking lots or garages may contain parking facilities for other vehicles, such as bicycle parking. Underneath Utrecht Central station, there is a three-storey underground bicycle park which can store 12,656 bicycles.
Types of parking
In addition to basic car parking, variations of serviced parking types exist. Common serviced parking types are:
Carport (open-air single-level covered parking)
Valet parking
Meet and Greet Parking
Park and Fly Parking
Peer-to-peer shared parking
Parking spaces within car parks may be variously arranged.
Economics
Parking is one of the most important intermediate goods in the modern market economy. Early economic analysis treated parking only as an end-of-trip cost. However, later work has recognised that parking is a major use of land in any urban area. According to the International Parking Institute, "parking is a $25 billion industry and plays a pivotal role in transportation, building design, quality of life and environmental issues". Annual parking revenue in the US alone is $10 billion.
In urban areas, car parks compete with each other and curbside parking spaces. Drivers do not want to walk far from where they have parked, giving car parks local monopoly power.
Urban parking spaces can have a high value where the price of land is high. Parking prices in Boston have always been high; in August 2020, asking prices for spaces ranged from just under US$39,000 in the West End to almost $250,000 in the South End. According to Parkopedia's 2019 Global Parking Index, the cost in US dollars for 2 hours of parking in the top 25 global cities is as follows:
In the graph to the right or below, the value above the line represents the out-of-pocket cost per trip, per person, for each mode of transportation; the value below the line shows subsidies, environmental impact, and social and indirect costs. When cities charge market rates for on-street parking and municipal parking garages for motor vehicles, and when bridges and tunnels are tolled for these modes, driving becomes less competitive in terms of out-of-pocket costs compared to other modes of transportation. When municipal motor vehicle parking is underpriced and roads are not tolled, the shortfall in what drivers pay through fuel tax and other taxes might be regarded as a very large subsidy for automobile use: much greater than common subsidies for the maintenance of infrastructure and discounted fares for public transportation.
Parking price elasticity
The average response in parking demand to a change in price (parking price elasticity) is -0.52 for commuting and -0.62 for non-commuting trips. Non-commuters also respond to parking fees by changing their parking duration if the price is per hour.
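For small price changes, a point elasticity translates a percentage price change into an approximate percentage demand change by simple multiplication. The sketch below applies the elasticities quoted above to a hypothetical 10% price rise (the function name and scenario are illustrative, not from the source):

```python
def demand_change(price_change_pct, elasticity):
    """Approximate % change in parking demand for a given % price change,
    using a constant point elasticity (only valid for small changes)."""
    return elasticity * price_change_pct

# A hypothetical 10% increase in parking prices implies roughly:
commute = demand_change(10.0, -0.52)       # about -5.2% commuter demand
non_commute = demand_change(10.0, -0.62)   # about -6.2% non-commuter demand
print(commute, non_commute)
```

The slightly larger (more negative) elasticity for non-commuting trips reflects that discretionary trips are easier to forgo, shorten, or shift to other modes than the journey to work.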
Performance parking
In his 2005 book The High Cost of Free Parking, Donald C. Shoup argued against the large-scale use of land and other resources in urban and suburban areas for motor vehicle parking. Shoup's work has been popularized along with market-rate parking and performance parking, both of which raise and lower the price of metered street parking with the goal of reducing cruising for parking and double parking without overcharging for parking.
"Performance parking" or variable-rate parking is based on Shoup's ideas. Electronic parking meters are used so that parking spaces in desirable locations and at desirable times are more expensive than less desirable locations. Other variations include rising rates based on duration of parking. More modern ideas use sensors and networked parking meters that "bid up" (or down) the price of parking automatically with the goal of keeping 85–90% of the spaces in use at any given time to ensure perpetual parking availability. These ideas have been implemented in Redwood City, California and are being implemented in San Francisco and Los Angeles.
One empirical study supports performance-based pricing by analyzing the block-level price elasticity of parking demand in the SFpark context. The study suggests that block-level elasticities vary so widely that urban planners and economists cannot accurately predict the response in parking demand to a given change in price. The public policy implication is that planners should utilize observed occupancy rates in order to adjust prices so that target occupancy rates are achieved. Effective implementation will require further experimentation with and assessment of the tâtonnement process.
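The occupancy-targeting adjustment described above can be sketched as a simple feedback rule: raise the rate when observed occupancy is above the target band, lower it when below. This is a hypothetical illustration only — the function name, step size, and price bounds are invented, and it is not any city's actual pricing algorithm:

```python
def adjust_rate(hourly_rate, occupancy, target=(0.85, 0.90),
                step=0.25, floor=0.25, cap=8.00):
    """Nudge a meter rate toward a target occupancy band, clamped to [floor, cap]."""
    low, high = target
    if occupancy > high:        # block too full: raise the price
        hourly_rate += step
    elif occupancy < low:       # block too empty: lower the price
        hourly_rate -= step
    return min(cap, max(floor, hourly_rate))

# A 95%-occupied block gets more expensive; a half-empty one gets cheaper.
print(adjust_rate(2.00, 0.95))  # 2.25
print(adjust_rate(2.00, 0.50))  # 1.75
print(adjust_rate(2.00, 0.87))  # 2.00 (already within the target band)
```

Repeating such small adjustments over successive observation periods is one way to implement the trial-and-error (tâtonnement) process the study describes, without needing to know the demand elasticity in advance.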
Geography
The management of parking as a land use is an aspect of urban planning.
Municipal parking regulation introduced controls for parking on public land, often funded through parking meters. However, with the growth of car use, the supply of on-street parking became insufficient to meet demand. City centre merchants called on municipalities to subsidise car parking in the city centre to facilitate competition against new forms of car-centric commercial development.
Parking is a heavy land use. The total land area of parking in the US is at least the size of Massachusetts.
Off-street parking can be a temporary usage for a land owner to extract value from a vacant lot.
Parking restrictions
During the winter of 2005 in Boston, the practice of some people saving convenient roadway parking spaces for themselves became controversial. At that time, many Boston districts had an informal convention that if a person shoveled the snow out of a road space, that person could claim ownership of that space with a marker. However, the city government defied that custom and cleared markers out of spaces.
Parking minimums and maximums
In congested urban areas parking of motor vehicles is time-consuming and often expensive. Urban planners who are in a position to override market forces must consider whether and how to accommodate or "demand manage" potentially large numbers of motor vehicles in small geographic areas. Usually, the authorities set minimum, or more rarely maximum, numbers of motor vehicle parking spaces for new housing and commercial developments, and may also plan their location and distribution to influence their convenience and accessibility. The costs or subsidies of such parking accommodations can become a heated point in local politics. For example, in 2006 the San Francisco Board of Supervisors considered a controversial zoning plan to limit the number of motor vehicle parking spaces available in new residential developments.
Tradeable parking allowances have been proposed for dense residential areas to reduce inequity and increase urban livability. In summary, each resident would receive an annual, fractional allowance for on-street parking. To park on the street, one must assemble a whole parking allowance by purchasing fractional allowances from others who do not own cars.
Parking by country
Germany
German municipalities have varied transport cultures and policies; however, common federal laws govern the use of street space and the rights of motorists. German law privileges parked cars as traffic and constrains the ability of municipal governments to implement diverse parking policies.
German legal principles determine that the use of public streets is for traffic, including car parking. Consequently, German motorists tend to assert a right to park for free on the public highway.
Japan
In Japan, since 1962, buying a car has required obtaining a "garage certificate" (shako shomeisho) from the local prefecture's police, providing proof of an off-street parking space, owned or rented, located no more than 2 kilometers from the buyer's residence. Kei cars can be exempted from the parking space requirement in some sparsely populated areas. Overnight street parking is not allowed.
United Kingdom
United States
In some jurisdictions, those in possession of the proper ID tags or license plates are also free from parking violation tickets for running over their metered time or parking in an inappropriate place, as some disabilities may prohibit the use of regular spaces. Illegally parking in a disabled parking space or fraudulent use of another person's permit is heavily fined.
South Korea
Shortage of parking lots
In South Korea, there are many more vehicles than parking spaces, so parking lots are sometimes created as a way to utilize otherwise empty spaces.
Discount system
Compact cars are relatively uncommon in Korea, so the government provides substantial support for them; the parking discount system for compact cars is one example.
Discounts for eco-friendly cars are also widely applied.
Electric & Hydrogen & Hybrid vehicles: 50% Off
Compact cars: 50% to 60% off
As the number of users of large supermarket chains increased in Korea, use of traditional markets sharply declined. Accordingly, local governments have built large parking lots near traditional markets and provide discounts to their users.
Traditional market users: 1 hour exemption & 50% off additional 1 hour afterwards
Korea's low birth rate is a serious problem, and multi-child households receive substantial support.
Multi-child households: 50% off
Disabled person: 1 hour exemption / 50% off additional 1 hour afterwards
Persons of national merit and persons eligible for veterans benefits: 1 hour exemption / 50% discount for additional 1 hour afterwards
Parking at various destinations
Hospitals
In England, NHS hospitals are permitted to charge patients, staff and visitors for parking at the hospital. This has been criticised for adding extra costs to accessing healthcare. In Scotland and Wales, all hospital parking charges have been abolished.
Airports
Most airports provide parking for patrons. Parking is normally split into short-stay parking, intended for those dropping off or picking up passengers, and long-stay parking, intended for staff and passengers who choose to drive to the airport. At larger airports, long-stay parking may be located further away from the terminal, while parking at the terminal will be more expensive. Some airports charge more for parking cars than for parking aircraft. Airports may be reluctant to discourage passengers from arriving at the airport by car due to the revenues generated.
At UK airports, it is rare for employees to pay for their car parking. Generally, the airport authority will charge for staff permits, but these permits are purchased by employers and the cost is not passed on to staff. Staff are also generally more willing than passengers to park at a site away from the airport.
Statistics
Parking Generation is a document produced by the Institute of Transportation Engineers (ITE) that assembles a vast array of parking demand observations, predominantly from the United States. It summarizes the amount of parking observed with various land uses at different times of the day/week/month/year, including the peak parking demand. While it has been assailed by some planners for lack of data in urban settings, it stands as the single largest accumulation of actual parking demand data related to land use. Anyone can submit parking demand data for inclusion. The report is updated approximately every 5 to 10 years.
Finding parking
When the supply of kerbside parking in a particular area is less than the demand for parking, a phenomenon known as cruising occurs, where drivers drive on streets in search of a parking space. It can also occur where there is supply of kerbside space, but parking restriction or payment costs discourage drivers from parking there.
Cruising is an economic decision, with the cost of parking dominant in determining cruising behaviour. This is grounded in the principle that drivers will only cruise if the cost of cruising is lower than the savings of not parking in available chargeable spaces. Drivers are more likely to cruise if on-street parking is cheaper than off-street parking, the costs of fuel are cheap, the driver wishes to park for longer, the driver is alone in the car and the driver's time is not valuable to them. Cruising can be diminished if the cost of on-street parking is set equal to the cost of off-street parking.
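The cost comparison behind this cruising decision can be written out directly. The sketch below is a hypothetical illustration — the function, its parameters, and the dollar figures are invented to demonstrate the reasoning, not drawn from the source:

```python
def should_cruise(on_street_rate, off_street_rate, hours_parked,
                  expected_cruise_minutes, value_of_time_per_hour,
                  fuel_cost_per_hour):
    """Cruise only if the expected cost of searching (time + fuel) is
    less than the saving from the cheaper on-street space."""
    saving = (off_street_rate - on_street_rate) * hours_parked
    cruise_cost = (expected_cruise_minutes / 60) * (value_of_time_per_hour
                                                    + fuel_cost_per_hour)
    return cruise_cost < saving

# Cheap curb parking, a long stay, and low time value make cruising pay:
print(should_cruise(1.0, 3.0, 4, 10, 10.0, 2.0))  # True
# Equal on- and off-street prices leave no saving, so no cruising:
print(should_cruise(3.0, 3.0, 4, 10, 10.0, 2.0))  # False
```

Note how each factor listed in the paragraph above enters the inequality: a bigger price gap or longer stay inflates the saving, while a higher value of time or fuel cost inflates the cost of searching.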
Automated Parking Guidance systems present drivers with dynamic information on parking within controlled areas (like parking garages and parking lots). The systems combine traffic monitoring, communication, processing and variable message sign technologies to provide the service.
Mobile apps and parking booking platforms that help drivers find parking have emerged, taking a variety of approaches.
Some connected cars have mobile apps associated with the in-car system that can locate the car or indicate the last place it was parked. Cars with Internavi communicate with each other, indicating recently vacated spots.
San Francisco uses a system called SFpark, which has sensors embedded in the roadway. It allows drivers to find parking via mobile app, website, or SMS, and includes "smart" parking meters and garages that use variable pricing based on time and location to keep approximately 15% of parking spaces open.
Some South Boston spots also have sensors, so users of an app called Parker can find vacancies.
Ford Motor Company is developing a system called Parking Spotter, which allows vehicles to upload parking spot information into the cloud for other drivers to access.
Parking guidance and information system provides information about the availability of parking spaces within a controlled area. The systems may include vehicle detection sensors that can count the number of available spaces and display the information on various signs. There may be indicator lights that can lead drivers to an exact available spot.
An amusing alliterative slang term for finding an ideal parking spot directly in front of one's destination is "Doris Day parking", named for the American singer and actress who, in numerous romantic comedy films, was shown to drive straight into the perfect spot time after time.
One statistical analysis suggests that the optimal strategy is to drive past the first empty spot and park in the next available spot.
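A toy Monte Carlo model can illustrate how such strategies are compared. The street layout, occupancy probability, and circling penalty below are all assumptions for the sake of the sketch, not results from the source:

```python
import random

def simulate(skip, n_spots=100, p_occupied=0.9, trials=2000, seed=1):
    """Average walking distance to a destination at spot `n_spots` for a
    driver who deliberately passes `skip` empty spots before parking.
    All model parameters are illustrative assumptions."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        passed = 0
        parked = None
        for i in range(1, n_spots + 1):
            if rng.random() > p_occupied:      # spot i happens to be empty
                if passed >= skip:
                    parked = i
                    break
                passed += 1
        # Never parked: charge a full lap for circling back.
        total += (n_spots - parked) if parked else n_spots
    return total / trials

# Compare taking the first empty spot with passing one and taking the next.
print(simulate(0), simulate(1))
```

In this toy model, passing one empty spot tends to leave the driver closer to the destination, at the small risk of finding nothing and having to circle.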
Halide
In chemistry, a halide (rarely halogenide) is a binary chemical compound, of which one part is a halogen atom and the other part is an element or radical that is less electronegative (or more electropositive) than the halogen, to make a fluoride, chloride, bromide, iodide, astatide, or theoretically tennesside compound. The alkali metals combine directly with halogens under appropriate conditions forming halides of the general formula, MX (X = F, Cl, Br or I). Many salts are halides; the hal- syllable in halide and halite reflects this correlation. All Group 1 metals form halides that are white solids at room temperature.
A halide ion is a halogen atom bearing a negative charge. The common halide anions are fluoride (F−), chloride (Cl−), bromide (Br−), and iodide (I−). Such ions are present in many ionic halide salts. Halide minerals contain halides. All these halide anions are colorless. Halides also form covalent bonds, examples being colorless TiF4, colorless TiCl4, orange TiBr4, and brown TiI4. The heavier members TiCl4, TiBr4, TiI4 can be distilled readily because they are molecular. The outlier is TiF4, m.p. 284 °C, because it has a polymeric structure. Fluorides often differ from the heavier halides.
Reactions
Redox
Halides cannot be reduced under the usual laboratory conditions, but they all can be oxidized to the parent halogens, which are diatomic. Especially for iodide and less so for the lighter halides, intermediates can be observed and isolated. Best characterized is triiodide. Many related species are known, including a host of polyiodides.
Protonation
Halides are conjugate bases of hydrogen halides, which are all gases. When the protonation is conducted in aqueous solution, hydrohalic acids are produced.
Reaction with silver ions
Chloride, bromide, and iodide salts of the alkali metals are highly soluble in water, giving colorless solutions. The solutions react readily with a solution of silver nitrate (AgNO3). These three halides form solid precipitates:
AgCl: white
AgBr: pale yellow
AgI: yellow
Similar but slower reactions occur with alkyl halides in place of alkali metal halides, as described in the Beilstein test.
Uses
Metal halides are used in high-intensity discharge lamps called metal halide lamps, such as those used in modern street lights. These are more energy-efficient than mercury-vapor lamps, and have much better colour rendition than orange high-pressure sodium lamps. Metal halide lamps are also commonly used in greenhouses or in rainy climates to supplement natural sunlight.
Silver halides are used in photographic films and papers. When the film is developed, the silver halides which have been exposed to light are reduced to metallic silver, forming an image.
Halides are also used in solder paste, commonly as a Cl or Br equivalent.
Synthetic organic chemistry often incorporates halogens into organohalide compounds.
BitTorrent
BitTorrent is a communication protocol for peer-to-peer file sharing (P2P), which enables users to distribute data and electronic files over the Internet in a decentralized manner. The protocol is developed and maintained by Rainberry, Inc., and was first released in 2001.
To send or receive files, users run a BitTorrent client on their Internet-connected computer; clients are available for a variety of computing platforms and operating systems, including an official client. BitTorrent trackers provide a list of files available for transfer and allow the client to find peer users who may transfer the files; peers holding complete copies are known as "seeds". BitTorrent downloading is considered to be faster than HTTP ("direct downloading") and FTP because there is no central server that could limit bandwidth.
BitTorrent is one of the most common protocols for transferring large files, such as digital video files containing TV shows and video clips, or digital audio files. BitTorrent accounted for a third of all internet traffic in 2004, according to a study by Cachelogic. As recently as 2019 BitTorrent remained a significant file sharing protocol according to Sandvine, generating a substantial amount of Internet traffic, with 2.46% of downstream, and 27.58% of upstream traffic, although this share has declined significantly since then.
History
Programmer Bram Cohen, a University at Buffalo alumnus, designed the protocol in April 2001, and released the first available version on 2 July 2001. Cohen and Ashwin Navin founded BitTorrent, Inc. (later renamed Rainberry, Inc.) to further develop the technology in 2004.
The first release of the BitTorrent client had no search engine and no peer exchange. Until 2005, the only way to share files was to create a small file called a "torrent", which the first uploader would post to a torrent index site. These files contain metadata about the files to be shared and about the trackers which keep track of the other seeds and peers. Those who wished to download the file would download the torrent, which their client would use to connect to a tracker holding a list of the IP addresses of other seeds and peers in the swarm. The first uploader acted as a seed, and downloaders would initially connect as peers; once a peer completed a download of the complete file, it could in turn function as a seed.
In 2005, first Vuze and then the BitTorrent client introduced distributed tracking using distributed hash tables which allowed clients to exchange data on swarms directly without the need for a torrent file.
In 2006, peer exchange functionality was added allowing clients to add peers based on the data found on connected nodes.
In 2017, BitTorrent, Inc. released the BitTorrent v2 protocol specification. BitTorrent v2 is intended to work seamlessly with previous versions of the BitTorrent protocol. The main reason for the update was that the developers no longer consider the old cryptographic hash function, SHA-1, safe from malicious attacks, and as such, v2 uses SHA-256. To ensure backwards compatibility, the v2 .torrent file format supports a hybrid mode where the torrents are hashed through both the new method and the old method, with the intent that the files will be shared with peers on both v1 and v2 swarms. Another update to the specification is the addition of a hash tree to speed up the time from adding a torrent to downloading files, and to allow more granular checks for file corruption. In addition, each file is now hashed individually, enabling files in the swarm to be deduplicated, so that if multiple torrents include the same files, but seeders are only seeding the file from some, downloaders of the other torrents can still download the file. File hashes can also be displayed on trackers and torrent indexing services, allowing swarms to be found by searching for the hashes of the files they contain. These hashes are different from the usual SHA-256 hash of files and can be obtained using tools. Magnet links for v2 also support a hybrid mode to ensure support for legacy clients.
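The per-file hash tree can be sketched with a toy Merkle root over 16 KiB blocks. This is a simplified illustration of the idea, not the exact BEP 52 construction (the real v2 tree pads the leaf layer to a power of two; here odd layers are simply padded with zero hashes to show the pairing):

```python
import hashlib

BLOCK = 16 * 1024   # v2 hashes file data in 16 KiB leaf blocks

def sha256(b):
    return hashlib.sha256(b).digest()

def merkle_root(data):
    """Root of a per-file hash tree, in the spirit of BitTorrent v2.
    Simplified sketch: real v2 trees pad the leaf layer to a power of
    two; this version only pads odd layers with zero hashes."""
    layer = [sha256(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]
    layer = layer or [sha256(b"")]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(b"\x00" * 32)
        layer = [sha256(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

# Identical contents give identical roots, regardless of which torrent the
# file appears in: this is what makes per-file deduplication possible.
print(merkle_root(b"same bytes") == merkle_root(b"same bytes"))   # True
print(merkle_root(b"same bytes") == merkle_root(b"other bytes"))  # False
```

Because the root depends only on the file's bytes, two torrents containing the same file compute the same root, and a client can fetch that file from either swarm.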
Design
The BitTorrent protocol can be used to reduce the server and network impact of distributing large files. Rather than downloading a file from a single source server, the BitTorrent protocol allows users to join a "swarm" of hosts to upload and download from each other simultaneously. The protocol is an alternative to the older single source, multiple mirror sources technique for distributing data, and can work effectively over networks with lower bandwidth. Using the BitTorrent protocol, several basic computers, such as home computers, can replace large servers while efficiently distributing files to many recipients. This lower bandwidth usage also helps prevent large spikes in internet traffic in a given area, keeping internet speeds higher for all users in general, regardless of whether or not they use the BitTorrent protocol.
The file being distributed is divided into segments called pieces. As each peer receives a new piece of the file, it becomes a source (of that piece) for other peers, relieving the original seed from having to send that piece to every computer or user wishing a copy. With BitTorrent, the task of distributing the file is shared by those who want it; it is entirely possible for the seed to send only a single copy of the file itself and eventually distribute to an unlimited number of peers. Each piece is protected by a cryptographic hash contained in the torrent descriptor. This ensures that any modification of the piece can be reliably detected, and thus prevents both accidental and malicious modifications of any of the pieces received at other nodes. If a node starts with an authentic copy of the torrent descriptor, it can verify the authenticity of the entire file it receives.
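The piece-level integrity check described above can be sketched in a few lines. The piece size and data here are tiny and purely illustrative; real clients read the expected SHA-1 digests from the torrent descriptor:

```python
import hashlib

PIECE_LENGTH = 4  # unrealistically small, for demonstration only

def split_pieces(data, piece_length=PIECE_LENGTH):
    return [data[i:i + piece_length]
            for i in range(0, len(data), piece_length)]

def piece_hashes(data):
    """Digests the torrent descriptor would carry, one per piece."""
    return [hashlib.sha1(p).digest() for p in split_pieces(data)]

def verify_piece(piece, expected_digest):
    """A received piece is accepted only if its hash matches the descriptor."""
    return hashlib.sha1(piece).digest() == expected_digest

original = b"hello, swarm!"
expected = piece_hashes(original)          # stored in the .torrent file
received = split_pieces(original)
print(all(verify_piece(p, h) for p, h in zip(received, expected)))  # True
print(verify_piece(b"tampered!", expected[0]))                      # False
```

A corrupted or malicious piece fails the comparison and is simply discarded and re-requested, which is why an authentic descriptor suffices to authenticate the whole download.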
Pieces are typically downloaded non-sequentially, and are rearranged into the correct order by the BitTorrent client, which monitors which pieces it needs, and which pieces it has and can upload to other peers. Pieces are of the same size throughout a single download (for example, a 10 MB file may be transmitted as ten 1 MB pieces or as forty 256 KB pieces).
Due to the nature of this approach, the download of any file can be halted at any time and be resumed at a later date, without the loss of previously downloaded information, which in turn makes BitTorrent particularly useful in the transfer of larger files. This also enables the client to seek out readily available pieces and download them immediately, rather than halting the download and waiting for the next (and possibly unavailable) piece in line, which typically reduces the overall time of the download. This eventual transition from peers to seeders determines the overall "health" of the file (as determined by the number of times a file is available in its complete form).
The distributed nature of BitTorrent can lead to a flood-like spreading of a file throughout many peer computer nodes. As more peers join the swarm, the likelihood of a successful download by any particular node increases. Relative to traditional Internet distribution schemes, this permits a significant reduction in the original distributor's hardware and bandwidth resource costs. Distributed downloading protocols in general provide redundancy against system problems, reduce dependence on the original distributor, and provide generally transient sources for the file, so there is no single point of failure as in one-way server-client transfers.
Though both ultimately transfer files over a network, a BitTorrent download differs from a one way server-client download (as is typical with an HTTP or FTP request, for example) in several fundamental ways:
BitTorrent makes many small data requests over different IP connections to different machines, while server-client downloading is typically made via a single TCP connection to a single machine.
BitTorrent downloads in a random or in a "rarest-first" approach that ensures high availability, while classic downloads are sequential.
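The "rarest-first" policy in the second point amounts to requesting the needed piece held by the fewest peers. A minimal sketch, with hypothetical peer names and data structures rather than any client's actual internals:

```python
from collections import Counter

def rarest_first(needed_pieces, peer_bitfields):
    """Pick the needed piece available from the fewest peers.

    `peer_bitfields` maps a peer id to the set of piece indices it
    holds. Illustrative sketch only."""
    availability = Counter()
    for pieces in peer_bitfields.values():
        availability.update(pieces)
    candidates = [p for p in needed_pieces if availability[p] > 0]
    if not candidates:
        return None
    # Fewest holders first; piece index as a deterministic tie-break.
    return min(candidates, key=lambda p: (availability[p], p))

peers = {"A": {0, 1, 2}, "B": {0, 1}, "C": {0}}
print(rarest_first({0, 1, 2}, peers))  # 2  (held by only one peer)
```

Downloading the rarest piece first replicates it quickly, keeping every piece of the file available even as peers come and go.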
Taken together, these differences allow BitTorrent to achieve much lower cost to the content provider, much higher redundancy, and much greater resistance to abuse or to "flash crowds" than regular server software. However, this protection, theoretically, comes at a cost: downloads can take time to rise to full speed because it may take time for enough peer connections to be established, and it may take time for a node to receive sufficient data to become an effective uploader. This contrasts with regular downloads (such as from an HTTP server, for example) that, while more vulnerable to overload and abuse, rise to full speed very quickly, and maintain this speed throughout. In the beginning, BitTorrent's non-contiguous download methods made it harder to support "streaming playback". In 2014, the client Popcorn Time allowed for streaming of BitTorrent video files. Since then, more and more clients are offering streaming options.
Searching
The BitTorrent protocol provides no way to index torrent files. As a result, a comparatively small number of websites have hosted a large majority of torrents, many linking to copyrighted works without the authorization of copyright holders, rendering those sites especially vulnerable to lawsuits. A BitTorrent index is a "list of .torrent files, which typically includes descriptions" and information about the torrent's content. Several types of websites support the discovery and distribution of data on the BitTorrent network. Public torrent-hosting sites such as The Pirate Bay allow users to search and download from their collection of torrent files. Users can typically also upload torrent files for content they wish to distribute. Often, these sites also run BitTorrent trackers for their hosted torrent files, but these two functions are not mutually dependent: a torrent file could be hosted on one site and tracked by another unrelated site. Private host/tracker sites operate like public ones except that they may restrict access to registered users and may also keep track of the amount of data each user uploads and downloads, in an attempt to reduce "leeching".
Web search engines allow the discovery of torrent files that are hosted and tracked on other sites; examples include The Pirate Bay and BTDigg. These sites allow the user to ask for content meeting specific criteria (such as containing a given word or phrase) and retrieve a list of links to torrent files matching those criteria. This list can often be sorted with respect to several criteria, relevance (seeders to leechers ratio) being one of the most popular and useful (due to the way the protocol behaves, the download bandwidth achievable is very sensitive to this value). Metasearch engines allow one to search several BitTorrent indices and search engines at once.
The Tribler BitTorrent client was among the first to incorporate built-in search capabilities. With Tribler, users can find .torrent files held by random peers and taste buddies. It adds such an ability to the BitTorrent protocol using a gossip protocol, somewhat similar to the eXeem network which was shut down in 2005. The software includes the ability to recommend content as well. After a dozen downloads, the Tribler software can roughly estimate the download taste of the user, and recommend additional content.
In May 2007, researchers at Cornell University published a paper proposing a new approach to searching a peer-to-peer network for inexact strings, which could replace the functionality of a central indexing site. A year later, the same team implemented the system as a plugin for Vuze called Cubit and published a follow-up paper reporting its success.
A somewhat similar facility but with a slightly different approach is provided by the BitComet client through its "Torrent Exchange" feature. Whenever two peers using BitComet (with Torrent Exchange enabled) connect to each other they exchange lists of all the torrents (name and info-hash) they have in the Torrent Share storage (torrent files which were previously downloaded and for which the user chose to enable sharing by Torrent Exchange). Thus each client builds up a list of all the torrents shared by the peers it connected to in the current session (or it can even maintain the list between sessions if instructed).
At any time the user can search that Torrent Collection list for a certain torrent and sort the list by categories. When the user chooses to download a torrent from that list, the .torrent file is automatically searched for (by info-hash value) in the DHT network and, when found, is downloaded by the querying client, which can subsequently create and initiate a downloading task.
Downloading and sharing
Users find a torrent of interest on a torrent index site or by using a search engine built into the client, download it, and open it with a BitTorrent client. The client connects to the tracker(s) or seeds specified in the torrent file, from which it receives a list of seeds and peers currently transferring pieces of the file(s). The client connects to those peers to obtain the various pieces. If the swarm contains only the initial seeder, the client connects directly to it, and begins to request pieces. Clients incorporate mechanisms to optimize their download and upload rates.
The effectiveness of this data exchange depends largely on the policies that clients use to determine to whom to send data. Clients may prefer to send data to peers that send data back to them (a "tit for tat" exchange scheme), which encourages fair trading. But strict policies often result in suboptimal situations, such as when newly joined peers are unable to receive any data because they do not have any pieces yet to trade themselves or when two peers with a good connection between them do not exchange data simply because neither of them takes the initiative. To counter these effects, the official BitTorrent client program uses a mechanism called "optimistic unchoking", whereby the client reserves a portion of its available bandwidth for sending pieces to random peers (not necessarily known good partners, or "preferred peers") in hopes of discovering even better partners and to ensure that newcomers get a chance to join the swarm.
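The combination of tit-for-tat ranking and optimistic unchoking can be sketched as follows. Slot counts and selection details are illustrative assumptions; real clients rotate the optimistic slot on a timer rather than picking it per call:

```python
import random

def choose_unchoked(peers_by_upload_rate, rng, regular_slots=3):
    """Sketch of tit-for-tat plus optimistic unchoking.

    `peers_by_upload_rate` maps a peer id to the bytes/s that peer has
    recently sent us. Illustrative only."""
    # Tit for tat: prefer the peers that upload to us the fastest.
    ranked = sorted(peers_by_upload_rate,
                    key=peers_by_upload_rate.get, reverse=True)
    unchoked = ranked[:regular_slots]
    # Optimistic unchoke: one random slot for a peer outside the top
    # set, so newcomers with no pieces still get a chance to join in.
    others = ranked[regular_slots:]
    if others:
        unchoked.append(rng.choice(others))
    return unchoked

rng = random.Random(0)
rates = {"fast1": 900, "fast2": 800, "fast3": 700, "newcomer": 0, "slow": 50}
print(choose_unchoked(rates, rng))
```

The random slot is what breaks the deadlock described above: a newcomer with nothing to trade can still receive its first pieces, after which the reciprocal policy takes over.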
Although "swarming" scales well to tolerate "flash crowds" for popular content, it is less useful for unpopular or niche market content. Peers arriving after the initial rush might find the content unavailable and need to wait for the arrival of a "seed" in order to complete their downloads. The seed arrival, in turn, may take a long time to happen (this is termed the "seeder promotion problem"). Since maintaining seeds for unpopular content entails high bandwidth and administrative costs, this runs counter to the goals of publishers that value BitTorrent as a cheap alternative to a client-server approach. This occurs on a huge scale; measurements have shown that 38% of all new torrents become unavailable within the first month. A strategy adopted by many publishers which significantly increases availability of unpopular content consists of bundling multiple files in a single swarm. More sophisticated solutions have also been proposed; generally, these use cross-torrent mechanisms through which multiple torrents can cooperate to better share content.
Creating and publishing
The peer distributing a data file treats the file as a number of identically sized pieces, usually with byte sizes of a power of 2, and typically between 32 KB and 16 MB each. The peer creates a hash for each piece, using the SHA-1 hash function, and records it in the torrent file. Pieces with sizes greater than 512 KB will reduce the size of a torrent file for a very large payload, but this is claimed to reduce the efficiency of the protocol. When another peer later receives a particular piece, the hash of the piece is compared to the recorded hash to test that the piece is error-free. Peers that provide a complete file are called seeders, and the peer providing the initial copy is called the initial seeder. The exact information contained in the torrent file depends on the version of the BitTorrent protocol.
By convention, the name of a torrent file has the suffix .torrent. Torrent files use the Bencode file format, and contain an "announce" section, which specifies the URL of the tracker, and an "info" section, containing (suggested) names for the files, their lengths, the piece length used, and a SHA-1 hash code for each piece, all of which are used by clients to verify the integrity of the data they receive. Though SHA-1 has shown signs of cryptographic weakness, Bram Cohen did not initially consider the risk big enough for a backward incompatible change to, for example, SHA-3. As of BitTorrent v2 the hash function has been updated to SHA-256.
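The Bencode format described above has only four types: integers, byte strings, lists, and dictionaries. A minimal encoder is a sketch for illustration, not a complete or validated implementation, and the metainfo dictionary at the end is skeletal and hypothetical:

```python
def bencode(value):
    """Minimal bencoder for the four types the format defines."""
    if isinstance(value, int):
        return b"i%de" % value                  # i42e
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)   # 4:spam
    if isinstance(value, str):
        return bencode(value.encode())
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        # The format requires keys in sorted order, compared as raw bytes.
        out = b"d"
        for key in sorted(value, key=lambda k: k.encode()
                          if isinstance(k, str) else k):
            out += bencode(key) + bencode(value[key])
        return out + b"e"
    raise TypeError(f"cannot bencode {type(value).__name__}")

# A skeletal, hypothetical metainfo dictionary (not a complete torrent:
# the real "info" section also carries lengths and the piece hashes).
meta = {"announce": "http://tracker.example/announce",
        "info": {"name": "example.txt", "piece length": 32768}}
print(bencode(meta))
```

The sorted-key rule matters in practice: it makes the encoding canonical, so the info-hash that identifies a torrent is the same no matter which client produced the file.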
In the early days, torrent files were typically published to torrent index websites, and registered with at least one tracker. The tracker maintained lists of the clients currently connected to the swarm. Alternatively, in a trackerless system (decentralized tracking) every peer acts as a tracker. Azureus was the first BitTorrent client to implement such a system through the distributed hash table (DHT) method. An alternative and incompatible DHT system, known as Mainline DHT, was released in the Mainline BitTorrent client three weeks later (though it had been in development since 2002) and subsequently adopted by the μTorrent, Transmission, rTorrent, KTorrent, BitComet, and Deluge clients.
After the DHT was adopted, a "private" flag – analogous to the broadcast flag – was unofficially introduced, telling clients to restrict the use of decentralized tracking regardless of the user's desires. The flag is intentionally placed in the info section of the torrent so that it cannot be disabled or removed without changing the identity of the torrent. The purpose of the flag is to prevent torrents from being shared with clients that do not have access to the tracker. The flag was requested for inclusion in the official specification in August 2008, but has not been accepted yet. Clients that ignored the private flag were banned by many trackers, discouraging the practice.
Anonymity
BitTorrent does not, on its own, offer its users anonymity. One can usually see the IP addresses of all peers in a swarm in one's own client or firewall program. This may expose users with insecure systems to attacks. In some countries, copyright organizations scrape lists of peers, and send takedown notices to the internet service provider of users participating in the swarms of files that are under copyright. In some jurisdictions, copyright holders may launch lawsuits against uploaders or downloaders for infringement, and police may arrest suspects in such cases.
Various means have been used to promote anonymity. For example, the BitTorrent client Tribler makes available a Tor-like onion network, optionally routing transfers through other peers to obscure which client has requested the data. The exit node would be visible to peers in a swarm, but the Tribler organization provides exit nodes. One advantage of Tribler is that clearnet torrents can be downloaded with only a small decrease in download speed from one "hop" of routing.
I2P provides a similar anonymity layer, although in that case one can only download torrents that have been uploaded to the I2P network. The BitTorrent client Vuze allows users who are not concerned about anonymity to take clearnet torrents and make them available on the I2P network.
Most BitTorrent clients are not designed to provide anonymity when used over Tor, and there is some debate as to whether torrenting over Tor acts as a drag on the network.
Private torrent trackers are usually invitation only, and require members to participate in uploading, but have the downside of a single centralized point of failure. Oink's Pink Palace and What.cd are examples of private trackers which have been shut down.
Seedbox services download the torrent files first to the company's servers, allowing the user to download the file directly from there. One's IP address would be visible to the seedbox provider, but not to third parties.
Virtual private networks encrypt transfers, and substitute a different IP address for the user's, so that anyone monitoring a torrent swarm will only see that address.
Associated technologies
Distributed trackers
On 2 May 2005, Azureus 2.3.0.0 (now known as Vuze) was released, utilizing a distributed database system. This system is a distributed hash table implementation which allows the client to use torrents that do not have a working BitTorrent tracker. A bootstrap server is instead utilized. The following month, BitTorrent, Inc. released version 4.2.0 of the Mainline BitTorrent client, which supported an alternative DHT implementation (popularly known as "Mainline DHT", outlined in a draft on their website) that is incompatible with that of Azureus. In 2014, measurements showed concurrent users of Mainline DHT to number from 10 million to 25 million, with a daily churn of at least 10 million.
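Both DHTs are Kademlia-based, and the core of Kademlia is its XOR distance metric: a node looks up the peers whose IDs are "closest" to a torrent's info-hash under bitwise XOR. A toy sketch with small integers standing in for the real 20-byte node IDs:

```python
def xor_distance(node_id_a, node_id_b):
    """Kademlia's distance metric: XOR of the two node IDs.

    Mainline DHT node IDs are 160-bit (20-byte) values; small
    integers are used here purely for illustration."""
    return node_id_a ^ node_id_b

def closest_nodes(target, node_ids, k=3):
    """Return the k node IDs closest to `target` under XOR distance."""
    return sorted(node_ids, key=lambda n: xor_distance(n, target))[:k]

nodes = [0b0001, 0b0100, 0b0101, 0b1100]
print(closest_nodes(0b0111, nodes))  # [5, 4, 1]
```

Because XOR distance is symmetric and unidirectional, repeatedly querying the closest known nodes converges on the nodes responsible for a given info-hash in O(log n) steps.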
Current versions of the official BitTorrent client, μTorrent, BitComet, Transmission and BitSpirit all share compatibility with Mainline DHT. Both DHT implementations are based on Kademlia. As of version 3.0.5.0, Azureus also supports Mainline DHT in addition to its own distributed database through use of an optional application plugin. This potentially allows the Azureus/Vuze client to reach a bigger swarm.
Another idea that has surfaced in Vuze is that of virtual torrents. This idea is based on the distributed tracker approach and is used to describe some web resource. Currently, it is used for instant messaging. It is implemented using a special messaging protocol and requires an appropriate plugin. Anatomic P2P is another approach, which uses a decentralized network of nodes that route traffic to dynamic trackers. Most BitTorrent clients also use peer exchange (PEX) to gather peers in addition to trackers and DHT. Peer exchange checks with known peers to see if they know of any other peers. With the 3.0.5.0 release of Vuze, all major BitTorrent clients now have compatible peer exchange.
Web seeding
Web "seeding" was implemented in 2006 as the ability of BitTorrent clients to download torrent pieces from an HTTP source in addition to the "swarm". The advantage of this feature is that a website may distribute a torrent for a particular file or batch of files and make those files available for download from that same web server; this can simplify long-term seeding and load balancing through the use of existing, cheap, web hosting setups. In theory, this would make using BitTorrent almost as easy for a web publisher as creating a direct HTTP download. In addition, it would allow the "web seed" to be disabled if the swarm becomes too popular while still allowing the file to be readily available. This feature has two distinct specifications, both of which are supported by Libtorrent and the 26+ clients that use it.
The first was created by John "TheSHAD0W" Hoffman, the author of BitTornado. This specification requires running a web service that serves content by info-hash and piece number, rather than filename.
The other specification was created by the authors of GetRight and can rely on a basic HTTP download space (using byte serving).
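In the GetRight-style approach, a client maps a piece to an HTTP Range request against the plain file on the web server. A sketch of that mapping for the single-file case (multi-file torrents additionally need per-file offsets):

```python
def range_header_for_piece(piece_index, piece_length, file_length):
    """Build the HTTP Range header a client could use to fetch one
    piece from a plain web seed via byte serving. Single-file case
    only; an illustrative sketch."""
    start = piece_index * piece_length
    if start >= file_length:
        raise ValueError("piece beyond end of file")
    end = min(start + piece_length, file_length) - 1  # Range is inclusive
    return {"Range": f"bytes={start}-{end}"}

# The last piece of a 100 KiB file with 32 KiB pieces is short.
print(range_header_for_piece(3, 32768, 102400))
# {'Range': 'bytes=98304-102399'}
```

Because the web server only needs to honour standard Range requests, any ordinary static-file host can act as a seed without running BitTorrent-aware software.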
In September 2010, a new service named Burnbit was launched which generates a torrent from any URL using webseeding. There are server-side solutions that provide initial seeding of the file from the web server via standard BitTorrent protocol and when the number of external seeders reach a limit, they stop serving the file from the original source.
RSS feeds
A technique called broadcatching combines RSS feeds with the BitTorrent protocol to create a content delivery system, further simplifying and automating content distribution. Steve Gillmor explained the concept in a column for Ziff-Davis in December 2003. The discussion spread quickly among bloggers (Ernest Miller, Chris Pirillo, etc.). In an article entitled Broadcatching with BitTorrent, Scott Raymond explained:
The RSS feed will track the content, while BitTorrent ensures content integrity with cryptographic hashing of all data, so feed subscribers will receive uncorrupted content. One of the first popular software clients (free and open source) for broadcatching is Miro. Other free software clients such as PenguinTV and KatchTV also now support broadcatching. The BitTorrent web-service MoveDigital added the ability to make torrents available to any web application capable of parsing XML through its standard REST-based interface in 2006, though this has since been discontinued. Additionally, Torrenthut is developing a similar torrent API that will provide the same features, and help bring the torrent community to Web 2.0 standards. Alongside this release is a first PHP application built using the API called PEP, which will parse any Really Simple Syndication (RSS 2.0) feed and automatically create and seed a torrent for each enclosure found in that feed.
Throttling and encryption
Since BitTorrent makes up a large proportion of total traffic, some ISPs have chosen to "throttle" (slow down) BitTorrent transfers. For this reason, methods have been developed to disguise BitTorrent traffic in an attempt to thwart these efforts. Protocol header encryption (PHE) and Message stream encryption/Protocol encryption (MSE/PE) are features of some BitTorrent clients that attempt to make BitTorrent hard to detect and throttle. As of November 2015, Vuze, BitComet, KTorrent, Transmission, Deluge, μTorrent, MooPolice, Halite, qBittorrent, rTorrent, and the latest official BitTorrent client (v6) support MSE/PE encryption.
In August 2007, Comcast was preventing BitTorrent seeding by monitoring and interfering with the communication between peers. Protection against these efforts is provided by proxying the client-tracker traffic via an encrypted tunnel to a point outside of the Comcast network. In 2008, Comcast called a "truce" with BitTorrent, Inc. with the intention of shaping traffic in a protocol-agnostic manner. Questions about the ethics and legality of Comcast's behavior have led to renewed debate about net neutrality in the United States. In general, although encryption can make it difficult to determine what is being shared, BitTorrent is vulnerable to traffic analysis. Thus, even with MSE/PE, it may be possible for an ISP to recognize BitTorrent and also to determine that a system is no longer downloading but only uploading data, and terminate its connection by injecting TCP RST (reset flag) packets.
Multitrackers
Another unofficial feature is an extension to the BitTorrent metadata format proposed by John Hoffman and implemented by several indexing websites. It allows the use of multiple trackers per file, so if one tracker fails, others can continue to support file transfer. It is implemented in several clients, such as BitComet, BitTornado, BitTorrent, KTorrent, Transmission, Deluge, μTorrent, rtorrent, Vuze, and Frostwire. Trackers are placed in groups, or tiers, with a tracker randomly chosen from the top tier and tried, moving to the next tier if all the trackers in the top tier fail.
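The tier rules above can be sketched directly: shuffle within a tier, try trackers until one responds, and only then fall through to the next tier. The URLs and the success callback are hypothetical stand-ins for real announce requests:

```python
import random

def announce_with_tiers(tiers, try_tracker, rng):
    """Sketch of multitracker tier selection. `tiers` is a list of
    lists of tracker URLs; `try_tracker` returns True when an
    announce succeeds. Illustrative only."""
    for tier in tiers:
        order = tier[:]
        rng.shuffle(order)            # random choice within a tier
        for tracker in order:
            if try_tracker(tracker):
                return tracker        # stop at the first working tracker
    return None                       # every tracker in every tier failed

tiers = [["http://a.example", "http://b.example"],   # hypothetical URLs
         ["http://backup.example"]]
working = {"http://backup.example"}  # suppose the whole top tier is down
print(announce_with_tiers(tiers, lambda t: t in working, random.Random(0)))
```

Shuffling within a tier spreads announce load across equivalent trackers, while the tier ordering keeps the backup tracker idle unless the primary tier actually fails.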
Torrents with multiple trackers can decrease the time it takes to download a file, but also have a few consequences:
Poorly implemented clients may contact multiple trackers, leading to more overhead-traffic.
Torrents from closed trackers suddenly become downloadable by non-members, as they can connect to a seed via an open tracker.
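The tier-walking rule described above can be sketched as follows. This is a minimal illustration of the multitracker fallback, not a full client implementation; the tracker URLs and the `try_tracker` callback are hypothetical.

```python
import random

def announce(tiers, try_tracker):
    """Walk tracker tiers as in the multitracker extension: shuffle
    each tier, try its trackers in turn, and fall through to the next
    tier only if every tracker in the current one fails."""
    for tier in tiers:
        trackers = tier[:]
        random.shuffle(trackers)      # random choice within a tier
        for url in trackers:
            if try_tracker(url):      # success: announce here and stop
                return url
    return None                       # every tier failed

# Hypothetical example: both trackers in the first tier are down,
# so the client falls back to the backup tier.
tiers = [["http://tracker-a.example/announce",
          "http://tracker-b.example/announce"],
         ["http://backup.example/announce"]]
result = announce(tiers, lambda url: "backup" in url)
```

The extension also specifies that a tracker which responds successfully is moved to the front of its tier so that it is tried first on the next announce; that bookkeeping is omitted here.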
Peer selection
BitTorrent, Inc. was working with Oversi on new Policy Discover Protocols that query the ISP for capabilities and network architecture information. Oversi's ISP hosted NetEnhancer box is designed to "improve peer selection" by helping peers find local nodes, improving download speeds while reducing the loads into and out of the ISP's network.
Implementations
The BitTorrent specification is free to use and many clients are open source, so BitTorrent clients have been created for all common operating systems using a variety of programming languages. The official BitTorrent client, μTorrent, qBittorrent, Transmission, Vuze, and BitComet are some of the most popular clients.
Some BitTorrent implementations such as MLDonkey and Torrentflux are designed to run as servers. For example, this can be used to centralize file sharing on a single dedicated server which users share access to on the network. Server-oriented BitTorrent implementations can also be hosted by hosting providers at co-located facilities with high bandwidth Internet connectivity (e.g., a datacenter) which can provide dramatic speed benefits over using BitTorrent from a regular home broadband connection. Services such as ImageShack can download files on BitTorrent for the user, allowing them to download the entire file by HTTP once it is finished.
The Opera web browser supports BitTorrent natively. Brave web browser ships with an extension which supports WebTorrent, a BitTorrent-like protocol based on WebRTC instead of UDP and TCP. BitLet allowed users to download torrents directly from their browser using a Java applet (until browsers removed support for Java applets). An increasing number of hardware devices are being made to support BitTorrent. These include routers and NAS devices containing BitTorrent-capable firmware like OpenWrt. Proprietary versions of the protocol which implement DRM, encryption, and authentication are found within managed clients such as Pando.
Adoption
A growing number of individuals and organizations are using BitTorrent to distribute their own or licensed works (e.g. indie bands distributing digital files of their new songs). Independent adopters report that BitTorrent technology reduces demands on private networking hardware and bandwidth, which is essential for non-profit groups with large amounts of internet traffic.
Many major open source and free software projects encourage BitTorrent as well as conventional downloads of their products (via HTTP, FTP etc.) to increase availability and to reduce load on their own servers, especially when dealing with larger files. In addition, some video game installers, especially those whose large size makes them difficult to host due to bandwidth limits, extremely frequent downloads, and unpredictable changes in network traffic, will distribute instead a specialized, stripped down BitTorrent client with enough functionality to download the game from the other running clients and the primary server (which is maintained in case not enough peers are available).
Some uses of BitTorrent for file sharing may violate laws in some jurisdictions (see legislation section).
Popularity and traffic statistics
BitTorrent is utilized by 150 million active users. Based on this figure, the total number of monthly users may be estimated at more than a quarter of a billion (≈ 250 million). BitTorrent was responsible for 3.35% of all worldwide bandwidth—more than half of the 6% of total bandwidth dedicated to file sharing. BitTorrent had 15–27 million concurrent users at any time.
Film, video, and music
BitTorrent Inc. has obtained a number of licenses from Hollywood studios for distributing popular content from their websites.
Sub Pop Records releases tracks and videos via BitTorrent Inc. to distribute its 1000+ albums. Babyshambles and The Libertines (both bands associated with Pete Doherty) have extensively used torrents to distribute hundreds of demos and live videos. US industrial rock band Nine Inch Nails frequently distributes albums via BitTorrent.
Podcasting software has integrated BitTorrent to help podcasters deal with the download demands of their MP3 "radio" programs. Specifically, Juice and Miro (formerly known as Democracy Player) support automatic processing of .torrent files from RSS feeds. Similarly, some BitTorrent clients, such as μTorrent, are able to process web feeds and automatically download content found within them.
DGM Live previously used BitTorrent to distribute music purchases.
VODO was a platform for promoting and distributing freely licensed films. It used BitTorrent for distribution and encouraged downloaders to donate to content creators.
Broadcasters
The CBC distributed the show Canada's Next Great Prime Minister via BitTorrent after the broadcast, becoming the first major broadcaster in North America to do so.
The NRK distributes a few past shows via BitTorrent.
VPRO released CC-licensed documentaries in 2009 and 2010 via BitTorrent.
Cloud service providers
Amazon S3 previously supported seeding public objects via the BitTorrent protocol.
Software
Blizzard Entertainment previously distributed content and patches for Diablo III, StarCraft II and World of Warcraft via BitTorrent.
Wargaming uses BitTorrent in their popular titles World of Tanks, World of Warships and World of Warplanes to distribute game updates.
Resilio Sync is a BitTorrent-based folder-syncing tool which can act as an alternative to server-based synchronisation services such as Dropbox.
Government
The British government used BitTorrent to distribute details about how the tax money of British citizens was spent.
Education
Florida State University uses BitTorrent to distribute large scientific data sets to its researchers.
Many universities that have BOINC distributed computing projects have used the BitTorrent functionality of the client-server system to reduce the bandwidth costs of distributing the client-side applications used to process the scientific data. If a BOINC distributed computing application needs to be updated (or merely sent to a user), it can be distributed with little impact on the BOINC server.
The developing Human Connectome Project uses BitTorrent to share their open dataset.
Academic Torrents is a BitTorrent tracker for use by researchers in fields that need to share large datasets.
Others
Facebook uses BitTorrent to distribute updates to Facebook servers.
Twitter uses BitTorrent to distribute updates to Twitter servers.
The Internet Archive added BitTorrent to its file download options for over 1.3 million existing files, and all newly uploaded files, in August 2012. This method is the fastest means of downloading media from the Archive.
By early 2015, AT&T estimated that BitTorrent accounted for 20% of all broadband traffic.
Routers that use network address translation (NAT) must maintain tables of source and destination IP addresses and ports. Because BitTorrent frequently contacts 20–30 servers per second, the NAT tables of some consumer-grade routers are rapidly filled. This is a known cause of some home routers ceasing to work correctly.
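A rough, back-of-envelope calculation illustrates why the figure above matters; the table size and entry timeout below are hypothetical values for a consumer router, not measurements.

```python
def seconds_to_fill(table_size, new_conns_per_sec, entry_timeout_sec):
    """Time until a NAT table saturates, assuming connections arrive at
    a steady rate and entries are reclaimed only after the timeout.
    In steady state the table holds rate * timeout entries; if that
    exceeds the table size, the table fills in size / rate seconds."""
    steady_state = new_conns_per_sec * entry_timeout_sec
    if steady_state <= table_size:
        return None  # old entries expire fast enough; the table never fills
    return table_size / new_conns_per_sec

# Hypothetical router: 4096-entry table, 300 s entry timeout, and
# 25 new connection attempts per second (mid-range of the 20-30 figure).
t = seconds_to_fill(4096, 25, 300)  # about 164 s: the table fills in minutes
```

Under these assumptions the router's table saturates in under three minutes of BitTorrent activity, which is consistent with the failure mode described above.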
Legislation
Although the protocol itself is legal, problems stem from using the protocol to traffic copyright infringing works, since BitTorrent is often used to download otherwise paid content, such as movies and video games. There has been much controversy over the use of BitTorrent trackers. BitTorrent metafiles themselves do not store file contents. Whether the publishers of BitTorrent metafiles violate copyrights by linking to copyrighted works without the authorization of copyright holders is controversial. Various jurisdictions have pursued legal action against websites that host BitTorrent trackers.
As a result, the use of BitTorrent may sometimes be limited by Internet service providers (ISPs) on legal or copyright grounds. Users may choose to run seedboxes or virtual private networks (VPNs) to circumvent these restrictions.
High-profile examples include the closing of Suprnova.org, TorrentSpy, LokiTorrent, BTJunkie, Mininova, Oink's Pink Palace and What.cd. BitTorrent search engine The Pirate Bay torrent website, formed by a Swedish group, is noted for the "legal" section of its website in which letters and replies on the subject of alleged copyright infringements are publicly displayed. On 31 May 2006, The Pirate Bay's servers in Sweden were raided by Swedish police on allegations by the MPAA of copyright infringement; however, the tracker was up and running again three days later. In the study used to value NBC Universal in its merger with Comcast, Envisional examined the 10,000 torrent swarms managed by PublicBT which had the most active downloaders. After excluding pornographic and unidentifiable content, it was found that only one swarm offered legitimate content.
In the United States, more than 200,000 lawsuits have been filed for copyright infringement on BitTorrent since 2010. In the United Kingdom, on 30 April 2012, the High Court of Justice ordered five ISPs to block The Pirate Bay.
Security
One concern is the UDP flood attack. BitTorrent implementations often use μTP for their communication. To achieve high bandwidths, the underlying protocol used is UDP, which allows spoofing of source addresses of internet traffic. It has been possible to carry out denial-of-service attacks in a P2P lab environment, where users running BitTorrent clients act as amplifiers for an attack at another service. However this is not always an effective attack because ISPs can check if the source address is correct.
Several studies on BitTorrent found files available for download containing malware. In particular, one small sample indicated that 18% of all executable programs available for download contained malware. Another study claims that as much as 14.5% of BitTorrent downloads contain zero-day malware, and that BitTorrent was used as the distribution mechanism for 47% of all zero-day malware found.
Pentaprism (https://en.wikipedia.org/wiki/Pentaprism)

A pentaprism is a five-sided reflecting prism used to deviate a beam of light by a constant 90°, even if the entry beam is not at 90° to the prism.
The beam reflects inside the prism twice, allowing the transmission of an image through a right angle without inverting it (that is, without changing the image's handedness) as an ordinary right-angle prism or mirror would.
The reflections inside the prism are not caused by total internal reflection, since the beams are incident at an angle less than the critical angle (the minimum angle for total internal reflection). Instead, the two faces are coated to provide mirror surfaces. The two opposite transmitting faces are often coated with an antireflection coating to reduce spurious reflections. The fifth face of the prism is not used optically but truncates what would otherwise be an awkward angle joining the two mirrored faces.
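The constant 90° deviation follows from a general property of mirror pairs: composing two planar reflections rotates every incoming direction by twice the angle between the mirror planes, so two mirrors 45° apart deviate any beam by exactly 90°. A small sketch of this in 2-D geometry (the mirror angles below are illustrative, not the actual facet angles of a manufactured prism):

```python
import math

def reflect(d, theta):
    """Reflect a 2-D direction vector d across a mirror line that makes
    angle theta (radians) with the x-axis."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return (c * d[0] + s * d[1], s * d[0] - c * d[1])

def deviation(entry_angle, m1, m2):
    """Change of beam direction after reflecting off mirror m1 and then
    mirror m2, for a beam travelling at entry_angle (radians)."""
    d = (math.cos(entry_angle), math.sin(entry_angle))
    d = reflect(reflect(d, m1), m2)
    return (math.atan2(d[1], d[0]) - entry_angle) % (2 * math.pi)

# Mirrors 45 degrees apart, as in a pentaprism: the deviation is
# 2 * 45 = 90 degrees regardless of the entry angle.
m1, m2 = 0.0, math.radians(45)
devs = [deviation(a, m1, m2) for a in (0.0, 0.1, -0.3, 0.7)]
# every entry of devs equals pi/2 (90 degrees)
```

Because the result depends only on the angle between the mirrors, not on the entry angle, small pointing errors of the camera or instrument do not change the 90° deviation.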
In cameras
A variant of this prism is the roof pentaprism, which is commonly used in the viewfinder of single-lens reflex cameras. The camera lens renders an image that is both vertically and laterally reversed, and the reflex mirror re-inverts it, leaving an image that is laterally reversed. This image therefore needs to be reflected left-to-right as the prism transmits it from the camera's focusing screen. The lateral inversion is done by replacing one of the reflective faces of a normal pentaprism with a "roof" section: two additional surfaces angled towards each other and meeting at 90°, which reverse the image left-to-right back to normal. By contrast, reflex cameras with waist-level finders, including many medium format cameras, display the laterally reversed image directly from the focusing screen, which is viewed from above.
Compared to the pentamirror
The same optical path can be realized with three mirrors, in an arrangement called the pentamirror. While a pentamirror is substantially lighter, the light enters and exits glass at several separate surfaces, losing brightness and scattering at each one. The pentaprism is typically much heavier, but it has only one entrance and one exit surface, giving notably superior optical performance. Additionally, pentamirrors can conceivably go out of alignment, whereas a pentaprism's facets remain perfectly aligned until it is destroyed.
In surveying
In surveying a double pentaprism (two pentaprisms stacked on top of each other) and a plumb-bob are used to stake out right angles, e.g. on a construction site.
Healing (https://en.wikipedia.org/wiki/Healing)

With physical trauma or disease suffered by an organism, healing involves the repairing of damaged tissue(s), organs and the biological system as a whole and resumption of (normal) functioning. Medicine includes the process by which the cells in the body regenerate and repair to reduce the size of a damaged or necrotic area and replace it with new living tissue. The replacement can happen in two ways: by regeneration, in which the necrotic cells are replaced by new cells that form "like" tissue as was originally there; or by repair, in which injured tissue is replaced with scar tissue. Most organs will heal using a mixture of both mechanisms.
Within surgery, healing is more often referred to as recovery, and postoperative recovery has historically been viewed simply as restitution of function and readiness for discharge. More recently, it has been described as an energy-requiring process to decrease physical symptoms, reach a level of emotional well-being, regain functions, and re-establish activities.
Healing is also referred to in the context of the grieving process.
In psychiatry and psychology, healing is the process by which neuroses and psychoses are resolved to the degree that the client is able to lead a normal or fulfilling existence without being overwhelmed by psychopathological phenomena. This process may involve psychotherapy, pharmaceutical treatment or alternative approaches such as traditional spiritual healing.
Regeneration
In order for an injury to be healed by regeneration, the cell type that was destroyed must be able to replicate. Cells also need a collagen framework along which to grow. Alongside most cells there is either a basement membrane or a collagenous network made by fibroblasts that will guide the cells' growth. Since ischaemia and most toxins do not destroy collagen, it will continue to exist even when the cells around it are dead.
Example
Acute tubular necrosis (ATN) in the kidney is a case in which cells heal completely by regeneration. ATN occurs when the epithelial cells that line the kidney are destroyed by either a lack of oxygen (such as in hypovolemic shock, when blood supply to the kidneys is dramatically reduced), or by toxins (such as some antibiotics, heavy metals or carbon tetrachloride).
Although many of these epithelial cells are dead, there is typically patchy necrosis, meaning that there are patches of epithelial cells still alive. In addition, the collagen framework of the tubules remains completely intact.
The existing epithelial cells can replicate, and, using the basement membrane as a guide, eventually bring the kidney back to normal. After regeneration is complete, the damage is undetectable, even microscopically.
Healing must happen by repair in the case of injury to cells that are unable to regenerate (e.g. neurons). Also, damage to the collagen network (e.g. by enzymes or physical destruction), or its total collapse (as can happen in an infarct) cause healing to take place by repair.
Genetics
Many genes play a role in healing. For instance, in wound healing, P21 has been found to allow mammals to heal spontaneously. It even allows some mammals (like mice) to heal wounds without scars. The LIN28 gene also plays a role in wound healing. It is dormant in most mammals. Also, the proteins MG53 and TGF beta 1 play important roles in wound healing.
Wound healing
In response to an incision or wound, a wound healing cascade is unleashed. This cascade takes place in four phases: clot formation, inflammation, proliferation, and maturation.
Clotting phase
Healing of a wound begins with clot formation to stop bleeding and to reduce infection by bacteria, viruses and fungi. Clotting is followed by neutrophil invasion three to 24 hours after the wound has been incurred, with mitoses beginning in epithelial cells after 24 to 48 hours.
Inflammation phase
In the inflammatory phase, macrophages and other phagocytic cells kill bacteria, debride damaged tissue, and release chemical factors such as growth hormones that encourage fibroblasts, epithelial cells, and endothelial cells (which make new capillaries) to migrate to the area and divide.
Proliferative phase
In the proliferative phase, immature granulation tissue containing plump, active fibroblasts forms. Fibroblasts quickly produce abundant type III collagen, which fills the defect left by an open wound. Granulation tissue moves, as a wave, from the border of the injury towards the center.
As granulation tissue matures, the fibroblasts produce less collagen and become more spindly in appearance. They begin to produce the much stronger type I collagen. Some of the fibroblasts mature into myofibroblasts which contain the same type of actin found in smooth muscle, which enables them to contract and reduce the size of the wound.
Maturation phase
During the maturation phase of wound healing, unnecessary vessels formed in granulation tissue are removed by apoptosis, and type III collagen is largely replaced by type I. Collagen which was originally disorganized is cross-linked and aligned along tension lines. This phase can last a year or longer. Ultimately a scar made of collagen, containing a small number of fibroblasts is left.
Tissue damaged by inflammation
After inflammation has damaged tissue (for example, when combating a bacterial infection) and pro-inflammatory eicosanoids have completed their function, healing proceeds in four phases.
Recall phase
In the recall phase the adrenal glands increase production of cortisol which shuts down eicosanoid production and inflammation.
Resolution phase
In the resolution phase, pathogens and damaged tissue are removed by macrophages (white blood cells). Red blood cells are also removed from the damaged tissue by macrophages. Failure to remove all of the damaged cells and pathogens may retrigger inflammation. The two macrophage subsets, M1 and M2, play a crucial role in this phase: M1 macrophages are pro-inflammatory, while M2 macrophages are regenerative, and the plasticity between the two subsets determines whether the tissue proceeds toward inflammation or repair.
Regeneration phase
In the Regeneration phase, blood vessels are repaired and new cells form in the damaged site similar to the cells that were damaged and removed. Some cells such as neurons and muscle cells (especially in the heart) are slow to recover.
Repair phase
In the Repair phase, new tissue is generated which requires a balance of anti-inflammatory and pro-inflammatory eicosanoids. Anti-inflammatory eicosanoids include lipoxins, epi-lipoxins, and resolvins, which cause release of growth hormones.
Insectivore (https://en.wikipedia.org/wiki/Insectivore)

An insectivore is a carnivorous animal or plant that eats insects. An alternative term is entomophage, which can also refer to the human practice of eating insects.
The first vertebrate insectivores were amphibians. When they evolved 400 million years ago, the first amphibians were piscivores, with numerous sharp conical teeth, much like a modern crocodile. The same tooth arrangement is however also suited for eating animals with exoskeletons, thus the ability to eat insects is an extension of piscivory.
At one time, insectivorous mammals were scientifically classified in an order called Insectivora. This order is now abandoned, as not all insectivorous mammals are closely related. Most of the Insectivora taxa have been reclassified; those that have not yet been reclassified and found to be truly related to each other remain in the order Eulipotyphla.
Although individually small, insects exist in enormous numbers. Insects make up a very large part of the animal biomass in almost all non-marine, non-polar environments. It has been estimated that the global insect biomass is in the region of 10¹² kg (one billion tons) with an estimated population of 10¹⁸ (one billion billion, or a quintillion) organisms. Many creatures depend on insects as their primary diet, and many that do not (and are thus not technically insectivores) nevertheless use insects as a protein supplement, particularly when they are breeding.
Examples
Examples of insectivores include different kinds of species of carp, opossum, frogs, lizards (e.g. chameleons, geckos), nightingales, swallows, echidnas, numbats, anteaters, armadillos, aardvarks, pangolins, aardwolfs, bats, and spiders. Even large mammals are recorded as eating insects; the sloth bear is perhaps the largest insectivore. Insects also can be insectivores; examples are dragonflies, hornets, ladybugs, robber flies, and praying mantises. Insectivory also features to various degrees amongst primates, such as marmosets, tamarins, tarsiers, galagos and aye-aye. There is some suggestion that the earliest primates were nocturnal, arboreal insectivores.
Insectivorous plants
Insectivorous plants are plants that derive some of their nutrients from trapping and consuming animals or protozoans. The benefit they derive from their catch varies considerably: in some species it might make up only a small part of their nutrient intake, while in others it might be an indispensable source of nutrients. As a rule, though, such animal food, however valuable it might be as a source of certain critically important minerals, is not the plants' major source of energy, which they generally derive from photosynthesis.
Insectivorous plants might consume insects and other animal material trapped adventitiously. However, most species to which such food represents an important part of their intake are specifically, often spectacularly, adapted to attract and secure adequate supplies. Their prey animals typically, but not exclusively, comprise insects and other arthropods. Plants highly adapted to reliance on animal food use a variety of mechanisms to secure their prey, such as pitfalls, sticky surfaces, hair-trigger snaps, bladder-traps, entangling furriness, and lobster-pot trap mechanisms. Also known as carnivorous plants, they appear adapted to grow in places where the soil is thin or poor in nutrients, especially nitrogen, such as acidic bogs and rock outcroppings.
Insectivorous plants include the Venus flytrap, several types of pitcher plants, butterworts, sundews, bladderworts, the waterwheel plant, brocchinia and many members of the Bromeliaceae. The list is far from complete, and some plants, such as Roridula species, exploit the prey organisms mainly in a mutualistic relationship with other creatures, such as resident organisms that contribute to the digestion of prey. In particular, animal prey organisms supply carnivorous plants with nitrogen, but they also are important sources of various other soluble minerals, such as potassium and trace elements that are in short supply in environments where the plants flourish. This gives them a decisive advantage over other plants, whereas in nutrient-rich soils they tend to be out-competed by plants adapted to aggressive growth where nutrient supplies are not the major constraints.
Technically these plants are not strictly insectivorous, as they consume any animal that they can secure and consume; the distinction is trivial, however, because not many primarily insectivorous organisms exclusively consume insects. Most of those that do have such a restrictive diet, such as certain parasitoids and hunting wasps, are specialized to exploit particular species, not insects in general. Indeed, much as large mantids and spiders will do, the larger varieties of pitcher plants have been known to consume vertebrates such as small rodents and lizards. Charles Darwin wrote the first well-known treatise on carnivorous plants in 1875.
Arctic tern (https://en.wikipedia.org/wiki/Arctic%20tern)

The Arctic tern (Sterna paradisaea) is a tern in the family Laridae. This bird has a circumpolar breeding distribution covering the Arctic and sub-Arctic regions of Europe (as far south as Brittany), Asia, and North America (as far south as Massachusetts). The species is strongly migratory, seeing two summers each year as it migrates along a convoluted route from its northern breeding grounds to the Antarctic coast for the southern summer and back again about six months later. Recent studies have shown average annual round-trip lengths of about for birds nesting in Iceland and Greenland and about for birds nesting in the Netherlands. These are by far the longest migrations known in the animal kingdom. The Arctic tern nests once every one to three years (depending on its mating cycle).
Arctic terns are medium-sized birds. They have a length of and a wingspan of . They are mainly grey and white plumaged, with a red/orange beak and feet, white forehead, a black nape and crown (streaked white), and white cheeks. The grey mantle is , and the scapulae are fringed brown, some tipped white. The upper wing is grey with a white leading edge, and the collar is completely white, as is the rump. The deeply forked tail is whitish, with grey outer webs.
Arctic terns are long-lived birds, with many reaching fifteen to thirty years of age. They eat mainly fish and small marine invertebrates. The species is abundant, with an estimated two million individuals. While the trend in the number of individuals in the species as a whole is not known, exploitation in the past has reduced this bird's numbers in the southern reaches of its ranges.
Etymology
The genus name Sterna is derived from Old English "stearn", "tern". The specific name paradisaea is from Late Latin paradisus, "paradise".
The Scots names pictarnie, tarrock and their many variants are also believed to be onomatopoeic, derived from the distinctive call. Due to the difficulty in distinguishing the two species, all the informal common names are shared with the common tern.
Distribution and migration
The Arctic tern has a continuous worldwide circumpolar breeding distribution; there are no recognized subspecies. It can be found in coastal regions in cooler temperate parts of North America and Eurasia during the northern summer. During the southern summer, it can be found at sea, reaching the northern edge of the Antarctic ice.
The Arctic tern is famous for its migration; it flies from its Arctic breeding grounds to the Antarctic and back again each year. The shortest distance between these areas is . The long journey ensures that this bird sees two summers per year and more daylight than any other creature on the planet. One example of this bird's remarkable long-distance flying abilities involves an Arctic tern ringed as an unfledged chick on the Farne Islands, Northumberland, UK, in the northern summer of 1982 that reached Melbourne, Australia in October, just three months after fledging – a journey of more than . Another example is that of a chick ringed in Labrador, Canada, on 23 July 1928. It was found in South Africa four months later.
A 2010 study using tracking devices attached to the birds showed that the above examples are not unusual for the species. In fact, the study showed that previous research had seriously underestimated the annual distances travelled by the Arctic tern. Eleven birds that bred in Greenland or Iceland covered on average in a year, with a maximum of . The difference from previous estimates is due to the birds taking meandering courses rather than following a straight route as was previously assumed. The birds follow a somewhat convoluted course in order to take advantage of prevailing winds. The average Arctic tern lives about 30 years and will, based on the above research, travel some during its lifetime, the equivalent of a roundtrip from Earth to the Moon more than three times.
A 2013 tracking study of half a dozen Arctic terns breeding in the Netherlands shows average annual migrations of c. . On their way south, these birds roughly followed the coastlines of Europe and Africa.
Arctic terns usually migrate sufficiently far offshore that they are rarely seen from land outside the breeding season.
Description and taxonomy
The Arctic tern is a medium-sized bird around from the tip of its beak to the tip of its tail. The wingspan is . The weight is . The beak is dark red, as are the short legs and webbed feet. Like most terns, the Arctic tern has high aspect ratio wings and a tail with a deep fork.
The adult plumage is grey above, with a black nape and crown and white cheeks. The upperwings are pale grey, with the area near the wingtip being translucent. The tail is white, and the underparts pale grey. Both sexes are similar in appearance. The winter plumage is similar, but the crown is whiter and the bills are darker.
Juveniles differ from adults in their black bill and legs, "scaly" appearing wings, and mantle with dark feather tips, dark carpal wing bar, and short tail streamers. During their first summer, juveniles also have a whiter forecrown.
The species has a variety of calls; the two most common being the alarm call, made when possible predators (such as humans or other mammals) enter the colonies, and the advertising call.
While the Arctic tern is similar to the common and roseate terns, its colouring, profile, and call are slightly different. Compared to the common tern, it has a longer tail and mono-coloured bill, while the main differences from the roseate are its slightly darker colour and longer wings. The Arctic tern's call is more nasal and rasping than that of the common, and is easily distinguishable from that of the roseate.
This bird's closest relatives are a group of South Polar species, the South American (Sterna hirundinacea), Kerguelen (S. virgata), and Antarctic (S. vittata) terns.
The immature plumages of Arctic tern were originally described as separate species, Sterna portlandica and Sterna pikei.
Reproduction
Breeding begins around the third or fourth year. Arctic terns mate for life and, in most cases, return to the same colony each year. Courtship is elaborate, especially in birds nesting for the first time. Courtship begins with a so-called "high flight", where a female will chase the male to a high altitude and then slowly descend. This display is followed by "fish flights", where the male will offer fish to the female. Courtship on the ground involves strutting with a raised tail and lowered wings. After this, both birds will usually fly and circle each other.
Both sexes agree on a site for a nest, and both will defend the site. During this time, the male continues to feed the female. Mating occurs shortly after this. Breeding takes place in colonies on coasts, islands and occasionally inland on tundra near water. It often forms mixed flocks with the common tern. It lays from one to three eggs per clutch, most often two.
It is one of the most aggressive terns, fiercely defensive of its nest and young. It will attack humans and large predators, usually striking the top or back of the head. Although it is too small to cause serious injury to an animal of a human's size, it is still capable of drawing blood, and is capable of repelling many raptorial birds, polar bears and smaller mammalian predators such as foxes and cats.
The nest is usually a depression in the ground, which may or may not be lined with bits of grass or similar materials. The eggs are mottled and camouflaged. Both sexes share incubation duties. The young hatch after 22–27 days and fledge after 21–24 days. If the parents are disturbed and flush from the nest frequently the incubation period could be extended to as long as 34 days.
When hatched, the chicks are downy. Being precocial, the chicks begin to move around and explore their surroundings within one to three days after hatching. Usually they do not stray far from the nest. Chicks are brooded by the adults for the first ten days after hatching. Both parents care for hatchlings. Chick diets always include fish, and parents selectively bring larger prey items to chicks than they eat themselves. Males bring more food than females. Feeding by the parents lasts for roughly a month before the young are slowly weaned. After fledging, the juveniles learn to feed themselves, including the difficult method of plunge-diving. They then fly south to winter with the help of their parents.
Arctic terns are long-lived birds that spend considerable time raising only a few young, and are thus said to be K-selected. A 1957 study in the Farne Islands estimated an annual survival rate of 82%.
Ecology and behaviour
The diet of the Arctic tern varies depending on location and time, but is usually carnivorous. In most cases, it eats small fish or marine crustaceans. Fish species comprise the most important part of the diet, and account for more of the biomass consumed than any other food. Prey species are immature (1–2-year-old) shoaling species such as herring, cod, sandlances, and capelin. Among the marine crustaceans eaten are amphipods, crabs and krill. Sometimes, these birds also eat molluscs, marine worms, or berries, and on their northern breeding grounds, insects.
Arctic terns sometimes dip down to the surface of the water to catch prey close to the surface. They may also chase insects in the air when breeding. It is also thought that Arctic terns may, in spite of their small size, occasionally engage in kleptoparasitism by swooping at birds so as to startle them into releasing their catches. Several species are targeted—conspecifics, other terns (like the common tern), and some auk and grebe species.
While nesting, Arctic terns are vulnerable to predation by cats and other animals. Besides being a competitor for nesting sites, the larger herring gull steals eggs and hatchlings. Camouflaged eggs help prevent this, as do isolated nesting sites. Scientists have experimented with bamboo canes erected around tern nests. Although they found fewer predation attempts in the caned areas than in the control areas, canes did not reduce the probability of predation success per attempt. While feeding, skuas, gulls, and other tern species will often harass the birds and steal their food.
Conservation status
The total population of the Arctic tern is estimated at more than two million individuals, with more than half of the population in Europe. The breeding range is very large, and although the population is considered to be decreasing, this species is evaluated as a species of least concern by the IUCN.
Arctic terns are among the species to which the Agreement on the Conservation of African-Eurasian Migratory Waterbirds applies.
The population in New England was reduced in the late nineteenth-century because of hunting for the millinery trade. Exploitation continues in western Greenland, where the population of the species has been reduced greatly since 1950. In Iceland, the Arctic tern has been regionally uplisted to Vulnerable as of 2018, due to the crash of sandeel (Ammodytes spp.) stocks.
At the southern part of their range, the Arctic tern has been declining in numbers, largely because of a lack of food. However, most of these birds' range is extremely remote, with no apparent trend in the species as a whole. The Arctic tern's dispersal pattern is affected by changing climatic conditions, and its ability to feed in its Antarctic wintering areas is dependent on sea-ice cover, but unlike breeding species, it is able to move to a different area if necessary, and can be used as a control to investigate the effect of climate change on breeding species.
Cultural depictions
The Arctic tern has appeared on the postage stamps of several countries and dependent territories. The territories include Åland, Alderney, and Faroe Islands. Countries include Canada, Finland, Iceland, and Cuba.
The Arctic tern was featured prominently in a sketch on the improv comedy television show Whose Line Is It Anyway? involving Colin Mochrie and Ryan Stiles.
| Biology and health sciences | Charadriiformes | Animals |
239807 | https://en.wikipedia.org/wiki/Windlass | Windlass | The windlass is an apparatus for moving heavy weights. Typically, a windlass consists of a horizontal cylinder (barrel), which is rotated by the turn of a crank or belt. A winch is affixed to one or both ends, and a cable or rope is wound around the winch, pulling a weight attached to the opposite end. The Greek scientist Archimedes was the inventor of the windlass. A surviving medieval windlass, dated to –1400, is in the Church of St Mary and All Saints, Chesterfield. The oldest depiction of a windlass for raising water can be found in the Book of Agriculture published in 1313 by the Chinese official Wang Zhen of the Yuan Dynasty ( 1290–1333).
Uses
Vitruvius, a military engineer writing about 28 BC, defined a machine as "a combination of timber fastened together, chiefly efficacious in moving great weights." About a century later, Hero of Alexandria summarized the practice of his day by naming the "five simple machines" for "moving a given weight by a given force" as the lever, windlass, screw for power, wedge, and tackle block (pulley). Until nearly the end of the nineteenth century it was held that these "five mechanical powers" were the building blocks from which all more complex assemblages were constructed.
During the Middle Ages the windlass was used to raise materials for the construction of buildings such as in Chesterfield's crooked spire church.
A windlass cocking mechanism on crossbows was used as early as 1215 in England, and most European crossbows had one by the Late Middle Ages.
Windlasses are sometimes used on boats to raise the anchor as an alternative to a vertical capstan (see anchor windlass).
The handle used to open locks on the UK's inland waterways is called a windlass.
A windlass can be used to raise water from a well. The oldest description of a well windlass, a rotating wooden rod installed across the mouth of a well, is found in Isidore of Seville's ( 560–636) Origines (XX, 15, 1–3).
Windlasses have also been used in gold mining. A windlass would be constructed above a shaft, allowing heavy buckets to be hauled up to the surface. This method was used until the shaft reached a depth of about 40 metres, at which point the windlass would be replaced by a 'whip' or a 'whim'.
Differential windlass
In a differential windlass, also called a Chinese windlass, there are two coaxial drums of different radii r and r′. The rope is wound onto one drum while it unwinds from the other, with a movable pulley hanging in the bight between the drums. Each turn of the crank winds on a length 2πr of rope while paying out 2πr′, so the pulley and attached weight rise by only half the difference, π(r − r′); by making the two radii nearly equal, very large mechanical advantages can be obtained.
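The geometry above can be sketched numerically. A minimal Python sketch (function and parameter names are illustrative; friction and rope thickness are ignored):

```python
import math

def lift_per_turn(r_large, r_small):
    """Net rise of the movable pulley (and load) per full crank turn.

    The rope winds onto the large drum at 2*pi*r_large per turn and pays
    off the small drum at 2*pi*r_small; the pulley takes up half the
    difference in rope length.
    """
    return math.pi * (r_large - r_small)

def mechanical_advantage(crank_arm, r_large, r_small):
    """Ideal mechanical advantage: distance the crank handle travels in
    one turn (2*pi*crank_arm) divided by the distance the load rises."""
    return (2 * math.pi * crank_arm) / lift_per_turn(r_large, r_small)

# Example: a 0.3 m crank driving drums of 0.10 m and 0.09 m radius.
# The load rises only about 3 cm per turn, for an ideal advantage of 60.
```

The example shows why the device was valued: the closer the two drum radii, the smaller the lift per turn and the larger the advantage, without any gearing.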
Spanish windlass
A Spanish windlass is a device for tightening a rope or cable by twisting it using a stick as a lever. The rope or cable is looped around two points so that it is fixed at either end. The stick is inserted into the loop and twisted, tightening the rope and pulling the two points toward each other. It is commonly used to move a heavy object such as a pipe or a post a short distance. It can be an effective device for pulling cars or cattle out of mud. A Spanish windlass is sometimes used to tighten a tourniquet or a straitjacket. A Spanish windlass trap can be used to kill small game. An 1898 report to the US Senate Committee on Foreign Relations about an American vessel captured by a Spanish gunboat described the Spanish windlass as a torture device: a captive's wrists were tied together, and the captor then twisted a stick in the rope until it tightened and caused the man's wrists to swell.
| Technology | Mechanisms | null |
239836 | https://en.wikipedia.org/wiki/Aromaticity | Aromaticity | In organic chemistry, aromaticity is a chemical property describing the way in which a conjugated ring of unsaturated bonds, lone pairs, or empty orbitals exhibits a stabilization stronger than would be expected by the stabilization of conjugation alone. The earliest use of the term was in an article by August Wilhelm Hofmann in 1855. There is no general relationship between aromaticity as a chemical property and the olfactory properties of such compounds.
Aromaticity can also be considered a manifestation of cyclic delocalization and of resonance. This is usually considered to be because electrons are free to cycle around circular arrangements of atoms that are alternately single- and double-bonded to one another. This commonly seen model of aromatic rings, namely the idea that benzene was formed from a six-membered carbon ring with alternating single and double bonds (cyclohexatriene), was developed by Kekulé (see History section below). Each bond may be seen as a hybrid of a single bond and a double bond, every bond in the ring identical to every other. The model for benzene consists of two resonance forms, which corresponds to the double and single bonds superimposing to give rise to six one-and-a-half bonds. Benzene is a more stable molecule than would be expected without accounting for charge delocalization.
Theory
As is standard for resonance diagrams, a double-headed arrow is used to indicate that the two structures are not distinct entities, but merely hypothetical possibilities. Neither is an accurate representation of the actual compound, which is best represented by a hybrid (average) of these structures, which can be seen at right. A C=C bond is shorter than a C−C bond, but benzene is perfectly hexagonal—all six carbon-carbon bonds have the same length, intermediate between that of a single and that of a double bond.
A better representation is that of the circular π bond (Armstrong's inner cycle), in which the electron density is evenly distributed through a π-bond above and below the ring. This model more correctly represents the location of electron density within the aromatic ring.
The single bonds are formed with electrons in line between the carbon nuclei — these are called σ-bonds. Double bonds consist of a σ-bond and a π-bond. The π-bonds are formed from overlap of atomic p-orbitals above and below the plane of the ring. The following diagram shows the positions of these p-orbitals:
Since they are out of the plane of the atoms, these orbitals can interact with each other freely, and become delocalized. This means that, instead of being tied to one atom of carbon, each electron is shared by all six in the ring. Thus, there are not enough electrons to form double bonds on all the carbon atoms, but the "extra" electrons strengthen all of the bonds on the ring equally. The resulting molecular orbital has π symmetry.
History
Etymology
The first known use of the word "aromatic" as a chemical term — namely, to apply to compounds that contain the phenyl radical — occurs in an article by August Wilhelm Hofmann in 1855. If this is indeed the earliest introduction of the term, it is curious that Hofmann says nothing about why he introduced an adjective indicating olfactory character to apply to a group of chemical substances only some of which have notable aromas. Also, many of the most odoriferous organic substances known are terpenes, which are not aromatic in the chemical sense. But terpenes and benzenoid substances do have a chemical characteristic in common, namely higher unsaturation indices than many aliphatic compounds, and Hofmann may not have been making a distinction between the two categories.
The structure of the benzene ring
In the 19th century, chemists found it puzzling that benzene could be so unreactive toward addition reactions, given its presumed high degree of unsaturation. The cyclohexatriene structure for benzene was first proposed by August Kekulé in 1865. Over the next few decades, most chemists readily accepted this structure, since it accounted for most of the known isomeric relationships of aromatic chemistry.
Between 1897 and 1906, J. J. Thomson, the discoverer of the electron, proposed three equivalent electrons between each pair of carbon atoms in benzene.
An explanation for the exceptional stability of benzene is conventionally attributed to Sir Robert Robinson, who was apparently the first (in 1925) to coin the term aromatic sextet as a group of six electrons that resists disruption.
In fact, this concept can be traced further back, via Ernest Crocker in 1922, to Henry Edward Armstrong, who in 1890 wrote "the (six) centric affinities act within a cycle ... benzene may be represented by a double ring (sic) ... and when an additive compound is formed, the inner cycle of affinity suffers disruption, the contiguous carbon-atoms to which nothing has been attached of necessity acquire the ethylenic condition".
Here, Armstrong is describing at least four modern concepts. First, his "affinity" is better known nowadays as the electron, which was to be discovered only seven years later by J. J. Thomson. Second, he is describing electrophilic aromatic substitution, proceeding (third) through a Wheland intermediate, in which (fourth) the conjugation of the ring is broken. He introduced the symbol C centered on the ring as a shorthand for the inner cycle, thus anticipating Erich Clar's notation. It is argued that he also anticipated the nature of wave mechanics, since he recognized that his affinities had direction, not merely being point particles, and collectively having a distribution that could be altered by introducing substituents onto the benzene ring (much as the distribution of the electric charge in a body is altered by bringing it near to another body).
The quantum mechanical origins of this stability, or aromaticity, were first modelled by Hückel in 1931. He was the first to separate the bonding electrons into sigma and pi electrons.
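Hückel's π-only treatment is simple enough to sketch: for a monocyclic ring of n conjugated atoms, the π MO energies come out analytically as E_k = α + 2β cos(2πk/n). A minimal Python sketch (function names are illustrative; α = 0 and β = −1 are taken in arbitrary units, with β negative as usual):

```python
import math

def huckel_ring_energies(n, alpha=0.0, beta=-1.0):
    """Pi MO energies of an n-membered conjugated ring in simple
    Hückel theory: E_k = alpha + 2*beta*cos(2*pi*k/n), k = 0..n-1."""
    return sorted(alpha + 2 * beta * math.cos(2 * math.pi * k / n)
                  for k in range(n))

def total_pi_energy(n, n_electrons, alpha=0.0, beta=-1.0):
    """Fill the lowest MOs with two electrons each (closed shell assumed)."""
    energy, remaining = 0.0, n_electrons
    for level in huckel_ring_energies(n, alpha, beta):
        take = min(2, remaining)
        energy += take * level
        remaining -= take
    return energy
```

For benzene (n = 6, six π electrons) this gives a total π energy of 6α + 8β, which is 2β lower than three isolated double bonds (6α + 6β): the Hückel estimate of the delocalization energy.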
Characteristics of aromatic (aryl) compounds
An aromatic (or aryl) compound contains a set of covalently bound atoms with specific characteristics:
A delocalized conjugated π system, most commonly an arrangement of alternating single and double bonds
Coplanar structure, with all the contributing atoms in the same plane
Contributing atoms arranged in one or more rings
A number of π delocalized electrons that is even, but not a multiple of 4. That is, 4n + 2 number of π electrons, where n=0, 1, 2, 3, and so on. This is known as Hückel's Rule.
Whereas benzene is aromatic (6 electrons, from 3 double bonds), cyclobutadiene is not, since the number of π delocalized electrons is 4, a multiple of 4. The cyclobutadienide (2−) ion, however, is aromatic (6 electrons). An atom in an aromatic system can have other electrons that are not part of the system, which are therefore ignored for the 4n + 2 rule. In furan, the oxygen atom is sp² hybridized: one lone pair is in the π system and the other in the plane of the ring (analogous to the C–H bonds at the other positions). There are 6 π electrons, so furan is aromatic.
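The electron-count test reduces to simple arithmetic. A minimal sketch of Hückel's rule as a predicate (the function name is illustrative; it checks only the count, not planarity or conjugation, which must hold separately):

```python
def satisfies_huckel_rule(pi_electrons):
    """Hückel's rule: a planar, cyclic, fully conjugated system is
    aromatic when it holds 4n + 2 pi electrons for some n = 0, 1, 2, ...

    Equivalently, the count is even but not a multiple of 4.
    """
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

# benzene (6), furan (6), cyclobutadienide (6): pass
# cyclobutadiene (4), planar cyclooctatetraene (8): fail
```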
Aromatic molecules typically display enhanced chemical stability, compared to similar non-aromatic molecules. A molecule that can be aromatic will tend to alter its electronic or conformational structure to be in this situation. This extra stability changes the chemistry of the molecule. Aromatic compounds undergo electrophilic aromatic substitution and nucleophilic aromatic substitution reactions, but not electrophilic addition reactions as happens with carbon-carbon double bonds.
Many of the earliest-known examples of aromatic compounds, such as benzene and toluene, have distinctive pleasant smells. This property led to the term "aromatic" for this class of compounds, and hence the term "aromaticity" for the eventually discovered electronic property.
The circulating π electrons in an aromatic molecule produce ring currents that oppose the applied magnetic field in NMR. The NMR signal of protons in the plane of an aromatic ring are shifted substantially further down-field than those on non-aromatic sp² carbons. This is an important way of detecting aromaticity. By the same mechanism, the signals of protons located near the ring axis are shifted up-field.
Aromatic molecules are able to interact with each other in so-called π–π stacking: the π systems of two parallel rings overlap in a "face-to-face" orientation. Aromatic molecules are also able to interact with each other in an "edge-to-face" orientation: the slight positive charge of the substituents on the ring atoms of one molecule is attracted to the slight negative charge of the aromatic system on another molecule.
Planar monocyclic molecules containing 4n π electrons are called antiaromatic and are, in general, destabilized. Molecules that could be antiaromatic will tend to alter their electronic or conformational structure to avoid this situation, thereby becoming non-aromatic. For example, cyclooctatetraene (COT) distorts itself out of planarity, breaking π overlap between adjacent double bonds. Relatively recently, cyclobutadiene was discovered to adopt an asymmetric, rectangular configuration in which single and double bonds indeed alternate; there is no resonance and the single bonds are markedly longer than the double bonds, reducing unfavorable p-orbital overlap. This reduction of symmetry lifts the degeneracy of the two formerly non-bonding molecular orbitals, which by Hund's rule forces the two unpaired electrons into a new, weakly bonding orbital (and also creates a weakly antibonding orbital). Hence, cyclobutadiene is non-aromatic; the strain of the asymmetric configuration outweighs the anti-aromatic destabilization that would afflict the symmetric, square configuration.
Importance of aromatic compounds
Aromatic compounds play key roles in the biochemistry of all living things. The four aromatic amino acids histidine, phenylalanine, tryptophan, and tyrosine each serve as one of the 20 basic building-blocks of proteins. Further, all five nucleotide bases (adenine, thymine, cytosine, guanine, and uracil) that make up the sequence of the genetic code in DNA and RNA are aromatic purines or pyrimidines. The molecule heme contains an aromatic system with 22 π electrons. Chlorophyll also has a similar aromatic system.
Aromatic compounds are important in industry. Key aromatic hydrocarbons of commercial interest are benzene, toluene, ortho-xylene and para-xylene. About 35 million tonnes are produced worldwide every year. They are extracted from complex mixtures obtained by the refining of oil or by distillation of coal tar, and are used to produce a range of important chemicals and polymers, including styrene, phenol, aniline, polyester and nylon.
Types of aromatic compounds
The overwhelming majority of aromatic compounds are compounds of carbon, but they need not be hydrocarbons.
Neutral homocyclics
Benzene, as well as most other annulenes (cyclodecapentaene excepted) with the formula CnHn where n ≥ 4 and is an even number, such as cyclotetradecaheptaene.
Heterocyclics
In heterocyclic aromatics (heteroaromats), one or more of the atoms in the aromatic ring is of an element other than carbon. This can lessen the ring's aromaticity, and thus (as in the case of furan) increase its reactivity. Other examples include pyridine, pyrazine, imidazole, pyrazole, oxazole, thiophene, and their benzannulated analogs (benzimidazole, for example).
Polycyclics
Polycyclic aromatic hydrocarbons are molecules containing two or more simple aromatic rings fused together by sharing two neighboring carbon atoms (see also simple aromatic rings). Examples are naphthalene, anthracene, and phenanthrene.
Substituted aromatics
Many chemical compounds are aromatic rings with other functional groups attached. Examples include trinitrotoluene (TNT), acetylsalicylic acid (aspirin), paracetamol, and the nucleotides of DNA.
Atypical aromatic compounds
Aromaticity is found in ions as well: the cyclopropenyl cation (2e system), the cyclopentadienyl anion (6e system), the tropylium ion (6e), and the cyclooctatetraene dianion (10e). Aromatic properties have been attributed to non-benzenoid compounds such as tropone. Aromatic properties are tested to the limit in a class of compounds called cyclophanes.
A special case of aromaticity is found in homoaromaticity where conjugation is interrupted by a single sp³ hybridized carbon atom.
When carbon in benzene is replaced by other elements in borabenzene, silabenzene, germanabenzene, stannabenzene, phosphorine or pyrylium salts, the aromaticity is still retained. Aromaticity also occurs in compounds that are not carbon-based at all. Inorganic 6-membered-ring compounds analogous to benzene have been synthesized. Hexasilabenzene (Si6H6) and borazine (B3N3H6) are structurally analogous to benzene, with the carbon atoms replaced by another element or elements. In borazine, the boron and nitrogen atoms alternate around the ring. Quite recently, the aromaticity of planar Si5(6−) rings occurring in the Zintl phase Li12Si7 was experimentally evidenced by 7Li solid-state NMR.
Metal aromaticity is believed to exist in certain metal clusters of aluminium.
Möbius aromaticity occurs when a cyclic system of molecular orbitals, formed from pπ atomic orbitals and populated in a closed shell by 4n (n is an integer) electrons, is given a single half-twist to correspond to a Möbius strip. A π system with 4n electrons in a flat (non-twisted) ring would be anti-aromatic, and therefore highly unstable, due to the symmetry of the combinations of p atomic orbitals. By twisting the ring, the symmetry of the system changes and becomes allowed (see also Möbius–Hückel concept for details). Because the twist can be left-handed or right-handed, the resulting Möbius aromatics are dissymmetric or chiral.
As of 2012, there is no proof that a Möbius aromatic molecule has been synthesized.
Aromatics with two half-twists corresponding to the paradromic topologies were first suggested by Johann Listing. In carbo-benzene the ring bonds are extended with alkyne and allene groups.
Y-aromaticity
Y-aromaticity is a concept which was developed to explain the extraordinary stability and high basicity of the guanidinium cation. Guanidinium does not have a ring structure but has six π-electrons which are delocalized over the molecule. However, this concept is controversial and some authors have stressed different effects.
| Physical sciences | Aromatic hydrocarbons | Chemistry |
239860 | https://en.wikipedia.org/wiki/Macaque | Macaque | The macaques () constitute a genus (Macaca) of gregarious Old World monkeys of the subfamily Cercopithecinae. The 23 species of macaques inhabit ranges throughout Asia, North Africa, and (in Gibraltar) Europe. Macaques are principally frugivorous (preferring fruit), although their diet also includes seeds, leaves, flowers, and tree bark. Some species such as the long-tailed macaque (M. fascicularis; also called the crab-eating macaque) will supplement their diets with small amounts of meat from shellfish, insects, and small mammals. On average, a southern pig-tailed macaque (M. nemestrina) in Malaysia eats about 70 large rats each year. All macaque social groups are arranged around dominant matriarchs.
Macaques are found in a variety of habitats throughout the Asian continent and are highly adaptable. Certain species are synanthropic, having learned to live alongside humans, but they have become problematic in urban areas in Southeast Asia and are not suitable to live with, as they can carry transmittable diseases.
Most macaque species are listed as vulnerable to critically endangered on the International Union of the Conservation of Nature (IUCN) Red List.
Description
Aside from humans (genus Homo), the macaques are the most widespread primate genus, ranging from Japan to the Indian subcontinent, and in the case of the Barbary macaque (Macaca sylvanus), to North Africa and Southern Europe. Twenty-three macaque species are currently recognized. Macaques are robust primates whose arms and legs are about the same in length. The fur of these animals is typically varying shades of brown or black and their muzzles are rounded in profile with nostrils on the upper surface. The tail varies among each species, which can be long, moderate, short or totally absent. Although several species lack tails, and their common names refer to them as apes, these are true monkeys, with no greater relationship to the true apes than any other Old World monkeys. Instead, this comes from an earlier definition of 'ape' that included primates generally.
In some species, skin folds join the second through fifth toes, almost reaching the first metatarsal joint. The monkey's size differs depending on sex and species. Males from all species can range from 41 to 70 cm (16 to 28 inches) in head and body length, and in weight from 5.5 to 18 kg (12.1 to 39.7 lb). Females can range in weight from 2.4 to 13 kg (5.3 to 28.7 lb). These primates live in troops that vary in size, where males dominate; however, the order of dominance frequently shifts. Female dominance lasts longer and depends upon their genealogical position. Macaques are able to swim, spend most of their time on the ground, and spend some time in trees. They have large pouches in their cheeks where they carry extra food. They are considered highly intelligent and are often used in the medical field for experimentation due to their remarkable similarity to humans in emotional and cognitive development. Extensive experimentation has led to the long-tailed macaque being listed as endangered.
Distribution and habitat
Macaques are highly adaptable to different habitats and climates and can tolerate a wide fluctuation of temperatures and live in varying landscape settings. They easily adapt to human-built environments and can survive well in urban settings if they are able to obtain food. They can also survive in completely natural settings absent of humans.
The ecological and geographic ranges of the macaque are the widest of any non-human primate. Their habitats include the tropical rainforests of Southeast Asia, Sri Lanka, India, arid mountains of Pakistan and Afghanistan, and temperate mountains in Algeria, Japan, China, Morocco, and Nepal. Some species also inhabit villages and towns in cities in Asia. There is also an introduced population of rhesus macaques in the US state of Florida consisting, essentially, of monkeys abandoned when a failed boat ride-safari was shut down in the mid-20th century.
A probable Early Pliocene macaque molar from the Red Crag Formation (Waldringfield, United Kingdom), represents one of the oldest and northernmost records of the genus in Europe reported to date.
Ecology and behavior
Diet
Macaques are mainly frugivorous, although some species have been observed feeding on insects. In natural habitats, they have been observed to consume certain parts of over one hundred species of plants including the buds, fruit, young leaves, bark, roots, and flowers. When macaques live amongst people, they raid agricultural crops such as wheat, rice, or sugarcane; and garden crops like tomatoes, bananas, melons, mangos, or papayas. In human settings, they also rely heavily on direct handouts from people. This includes peanuts, rice, legumes, or even prepared food.
Group structure
Macaques live in established social groups that can range from a few individuals to several hundred, as they are social animals. A typical social group possesses between 20 and 50 individuals of all ages and of both sexes. The typical composition consists of 15% adult males, 35% adult females, 20% infants, and 30% juveniles, though variation exists in the structure and size of groups across populations.
Macaques have a very intricate social structure and hierarchy, with different classifications of despotism depending on species. If a macaque of a lower level in the social chain has eaten berries and none are left for a higher-ranking macaque, then the one higher in status can, within this social organization, remove the berries from the other monkey's mouth.
Reproduction and mortality
The reproductive potential of each species differs. Populations of the rhesus macaque can grow at rates of 10% to 15% per year if the environmental conditions are favorable. However, some forest-dwelling species are endangered with much lower reproductive rates. After one year of age, macaques move from being dependent on their mother during infancy, to the juvenile stage, where they begin to associate more with other juveniles through rough tumble and playing activities. They sexually mature between three and five years of age. Females will usually stay with the social group in which they were born; however, young adult males tend to disperse and attempt to enter other social groups. Not all males succeed in joining other groups and may become solitary, attempting to join other social groups for many years. Macaques have a typical lifespan of 20 to 30 years.
As invasive species
Certain species under the genus Macaca have become invasive in certain parts of the world, while others that survive in forest habitats remain threatened. The long-tailed macaque (M. fascicularis) is listed as a threat and invasive alien species in Mauritius, along with the rhesus macaques (M. mulatta) in Florida. Despite this, the former is listed as endangered.
The long-tailed macaque causes severe damage to parts of its range where it has been introduced because the populations grow unchecked due to a lack of predators. On the island of Mauritius, they have created serious conservation concerns for other endemic species. They consume seeds of native plants and aid in the spread of exotic weeds throughout the forests. This changes the composition of the habitats and allows them to be rapidly overrun by invasive plants.
Long-tailed macaques are also responsible for the near extinction of several bird species on Mauritius by destroying the nests of the birds as they move through their native ranges and eat the eggs of critically endangered species, such as the pink pigeon and Mauritian green parrot. They can be serious agricultural pests because they raid crops and gardens and humans often shoot the monkeys which can eliminate entire local populations.
In Florida, a group of rhesus macaques inhabit Silver Springs State Park. Humans often feed them, which may alter their movement and keep them close to the river on weekends where high human traffic is present. The monkeys can become aggressive toward humans (largely due to human ignorance of macaque behavior), and also carry potentially fatal human diseases, including the herpes B virus.
Relations with humans
Several species of macaque are used extensively in animal testing, particularly in the neuroscience of visual perception and the visual system.
Nearly all (73–100%) captive rhesus macaques are carriers of the herpes B virus. This virus is harmless to macaques, but infections of humans, while rare, are potentially fatal, a risk that makes macaques unsuitable as pets.
Urban performing macaques also carried simian foamy virus, suggesting they could be involved in the species-to-species jump of similar retroviruses to humans.
Population control
Management techniques have historically been controversial, and public disapproval can hinder control efforts; previous efforts to remove macaque individuals were met with public resistance. One management strategy currently being explored is sterilization, and scientific studies are informing natural resource managers' use of it. Models suggest the strategy can keep populations in check: for example, sterilizing 80% of females every five years, or 50% every two years, could effectively reduce the population. Other control strategies include planting specific trees to protect native birds from macaque predation, live trapping, and the vaccine porcine zona pellucida (PZP), which causes infertility in females.
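The logic behind such sterilization thresholds can be illustrated with a toy projection in which annual growth applies only to the unsterilized share of breeding females. This is a deliberately simplified sketch, not field data: the function name, the 12% default growth rate (within the 10–15% range reported for rhesus macaques above), and the proportional effect of sterilization are all illustrative assumptions, and mortality is folded into the net growth rate.

```python
def project_population(pop, years, growth=0.12, sterilized_frac=0.0):
    """Toy projection of a macaque population.

    Each year the population grows by `growth`, scaled down by the
    fraction of breeding females that have been sterilized. With
    sterilized_frac = 0 the population compounds unchecked; with
    sterilized_frac = 1 net growth stops entirely.
    """
    for _ in range(years):
        pop *= 1 + growth * (1 - sterilized_frac)
    return pop

# Example: 100 animals over 10 years, with and without sterilizing
# half of the females. The gap between the two trajectories is the
# point of the strategy, even under these crude assumptions.
unchecked = project_population(100, 10)
managed = project_population(100, 10, sterilized_frac=0.5)
```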
Cloning
In January 2018, scientists in China reported in the journal Cell the first creation of two crab-eating macaque clones, named Zhong Zhong and Hua Hua, using somatic cell nuclear transfer – the same method that produced Dolly the sheep.
Species
Prehistoric (fossil) species
M. anderssoni Schlosser, 1924
M. jiangchuanensis Pan et al., 1992
M. libyca Stromer, 1920
M. majori Schaub & Azzaroli in Comaschi Caria, 1969 (sometimes included in M. sylvanus)
M. florentina Cocchi, 1872
Flush toilet
A flush toilet (also known as a flushing toilet or water closet (WC); see also toilet names) is a toilet that disposes of human waste (i.e., urine and feces) by collecting it in a bowl and then using the force of water to channel it ("flush" it) through a drainpipe to another location for treatment, either nearby or at a communal facility. Flush toilets can be designed for sitting or squatting (often regionally differentiated). Most modern sewage treatment systems are also designed to process specially designed toilet paper, and there is increasing interest in flushable wet wipes. Porcelain (sometimes vitreous china) is a popular material for these toilets, although public or institutional ones may be made of metal or other materials.
Flush toilets are a type of plumbing fixture, and usually incorporate a bend called a trap (S-, U-, J-, or P-shaped) that causes water to collect in the toilet bowl – to hold the waste and act as a seal against noxious sewer gases. Urban and suburban flush toilets are connected to a sewerage system that conveys wastewater to a sewage treatment plant; rurally, a septic tank or composting system is mostly used.
The opposite of a flush toilet is a dry toilet, which uses no water for flushing. Associated devices are urinals, which primarily dispose of urine, and bidets, which use water to cleanse the anus, perineum, and vulva after using the toilet.
Operation
A typical flush toilet is a fixed, vitreous ceramic bowl (also known as a pan) which is connected to a drain. After use, the bowl is emptied and cleaned by the rapid flow of water into the bowl. This flush may flow from a dedicated tank (cistern), a high-pressure water pipe controlled by a flush valve, or by manually pouring water into the bowl. Tanks and valves are normally operated by the user, by pressing a button, pushing down on a handle, pulling a lever or pulling a chain. The water is directed around the bowl by a molded flushing rim around the top of the bowl or by one or more jets, so that the entire internal surface of the bowl is rinsed with water.
Mechanical flush from a cistern
A typical toilet has a tank fixed above the bowl which contains a fixed volume of water, and two devices. The first allows much of the contents of the tank to be discharged rapidly into the toilet bowl when the user operates the flush, causing the contents of the bowl to be swept or sucked out of the toilet and into the drain. The second automatically allows water to enter the tank until the water level is appropriate for a flush.
The water may be discharged through a "toilet flapper valve" (not to be confused with a type of check valve), or through a siphon. A float usually controls the refilling device.
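The interplay of the two devices can be sketched as a minimal state model. The volumes here are hypothetical, and a real cistern meters the flush mechanically (flapper or siphon plus float valve) rather than in software; the sketch only shows the cycle of rapid discharge followed by float-limited refill.

```python
# Minimal sketch of the two cistern devices described above: a flush
# device that rapidly empties the tank into the bowl, and a float-
# controlled fill valve that refills it. Volumes are hypothetical.

class Cistern:
    def __init__(self, capacity_litres):
        self.capacity = capacity_litres
        self.level = capacity_litres  # starts full, ready to flush

    def flush(self):
        """First device: discharge the tank contents into the bowl."""
        discharged = self.level
        self.level = 0.0
        return discharged

    def refill(self, inflow_litres):
        """Second device: the fill valve admits water until the float
        reaches the level needed for the next flush, then shuts off."""
        self.level = min(self.capacity, self.level + inflow_litres)
        return self.level

tank = Cistern(6.0)
released = tank.flush()   # entire tank contents swept into the bowl
tank.refill(10.0)         # float shuts the valve once capacity is reached
```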
Mechanical flush from a high-pressure water supply
Toilets without cisterns are often flushed through a simple flush valve or "Flushometer" connected directly to the water supply. These are designed to rapidly discharge a limited volume of water when the lever or button is pressed then released.
Manual flush (pour flush)
A toilet need not be connected to a water supply; it may be pour-flushed. This type of flush toilet has no cistern or permanent water supply, but is flushed by pouring in a few litres of water from a container, which can be considerably less water than a cistern flush uses. This type of toilet is common in many Asian countries. The toilet can be connected to one or two pits, in which case it is called a "pour flush pit latrine" or a "twin pit pour flush pit latrine". It can also be connected to a septic tank.
Vacuum toilet
A vacuum toilet is a flush toilet that is connected to a vacuum sewer system, and removes waste by suction. They may use very little water per flush, or none at all. Some flush with coloured disinfectant solution rather than with water. They may be used to separate blackwater and greywater, and process them separately (for instance, the fairly dry blackwater can be used for biogas production, or in a composting toilet).
Passenger train toilets, aircraft lavatories, bus toilets, and ships with plumbing often use vacuum toilets. The lower water usage saves weight and avoids water slopping out of the toilet bowl in motion. Aboard vehicles, a portable collection chamber is used; if it is filled by positive pressure from an intermediate vacuum chamber, it need not be kept under vacuum.
Flushing systems
Flushing systems provide a large flow of water into the bowl. They normally take the form of either fixed tanks of water or flush valves.
Flush tanks
Flush tanks or cisterns usually incorporate a mechanism to release water from the tank and an automatic valve to allow the cistern to be refilled automatically.
This system is suitable for locations plumbed with water pipes which cannot supply water quickly enough to flush the toilet; the tank is needed to supply a large volume of water in a short time. The tank collects a full flush volume of water over a period of time. In modern installations the storage tank is usually mounted directly above and behind the bowl.
Older installations, known as "high suite combinations", used a high-level cistern (tank), fitted above head height, activated by a pull chain connected to a flush lever on the cistern. When more modern close-coupled cistern and bowl combinations were introduced, they were referred to as "low suite combinations". Modern versions have a neater-looking low-level cistern with a lever that the user can reach directly, or a close-coupled cistern that is even lower down and fixed directly to the bowl. In recent decades the close-coupled tank–bowl combination has become the most popular residential system, as ceramic engineers have found that improved waterway design is a more effective way to enhance the bowl's flushing action than high tank mounting.
Tank fill valve
Tank fill valves are found in all tank-style toilets. The valves are of two main designs: the side-float design and the concentric-float design. The side-float design has existed for over a hundred years. The concentric design has only existed since 1957 but is gradually becoming more popular than the side-float design.
The side-float design uses a float on the end of a lever to control the fill valve. The float is usually shaped like a ball, so the mechanism is often called a ball-valve or a ballcock (cock in this context is an alternative term for valve; see, for example, stopcock). Historically floats were made from copper sheet, but are now usually plastic. The float is located to one side of the main valve tower or inlet at the end of a rod or arm. As the float rises, so does the float-arm. The arm connects to the fill valve that shuts off the inflow of water when the float reaches a level at which the volume of water in the tank is sufficient to provide another flush.
The newer concentric-float fill valve consists of a tower which is encircled by a plastic float assembly. Operation is otherwise the same as a side-float fill valve, even though the float position is somewhat different. By virtue of its more compact layout, interference between the float and other obstacles (tank insulation, flush valve, and so on) is greatly reduced, thus increasing reliability. The concentric-float fill valve is also designed to signal to users automatically when there is a leak in the tank, by making much more noise when a leak is present than the older style side-float fill valve, which tends to be nearly silent when a slow leak is present.
Newer fill valves have a delayed action: they do not start filling the tank/cistern until the flapper/drop valve has closed, which saves some water.
Flapper-flush valve or drop valve
In tanks using a flapper-flush valve, the outlet at the bottom of the tank is covered by a buoyant (plastic or rubber) cover, or flapper, which is held in place against a fitting (the flush valve seat) by water pressure. The user pushes a lever to flush the toilet, which lifts the flush valve from the valve seat. The valve then floats clear of the seat, allowing the tank to empty quickly into the bowl. As the water level drops, the floating flush valve descends back to the bottom of the tank and covers the outlet pipe again. This system is common in homes in North America and in continental Europe. From 2001, due to a change in regulations, this flush system has also become available in the UK, where prior to that the siphon-type flush was mandated.
Dual flush versions of this design with push buttons are widely available. They have one level of water for liquid waste and a higher level for solid waste.
In North America, newer toilets have a larger flapper-flush valve than older toilets. The larger flapper-flush valve discharges water faster and is used on toilets designed to flush with less water. Some have a bell inlet for a faster, more effective flush.
A problem with the valve-type flush mechanism is that it invariably starts to leak after a few years' use, due to wear of the valve seal or particles trapped in the valve. Quite often this leakage is barely noticeable, but it adds up to considerable water wastage. In the UK it has been found that between 5 and 8% of toilets (mostly those with dual flush drop valves) are leaking. Whilst these mechanisms save more water than they leak, regular maintenance or use of a non-leaking flush mechanism will maximise water savings.
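The scale of such leakage can be estimated with simple arithmetic. Since the survey's per-toilet leak figure is not reproduced above, the daily leak rate below is an assumed placeholder; only the method is illustrated.

```python
# Back-of-envelope estimate of fleet-wide leak wastage. The per-toilet
# leak rate is an assumed placeholder value, not the survey figure.

def annual_leak_wastage(n_toilets, leaking_fraction, litres_per_day):
    """Total litres wasted per year by the leaking share of a toilet fleet."""
    return n_toilets * leaking_fraction * litres_per_day * 365

# e.g. 1 million installed toilets, 5% of them leaking, at an assumed
# 200 litres/day per leaking toilet:
wasted = annual_leak_wastage(1_000_000, 0.05, 200)  # litres per year
```

Even modest per-toilet leaks compound into billions of litres a year at national scale, which is why regular maintenance matters.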
Siphon-flush mechanism
This system, invented by Albert Giblin and common in the UK, uses a storage tank similar to that used in the flapper-flush-valve system above. This flush valve system is sometimes referred to as a valveless system, since no valve as such is required.
The siphon is formed of a vertical pipe that links the flush pipe to a domed chamber inside the cistern. A perforated disc, covered by a flexible plate or flap, is fitted inside this chamber and is joined by a rod to the flush lever.
Pressing the lever raises the disc, forces water over the top of the siphon into the vertical pipe, and starts the siphonic flow. Water flows through the perforated disc past the flap until the cistern is empty, at which point air enters the siphon and the flush stops.
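As a rough illustration of why a primed siphon empties the cistern quickly, an ideal (lossless) Torricelli-style estimate gives the discharge rate from the pipe bore and the head of water. The dimensions below are assumptions for illustration, not measurements of any particular cistern, and real siphons are slowed by friction losses.

```python
import math

# Idealised siphon discharge estimate: once primed, outflow speed is
# roughly sqrt(2*g*h) for a head h between the cistern water surface
# and the siphon outlet. Dimensions are assumed for illustration.

def siphon_flow_rate(pipe_diameter_m, head_m, g=9.81):
    """Volumetric flow in m^3/s through an ideal (lossless) siphon."""
    area = math.pi * (pipe_diameter_m / 2) ** 2
    velocity = math.sqrt(2 * g * head_m)
    return area * velocity

q = siphon_flow_rate(0.04, 0.3)    # assumed 40 mm pipe, 0.3 m head
seconds_to_empty = 0.006 / q       # time to pass 6 litres (0.006 m^3)
```

With these assumed figures the siphon would pass a full flush in roughly two seconds, consistent with the rapid emptying described above.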
The advantage of a siphon over the flush valve is that it has no sealing washers that can wear out and cause leaks, whereas flapper and drop valves invariably begin to leak after a few years of use, and in practice their water savings are reduced because the valves are seldom maintained. The siphon membrane will require occasional replacement.
Until 1 January 2001, the use of siphon-type cisterns was mandatory in the UK but after that date the regulations additionally allowed pressure flushing cisterns and pressure flushing valves (though the latter remained forbidden in houses). Siphons can sometimes be more difficult to operate than a "flapper"-based flush valve because moving the lever requires more torque than a flapper system. This additional torque is required because a certain amount of water must be lifted up into the siphon passageway in order to initiate the siphon action in the tank. Splitting or jamming of the flexible flap covering the perforated disc can cause the cistern to stop working.
Dual-flush versions of the siphon cistern provide a shorter flush option by allowing air into the siphon to stop the siphon action before the tank is empty.
The siphon system can also be combined with an air box to allow multiple siphons to be installed in a single trough cistern.
High-pressure or pressure-assisted tanks
Pressure-assisted toilets are sometimes found in both private (single, multiple, and lodging) installations as well as light commercial installations (such as offices). Products from several companies use comparatively little water per flush.
The mechanism consists of a plastic tank hidden inside the typical ceramic cistern or an exposed metal tank/cistern.
When the tank fills with water, the air trapped inside compresses. When the air pressure inside the plastic tank reaches a certain level, the tank stops filling with water. A high-pressure valve located in the center of the vessel holds the air and water inside until the user flushes the toilet.
During flushing, the user activates the valve via a button or lever, which releases the pressurized water into the bowl at a flow rate much higher than a conventional gravity-flow toilet. One advantage to this is lower water consumption than a gravity-flow toilet, or more effectiveness with a similar amount of water. As a result, the toilet does not clog as easily as those using non-pressurized mechanisms.
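The compression step can be illustrated with Boyle's law (pressure times volume is constant at fixed temperature). The vessel volumes and supply pressure below are assumed round numbers, not specifications of any real product.

```python
# Sketch of the air compression that powers a pressure-assisted flush,
# using Boyle's law: P1 * V1 = P2 * V2 at constant temperature.
# Vessel dimensions and pressures are assumed values for illustration.

def trapped_air_pressure(p_initial_kpa, v_initial_l, v_final_l):
    """Pressure of the trapped air after incoming water compresses it."""
    return p_initial_kpa * v_initial_l / v_final_l

# A vessel holding 10 L of air at roughly atmospheric pressure (101 kPa),
# squeezed to 4 L as mains water fills the tank:
p_final = trapped_air_pressure(101.0, 10.0, 4.0)
```

The compressed air then drives the stored water into the bowl far faster than gravity alone would, which is the source of the vigorous (and noisy) flush described above.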
However, there are some financial and safety disadvantages. These toilets are generally more expensive to purchase, and the plastic tanks need to be replaced about every 10 years. They also have a noisier flush than other models. In addition, pressure-assisted tanks have been known to explode, causing serious injuries and property damage, resulting in a massive recall beginning in 2012 of over 1.4 million toilets equipped with the tank.
Some newer toilets use similar pressure-assist technology, along with a bowl and trapway designed to enhance the siphon effect; they use comparatively little water per flush, with even less for the reduced flush on dual-flush models. This design is also much quieter than other pressure-assist or flushometer toilets.
Tipping bucket type valve
A number of tipping-bucket cisterns have been developed. A tipping bucket is placed in the cistern with its axis aligned with, or perpendicular to, the cistern. A lever is rotated to tip the bucket and empty it, which allows a variable flush: the quantity of water is usually marked on the cistern, and depending on the performance of the bowl a dual flush can be achieved. A normal or delayed-action refill valve is used.
Tankless style with high-pressure (flushometer) valve
In 1906, William Sloan first made available his "flushometer" style toilet flush valve, incorporating his patented design. The design proved to be very popular and efficient and remains so to this day. Flushometer toilet flush valves are still often installed in commercial restrooms, and are frequently used for both toilets and urinals. Since they have no tank, they have no fill delay and can be used again immediately. They can be easily identified by their distinctive chrome pipe-work, and by the absence of a toilet tank or cistern, wherever they are employed.
Some flushometer models require the user to either depress a lever or press a button, which in turn opens a flush valve allowing mains-pressure water to flow directly into the toilet bowl or urinal. Other flushometer models are electronically triggered, using an infrared sensor to initiate the flushing process. Typically, on electronically triggered models, an override button is provided in case the user wishes to manually trigger flushing earlier. Some electronically triggered models also incorporate a true mechanical manual override which can be used in the event of the failure of the electronic system. In retrofit installations, a self-contained battery-powered or hard-wired unit can be added to an existing manual flushometer to flush automatically when a user departs.
Once a flushometer valve has been flushed, and after a preset interval, the flushometer mechanism closes the valve and stops the flow. The flushometer system requires no storage tank, but uses a high rate of water flow for a very short time. Thus a 22-mm pipe at minimum, or preferably a 29-mm (1-inch) pipe, must be used, and water main pressure must be sufficiently high. The higher water pressure employed by a flushometer valve scours the bowl more efficiently than a gravity-driven system, and fewer blockages typically occur as a result. Flushometer systems require approximately the same amount of water as a gravity system to operate.
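The need for a wide supply pipe follows from simple flow arithmetic: passing a full flush volume in a few seconds through a narrow bore implies a high water velocity. The flush volume and duration below are assumed round figures for illustration.

```python
import math

# Why flushometers need large supply pipes: delivering a full flush
# in a few seconds implies high mean velocities in the pipe. The flush
# volume and duration are assumed round numbers, not standards.

def required_velocity(volume_l, seconds, pipe_diameter_mm):
    """Mean water velocity (m/s) needed to pass the flush volume in time."""
    area_m2 = math.pi * (pipe_diameter_mm / 1000 / 2) ** 2
    flow_m3s = (volume_l / 1000) / seconds
    return flow_m3s / area_m2

v_22mm = required_velocity(6.0, 4.0, 22)  # narrower pipe: faster water
v_29mm = required_velocity(6.0, 4.0, 29)  # wider pipe: gentler flow
```

The wider pipe cuts the required velocity substantially, which reduces friction losses and lets the valve deliver its short, intense flush from mains pressure alone.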
Bowl design
The "bowl" or "pan" of a toilet is the receptacle that receives bodily waste. A toilet bowl is most often made of a ceramic, but can sometimes be made of stainless steel or composite plastics. Toilet bowls are mounted in any one of three basic manners: above-floor mounted (pedestal), wall mounted (cantilever), or in-floor mounted (squat toilet).
Within the bowl, there are three main waterway design systems: the siphoning trapped system (found primarily in North American residential installations, and in North American light commercial installations), the non-siphoning trapped system (found in most other installations), and the valve-closet system (found in trains, passenger aircraft, buses, and other such installations around the world). Older style toilets called "washout" toilets are now only found in a few locations.
Siphonic bowls
Single trap siphonic toilet
The siphonic toilet, also called "siphon jet" and "siphon wash", is perhaps the most popular design in North America for residential and light commercial toilet installations. All siphonic toilets incorporate an S-shaped waterway.
Standing water in the bowl acts as a barrier to sewer gas coming out of the sewer through the drain, and also as a receptacle for waste. Sewer gas is vented through a separate vent pipe attached to the sewer line. The water in the toilet bowl is connected to the drain by a drainpipe shaped like an extended "S" which curves up behind the bowl and down to the drain. The portion of the channel behind the bowl is arranged as a siphon tube, whose length is greater than the depth of the water in the bowl. The top of the curving tube limits the height of the water in the bowl before it flows down the drain. The waterways in these toilets are designed with slightly smaller diameters than a non-siphonic toilet, so that the waterway will naturally fill up with water each time it is flushed, creating the siphon action.
At the top of the toilet bowl is a rim with many angled drain holes that are fed from the tank, which fill, rinse, and induce swirling in the bowl when it is flushed. Some designs use a large hole in the front of the rim to allow faster filling of the bowl. There may also be a small siphon jet hole in the bottom of the toilet bowl trap.
If the toilet is flushed from a tank, a large holding cistern mounted above the toilet contains the full flush volume of water. This tank has a large-diameter drain hole at its bottom, covered by a flapper valve that allows the water to leave the tank rapidly when the flush is activated. Alternatively, water may be supplied directly via a flush valve or "flushometer".
The rapid influx of water into the bowl causes the standing water in the bowl to rise and fill the S-shaped siphon tube mounted in the back of the toilet. This starts the toilet's siphonic action. The siphon action quickly "pulls" nearly all of the water and waste in the bowl and the on-rushing tank water down the drain in about 4–7 seconds —it flushes. When most of the water has drained out of the bowl, the continuous column of water through the siphon is broken when air enters the siphon tube. The toilet then gives its characteristic gurgle as the siphonic action ceases and no more water flows out of the toilet.
A "true siphonic toilet" can be easily identified by the noise it makes. If it can be heard to suck air down the drain at the end of a flush, then it is a true siphonic toilet. If not, then it is either a double trap siphonic or a non-siphonic toilet.
If water is poured slowly into the bowl it simply flows over the rim of the waterway and pours slowly down the drain—thus the toilet does not flush properly.
After flushing, the flapper valve in the water tank closes or the flush valve shuts; water lines and valves connected to the water supply refill the toilet tank and bowl. Then the toilet is again ready for use.
If the forward ("flush") jet connection to the upper inlet in the toilet clogs, poor or no flushing action may result.
Double trap siphonic toilet
The double trap siphonic toilet is a less common type that is exceptionally quiet when flushed. A device known as an aspirator uses the flow of water in a flush to pull air from the cavity between the two traps, reducing the air pressure inside and creating a siphon which pulls water and waste from the toilet bowl. Towards the end of the flush cycle the aspirator ceases being immersed in water, thus allowing air to enter the cavity between the traps and break the siphon, without the gurgling noise, while the final flush water fills the pan.
Non-siphonic bowls
Washdown toilet
Washdown toilets are the most common form of pedestal and cantilever toilets outside the Americas. The bowl has a large opening at the top which tapers down to a water trap at the base. It is flushed from the top by water discharged through a flushing rim or jets. The force of the water flowing into the bowl washes the waste through the trap and into the drains.
Washdown bowls developed from earlier "hopper" closets, which were simple conical bowls connected to a drain. However, waste is typically excreted towards the back of the toilet, rather than the exact center, and the backs of the hoppers were prone to becoming soiled. The modern washdown bowl has a steeply sloping back and a more gently sloping or curving front, so the water trap is off-center, towards the rear of the toilet. With this "eccentric cone" design, most waste drops into the pool of water at the base of the bowl, rather than onto the surface of the toilet. Early washdown closets had a large water area at the base to minimize soiling, which required a large volume of water to clear them effectively. Modern bowls have a smaller area, which reduces the volume of water needed to flush them; however, that water area is always small compared to the water area of a typical North American siphonic bowl, and this makes the washdown bowl prone to soiling.
Washout toilet
Washout, or Flachspüler ("shallow flush"), toilets have a flat platform holding a shallow pool of water into which waste is deposited, with a trapped drain (usually a P-trap or an S-trap) just behind this pool. When flushed, a jet of water from the back sweeps the waste over into the trap and from there into the sewage system. An advantage of the design is that users will not get splashed from below, and taking stool samples is also simplified. Washout pans were among the first types of ceramic toilets invented: the design was patented in Britain by George Jennings in 1852 and remained the standard toilet type there throughout the 19th century, and it was once predominantly used in Germany, Austria and France. Since the early 1970s washout toilets have been found only in a decreasing number of localities in Europe.
Examples of this type of toilet can be found in Austria, the Czech Republic, Germany, Hungary, the Netherlands, and some regions of Poland, although it is becoming less common.
One disadvantage of this design is that it may require the more intense use of a toilet brush to remove bits of feces that may have left marks on the shelf. Additionally, this design presents the disadvantage of creating a strong lingering odor since the feces are not submerged in water immediately after excretion.
Squat toilet
In many parts of Asia, people traditionally use the toilet in a squatting position. This applies to defecation and urination by males and females. Therefore, homes and public washrooms have squat toilets, with the toilet bowl installed in the floor. This has the advantages of not needing an additional toilet seat and also being more convenient for cultures where people use water to wash their genitals instead of toilet paper. However, Western-style toilets that are mounted at sitting height and have a plastic seat have become popular as well. Many public washrooms have both squatting and sitting toilets.
In Western countries, instructions have been put up in some public toilets used by people accustomed to squat toilets, on the correct use of a sitting-style toilet. This is to avoid breaking the toilet or seat if someone attempts to squat on the edges.
The "Anglo-Indian" design used in India allows the same toilet to be used in the sitting or the squatting position.
Valve closet
The valve closet has a valve or flap at the exit of the bowl, often with a watertight seal to retain a pool of water in the pan. When the toilet is flushed, the valve is opened and the water in the pan flows rapidly out of the bowl into the drains, carrying the waste with it.
The valve closet, the earliest type of flush toilet, is now rarely used for water-flush toilet systems. More complicated in design than other toilets, it has lower reliability and is more difficult to maintain or repair. The most common use for valve closets is now in portable closets for caravans, camping, trains, and aircraft, where the flushing fluid is recycled. This design is also used in train carriages in areas where the waste is allowed to be simply dumped between the tracks (the flushing of such toilets is generally prohibited when the train is in a station).
Simple valve closets are used on most older-style Russian trains, made in East Germany (the design probably dates to the 1950s), employing a pan-like shutter valve at the base of the pan and discharging waste directly onto the trackbed below. Use of this type of toilet is permitted only while the train is moving, and outside of major cities. These designs are being phased out together with the old trains, and are being replaced with modern vacuum systems.
The British singer Ian Wallace composed and performed the humorous song "Never Do It at the Station", which mentioned the old-fashioned trackbed dumping toilets which were still in use during the mid-20th century in Britain. The song first advised frugal travelers to save money by avoiding pay toilets in train stations, but also reminded polite passengers not to use the onboard "loo" while the train was stopped at a station.
Low-flow and high-efficiency flush toilets
Since 1994 there has been a significant move towards using less water for flushing toilets. This has resulted in the emergence of low-flush toilet designs and local or national standards on water consumption for flushing. As an alternative, some people modify an existing high-flush toilet to use less water by placing a brick or water bottle into the toilet's water tank, which displaces part of the flush volume. Other modifications target the water supply itself, such as reusing greywater for flushing, for more efficient water use.
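The brick-or-bottle trick is pure displacement arithmetic: whatever volume the object occupies below the waterline is water the tank no longer releases. The figures below are illustrative assumptions, not recommendations.

```python
# Displacement arithmetic for the brick/bottle tank modification.
# Tank size, displaced volume, and flushes per day are assumed figures.

def reduced_flush_volume(tank_litres, displaced_litres):
    """Flush volume after placing a displacement object in the tank."""
    return tank_litres - displaced_litres

per_flush = reduced_flush_volume(9.0, 1.5)  # litres released per flush
saved_per_year = 1.5 * 5 * 365              # assuming 5 flushes per day
```

Note that shrinking the flush volume this way can push an older bowl below the volume it was designed to clear, which is why purpose-designed low-flush toilets perform better than modified high-flush ones.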
Urine-diversion flush toilets, which were developed in Sweden, save water by using less water, or even none, for the urine flush than for the feces flush.
US standards for new toilets
Pre-1994 residential and pre-1997 commercial flush toilets in the United States typically used several gallons of water per flush. The United States Congress passed the Energy Policy Act of 1992, which mandated that from 1994 common flush toilets use only 1.6 US gallons (6.0 L) per flush. In response to the Act, manufacturers produced low-flow toilets, which many consumers did not like because they often required more than one flush to remove solids. People unhappy with the reduced performance of the low-flow toilets resorted to driving across the border to Canada or Mexico or buying salvaged toilets from older buildings. Manufacturers responded to consumers' complaints by improving the toilets. The improved products are generally identified as high-efficiency toilets or HETs. HETs have an effective flush volume of 1.28 US gallons (4.8 L) or less. They may be single-flush or dual-flush. A dual-flush toilet permits selection between different amounts of water for solid or liquid waste. Some HETs are pressure-assisted (or power, pump, or vacuum assisted).
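For dual-flush toilets, effective flush volume is commonly computed as the average of one full flush and two reduced flushes, reflecting an assumed 2:1 usage ratio of reduced to full flushes. The gallon figures below are illustrative assumptions.

```python
# Effective flush volume of a dual-flush toilet, as commonly defined
# for high-efficiency ratings: the average of one full flush and two
# reduced flushes. The gpf figures are illustrative assumptions.

def effective_flush_volume(full_gpf, reduced_gpf):
    """Weighted average flush volume, assuming reduced flushes are used
    twice as often as full flushes."""
    return (full_gpf + 2 * reduced_gpf) / 3

efv = effective_flush_volume(1.6, 1.1)  # gallons per flush, on average
```

Under this weighting, a 1.6/1.1 gpf dual-flush toilet averages about 1.27 gpf, which is how such designs can qualify as high-efficiency despite a full flush at the older limit.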
The performance of a flush toilet may be rated by a Maximum Performance (MaP) score. The low end of MaP scores is 250 (250 grams of simulated fecal matter). The high end of MaP scores is 1,000. A toilet with a MaP score of 1,000 should provide trouble-free service: it should remove all waste with a single flush; it should not plug; it should not harbor any odor; it should be easy to keep clean. The United States Environmental Protection Agency uses a MaP score of 350 as the minimum performance threshold for HETs. 1.6 gpf (gallons per flush) toilets are also sometimes referred to as Ultra Low Flow (ULF).
Methods used to make up for the inadequacies of low flow toilets include using thinner toilet paper, plungers, and adding extra cups of water to the bowl manually.
Maintenance and hygiene
Clogging
If clogging occurs, it is usually the result of an attempt to flush unsuitable items or too much toilet paper; feces size, which often increases with age, can also be a factor. Clogging can occur spontaneously due to limescale fouling of the drain pipe, or by overloading the stool capacity of the toilet. Stool capacity varies among toilet designs and depends on the size of the drainage pipe, the capacity of the water tank, the velocity of a flush, and the method by which the water vacates the bowl of its contents. The size and consistency of the stool is a hard-to-predict contributory factor.
In some countries, clogging has become more frequent due to regulations that require the use of small-tanked low-flush toilets in an attempt to conserve water. Designs which increase the velocity of flushed water or improve the travel path can improve low-flow reliability.
Partial clogging is particularly insidious, as it is usually not discovered immediately, but only later by an unsuspecting user trying to flush an incompletely emptied toilet. Overflowing of the water mixed with excrement may then occur, depending on the bowl volume, tank capacity and severity of clogging. For this reason, rooms with flush toilets may be designed as wet rooms, with a second drain on the floor, and a shower head capable of reaching the whole floor area. Common means to remedy clogging include use of a toilet plunger, drain cleaner, or a plumber's snake.
Aerosols
A "toilet plume" is the dispersal of microscopic particles into the air as a result of flushing a toilet. Normal use of a toilet by healthy people is considered unlikely to be a major health risk. There is indirect evidence that specific pathogens such as norovirus or SARS coronavirus could potentially be spread through toilet aerosols, but as of 2015 no direct experimental studies had clearly demonstrated or refuted actual disease transmission from toilet aerosols. It has been hypothesized that dispersal of pathogens may be reduced by closing the toilet lid before flushing, and by using toilets with lower flush energy.
History
Pre-modern flush toilet systems
Water-flushed latrines have existed since the Neolithic. Skara Brae in Orkney, the oldest Neolithic village in Britain (dating from circa the 31st century BC), used a form of hydraulic technology for sanitation: the village's design used a stream and a connecting drainage system to wash waste away.
The Mesopotamians introduced the world to clay sewer pipes around 4000 BCE, with the earliest examples found in the Temple of Bel at Nippur and at Eshnunna; they were used to remove wastewater from sites and to capture rainwater in wells. The city of Uruk hosts the earliest known examples of brick-constructed latrines, both squat and pedestal, from 3200 BCE. Clay pipes were later used in the Hittite city of Hattusa. They had easily detachable and replaceable segments, and allowed for cleaning.
Flush toilet systems were constructed in some places by the people of the Indus Civilization, and later by the Egyptians and the Minoan civilization; the Minoans developed flushable pedestal toilets by the second millennium BC, with examples excavated at Knossos and Akrotiri.
Communal latrines were in use throughout the Roman Empire, feeding into either primary or secondary sewers, from the first through fifth centuries AD. Very well-preserved examples are the latrines at Housesteads on Hadrian's Wall in Britain. Such toilets did not flush in the modern sense, but had a continuous stream of running water to wash away waste. With the fall of the Western Roman Empire, these communal toilet systems fell into disuse in Western Europe, though they continued in the Eastern Byzantine Roman Empire, with records of toilet pipes being renewed and sewers repaired.
In February 2023 archeologists in China announced the discovery of the remains of what may be the world's oldest known flush toilet. Broken parts of the 2,400-year-old lavatory, as well as a bent flush pipe, were unearthed among ancient palace ruins in the Yueyang archaeological site in the central city of Xi'an by researchers from the Chinese Academy of Social Sciences' Institute of Archeology.
Development of the modern flush toilet
In 1596 Sir John Harington (1561–1612) published A New Discourse of a Stale Subject, Called the Metamorphosis of Ajax, describing a forerunner to the modern flush toilet installed at his house at Kelston in Somerset. The design had a flush valve to let water out of the tank, and a wash-down design to empty the bowl. He installed one for his godmother Queen Elizabeth I at Richmond Palace.
With the onset of the Industrial Revolution and related advances in technology, the flush toilet began to emerge into its modern form. A crucial advance in plumbing was the S-trap, invented by the Scottish mechanic Alexander Cumming in 1775, and still in use today. This device uses the standing water to seal the outlet of the bowl, preventing the escape of foul air from the sewer. His design had a sliding valve in the bowl outlet above the trap. Two years later, Samuel Prosser applied for a British patent for a "plunger closet".
Prolific inventor Joseph Bramah began his professional career installing water closets (toilets) that were based on Alexander Cumming's patented design of 1775. He found that the current model being installed in London houses had a tendency to freeze in cold weather. In collaboration with a Mr. Allen, he improved the design by replacing the usual slide valve with a hinged flap that sealed the bottom of the bowl.
He also developed a float valve system for the flush tank. Obtaining the patent for it in 1778, he began making toilets at a workshop in Denmark Street, St Giles. The design was arguably the first practical non-manual flush toilet, and production continued well into the 19th century, used mainly on boats. Thomas Bowdich, an English traveller, visited Kumasi, capital of the Ashanti Empire, in 1817 and noted that the majority of the houses in the city, especially those near the king's palace, included indoor toilets that were flushed with gallons of boiling water.
Industrial production
It was only in the mid-19th century, with growing levels of urbanisation and industrial prosperity, that the flush toilet became a widely used and marketed invention. This period coincided with the dramatic growth in the sewage system, especially in London, which made the flush toilet particularly attractive for health and sanitation reasons.
George Jennings established a business manufacturing water closets, salt-glaze drainage, sanitary pipes and sanitaryware at Parkstone Pottery in the 1840s, where he popularized the flush toilet to the middle class. At The Great Exhibition at Hyde Park, held from 1 May to 15 October 1851, George Jennings installed his Monkey Closets in the Retiring Rooms of The Crystal Palace. These were the first public pay toilets (free ones did not appear until later), and they caused great excitement. During the exhibition, 827,280 visitors paid one penny to use them; for the penny they got a clean seat, a towel, a comb, and a shoe shine. "To spend a penny" became a euphemism for going to the toilet.
When the exhibition finished and moved to Sydenham, the toilets were to be closed down. However, Jennings persuaded the organisers to keep them open, and the toilet went on to earn over £1,000 a year. He opened the first underground convenience at the Royal Exchange in 1854. He received a patent in 1852 for an improved construction of water-closet, in which the pan and trap were constructed in the same piece, and so formed that there was always a small quantity of water retained in the pan itself, in addition to that in the trap which forms the water-joint. He also improved the construction of valves, drain traps, forcing pumps and pump-barrels. By the end of the 1850s building codes suggested that most new middle-class homes in British cities were equipped with a water closet.
Another pioneering manufacturer was Thomas William Twyford, who invented the single-piece ceramic flush toilet. The 1870s proved to be a defining period for the sanitary industry and the water closet; the debate between the simple water closet trap basin made entirely of earthenware and the very elaborate, complicated and expensive mechanical water closet fell under public scrutiny and expert opinion. In 1875 the "wash-out" trap water closet was first sold, and it proved to be the public's preferred type of basin water closet. By 1879 Twyford had devised his own type of "wash-out" trap water closet; he titled it the "National", and it became the most popular wash-out water closet.
By the 1880s the free-standing water closet was on sale and quickly gained popularity; the free-standing water closet was able to be cleaned more easily and was therefore a more hygienic water closet. Twyford's "Unitas" model was free-standing and made completely of earthenware. Throughout the 1880s he submitted further patents for improvements to the flushing rim and the outlet. Finally, in 1888 he applied for a patent protection for his "after flush" chamber; the device allowed the basin to be refilled by a lower quantity of clean water in reserve after the water closet was flushed. The modern pedestal "flush-down" toilet was demonstrated by Frederick Humpherson of the Beaufort Works, Chelsea, England in 1885.
The leading companies of the period issued catalogues, established showrooms in department stores and marketed their products around the world. Twyford had showrooms for water closets in Berlin, Germany; Sydney, Australia; and Cape Town, South Africa. The Public Health Act 1875 set down stringent guidelines relating to sewers, drains, water supply and toilets and lent tacit government endorsement to the prominent water closet manufacturers of the day.
Contrary to popular legend, Sir Thomas Crapper did not invent the flush toilet. He was, however, in the forefront of the industry in the late 19th century, and held nine patents, three of them for water closet improvements such as the floating ballcock. In 1880, Thomas Crapper introduced the U-shaped trap. His flush toilets were designed by inventor Albert Giblin, who received a British patent for the "Silent Valveless Water Waste Preventer", a siphon discharge system. Crapper popularized the siphon system for emptying the tank, replacing the earlier floating valve system which was prone to leaks.
The Rev. Edward Johns, of Edward Johns & Co of Armitage, Staffordshire, introduced his Dolphin toilet to the United States at the 1876 Centennial Exposition in Philadelphia.
"The American slang term for the toilet, 'the john,' is said to be derived from the flushing water closets at Harvard University installed in 1735, and emblazoned with the manufacturer's name, Rev. Edward Johns."
Spread and further developments
Although flush toilets first appeared in Britain, they soon spread to the Continent. The first such examples may have been the three "waterclosets" installed in the new town house of banker Nicolay August Andresen on 6 Kirkegaten in Christiania, insured in January 1859. The toilets were probably imported from Britain, as they were referred to by the English term "waterclosets" in the insurance ledger. Another early watercloset on the European continent, dating from 1860, was imported from Britain to be installed in the rooms of Queen Victoria in Ehrenburg Palace (Coburg, Germany); she was the only one who was allowed to use it. Flush toilets were sold in Batavia, Dutch East Indies in 1872.
In America, the chain-pull indoor toilet was introduced in the homes of the wealthy and in hotels, soon after its invention in England in the 1880s. Flush toilets were introduced in the 1890s. William Elvis Sloan invented the Flushometer in 1906, which used pressurized water directly from the supply line for faster recycle time between flushes. The Flushometer is still in use today in public restrooms worldwide. The vortex-flushing toilet bowl, which creates a self-cleansing effect, was invented by Thomas MacAvity Stewart of Saint John, New Brunswick, in 1907. Philip Haas of Dayton, Ohio, made some significant developments, including the flush rim toilet with multiple jets of water from a ring and the water closet flushing and recycling mechanism similar to those in use today.
The company Caroma in Australia developed the Duoset cistern with two buttons and two flush volumes as a water-saving measure in 1980. Modern versions of the Duoset are now available worldwide, and save the average household 67% of their normal water usage.
Manufacture
A toilet's body is typically made from vitreous china, which starts out as an aqueous suspension of various minerals called a slip. It takes about of slip to make a toilet.
In traditional casting, the slip is poured into the space between plaster of Paris molds. The toilet bowl, rim, tank and tank lid require separate molds. The molds are assembled and set up for filling and the slip-filled molds sit for about an hour after filling. This allows the plaster molds to absorb moisture from the slip, which makes it semi-solid next to the mold surfaces but lets it remain liquid further from the surface of the molds. Then, the workers remove plugs to allow any excess liquid slip to drain from the cavities of the mold (this excess slip is recycled for later use). The drained-out slip leaves hollow voids inside the fixture, using less material to keep it both lighter and easier to fire in a kiln. This molding process allows the formation of intricate internal waste lines in the fixture; the drain's hollow cavities are poured out as slip.
At this point, the toilet parts without their molds look like and are about as strong as soft clay. After about one hour the top core mold (interior of toilet) is removed. The rim mold bottom (which includes a place to mount the holding tank) is removed, and it then has appropriate slanted holes for the rinsing jets cut, and the mounting holes for tank and seat are punched into the rim piece. Valve holes for rapid water entry into the toilet are cut into the rim pieces. The exposed top of the bowl piece is then covered with a thick slip and the still-uncured rim is attached on top of the bowl so that the bowl and hollow rim are now a single piece. The bowl plus rim is then inverted, and the toilet bowl is set upside down on the top rim mold to hold the pieces together as they dry. Later, all the rest of the mold pieces are removed. As the clay body dries further it hardens more and continues to shrink. After a few hours, the casting is self-supporting, and is called greenware.
After the molds are removed, workers use hand tools and sponges to smooth the edges and surface of the greenware, and to remove the mold joints or roughness: this process is called "fettling". For large scale production pieces, these steps may be automated. The parts are then left outside or put in a warm room to dry, before going through a dryer at about , for about 20–36 hours.
After the surfaces are smoothed, the bowls and tanks are sprayed with glaze of various kinds to get different colors. This glaze is designed to shrink and contract at the same rate as the greenware while undergoing firing. After being sprayed with glaze, the toilet bowls, tanks, and lids are placed in stacks on a conveyor belt or "car" that slowly goes through a large kiln to be fired. The belt slowly moves the glaze-covered greenware into a tunnel kiln, which has different temperature zones inside it starting at about at the front, increasing towards the middle to over degrees, and exiting at about . During the firing in the kiln, the greenware and glaze are vitrified as one solid finished unit. Transiting the kiln takes glaze-covered greenware around 23–40 hours.
After the pieces are removed from the kiln and fully cooled, they are inspected for cracks or other defects. Then, the flushing mechanism may be installed on a one-piece toilet. On a two-piece toilet with a separate tank, the flushing mechanism may only be placed into the tank, with final assembly at installation.
A two-piece attaching seat and toilet bowl lid are typically mounted over the bowl to allow covering the toilet when it is not in use and to provide seating comfort. The seat may be installed at the factory, or the parts may be sold separately and assembled by a plumbing distributor or the installer.
Water usage
The amount of water used by conventional flush toilets is usually a significant portion of personal daily water usage: for example, five flushes per day use .
Modern low-flush toilet designs allow the use of much less water per flush.
Dual flush toilets allow the user to select between a flush for urine or feces, saving a significant amount of water over conventional units. Dual flush may be accomplished by pushing the flush handle up or down, or a two-segment flush pushbutton may be used whereby pressing the smaller section releases less water.
Flushing with non-potable water sources
Raw water flushing, including seawater flushing, is a method of water conservation, where raw water, such as seawater, is used for flush toilets. Such systems are used in places such as the majority of cities and towns in Hong Kong (see water supply and sanitation in Hong Kong), Gibraltar, and Avalon, California, United States. Heads (on ships) are typically flushed with seawater.
Flush toilets may, if plumbed for it, use greywater (water previously used for washing dishes, laundry and bathing) for flushing rather than drinkable potable water.
Etymology
Water closet
The word "toilet" initially meant "wash-room", without reference to sanitary facilities. Early indoor toilets were known as garderobes because they were used to store clothes, as the smell of ammonia was found to deter fleas and moths. However, an outhouse was usual until the nineteenth century, and only a few wealthy houses and luxury hotels had indoor toilets connected to drains and sewers. Lidded "chamber pots", kept in specially designed bedside cabinets and used in bedrooms by ladies and invalids, and portable bathtubs, could be emptied and washed in an outhouse.
From the sixteenth century in England, a private room (a closet) with a flushing toilet was referred to as a Water Closet ("WC"), in contrast with an Earth Closet ("EC"), an abbreviation still used in 1960s Oxfordshire cottage sales.
The descriptive terms "wash-down closet" or "water closet" only reached the United States in the 1880s. By 1890 there was increased public awareness of disease and of carelessly disposed human waste being infectious.
In common North American usage, a residential "bathroom" usually contains a toilet, a lavatory, and a bath. However, a commercial bathroom (or "restroom") contains toilets or urinals but no bath, often to the confusion of foreign visitors. In the UK, Australia and New Zealand, "bathroom" refers to a room with a fixed bathtub and not necessarily a toilet. The terms "bathroom" and "toilet" or "loo" (amongst other euphemisms) indicate distinct functions, although bathrooms may often include toilets.
The term "water closet" refers to a room that has a toilet but no other plumbing fixtures such as a sink or a bathtub. American plumbers and codes use the term "water closet" or "WC" to denote a small room or enclosure with a toilet, usually located within a larger bathroom that contains other fixtures such as a sink and tub.
Many European languages refer to a toilet as a "water" or "WC". The Royal Spanish Academy Dictionary accepts "váter" as a name for a toilet or bathroom, which is derived from the British term "water closet". France uses the expressions aller aux waters ("to go to the waters"), derived from "water closet", and "w.-c.". Likewise the Romanian word "veceu" and the Finnish word "vessa" derive from a shortened version of the abbreviation. In German, the expression "Klo" (first syllable of "Klosett") is used alongside "WC". In Italian, "WC" and "water" are very common terms for the flushing toilet. In Dutch, WC is the most common term.
Society and culture
Swirl direction myth
It is a common misconception that when flushed, the water in a toilet bowl swirls one way if the toilet is north of the equator and the other way if south of the equator, due to the Coriolis effect – usually, counter clockwise in the northern hemisphere, and clockwise in the southern hemisphere. In reality, the direction that the water takes is much more determined by the direction that the bowl's rim jets are pointed, and it can be made to flush in either direction in either hemisphere by simply redirecting the rim jets during manufacture. The relative importance of the Coriolis force can be determined by a ratio known as the Rossby number. When this number is much larger than 1, the Coriolis force is insignificant compared to the inertia effects. On the scale of bathtubs and toilets, the Rossby number is on the order of billions, and so the Coriolis effect is too weak to be observed except under carefully controlled laboratory conditions.
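The Rossby-number comparison can be made concrete in a few lines of Python. The flow speeds and length scales below are rough illustrative assumptions (they are not values from the text), but they show the scale separation: rotation only shapes a flow when Ro is small.

```python
import math

OMEGA = 7.2921e-5  # Earth's angular rotation rate, rad/s

def rossby_number(speed_m_s, length_m, latitude_deg):
    """Ro = U / (f L), where f = 2 * Omega * sin(latitude) is the Coriolis parameter."""
    f = 2.0 * OMEGA * math.sin(math.radians(latitude_deg))
    return speed_m_s / (f * length_m)

# Assumed scales at 45 degrees latitude:
toilet = rossby_number(1.0, 0.3, 45.0)            # ~1 m/s flow across a ~0.3 m bowl
cyclone = rossby_number(10.0, 1_000_000.0, 45.0)  # ~10 m/s winds over ~1000 km

print(f"Toilet bowl:   Ro ~ {toilet:.0f}")   # far greater than 1: inertia dominates
print(f"Large cyclone: Ro ~ {cyclone:.2f}")  # well below 1: rotation shapes the flow
```

With these assumed scales the bowl's Rossby number is already many orders of magnitude above 1, while the cyclone's is below 1, which is why cyclones, and not toilet bowls, reliably rotate with their hemisphere.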
Courtesy flush
Since the 1990s, the slang term "courtesy flush" refers to a mid-defecation flush of the toilet as a courtesy to other bathroom users or other prisoners in a cell. A courtesy flush can serve to reduce odor and cover noise.
Height
Height is a measure of vertical distance, either vertical extent (how "tall" something or someone is) or vertical position (how "high" a point is). For an example of vertical extent, "This basketball player is 7 feet 1 inch in height." For an example of vertical position, "The height of an airplane in flight is about 10,000 meters."
When the term is used to describe vertical position (of, e.g., an airplane) from sea level, height is more often called altitude.
Furthermore, if the point is attached to the Earth (e.g., a mountain peak), then altitude (height above sea level) is called elevation.
In a two-dimensional Cartesian space, height is measured along the vertical axis (y) between a specific point and another that does not have the same y-value. If both points happen to have the same y-value, then their relative height is zero. In the case of three-dimensional space, height is measured along the vertical z axis, describing a distance from (or "above") the x-y plane.
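As a minimal sketch of the definition above, with the convention that the last coordinate is the vertical one (y in two dimensions, z in three), relative height is simply the absolute difference of vertical coordinates; the function name here is hypothetical:

```python
def relative_height(p, q):
    """Vertical distance between two points; the last coordinate is taken as vertical."""
    return abs(p[-1] - q[-1])

print(relative_height((0, 3), (5, 3)))        # 0: same y-value, zero relative height
print(relative_height((1, 1, 7), (4, 0, 2)))  # 5: measured along the z axis
```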
Etymology
The English-language word high is derived from Old English hēah, ultimately from Proto-Germanic *xauxa-z, from a PIE base *keuk-. The derived noun height, also the obsolete forms heighth and highth, is from Old English híehþo, later héahþu, as it were from Proto-Germanic *xaux-iþa.
In mathematics
In elementary models of space, height may indicate the third dimension, the other two being length and width. Height is normal to the plane formed by the length and width.
Height is also used as a name for some more abstract definitions. These include:
The height or altitude of a triangle, which is the length from a vertex of a triangle to the line formed by the opposite side;
The height of a pyramid, which is the smallest distance from the apex to the base;
A measurement in a circular segment of the distance from the midpoint of the arc of the circular segment to the midpoint of the line joining the endpoints of the arc (see diagram in circular segment);
In a rooted tree, the height of a vertex is the length of the longest downward path to a leaf from that vertex;
In algebraic number theory, a "height function" is a measurement related to the minimal polynomial of an algebraic number; among other uses in commutative algebra and representation theory;
In ring theory, the height of a prime ideal is the supremum of the lengths of all chains of prime ideals contained in it.
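The rooted-tree definition in the list above translates directly to code. This is a sketch using a plain dictionary mapping each vertex to its children (the tree itself is a made-up example):

```python
def height(vertex, children):
    """Length (in edges) of the longest downward path from vertex to a leaf."""
    kids = children.get(vertex, [])
    if not kids:
        return 0  # a leaf has height 0
    return 1 + max(height(child, children) for child in kids)

# Hypothetical tree:  a -> b, c ;  b -> d
tree = {"a": ["b", "c"], "b": ["d"]}
print(height("a", tree))  # 2 (longest path: a -> b -> d)
print(height("b", tree))  # 1
```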
In geosciences
Although height is normally relative to a plane of reference, most measurements of height in the physical world are based upon a zero surface, known as sea level. Both altitude and elevation, two synonyms for height, are usually defined as the position of a point above the mean sea level. One can extend the sea-level surface under the continents: naively, one can imagine a lot of narrow canals through the continents. In practice, the sea level under a continent has to be computed from gravity measurements, and slightly different computational methods exist; see Geodesy, heights.
In addition to vertical position, the vertical extent of geographic landmarks can be defined in terms of topographic prominence. For example, the highest mountain (by elevation in reference to sea level) is Mount Everest, located on the border of Nepal and Tibet, China; however, the tallest mountain, measured from apex to base, is Mauna Kea in Hawaii, United States.
In geodesy
Geodesists formalize mean sea level (MSL) by means of the geoid, the equipotential surface that best fits MSL. Then various types of height (normal, dynamic, orthometric, etc.) can be defined, based on the assumption of density of topographic masses necessary in the continuation of MSL under the continents.
A purely geometric quantity is the ellipsoidal height, reckoned from the surface of a reference ellipsoid, see Geodetic system, vertical datum.
In aviation
In aviation terminology, the terms height, altitude, and elevation are not synonyms. Usually, the altitude of an aircraft is measured from sea level, while its height is measured from ground level. Elevation is also measured from sea level, but is most often regarded as a property of the ground. Thus, elevation plus height can equal altitude, but the term altitude has several meanings in aviation.
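The relation described above can be written as a one-line formula. The function name and numbers below are illustrative, and the caveat stands that "altitude" has several meanings in aviation:

```python
def altitude_msl(elevation_m, height_agl_m):
    """Altitude above mean sea level = terrain elevation + height above ground level."""
    return elevation_m + height_agl_m

# Hypothetical example: an aircraft 500 m above terrain whose elevation is 1200 m
print(altitude_msl(1200, 500))  # 1700 m above sea level
```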
In human culture
Human height is one of the areas of study within anthropometry. While environmental factors have some effect on variations in human height, these influences are insufficient to account for all differences between populations, suggesting that genetic factors are important for explaining variations between human populations.
The United Nations uses height (among other statistics) to monitor changes in the nutrition of developing nations. In human populations, average height can distill down complex data about the group's birth, upbringing, social class, diet, and health care system.
In their research, Baten, Stegl and van der Eng came to the conclusion that a change in average height is a sign of a change in economic development. Drawing on broad data from Indonesia, the researchers state that several incidents in the country's history led not only to changes in the economy but also to changes in the population's average height.
Stirrup
A stirrup is a light frame or ring that holds the foot of a rider, attached to the saddle by a strap, often called a stirrup leather. Stirrups are usually paired and are used to aid in mounting and as a support while using a riding animal (usually a horse or other equine, such as a mule). They greatly increase the rider's ability to stay in the saddle and control the mount, increasing the animal's usefulness to humans in areas such as communication, transportation, and warfare.
In antiquity, the earliest foot supports consisted of riders placing their feet under a girth or using a simple toe loop, which appeared in India by the 2nd century BC. Later, a single foot support was used as a mounting aid, and paired stirrups appeared after the invention of the treed saddle. The stirrup was invented in China during the Jin dynasty in the 4th century, was in common use throughout China by the 5th century, and spread across Eurasia to Europe through the nomadic peoples of Central Eurasia by the 7th or 8th century.
Etymology
The English word "stirrup" stems from Old English stirap, stigrap, Middle English stirop, styrope, i.e. a mounting or climbing-rope. Compare Old English stīgan "to ascend" and rap "rope, cord".
History
The stirrup, which gives greater stability to a rider, has been described as one of the most significant inventions in the history of warfare, prior to gunpowder. As a tool allowing expanded use of horses in warfare, the stirrup is often called the third revolutionary step in equipment, after the chariot and the saddle. The basic tactics of mounted warfare were significantly altered by the stirrup. A rider supported by stirrups was less likely to fall off while fighting, and could deliver a blow with a weapon that more fully employed the weight and momentum of horse and rider. Among other advantages, stirrups provided greater balance and support to the rider, which allowed the knight to use a sword more efficiently without falling, especially against infantry adversaries. Contrary to common modern belief, however, it has been asserted that stirrups actually did not enable the horseman to use a lance more effectively (cataphracts had used lances since antiquity), though the cantled saddle did.
Precursors
Soft stirrups
The invention of the stirrup occurred relatively late in history, considering that horses were domesticated in approximately 4000 BC, and the earliest known saddle-like equipment were fringed cloths or pads with breast pads and cruppers used by Assyrian cavalry around 700 BC.
The earliest hard foot support was a toe loop that held the big toe and was used in India late in the second century BC, though it may have appeared as early as 500 BC. This ancient foot support consisted of a looped rope for the big toe which was at the bottom of a saddle made of fibre or leather. Such a configuration was suitable for the warm climate of south and central India where people used to ride horses barefoot. Buddhist carvings in the temples of Sanchi, Mathura and the Bhaja caves dating back between the 1st and 2nd century BC feature horsemen riding with elaborate saddles with toes slipped under girths. Archaeologist John Marshall described the Sanchi relief as "the earliest example by some five centuries of the use of stirrups in any part of the world". This type of foot support has been called the "toe stirrup" in contrast to the later stirrup known as the "foot stirrup" seen in China during the 5th century AD. It is speculated that they may have spread to China and were the precursors of the "foot stirrup".
A pair of first century BC double bent iron bars, approximately 17 cm in length with curvature at each end, excavated from a grave near Junapani in the central Indian state of Madhya Pradesh, have been postulated as either full foot stirrups or bridle bits.
Hard stirrups
A 4th-century BC golden artifact from Crimea, known as the Golden Torque, depicts a horse rider with a foot in a stirrup-like hook connected by a chain. It is one of the oldest known depictions of a stirrup-like device. A vase found in Ukraine appears to depict a stirrup; however, scholars disagree, as they find the saddle too flat for a saddle tree. Riders in Central and Southern Asia during the last century BC and the first century AD seem to have used toe loops and "hook stirrups", which featured a curved metal hook hanging from the saddle to support the foot. Two Kushan artifacts from the 1st century AD portray a stirrup; however, no further evidence of the Kushan stirrup has been found.
Some credit the nomadic Central Asian group known as the Sarmatians with developing the first stirrups.
The invention of the solid saddle tree allowed development of the true stirrup as it is known today. Without a solid tree, the rider's weight in the stirrups creates abnormal pressure points that make the horse's back sore. Modern thermography studies on "treeless" and flexible-tree saddle designs have found that there is considerable friction across the center line of a horse's back. A coin of Quintus Labienus, who was in service of Parthia, minted circa 39 BC depicts on its reverse a saddled horse with hanging objects. Smith suggests they are pendant cloths, while Thayer suggests that, considering the fact that the Parthians were famous for their mounted archery, the objects are stirrups, but adds that it is difficult to imagine why the Romans would never have adopted the technology.
In Asia, early solid-treed saddles were made of felt that covered a wooden frame. These designs date to approximately 200 BC. One of the earliest solid-treed saddles in the west was first used by the Romans as early as the 1st century BC, but this design did not have stirrups either.
East Asia
The Wenwu journal (1981) speculated that stirrups may have been used in China as early as the Han dynasty (206 BC–220 AD) based on representations of horses believed to date to the Eastern Han period (25–220 AD). Two plaques depict horses with squares between their belly and base line, which has been speculated to represent stirrups. However in 1984, Yang Hong remarked in the same journal that the horses had no saddles and therefore the squares were only ornaments.
Excavations at Khukh Nuur in northern Mongolia discovered a single iron stirrup in a cave burial. Radiocarbon dating of the human bone associated with this stirrup produced a date of 243–405 cal AD. Another cave burial at Urd Ulaan Uneet in Khovd Province was discovered with a saddle that had bilateral straps attached midway through the saddle tree, strongly suggesting the existence of paired stirrups. Radiocarbon dating of a strap made with horse hide gives a date of 267–535 cal AD.
The earliest known paired stirrups first appeared in China during the Jin dynasty by the early 4th century AD. A funerary figurine depicting a stirrup dated 302 AD was unearthed from a Western Jin tomb near Changsha. The stirrup depicted is a mounting stirrup, only placed on one side of the horse, and too short for riding. The earliest reliable representation of a full-length, double-sided riding stirrup was also unearthed from a Jin tomb, this time near Nanjing, dating to the Eastern Jin period, 322 AD. The earliest extant double stirrups were discovered in the tomb of a Northern Yan noble, Feng Sufu, who died in 415 AD. These stirrups were made with mulberry wood gilded with bronze and iron plates.
Progestogen
Progestogens, also sometimes written progestins, progestagens or gestagens, are a class of natural or synthetic steroid hormones that bind to and activate the progesterone receptors (PR). Progesterone is the major and most important progestogen in the body. The progestogens are named for their function in maintaining pregnancy (i.e., progestational), although they are also present at other phases of the estrous and menstrual cycles.
The progestogens are one of three types of sex hormones, the others being estrogens like estradiol and androgens/anabolic steroids like testosterone. In addition, they are one of the five major classes of steroid hormones, the others being the androgens, estrogens, glucocorticoids, and mineralocorticoids, as well as the neurosteroids. All endogenous progestogens are characterized by their basic 21-carbon skeleton, called a pregnane skeleton (C21). In similar manner, the estrogens possess an estrane skeleton (C18), and androgens, an androstane skeleton (C19).
The terms progesterone, progestogen, and progestin are often mistakenly used interchangeably, both in the scientific literature and in clinical settings. Progestins are synthetic progestogens and are used in medicine. Major examples of progestins include the 17α-hydroxyprogesterone derivative medroxyprogesterone acetate and the 19-nortestosterone derivative norethisterone. The progestins are structural analogues of progesterone with similar progestogenic activity, but they differ from progesterone in their pharmacological properties in various ways.
In addition to their roles as natural hormones, progestogens are used as medications, for instance in menopausal hormone therapy and transgender hormone therapy for transgender women; for information on progestogens as medications, see the progesterone (medication) and progestogen (medication) articles.
Types and examples
The most important progestogen in the body is progesterone (P4). Other endogenous progestogens, with varying degrees of progestogenic activity, include 16α-hydroxyprogesterone (16α-OHP), 17α-hydroxyprogesterone (17α-OHP) (very weak), 20α-dihydroprogesterone (20α-DHP), 20β-dihydroprogesterone (20β-DHP), 5α-dihydroprogesterone (5α-DHP), 5β-dihydroprogesterone (5β-DHP) (very weak), 3β-dihydroprogesterone (3β-DHP), 11-deoxycorticosterone (DOC), and 5α-dihydrodeoxycorticosterone (5α-DHDOC). They are all metabolites of progesterone, lying downstream of progesterone in terms of biosynthesis.
Biological function
The major tissues affected by progestogens include the uterus, vagina, cervix, breasts, testes, and brain. The main biological role of progestogens is in the female and male reproductive systems: in women, they are involved in regulation of the menstrual cycle, maintenance of pregnancy, and preparation of the mammary glands for lactation and breastfeeding following parturition; in men, progesterone affects spermiogenesis, sperm capacitation, and testosterone synthesis. Progestogens also have effects in other parts of the body. Unlike estrogens, progestogens have little or no role in feminization.
Biochemistry
Biosynthesis
Progesterone is produced from cholesterol with pregnenolone as a metabolic intermediate. In the first step in the steroidogenic pathway, cholesterol is converted into pregnenolone, which serves as the precursor to the progestogens progesterone and 17α-hydroxyprogesterone. These progestogens, along with another steroid, 17α-hydroxypregnenolone, are the precursors of all other endogenous steroids, including the androgens, estrogens, glucocorticoids, mineralocorticoids, and neurosteroids. Thus, many tissues producing steroids, including the adrenal glands, testes, and ovaries, produce progestogens.
In some tissues, the enzymes required for the final product are not all located in a single cell. For example, in ovarian follicles, cholesterol is converted to androstenedione, an androgen, in the theca cells, which is then further converted into estrogen in the granulosa cells. Fetal adrenal glands also produce pregnenolone in some species, which is converted into progesterone and estrogens by the placenta (see below). In the human, the fetal adrenals produce dehydroepiandrosterone (DHEA) via the pregnenolone pathway.
Ovarian production
Progesterone is the major progestogen produced by the corpus luteum of the ovary in all mammalian species. Luteal cells possess the necessary enzymes to convert cholesterol to pregnenolone, which is subsequently converted into progesterone. Progesterone is highest in the diestrus phase of the estrous cycle.
Placental production
The role of the placenta in progestogen production varies by species. In the sheep, horse, and human, the placenta takes over the majority of progestogen production, whereas in other species the corpus luteum remains the primary source of progestogens. In the sheep and human, progesterone is the major placental progestogen.
The equine placenta produces a variety of progestogens, primarily 5α-dihydroprogesterone and 5α,20α-tetrahydroprogesterone, beginning on day 60. A complete luteo-placental shift occurs by day 120–150.
Chemistry
The endogenous progestogens are naturally occurring pregnane steroids with ketone and/or hydroxyl groups at the C3 and C20 positions.
Medical use
Progestogens, including both progesterone and progestins, are used medically in hormonal birth control, hormone therapy, to treat gynecological disorders, to suppress sex hormone levels for various purposes, and for other indications.
| Biology and health sciences | Animal hormones | Biology |
240003 | https://en.wikipedia.org/wiki/Kangaroo%20rat | Kangaroo rat | Kangaroo rats, small mostly nocturnal rodents of genus Dipodomys, are native to arid areas of western North America. The common name derives from their bipedal form. They hop in a manner similar to the much larger kangaroo, but developed this mode of locomotion independently, like several other clades of rodents (e.g., dipodids and hopping mice).
Description
Kangaroo rats are four or five-toed heteromyid rodents with big hind legs, small front legs, and relatively large heads. Adult weight varies by species. The tail of a kangaroo rat is longer than its body and head combined. Another notable feature of kangaroo rats is their fur-lined cheek pouches, which are used for storing food. The coloration of kangaroo rats varies from cinnamon buff to dark gray, depending on the species. There is also some variation in length: one of the largest species, the banner-tailed kangaroo rat, has a body length of six inches and a tail length of eight inches. Sexual dimorphism exists in all species, with males being larger than females.
Locomotion
Kangaroo rats move bipedally, often leaping a distance of 7 feet, and reportedly up to 9 feet (2.75 m), at speeds up to almost 10 feet/sec, or 11 km/h (7 mph). They can quickly change direction between jumps. The rapid locomotion of the banner-tailed kangaroo rat may minimise energy cost and predation risk. Its use of a "move-freeze" mode may also make it less conspicuous to nocturnal predators.
Ecology
Range and habitat
Kangaroo rats live in arid and semiarid areas, particularly on sandy or soft soils which are suitable for burrowing. They can, however, vary in both geographic range and habitat. Their elevation range depends on the species; they are found from below sea level to at least 7,100 feet (the type locality of D. ordii priscus). They are sensitive to extreme temperatures and remain in their burrows during rain storms and other forms of inclement weather. Kangaroo rats are preyed on by coyotes, foxes, badgers, weasels, owls, and snakes.
Merriam's kangaroo rats live in areas of low rainfall and humidity, and high summer temperature and evaporation rates. They prefer areas of stony soils, including clays, gravel, and rocks, which are harder than soils preferred by some other species (like banner-tailed kangaroo rats). Because their habitats are hot and dry, they must conserve water. They do this in part by lowering their metabolic rate, which reduces the loss of water through their skin and respiratory system. Evaporation through the skin is the major route of loss. Merriam's kangaroo rats obtain enough water from the metabolic oxidation of the seeds they eat to survive and do not need to drink water at all. To help conserve water they produce very concentrated urine, via a process apparently associated with expression of aquaporin 1 along a longer than usual segment of the descending limb of the loop of Henle in the kidney.
In contrast, banner-tailed kangaroo rats have more specific habitat requirements for desert grasslands with scattered shrubs; this species is also more threatened because of the decline in these grasslands. These are also dry areas but they tend to have more water available to them than Merriam's kangaroo rats.
Food and foraging
Kangaroo rats are primarily seed eaters. They will, however, eat vegetation occasionally, and at some times of the year, possibly insects as well. They have been seen storing the seeds of mesquite, creosote bush, purslane, ocotillo, and grama grass in their cheek pouches. Kangaroo rats will store extra seeds in seed caches. This caching behavior affects the rangeland and croplands where the animals live. Kangaroo rats must harvest as much seed as possible in as little time as possible. To conserve energy and water, they minimize their time away from their cool, dry burrows. In addition, maximizing time in their burrows minimizes their exposure to predators.
When on foraging trips, kangaroo rats hoard the seeds that they find. It is important for a kangaroo rat to encounter more food items than are consumed, at least at one point in the year, as well as defend or rediscover food caches and remain within the same areas long enough to utilize food resources. Different species of kangaroo rat may have different seed caching strategies to coexist with each other, as is the case for the banner-tailed kangaroo rat and Merriam's kangaroo rat which have overlapping ranges. Merriam's kangaroo rats scatterhoard small caches of seeds in numerous small, shallow holes they dig. This is initially done close to the food source, maximizing harvest rates and reducing travel costs, but later redistributed more widely, minimizing theft by other rodents. Banner-tailed kangaroo rats larderhoard a sizable cache of seeds within the large mounds they occupy. This could decrease their time and energy expenses; they also spend less time on the surface digging holes, reducing the risk of predation. Being larger and more sedentary, they are better able to defend these larders from depredations by other rodents.
Behavior
Kangaroo rats inhabit overlapping home ranges. These home ranges tend to be small, with most activities occurring within 200–300 ft and rarely beyond 600 ft. Home range size can vary within species, with Merriam's kangaroo rats having larger home ranges than banner-tailed kangaroo rats. Recently weaned kangaroo rats move into new areas not occupied by adults. Within its home range, a kangaroo rat has a defended territory consisting of its burrowing system.
Burrow system
Kangaroo rats live in complex burrow systems. The burrows have separate chambers used for specific purposes like sleeping, living, and food storage. The spacing of the burrows depends on the number of kangaroo rats and the abundance of food. Kangaroo rats also live in colonies that range from six to several hundred dens. The burrow of a kangaroo rat is important in providing protection from the harsh desert environment. To maintain a constant temperature and relative humidity in their burrows, kangaroo rats plug the entrances with soil during the day. When the outside temperature is too hot, a kangaroo rat stays in its cool, humid burrow and leaves it only at night. To reduce loss of moisture through respiration when sleeping, a kangaroo rat buries its nose in its fur to accumulate a small pocket of moist air. The burrows of Merriam's kangaroo rats are simpler and shallower than those of banner-tailed kangaroo rats. Banner-tailed kangaroo rats also mate in their burrows, unlike Merriam's kangaroo rats.
Social interactions
Kangaroo rats are generally solitary animals with little social organization. Kangaroo rats communicate during competitive interactions and courtship. They do cluster together in some feeding situations. Groups of kangaroo rats that exist are aggregations and colonies. There appears to be a dominance hierarchy among male kangaroo rats in competition for access to females. Male kangaroo rats are generally more aggressive than females and are more dominant over them. Females are more tolerant of each other than males are and have more non-aggressive interactions. This is likely in part because the home ranges of females overlap less than the home ranges of males. Linear dominance hierarchies appear to exist among males but it is not known if this is the case for females. Winners of aggressive encounters appear to be the most active individuals.
Mating and reproduction
Kangaroo rats have a promiscuous mating system. Their reproductive output is highest in summer following high rainfalls. During droughts and food shortages, only a few females will breed. It appears that kangaroo rats can assess their local conditions and adjust their reproductive efforts accordingly. Merriam's kangaroo rats breed between December and May and produce two or three litters per year. Before mating, the male and female will perform nasal-anal circling until the female stops and allows the male to mount her. A Merriam's kangaroo rat female will allow multiple males to mount her in a short time, perhaps to ensure greater chances of producing offspring. Mating in banner-tailed kangaroo rats involves more chasing and foot drumming in the male before the female allows him to mate. Banner-tailed kangaroo rats mate on mounds and the more successful males chase away rival males. The gestation period of kangaroo rats lasts 22–27 days.
The young are born in a fur-lined nest in the burrows. They are born blind and hairless. For the first week, young Merriam's kangaroo rats crawl, developing their hind legs in their second or third week; at this time, the young become independent. Banner-tailed kangaroo rats are weaned between 22 and 25 days. Offspring remain in the mound for one to six more months in the maternal caches.
Taxonomy
Family Heteromyidae
Subfamily Dipodomyinae
Dipodomys agilis (Agile kangaroo rat)
Dipodomys californicus (California kangaroo rat)
Dipodomys compactus (Gulf Coast kangaroo rat)
Dipodomys deserti (Desert kangaroo rat)
Dipodomys elator (Texas kangaroo rat)
Dipodomys elephantinus (Big-eared kangaroo rat)
Dipodomys gravipes (San Quintin kangaroo rat)
Dipodomys heermanni (Heermann's kangaroo rat)
Dipodomys ingens (Giant kangaroo rat)
Dipodomys merriami (Merriam's kangaroo rat)
Dipodomys microps (Chisel-toothed kangaroo rat)
Dipodomys nelsoni (Nelson's kangaroo rat)
Dipodomys nitratoides (Fresno kangaroo rat)
Dipodomys ordii (Ord's kangaroo rat)
Dipodomys panamintinus (Panamint kangaroo rat)
Dipodomys phillipsii (Phillips's kangaroo rat)
Dipodomys simulans (Dulzura kangaroo rat)
Dipodomys spectabilis (Banner-tailed kangaroo rat)
Dipodomys stephensi (Stephens's kangaroo rat)
Dipodomys venustus (Narrow-faced kangaroo rat)
| Biology and health sciences | Rodents | Animals |
240011 | https://en.wikipedia.org/wiki/Coherence%20%28physics%29 | Coherence (physics) | Coherence expresses the potential for two waves to interfere. Two monochromatic beams from a single source always interfere. Wave sources are not strictly monochromatic: they may be partly coherent. Beams from different sources are mutually incoherent.
When interfering, two waves add together to create a wave of greater amplitude than either one (constructive interference) or subtract from each other to create a wave of lesser amplitude, whose minima may be zero (destructive interference), depending on their relative phase. Constructive and destructive interference are limit cases: two waves always interfere, even if the result of the addition is complicated or unremarkable.
Two waves with constant relative phase will be coherent. The amount of coherence can readily be measured by the interference visibility, which looks at the size of the interference fringes relative to the input waves (as the phase offset is varied); a precise mathematical definition of the degree of coherence is given by means of correlation functions. More broadly, coherence describes the statistical similarity of a field, such as an electromagnetic field or quantum wave packet, at different points in space or time.
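As a minimal numeric sketch of the visibility measure just mentioned (all intensity values here are illustrative), the fringe contrast of two fully coherent waves follows directly from the two-beam interference formula:

```python
import numpy as np

def visibility(i_max, i_min):
    """Fringe visibility V = (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

phi = np.linspace(0, 2 * np.pi, 1001)  # relative phase scan (includes 0 and pi)

# Two fully coherent waves: I(phi) = I1 + I2 + 2*sqrt(I1*I2)*cos(phi)
i1, i2 = 1.0, 1.0
intensity = i1 + i2 + 2 * np.sqrt(i1 * i2) * np.cos(phi)
v_equal = visibility(intensity.max(), intensity.min())    # full contrast, V = 1

# Unequal amplitudes lower the visibility even at perfect coherence
i2 = 0.25
intensity = i1 + i2 + 2 * np.sqrt(i1 * i2) * np.cos(phi)
v_unequal = visibility(intensity.max(), intensity.min())  # 2*sqrt(I1*I2)/(I1+I2) = 0.8
```

With equal intensities the visibility is 1; unequal intensities reduce it even for perfectly coherent waves, which is why visibility quantifies coherence only once the beam intensities are accounted for.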
Qualitative concept
Coherence controls the visibility or contrast of interference patterns. For example, visibility of the double slit experiment pattern requires that both slits be illuminated by a coherent wave as illustrated in the figure. Large sources without collimation or sources that mix many different frequencies will have lower visibility.
Coherence contains several distinct concepts. Spatial coherence describes the correlation (or predictable relationship) between waves at different points in space, either lateral or longitudinal. Temporal coherence describes the correlation between waves observed at different moments in time. Both are observed in the Michelson–Morley experiment and Young's interference experiment. Once the fringes are obtained in the Michelson interferometer, when one of the mirrors is moved away gradually from the beam-splitter, the time for the beam to travel increases and the fringes become dull and finally disappear, showing temporal coherence. Similarly, in a double-slit experiment, if the space between the two slits is increased, the coherence dies gradually and finally the fringes disappear, showing spatial coherence. In both cases, the fringe amplitude slowly disappears, as the path difference increases past the coherence length.
Coherence was originally conceived in connection with Thomas Young's double-slit experiment in optics but is now used in any field that involves waves, such as acoustics, electrical engineering, neuroscience, and quantum mechanics. The property of coherence is the basis for commercial applications such as holography, the Sagnac gyroscope, radio antenna arrays, optical coherence tomography and telescope interferometers (Astronomical optical interferometers and radio telescopes).
Mathematical definition
The coherence function between two signals x(t) and y(t) is defined as

\gamma_{xy}^2(f) = \frac{|G_{xy}(f)|^2}{G_{xx}(f)\,G_{yy}(f)}

where G_xy(f) is the cross-spectral density of the signals and G_xx(f) and G_yy(f) are the power spectral density functions of x(t) and y(t), respectively. The cross-spectral density and the power spectral density are defined as the Fourier transforms of the cross-correlation and the autocorrelation signals, respectively. For instance, if the signals are functions of time, the cross-correlation is a measure of the similarity of the two signals as a function of the time lag relative to each other, and the autocorrelation is a measure of the similarity of each signal with itself at different instants of time. In this case the coherence is a function of frequency. Analogously, if x and y are functions of space, the cross-correlation measures the similarity of two signals at different points in space and the autocorrelations the similarity of the signal relative to itself for a certain separation distance. In that case, coherence is a function of wavenumber (spatial frequency).
The coherence varies in the interval 0 ≤ γ²_xy(f) ≤ 1. If γ²_xy(f) = 1, the signals are perfectly correlated or linearly related, and if γ²_xy(f) = 0 they are totally uncorrelated. If a linear system is being measured, with x(t) the input and y(t) the output, the coherence function is unity over the whole spectrum. However, if non-linearities are present in the system, the coherence falls below unity within the range given above.
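As a hedged illustration of this definition, the magnitude-squared coherence can be estimated from sampled signals with Welch's method; SciPy's `scipy.signal.coherence` computes |Gxy|²/(Gxx·Gyy) directly. The filter coefficients and noise level below are made up for the demonstration:

```python
import numpy as np
from scipy.signal import coherence, lfilter

rng = np.random.default_rng(0)
fs = 1000.0
x = rng.standard_normal(16384)          # white-noise input signal

# Output of a purely linear system (a short FIR filtering of x)
y_linear = lfilter([1.0, 0.5], [1.0], x)

# The same output corrupted by independent additive noise
y_noisy = y_linear + 0.5 * rng.standard_normal(x.size)

f, c_linear = coherence(x, y_linear, fs=fs, nperseg=1024)
_, c_noisy = coherence(x, y_noisy, fs=fs, nperseg=1024)

# A noise-free linear relationship gives estimated coherence ~1 at every
# frequency; the added independent noise pulls it below 1.
mean_linear = c_linear.mean()   # close to 1
mean_noisy = c_noisy.mean()     # noticeably below 1
```

This mirrors the statement above: unity coherence across the spectrum for a clean linear system, reduced coherence when the output contains components not linearly related to the input.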
Coherence and correlation
The coherence of two waves expresses how well correlated the waves are as quantified by the cross-correlation function. Cross-correlation quantifies the ability to predict the phase of the second wave by knowing the phase of the first. As an example, consider two waves perfectly correlated for all times (by using a monochromatic light source). At any time, the phase difference between the two waves will be constant. If, when they are combined, they exhibit perfect constructive interference, perfect destructive interference, or something in-between but with constant phase difference, then it follows that they are perfectly coherent. As will be discussed below, the second wave need not be a separate entity. It could be the first wave at a different time or position. In this case, the measure of correlation is the autocorrelation function (sometimes called self-coherence). Degree of correlation involves correlation functions.
Examples of wave-like states
These states are unified by the fact that their behavior is described by a wave equation or some generalization thereof.
Waves in a rope (up and down) or slinky (compression and expansion)
Surface waves in a liquid
Electromagnetic signals (fields) in transmission lines
Sound
Radio waves and microwaves
Light waves (optics)
Matter waves associated with, for examples, electrons and atoms
In systems with macroscopic waves, one can measure the wave directly, and consequently its correlation with another wave can simply be calculated. However, in optics one cannot measure the electric field directly, as it oscillates much faster than any detector's time resolution. Instead, one measures the intensity of the light. Most of the concepts involving coherence which will be introduced below were developed in the field of optics and then used in other fields. Therefore, many of the standard measurements of coherence are indirect measurements, even in fields where the wave can be measured directly.
Temporal coherence
Temporal coherence is the measure of the average correlation between the value of a wave and itself delayed by τ, at any pair of times. Temporal coherence tells us how monochromatic a source is. In other words, it characterizes how well a wave can interfere with itself at a different time. The delay over which the phase or amplitude wanders by a significant amount (and hence the correlation decreases by a significant amount) is defined as the coherence time τc. At a delay of τ = 0 the degree of coherence is perfect, whereas it drops significantly as the delay passes τ = τc. The coherence length Lc is defined as the distance the wave travels in time τc.
The coherence time is not the time duration of the signal; the coherence length differs from the coherence area (see below).
The relationship between coherence time and bandwidth
The larger the bandwidth – the range of frequencies Δf a wave contains – the faster the wave decorrelates (and hence the smaller τc is):

τc Δf ≈ 1.
Formally, this follows from the convolution theorem in mathematics, which relates the Fourier transform of the power spectrum (the intensity of each frequency) to its autocorrelation.
Narrow bandwidth lasers have long coherence lengths (up to hundreds of meters). For example, a stabilized and monomode helium–neon laser can easily produce light with coherence lengths of 300 m. Not all lasers have a high monochromaticity, however (e.g. for a mode-locked Ti-sapphire laser, Δλ ≈ 2 nm – 70 nm).
LEDs are characterized by Δλ ≈ 50 nm, and tungsten filament lights exhibit Δλ ≈ 600 nm, so these sources have shorter coherence times than the most monochromatic lasers.
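A back-of-the-envelope sketch of these numbers, using the order-of-magnitude relation Lc ~ λ²/Δλ that follows from τc ~ 1/Δf (the centre wavelengths are assumptions; the bandwidths are those quoted above):

```python
# Order-of-magnitude coherence length from the bandwidth relation tau_c ~ 1/df.
# In wavelength terms, L_c = c * tau_c ~ lambda^2 / delta_lambda.
C = 299_792_458.0  # speed of light in vacuum, m/s

def coherence_length_m(center_wavelength_m, bandwidth_m):
    """L_c ~ lambda^2 / delta_lambda (order of magnitude, shape factors omitted)."""
    return center_wavelength_m ** 2 / bandwidth_m

def coherence_time_s(center_wavelength_m, bandwidth_m):
    """tau_c = L_c / c."""
    return coherence_length_m(center_wavelength_m, bandwidth_m) / C

# Assumed centre wavelengths; bandwidths as quoted in the text.
l_led = coherence_length_m(550e-9, 50e-9)        # ~6 micrometres
l_tungsten = coherence_length_m(800e-9, 600e-9)  # ~1 micrometre
l_tisapph = coherence_length_m(800e-9, 2e-9)     # ~0.3 mm (narrow end of 2-70 nm)
```

The ordering comes out as expected: tungsten light decorrelates within about a micrometre, an LED within a few micrometres, and even a broadband mode-locked laser keeps coherence over a fraction of a millimetre, far short of a stabilized single-mode laser.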
Examples of temporal coherence
Examples of temporal coherence include:
A wave containing only a single frequency (monochromatic) is perfectly correlated with itself at all time delays, in accordance with the above relation. (See Figure 1)
Conversely, a wave whose phase drifts quickly will have a short coherence time. (See Figure 2)
Similarly, pulses (wave packets) of waves, which naturally have a broad range of frequencies, also have a short coherence time since the amplitude of the wave changes quickly. (See Figure 3)
Finally, white light, which has a very broad range of frequencies, is a wave which varies quickly in both amplitude and phase. Since it consequently has a very short coherence time (just 10 periods or so), it is often called incoherent.
Holography requires light with a long coherence time. In contrast, optical coherence tomography, in its classical version, uses light with a short coherence time.
Measurement of temporal coherence
In optics, temporal coherence is measured in an interferometer such as the Michelson interferometer or Mach–Zehnder interferometer. In these devices, a wave is combined with a copy of itself that is delayed by time τ. A detector measures the time-averaged intensity of the light exiting the interferometer. The resulting visibility of the interference pattern (e.g. see Figure 4) gives the temporal coherence at delay τ. Since for most natural light sources the coherence time is much shorter than the time resolution of any detector, the detector itself does the time averaging. Consider the example shown in Figure 3. At a fixed delay on the order of τc, an infinitely fast detector would measure an intensity that fluctuates significantly over a time t equal to τc. In this case, to find the temporal coherence at that delay, one would manually time-average the intensity.
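A toy numerical model of such a measurement (the Gaussian coherence envelope and all numbers are assumptions, not a real source) shows the fringe visibility collapsing once the delay exceeds the coherence time:

```python
import numpy as np

# Toy Michelson output: I(tau) = 1 + |g(tau)| * cos(w0 * tau), with an assumed
# Gaussian coherence envelope |g(tau)| = exp(-(tau/tau_c)^2).
tau_c = 30e-15              # assumed coherence time, 30 fs
w0 = 2 * np.pi * 4.5e14     # optical carrier frequency (~667 nm light), rad/s

def fringe_visibility(tau_center):
    # Scan a few optical periods around tau_center and read off the contrast.
    period = 2 * np.pi / w0
    tau = tau_center + np.linspace(-3 * period, 3 * period, 4001)
    envelope = np.exp(-(tau / tau_c) ** 2)
    i = 1 + envelope * np.cos(w0 * tau)
    return (i.max() - i.min()) / (i.max() + i.min())

v_zero = fringe_visibility(0.0)        # near 1: strong fringes at zero delay
v_late = fringe_visibility(3 * tau_c)  # near 0: fringes washed out past tau_c
```

Scanning the mirror delay and recording the visibility at each position traces out the coherence envelope, which is how an interferogram measures temporal coherence in practice.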
Spatial coherence
In some systems, such as water waves or optics, wave-like states can extend over one or two dimensions. Spatial coherence describes the ability for two spatial points x1 and x2 in the extent of a wave to interfere when averaged over time. More precisely, the spatial coherence is the cross-correlation between two points in a wave for all times. If a wave has only 1 value of amplitude over an infinite length, it is perfectly spatially coherent. The range of separation between the two points over which there is significant interference defines the diameter of the coherence area, Ac. (Coherence length Lc, often a feature of a source, is usually an industrial term related to the coherence time of the source, not the coherence area in the medium.)
Ac is the relevant measure of coherence for Young's double-slit interferometer. It is also used in optical imaging systems and particularly in various types of astronomy telescopes.
A distance L away from an incoherent source with surface area As, the coherence area is approximately

Ac ≈ λ²L²/As.
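As a rough numeric illustration, assuming the common order-of-magnitude estimate that the transverse coherence width is dc ~ λ/θ for an incoherent source of angular size θ (the wavelength and angular size below are illustrative):

```python
# Order-of-magnitude transverse coherence width for light from an incoherent
# source subtending an angle theta (radians): d_c ~ lambda / theta.
def coherence_width_m(wavelength_m, angular_size_rad):
    return wavelength_m / angular_size_rad

# Assumed numbers: the Sun subtends about 9.3 mrad; green light at 550 nm.
d_sun = coherence_width_m(550e-9, 9.3e-3)   # tens of micrometres
```

On this estimate, direct sunlight is spatially coherent only over a patch some tens of micrometres across, which is why interference experiments with thermal sources traditionally start by shrinking the effective source with a pinhole.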
Sometimes people also use "spatial coherence" to refer to the visibility when a wave-like state is combined with a spatially shifted copy of itself.
Examples
Consider a tungsten light-bulb filament. Different points in the filament emit light independently and have no fixed phase-relationship. In detail, at any point in time the profile of the emitted light is going to be distorted. The profile will change randomly over the coherence time . Since for a white-light source such as a light-bulb is small, the filament is considered a spatially incoherent source. In contrast, a radio antenna array, has large spatial coherence because antennas at opposite ends of the array emit with a fixed phase-relationship. Light waves produced by a laser often have high temporal and spatial coherence (though the degree of coherence depends strongly on the exact properties of the laser). Spatial coherence of laser beams also manifests itself as speckle patterns and diffraction fringes seen at the edges of shadow.
Holography requires temporally and spatially coherent light. Its inventor, Dennis Gabor, produced successful holograms more than ten years before lasers were invented. To produce coherent light he passed the monochromatic light from an emission line of a mercury-vapor lamp through a pinhole spatial filter.
In February 2011 it was reported that helium atoms, cooled to near absolute zero / Bose–Einstein condensate state, can be made to flow and behave as a coherent beam as occurs in a laser. In guided systems, partial and full incoherence can be studied in terms of the Gaussian Schell model. Moreover, the coherence properties of the output light from multimode nonlinear optical structures were found to obey the optical thermodynamic theory.
Spectral coherence of short pulses
Waves of different frequencies (in light these are different colours) can interfere to form a pulse if they have a fixed relative phase-relationship (see Fourier transform). Conversely, if waves of different frequencies are not coherent, then, when combined, they create a wave that is continuous in time (e.g. white light or white noise). The temporal duration of the pulse Δt is limited by the spectral bandwidth of the light Δf according to:

Δf Δt ≥ 1,
which follows from the properties of the Fourier transform and results in Küpfmüller's uncertainty principle (for quantum particles it also results in the Heisenberg uncertainty principle).
If the phase depends linearly on the frequency (i.e. φ(f) ∝ f) then the pulse will have the minimum time duration for its bandwidth (a transform-limited pulse), otherwise it is chirped (see dispersion).
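A quick numeric check of this bandwidth limit (the bandwidth reuses the Ti:sapphire range quoted earlier; the centre wavelength and the Δt ≈ 1/Δf shape factor are assumptions):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def bandwidth_hz(center_wavelength_m, d_lambda_m):
    """Frequency bandwidth from wavelength bandwidth: df = c * dl / l^2."""
    return C * d_lambda_m / center_wavelength_m ** 2

def transform_limit_s(center_wavelength_m, d_lambda_m):
    """Order-of-magnitude minimum pulse duration dt ~ 1/df (shape factor omitted)."""
    return 1.0 / bandwidth_hz(center_wavelength_m, d_lambda_m)

# Assumed: mode-locked Ti:sapphire with 70 nm bandwidth centred at 800 nm.
dt_min = transform_limit_s(800e-9, 70e-9)   # ~3e-14 s, i.e. roughly 30 fs
```

A 70 nm bandwidth at 800 nm corresponds to a transform limit of a few tens of femtoseconds, consistent with the pulse durations such lasers actually deliver.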
Measurement of spectral coherence
Measurement of the spectral coherence of light requires a nonlinear optical interferometer, such as an intensity optical correlator, frequency-resolved optical gating (FROG), or spectral phase interferometry for direct electric-field reconstruction (SPIDER).
Polarization and coherence
Light also has a polarization, which is the direction in which the electric or magnetic field oscillates. Unpolarized light is composed of incoherent light waves with random polarization angles. The electric field of the unpolarized light wanders in every direction and changes in phase over the coherence time of the two light waves. An absorbing polarizer rotated to any angle will always transmit half the incident intensity when averaged over time.
If the electric field wanders by a smaller amount the light will be partially polarized so that at some angle, the polarizer will transmit more than half the intensity. If a wave is combined with an orthogonally polarized copy of itself delayed by less than the coherence time, partially polarized light is created.
The polarization of a light beam is represented by a vector in the Poincaré sphere. For polarized light the end of the vector lies on the surface of the sphere, whereas the vector has zero length for unpolarized light. The vector for partially polarized light lies within the sphere.
Quantum coherence
The signature property of quantum matter waves, wave interference, relies on coherence. While initially patterned after optical coherence, the theory and experimental understanding of quantum coherence greatly expanded the topic.
Matter wave coherence
The simplest extension of optical coherence applies optical concepts to matter waves. For example, when performing the double-slit experiment with atoms instead of light waves, a sufficiently collimated atomic beam creates a coherent atomic wave-function illuminating both slits. Each slit acts as a separate but in-phase beam contributing to the intensity pattern on a screen. These two contributions give rise to an intensity pattern of bright bands due to constructive interference, interlaced with dark bands due to destructive interference, on a downstream screen. Many variations of this experiment have been demonstrated.
As with light, transverse coherence (across the direction of propagation) of matter waves is controlled by collimation. Because light, at all frequencies, travels the same velocity, longitudinal and temporal coherence are linked; in matter waves these are independent. In matter waves, velocity (energy) selection controls longitudinal coherence and pulsing or chopping controls temporal coherence.
Quantum optics
The discovery of the Hanbury Brown and Twiss effect – correlation of light upon coincidence – triggered Glauber's creation of uniquely quantum coherence analysis. Classical optical coherence becomes a classical limit for first-order quantum coherence; higher degree of coherence leads to many phenomena in quantum optics.
Macroscopic quantum coherence
Macroscopic scale quantum coherence leads to novel phenomena, the so-called macroscopic quantum phenomena. For instance, the laser, superconductivity and superfluidity are examples of highly coherent quantum systems whose effects are evident at the macroscopic scale. The macroscopic quantum coherence (off-diagonal long-range order, ODLRO) for superfluidity, and laser light, is related to first-order (1-body) coherence/ODLRO, while superconductivity is related to second-order coherence/ODLRO. (For fermions, such as electrons, only even orders of coherence/ODLRO are possible.) For bosons, a Bose–Einstein condensate is an example of a system exhibiting macroscopic quantum coherence through a multiple occupied single-particle state.
The classical electromagnetic field exhibits macroscopic quantum coherence. The most obvious example is the carrier signal for radio and TV. They satisfy Glauber's quantum description of coherence.
Quantum coherence as a resource
Recently M. B. Plenio and co-workers constructed an operational formulation of quantum coherence as a resource theory. They introduced coherence monotones analogous to the entanglement monotones. Quantum coherence has been shown to be equivalent to quantum entanglement in the sense that coherence can be faithfully described as entanglement, and conversely that each entanglement measure corresponds to a coherence measure.
Applications
Holography
Holography relies on the coherent superposition of optical wave fields. Holographic photographs have been used as art and as difficult-to-forge security labels.
Non-optical wave fields
Further applications concern the coherent superposition of non-optical wave fields. In quantum mechanics, for example, one considers a probability field, which is related to the wave function (whose squared magnitude is interpreted as a probability density). Here the applications concern, among others, the future technologies of quantum computing and the already available technology of quantum cryptography. Additionally, the problems treated in the following section on modal analysis belong here.
Modal analysis
Coherence is used to check the quality of the frequency response functions (FRFs) being measured. Low coherence can be caused by a poor signal-to-noise ratio and/or inadequate frequency resolution.
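This quality check can be sketched with the magnitude-squared coherence between an excitation and a response. The system, noise level, and signal lengths below are assumed for illustration; with a clean linear relationship the coherence stays near 1 across frequency, while heavy noise or nonlinearity pulls it toward 0.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 1024.0          # sampling rate, Hz (assumed)
n = 16384

x = rng.standard_normal(n)                     # broadband excitation
y = 0.5 * x + 0.001 * rng.standard_normal(n)   # linear response + tiny measurement noise

# Welch-averaged magnitude-squared coherence between excitation and response.
f, Cxy = coherence(x, y, fs=fs, nperseg=256)

mean_coh = float(Cxy.mean())   # close to 1 for a well-measured linear FRF
```

In a real modal test, dips in Cxy at particular frequencies would flag the FRF as unreliable there, for example near anti-resonances where the response signal drops into the noise floor.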
Plasticity (physics)
In physics and materials science, plasticity (also known as plastic deformation) is the ability of a solid material to undergo permanent deformation, a non-reversible change of shape in response to applied forces. For example, a solid piece of metal being bent or pounded into a new shape displays plasticity as permanent changes occur within the material itself. In engineering, the transition from elastic behavior to plastic behavior is known as yielding.
Plastic deformation is observed in most materials, particularly metals, soils, rocks, concrete, and foams. However, the physical mechanisms that cause plastic deformation can vary widely. At a crystalline scale, plasticity in metals is usually a consequence of dislocations. Such defects are relatively rare in most crystalline materials, but are numerous in some and part of their crystal structure; in such cases, plastic crystallinity can result. In brittle materials such as rock, concrete and bone, plasticity is caused predominantly by slip at microcracks. In cellular materials such as liquid foams or biological tissues, plasticity is mainly a consequence of bubble or cell rearrangements, notably T1 processes.
For many ductile metals, tensile loading applied to a sample will cause it to behave in an elastic manner. Each increment of load is accompanied by a proportional increment in extension. When the load is removed, the piece returns to its original size. However, once the load exceeds a threshold – the yield strength – the extension increases more rapidly than in the elastic region; now when the load is removed, some degree of extension will remain.
Elastic deformation, however, is an approximation and its quality depends on the time frame considered and loading speed. If, as indicated in the graph opposite, the deformation includes elastic deformation, it is also often referred to as "elasto-plastic deformation" or "elastic-plastic deformation".
Perfect plasticity is a property of materials to undergo irreversible deformation without any increase in stresses or loads. Plastic materials that have been hardened by prior deformation, such as cold forming, may need increasingly higher stresses to deform further. Generally, plastic deformation is also dependent on the deformation speed, i.e. higher stresses usually have to be applied to increase the rate of deformation. Such materials are said to deform visco-plastically.
Contributing properties
The plasticity of a material is directly proportional to the ductility and malleability of the material.
Physical mechanisms
In metals
Plasticity in a crystal of pure metal is primarily caused by two modes of deformation in the crystal lattice: slip and twinning. Slip is a shear deformation which moves the atoms through many interatomic distances relative to their initial positions. Twinning is the plastic deformation which takes place along two planes due to a set of forces applied to a given metal piece.
Most metals show more plasticity when hot than when cold. Lead shows sufficient plasticity at room temperature, while cast iron does not possess sufficient plasticity for any forging operation even when hot. This property is of importance in forming, shaping and extruding operations on metals. Most metals are rendered plastic by heating and hence shaped hot.
Slip systems
Crystalline materials contain uniform planes of atoms organized with long-range order. Planes may slip past each other along their close-packed directions, as is shown on the slip systems page. The result is a permanent change of shape within the crystal and plastic deformation. The presence of dislocations increases the likelihood of planes slipping.
Reversible plasticity
On the nanoscale the primary plastic deformation in simple face-centered cubic metals is reversible, as long as there is no material transport in form of cross-slip. Shape-memory alloys such as Nitinol wire also exhibit a reversible form of plasticity which is more properly called pseudoelasticity.
Shear banding
The presence of other defects within a crystal may entangle dislocations or otherwise prevent them from gliding. When this happens, plasticity is localized to particular regions in the material. For crystals, these regions of localized plasticity are called shear bands.
Microplasticity
Microplasticity is a local phenomenon in metals. It occurs for stress values where the metal is globally in the elastic domain while some local areas are in the plastic domain.
Amorphous materials
Crazing
In amorphous materials, the discussion of "dislocations" is inapplicable, since the entire material lacks long range order. These materials can still undergo plastic deformation. Since amorphous materials, like polymers, are not well-ordered, they contain a large amount of free volume, or wasted space. Pulling these materials in tension opens up these regions and can give materials a hazy appearance. This haziness is the result of crazing, where fibrils are formed within the material in regions of high hydrostatic stress. The material may go from an ordered appearance to a "crazy" pattern of strain and stretch marks.
Cellular materials
These materials plastically deform when the bending moment exceeds the fully plastic moment. This applies to open cell foams where the bending moment is exerted on the cell walls. The foams can be made of any material with a plastic yield point which includes rigid polymers and metals. This method of modeling the foam as beams is only valid if the ratio of the density of the foam to the density of the matter is less than 0.3. At higher relative densities, the cell walls yield axially instead of bending, so the beam model no longer applies. In closed cell foams, the yield strength is increased if the material is under tension because of the membrane that spans the face of the cells.
Soils and sand
Soils, particularly clays, display a significant amount of inelasticity under load. The causes of plasticity in soils can be quite complex and are strongly dependent on the microstructure, chemical composition, and water content. Plastic behavior in soils is caused primarily by the rearrangement of clusters of adjacent grains.
Rocks and concrete
Inelastic deformations of rocks and concrete are primarily caused by the formation of microcracks and sliding motions relative to these cracks. At high temperatures and pressures, plastic behavior can also be affected by the motion of dislocations in individual grains in the microstructure.
Time-independent yielding and plastic flow in crystalline materials
Time-independent plastic flow in both single crystals and polycrystals is defined by a critical/maximum resolved shear stress (τCRSS), initiating dislocation migration along parallel slip planes of a single slip system, thereby defining the transition from elastic to plastic deformation behavior in crystalline materials.
Time-independent yielding and plastic flow in single crystals
The critical resolved shear stress for single crystals is defined by Schmid’s law τCRSS=σy/m, where σy is the yield strength of the single crystal and m is the Schmid factor. The Schmid factor comprises two variables λ and φ, defining the angle between the slip direction and the applied tensile force, and the angle between the slip plane normal and the applied tensile force, respectively. Notably, because m > 1, σy > τCRSS.
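Schmid's law in the convention used above can be sketched as follows; the angles and τCRSS value are assumed example numbers, not data from the text.

```python
import math

def schmid_m(phi_deg: float, lam_deg: float) -> float:
    """m = 1 / (cos(phi) * cos(lambda)), matching tau_CRSS = sigma_y / m above.
    phi: angle between slip plane normal and tensile axis;
    lambda: angle between slip direction and tensile axis."""
    phi = math.radians(phi_deg)
    lam = math.radians(lam_deg)
    return 1.0 / (math.cos(phi) * math.cos(lam))

# Most favorable orientation: phi = lambda = 45 degrees gives m = 2, so the
# yield strength is twice the critical resolved shear stress.
tau_crss = 50.0                 # MPa, assumed illustrative value
m = schmid_m(45.0, 45.0)        # = 2
sigma_y = m * tau_crss          # = 100 MPa; sigma_y > tau_CRSS since m > 1
```

Less favorably oriented slip systems give larger m, so a higher applied stress is needed before their resolved shear stress reaches τCRSS.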
Critical resolved shear stress dependence on temperature, strain rate, and point defects
There are three characteristic regions of the critical resolved shear stress as a function of temperature. In the low temperature region 1 (T ≤ 0.25Tm), the strain rate must be high to achieve high τCRSS, which is required to initiate dislocation glide and equivalently plastic flow. In region 1, the critical resolved shear stress has two components: athermal (τa) and thermal (τ*) shear stresses, arising from the stress required to move dislocations in the presence of other dislocations, and the resistance of point defect obstacles to dislocation migration, respectively. At T = T*, the moderate temperature region 2 (0.25Tm < T < 0.7Tm) begins, where the thermal shear stress component τ* → 0, representing the elimination of point defect impedance to dislocation migration. Thus the temperature-independent critical resolved shear stress τCRSS = τa remains so until region 3 begins. Notably, in region 2, moderate temperature time-dependent plastic deformation (creep) mechanisms such as solute drag should be considered. Furthermore, in the high temperature region 3 (T ≥ 0.7Tm), the strain rate ε̇ can be low, contributing to low τCRSS; however, plastic flow will still occur due to thermally activated high temperature time-dependent plastic deformation mechanisms such as Nabarro–Herring (NH) and Coble diffusional flow through the lattice and along the single crystal surfaces, respectively, as well as dislocation climb-glide creep.
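The three regions above are delimited purely by the homologous temperature T/Tm, so the classification can be written as a small helper; the function name is hypothetical and the boundaries are taken directly from the text.

```python
def crss_region(T: float, Tm: float) -> int:
    """Classify temperature T (same units as melting point Tm) into the
    three CRSS regions described in the text."""
    t = T / Tm   # homologous temperature
    if t <= 0.25:
        return 1   # low T: tau_CRSS = tau_a + tau* (thermal component active)
    elif t < 0.7:
        return 2   # moderate T: tau* -> 0, athermal plateau tau_CRSS = tau_a
    else:
        return 3   # high T: diffusional flow and climb-glide creep dominate

# Example with an assumed melting point of 1000 K:
regions = [crss_region(T, 1000.0) for T in (100.0, 500.0, 900.0)]
```

For a metal melting at 1000 K, 100 K falls in region 1, 500 K in the athermal plateau of region 2, and 900 K in the creep-dominated region 3.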
Stages of time-independent plastic flow, post yielding
During the easy glide stage 1, the work hardening rate, defined by the change in shear stress with respect to shear strain (dτ/dγ), is low, representative of the small applied shear stress necessary to induce a large amount of shear strain. Facile dislocation glide and corresponding flow is attributed to dislocation migration along parallel slip planes only (i.e. one slip system). Moderate impedance to dislocation migration along parallel slip planes is exhibited according to the weak stress field interactions between these dislocations, which heighten with smaller interplanar spacing. Overall, these migrating dislocations within a single slip system act as weak obstacles to flow, and a modest rise in stress is observed in comparison to the yield stress.

During the linear hardening stage 2 of flow, the work hardening rate becomes high as considerable stress is required to overcome the stress field interactions of dislocations migrating on non-parallel slip planes (i.e. multiple slip systems), which act as strong obstacles to flow. Much stress is required to drive continual dislocation migration for small strains. The shear flow stress is directly proportional to the square root of the dislocation density (τflow ~ ρ½), irrespective of the evolution of dislocation configurations, displaying the reliance of hardening on the number of dislocations present. Regarding this evolution of dislocation configurations, at small strains the dislocation arrangement is a random 3D array of intersecting lines. Moderate strains correspond to cellular dislocation structures of heterogeneous dislocation distribution, with large dislocation density at the cell boundaries and small dislocation density within the cell interior. At even larger strains the cellular dislocation structure shrinks until a minimum size is achieved.
Finally, the work hardening rate becomes low again in the exhaustion/saturation of hardening stage 3 of plastic flow, as small shear stresses produce large shear strains. Notably, instances when multiple slip systems are oriented favorably with respect to the applied stress, the τCRSS for these systems may be similar and yielding may occur according to dislocation migration along multiple slip systems with non-parallel slip planes, displaying a stage 1 work-hardening rate typically characteristic of stage 2. Lastly, distinction between time-independent plastic deformation in body-centered cubic transition metals and face centered cubic metals is summarized below.
Time-independent yielding and plastic flow in polycrystals
Plasticity in polycrystals differs substantially from that in single crystals due to the presence of grain boundary (GB) planar defects, which act as very strong obstacles to plastic flow by impeding dislocation migration along the entire length of the activated slip plane(s). Hence, dislocations cannot pass from one grain to another across the grain boundary. The following sections explore specific GB requirements for extensive plastic deformation of polycrystals prior to fracture, as well as the influence of microscopic yielding within individual crystallites on macroscopic yielding of the polycrystal. The critical resolved shear stress for polycrystals is defined by Schmid’s law as well (τCRSS=σy/ṁ), where σy is the yield strength of the polycrystal and ṁ is the weighted Schmid factor. The weighted Schmid factor reflects the least favorably oriented slip system among the most favorably oriented slip systems of the grains constituting the GB.
Grain boundary constraint in polycrystals
The GB constraint for polycrystals can be explained by considering a grain boundary in the xz plane between two single crystals A and B of identical composition, structure, and slip systems, but misoriented with respect to each other. To ensure that voids do not form between individually deforming grains, the GB constraint for the bicrystal is as follows:
εxxA = εxxB (the x-axial strain at the GB must be equivalent for A and B), εzzA = εzzB (the z-axial strain at the GB must be equivalent for A and B), and εxzA = εxzB (the xz shear strain along the xz-GB plane must be equivalent for A and B). In addition, this GB constraint requires that five independent slip systems be activated per crystallite constituting the GB. Notably, because independent slip systems are defined as slip planes on which dislocation migrations cannot be reproduced by any combination of dislocation migrations along other slip systems' planes, the number of geometrical slip systems for a given crystal system - which by definition can be constructed by slip system combinations - is typically greater than that of independent slip systems. Significantly, there is a maximum of five independent slip systems for each of the seven crystal systems; however, not all seven crystal systems attain this upper limit. In fact, even within a given crystal system, the composition and Bravais lattice diversify the number of independent slip systems (see the table below). In cases for which crystallites of a polycrystal do not obtain five independent slip systems, the GB condition cannot be met, and thus the time-independent deformation of individual crystallites results in cracks and voids at the GBs of the polycrystal, and soon fracture is realized. Hence, for a given composition and structure, a single crystal with fewer than five independent slip systems is more ductile (exhibiting a greater extent of plasticity) than its polycrystalline form.
Implications of the grain boundary constraint in polycrystals
Although the two crystallites A and B discussed in the above section have identical slip systems, they are misoriented with respect to each other, and therefore misoriented with respect to the applied force. Thus, microscopic yielding within a crystallite interior may occur according to the rules governing single crystal time-independent yielding. Eventually, the activated slip planes within the grain interiors will permit dislocation migration to the GB, where many dislocations then pile up as geometrically necessary dislocations. This pile up corresponds to strain gradients across individual grains, as the dislocation density near the GB is greater than that in the grain interior, imposing a stress on the adjacent grain in contact. When considering the AB bicrystal as a whole, the most favorably oriented slip system in A will not be that in B, and hence τACRSS ≠ τBCRSS. Paramount is the fact that macroscopic yielding of the bicrystal is prolonged until the higher value of τCRSS between grains A and B is achieved, according to the GB constraint. Thus, for a given composition and structure, a polycrystal with five independent slip systems is stronger (greater extent of plasticity) than its single crystalline form. Correspondingly, the work hardening rate will be higher for the polycrystal than the single crystal, as more stress is required in the polycrystal to produce strains. Importantly, just as with single crystal flow stress, τflow ~ ρ½, but it is also inversely proportional to the square root of the average grain diameter (τflow ~ d−½). Therefore, the flow stress of a polycrystal, and hence the polycrystal's strength, increases with small grain size. The reason for this is that smaller grains have a relatively smaller number of slip planes to be activated, corresponding to fewer dislocations migrating to the GBs, and therefore less stress induced on adjacent grains due to dislocation pile up.
In addition, for a given volume of polycrystal, smaller grains present more strong obstacle grain boundaries. These two factors provide an understanding as to why the onset of macroscopic flow in fine-grained polycrystals occurs at larger applied stresses than in coarse-grained polycrystals.
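The grain-size dependence described above (τflow ~ d−½) is a Hall–Petch-type relation and can be sketched as follows; the constants τ0 and k are assumed illustrative values, not measured data.

```python
import math

def flow_stress(tau0: float, k: float, d: float) -> float:
    """tau_flow = tau0 + k / sqrt(d): friction stress tau0 plus a
    grain-boundary strengthening term that grows as grain diameter d shrinks."""
    return tau0 + k / math.sqrt(d)

# Assumed constants: tau0 = 10 MPa, k = 2 MPa*sqrt(m); grain diameters in metres.
fine = flow_stress(10.0, 2.0, d=1e-6)     # 1 micrometre grains
coarse = flow_stress(10.0, 2.0, d=1e-4)   # 100 micrometre grains

# Smaller grains -> more grain-boundary obstacles per volume -> higher flow stress.
```

This reproduces the conclusion of the text: the onset of macroscopic flow in fine-grained polycrystals occurs at larger applied stresses than in coarse-grained polycrystals.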
Mathematical descriptions
Deformation theory
There are several mathematical descriptions of plasticity. One is deformation theory (see e.g. Hooke's law) where the Cauchy stress tensor (of order d-1 in d dimensions) is a function of the strain tensor. Although this description is accurate when a small part of matter is subjected to increasing loading (such as strain loading), this theory cannot account for irreversibility.
Ductile materials can sustain large plastic deformations without fracture. However, even ductile metals will fracture when the strain becomes large enough—this is as a result of work hardening of the material, which causes it to become brittle. Heat treatment such as annealing can restore the ductility of a worked piece, so that shaping can continue.
Flow plasticity theory
In 1934, Egon Orowan, Michael Polanyi and Geoffrey Ingram Taylor, roughly simultaneously, realized that the plastic deformation of ductile materials could be explained in terms of the theory of dislocations. The mathematical theory of plasticity, flow plasticity theory, uses a set of non-linear, non-integrable equations to describe the set of changes on strain and stress with respect to a previous state and a small increase of deformation.
Yield criteria
If the stress exceeds a critical value, as was mentioned above, the material will undergo plastic, or irreversible, deformation. This critical stress can be tensile or compressive. The Tresca and the von Mises criteria are commonly used to determine whether a material has yielded. However, these criteria have proved inadequate for a large range of materials and several other yield criteria are also in widespread use.
Tresca criterion
The Tresca criterion is based on the notion that when a material fails, it does so in shear, which is a relatively good assumption when considering metals. Given the principal stress state, we can use Mohr's circle to solve for the maximum shear stresses our material will experience and conclude that the material will fail if σ1 − σ3 ≥ σ0,
where σ1 is the maximum normal stress, σ3 is the minimum normal stress, and σ0 is the stress under which the material fails in uniaxial loading. A yield surface may be constructed, which provides a visual representation of this concept. Inside of the yield surface, deformation is elastic. On the surface, deformation is plastic. It is impossible for a material to have stress states outside its yield surface.
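The Tresca check described above reduces to comparing the spread of the principal stresses against the uniaxial failure stress; the stress values below are assumed examples.

```python
def tresca_yields(s1: float, s2: float, s3: float, s0: float) -> bool:
    """True if the material yields under the Tresca criterion:
    sigma_max - sigma_min >= sigma_0, for principal stresses s1, s2, s3."""
    smax, smin = max(s1, s2, s3), min(s1, s2, s3)
    return (smax - smin) >= s0

# Assumed example states with a uniaxial failure stress of 250 MPa:
inside = tresca_yields(100.0, 50.0, 0.0, s0=250.0)       # inside the yield surface: elastic
on_surface = tresca_yields(250.0, 100.0, 0.0, s0=250.0)  # on the surface: plastic
```

Sorting the principal stresses first means the function does not care in which order σ1, σ2, σ3 are passed, which matches the geometric picture of the yield surface.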
Huber–von Mises criterion
The Huber–von Mises criterion is based on the Tresca criterion but takes into account the assumption that hydrostatic stresses do not contribute to material failure. M. T. Huber was the first who proposed the criterion of shear energy. Von Mises solves for an effective stress under uniaxial loading, subtracting out hydrostatic stresses, and states that all effective stresses greater than that which causes material failure in uniaxial loading will result in plastic deformation. In terms of principal stresses, the effective stress is σv = √(((σ1 − σ2)² + (σ2 − σ3)² + (σ3 − σ1)²)/2), and yielding occurs when σv ≥ σ0.
Again, a visual representation of the yield surface may be constructed using the above equation, which takes the shape of an ellipse. Inside the surface, materials undergo elastic deformation. Reaching the surface means the material undergoes plastic deformations.
Biological interaction
In ecology, a biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species (intraspecific interactions) or of different species (interspecific interactions). These effects may be short-term or long-term, and both often strongly influence the adaptation and evolution of the species involved. Biological interactions range from mutualism, beneficial to both partners, to competition, harmful to both partners. Interactions can be direct, when physical contact is established, or indirect, through intermediaries such as shared resources, territories, ecological services, metabolic waste, toxins or growth inhibitors. Such relationships can be characterized by their net effect, based on the individual effects the relationship has on each organism.
Several recent studies have suggested non-trophic species interactions such as habitat modification and mutualisms can be important determinants of food web structures. However, it remains unclear whether these findings generalize across ecosystems, and whether non-trophic interactions affect food webs randomly, or affect specific trophic levels or functional groups.
History
Although biological interactions, more or less individually, were studied earlier, Edward Haskell (1949) gave an integrative approach to the thematic, proposing a classification of "co-actions", later adopted by biologists as "interactions". Close and long-term interactions are described as symbiosis; symbioses that are mutually beneficial are called mutualistic.
The term symbiosis was subject to a century-long debate about whether it should specifically denote mutualism, as in lichens, or also cover one-sided relationships, such as parasitism, that benefit only one partner. This debate created two different classifications for biotic interactions, one based on the duration of the interaction (long-term and short-term interactions), and the other based on the magnitude of the interaction force (competition/mutualism) or its effect on individual fitness, according to the stress gradient hypothesis and the mutualism–parasitism continuum. Evolutionary game theories such as the Red Queen hypothesis, the Red King hypothesis and the Black Queen hypothesis have demonstrated that a classification based on the force of interaction is important.
Classification based on time of interaction
Short-term interactions
Short-term interactions, including predation and pollination, are extremely important in ecology and evolution. These are short-lived in terms of the duration of a single interaction: a predator kills and eats a prey; a pollinator transfers pollen from one flower to another; but they are extremely durable in terms of their influence on the evolution of both partners. As a result, the partners coevolve.
Predation
In predation, one organism, the predator, kills and eats another organism, its prey. Predators are adapted and often highly specialized for hunting, with acute senses such as vision, hearing, or smell. Many predatory animals, both vertebrate and invertebrate, have sharp claws or jaws to grip, kill, and cut up their prey. Other adaptations include stealth and aggressive mimicry that improve hunting efficiency. Predation has a powerful selective effect on prey, causing them to develop antipredator adaptations such as warning coloration, alarm calls and other signals, camouflage and defensive spines and chemicals. Predation has been a major driver of evolution since at least the Cambrian period.
Pollination
In pollination, pollinators including insects (entomophily), some birds (ornithophily), and some bats, transfer pollen from a male flower part to a female flower part, enabling fertilisation, in return for a reward of pollen or nectar. The partners have coevolved through geological time; in the case of insects and flowering plants, the coevolution has continued for over 100 million years. Insect-pollinated flowers are adapted with shaped structures, bright colours, patterns, scent, nectar, and sticky pollen to attract insects, guide them to pick up and deposit pollen, and reward them for the service. Pollinator insects like bees are adapted to detect flowers by colour, pattern, and scent, to collect and transport pollen (such as with bristles shaped to form pollen baskets on their hind legs), and to collect and process nectar (in the case of honey bees, making and storing honey). The adaptations on each side of the interaction match the adaptations on the other side, and have been shaped by natural selection on their effectiveness of pollination.
Seed dispersal
Seed dispersal is the movement, spread or transport of seeds away from the parent plant. Plants have limited mobility and rely upon a variety of dispersal vectors to transport their propagules, including both abiotic vectors such as the wind and living (biotic) vectors like birds. Seeds can be dispersed away from the parent plant individually or collectively, as well as dispersed in both space and time. The patterns of seed dispersal are determined in large part by the dispersal mechanism and this has important implications for the demographic and genetic structure of plant populations, as well as migration patterns and species interactions. There are five main modes of seed dispersal: gravity, wind, ballistic, water, and by animals. Some plants are serotinous and only disperse their seeds in response to an environmental stimulus. Dispersal involves the letting go or detachment of a diaspore from the main parent plant.
Long-term interactions (symbioses)
The six possible types of symbiosis are mutualism, commensalism, parasitism, neutralism, amensalism, and competition. These are distinguished by the degree of benefit or harm they cause to each partner.
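Since the six types above are distinguished purely by the benefit or harm to each partner, the classification can be written as a lookup keyed on the unordered pair of effects ('+' benefit, '0' no effect, '-' harm). This is an illustrative sketch; note that within long-term symbioses the (+, −) pair denotes parasitism, while the same sign pair in short-term interactions would be predation.

```python
# Map the unordered pair of effects on the two partners to the symbiosis type.
SYMBIOSES = {
    frozenset({'+'}): 'mutualism',          # (+, +): both benefit
    frozenset({'+', '0'}): 'commensalism',  # one benefits, other unaffected
    frozenset({'+', '-'}): 'parasitism',    # one benefits, other harmed
    frozenset({'0'}): 'neutralism',         # (0, 0): neither affected
    frozenset({'0', '-'}): 'amensalism',    # one harmed, other unaffected
    frozenset({'-'}): 'competition',        # (-, -): both harmed
}

def classify(effect_a: str, effect_b: str) -> str:
    """Return the symbiosis type for the effects on partners A and B."""
    return SYMBIOSES[frozenset({effect_a, effect_b})]
```

Using a frozenset makes the lookup order-independent: classify('+', '-') and classify('-', '+') both return 'parasitism'.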
Mutualism
Mutualism is an interaction between two or more species, where species derive a mutual benefit, for example an increased carrying capacity. Similar interactions within a species are known as co-operation. Mutualism may be classified in terms of the closeness of association, the closest being symbiosis, which is often confused with mutualism. One or both species involved in the interaction may be obligate, meaning they cannot survive in the short or long term without the other species. Though mutualism has historically received less attention than other interactions such as predation, it is an important subject in ecology. Examples include cleaning symbiosis, gut flora, Müllerian mimicry, and nitrogen fixation by bacteria in the root nodules of legumes.
Commensalism
Commensalism benefits one organism and the other organism is neither benefited nor harmed. It occurs when one organism takes benefits by interacting with another organism by which the host organism is not affected. A good example is a remora living with a manatee. Remoras feed on the manatee's faeces. The manatee is not affected by this interaction, as the remora does not deplete the manatee's resources.
Parasitism
Parasitism is a relationship between species, where one organism, the parasite, lives on or in another organism, the host, causing it some harm, and is adapted structurally to this way of life. The parasite either feeds on the host, or, in the case of intestinal parasites, consumes some of its food.
Neutralism
Neutralism (a term introduced by Eugene Odum) describes the relationship between two species that interact but do not affect each other. Examples of true neutralism are virtually impossible to prove; the term is in practice used to describe situations where interactions are negligible or insignificant.
Amensalism
Amensalism (a term introduced by Edward Haskell) is an interaction where an organism inflicts harm on another organism without any costs or benefits to itself. Amensalism describes the adverse effect that one organism has on another organism. This is a unidirectional process based on the release of a specific compound by one organism that has a negative effect on another. A classic example of amensalism is the microbial production of antibiotics that can inhibit or kill other, susceptible microorganisms.
A clear case of amensalism is where sheep or cattle trample grass. Whilst the presence of the grass causes negligible detrimental effects to the animal's hoof, the grass suffers from being crushed. Amensalism is often used to describe strongly asymmetrical competitive interactions, such as has been observed between the Spanish ibex and weevils of the genus Timarcha which feed upon the same type of shrub. Whilst the presence of the weevil has almost no influence on food availability, the presence of ibex has an enormous detrimental effect on weevil numbers, as they consume significant quantities of plant matter and incidentally ingest the weevils upon it.
Competition
Competition can be defined as an interaction between organisms or species, in which the fitness of one is lowered by the presence of another. Competition is often for a resource such as food, water, or territory in limited supply, or for access to females for reproduction. Competition among members of the same species is known as intraspecific competition, while competition between individuals of different species is known as interspecific competition. According to the competitive exclusion principle, species less suited to compete for resources should either adapt or die out. This competition within and between species for resources plays a critical role in natural selection.
Classification based on effect on fitness
Biotic interactions can vary in intensity (strength of interaction) and frequency (number of interactions in a given time). There are direct interactions, when there is physical contact between individuals, or indirect interactions, when there is no physical contact and the interaction occurs through a resource, ecological service, toxin or growth inhibitor. The interactions can be directly determined by individuals (incidentally) or by stochastic processes (accidentally), for instance side effects that one individual has on another. They are divided into six major types: competition, antagonism, amensalism, neutralism, commensalism and mutualism.
Non-trophic interactions
Some examples of non-trophic interactions are habitat modification, mutualism and competition for space. It has been suggested recently that non-trophic interactions can indirectly affect food web topology and trophic dynamics by affecting the species in the network and the strength of trophic links. It is necessary to integrate trophic and non-trophic interactions in ecological network analyses. The few empirical studies that address this suggest food web structures (network topologies) can be strongly influenced by species interactions outside the trophic network. However, these studies include only a limited number of coastal systems, and it remains unclear to what extent these findings can be generalized. Whether non-trophic interactions typically affect specific species, trophic levels, or functional groups within the food web, or, alternatively, indiscriminately mediate species and their trophic interactions throughout the network has yet to be resolved. Sessile species with generally low trophic levels seem to benefit more than others from non-trophic facilitation, though facilitation benefits higher trophic and more mobile species as well.
Amazon Kindle

Amazon Kindle is a series of e-readers designed and marketed by Amazon. Amazon Kindle devices enable users to browse, buy, download, and read e-books, newspapers, magazines, Audible audiobooks, and other digital media via wireless networking to the Kindle Store. The hardware platform, which Amazon subsidiary Lab126 developed, began as a single device in 2007. Currently, it comprises a range of devices, including e-readers with E Ink electronic paper displays and Kindle applications on all major computing platforms. All Kindle devices integrate with Windows and macOS file systems and Kindle Store content and, as of March 2018, the store had over six million e-books available in the United States.
Naming and evolution
In 2004, Amazon founder and CEO Jeff Bezos instructed the company's employees to build the world's best electronic reader before Amazon's competitors could. Amazon originally used the codename Fiona for the device.
Branding consultants Michael Cronan and Karin Hibma devised the Kindle name. Lab126 asked them to name the product, and they suggested "kindle", meaning to light a fire. They felt this was an apt metaphor for reading and intellectual excitement.
Kindle hardware evolved from the original Kindle introduced in 2007 and the Kindle DX (with its larger 9.7" screen) introduced in 2009. The DX remained the only non-6" E Ink Kindle device until the 2017 introduction of the Oasis 2. The range included early generation devices with a keyboard (Kindle Keyboard), devices with touch-sensitive, lighted, high-resolution screens (Kindle Paperwhite), early generations of a tablet computer with the Kindle app (Kindle Fire), and low-priced devices with a touch-sensitive screen (Kindle 7). However, the Kindle e-reader has often been a narrow-purpose device for reading rather than multipurpose hardware that might create distractions while reading. Active Content support was introduced in 2010 only to be dropped from new Kindle devices in late 2014. After the first three generations, the Kindle Fire tablet branding was changed to Amazon Fire in 2014; this name change reflected the devices' wider capabilities as Android-derived tablets. Other later developments include devices with larger E Ink displays such as the Kindle Oasis 2 (2017) at 7" and the Paperwhite 5 (2021) at 6.8", as well as a device with a 10.2" screen and Wacom stylus support called the Kindle Scribe (2022). In 2022 Amazon also introduced the 11th-generation Kindle with a 300 PPI display, ending the use of the 6" 167 PPI display that had been on every basic Kindle since 2007. In 2024 Amazon introduced the first color E Ink Kindle, the Kindle Colorsoft Signature Edition.
Amazon has also introduced Kindle apps for use on various devices and platforms, including Windows, macOS, Android, iOS, BlackBerry 10 and Windows Phone. Amazon also has a cloud reader to allow users to read e-books using modern web browsers.
Device specifications
Features
Kindle devices support dictionary and Wikipedia look-up functions when highlighting a word in an e-book. The font type, size and margins can be customized. Kindles are charged by connecting to a computer's USB port or to an AC adapter. Users with impaired vision can use an audio adapter to have any e-book read aloud on supported Kindles, and those who have difficulty reading text can switch to the Amazon Ember Bold font, or the bold versions of other fonts, for darker text.
The Kindle also contains experimental features such as a web browser, which uses NetFront based on WebKit. On 3G models the browser can freely access the Kindle Store and Wikipedia, while access to other websites may be limited to 50 MB of data per month. Other experimental features, depending on the model, include a text-to-speech engine that can read the text of e-books aloud and an MP3 player that can be used to play music while reading.
The Kindle's operating system updates are designed to be received wirelessly and installed automatically during a period in sleep mode in which Wi-Fi is turned on. A user may install firmware updates manually by downloading the firmware for their device and copying the file to the device's root directory. The Kindle operating system uses the Linux kernel with a Java app for reading e-books.
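The manual update path described above amounts to a single file copy onto the Kindle's USB storage. A minimal sketch in Python (the firmware filename and mount point shown are hypothetical and vary by model and operating system):

```python
import shutil
from pathlib import Path

def sideload_firmware(fw_file: str, mount_point: str) -> Path:
    """Copy a downloaded Kindle firmware .bin into the device's root
    directory, where the Kindle picks it up for installation."""
    fw = Path(fw_file)
    root = Path(mount_point)
    if fw.suffix != ".bin":
        raise ValueError("expected a .bin firmware file")
    if not root.is_dir():
        raise FileNotFoundError(f"device not mounted at {root}")
    # The update must sit in the root directory, not a subfolder.
    return Path(shutil.copy(fw, root))

# e.g. sideload_firmware("update_kindle_x.bin", "/media/user/Kindle")
```

After safely ejecting the device, the update is then applied from the Kindle's own settings menu.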
Send to Kindle service
Amazon initially offered a Personal Documents Service to add content to a user's Kindle, which only worked via email; documents were sent directly to the Kindle via Whispernet. Later expansions added cloud library features and content management. The modern service is called Send to Kindle and is available through various means such as email, website, app, or browser extension. It allows the user to send files such as EPUB, PDF, HTML pages, Microsoft Word documents, and GIF, PNG, and BMP graphics directly to the user's Kindle library. When Amazon receives the file, it converts it to Kindle File Format and stores it in the user's online library (called "Your Content" by Amazon). Content added via Send to Kindle is added to the user library as Personal Documents by default, but some Send to Kindle interfaces allow users to send a document to a specific device and skip adding it to the library. The service's personal documents can be accessed by all Kindle hardware devices as well as iOS and Android devices using the Kindle app.
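The email route works like any email with an attachment: an approved sender address mails the document to the user's Kindle address. A sketch of assembling such a message with Python's standard library (the addresses and filename are hypothetical; actually sending it would additionally require an SMTP server the sender is authorized to use):

```python
from email.message import EmailMessage
from pathlib import Path

def build_send_to_kindle_email(doc_path: str, sender: str,
                               kindle_addr: str) -> EmailMessage:
    """Assemble an email attaching one document for the
    email-based Send to Kindle service."""
    msg = EmailMessage()
    msg["From"] = sender        # must be on the account's approved-senders list
    msg["To"] = kindle_addr     # the user's ...@kindle.com address
    msg["Subject"] = "Document"
    msg.set_content("Sent to Kindle.")
    doc = Path(doc_path)
    msg.add_attachment(doc.read_bytes(),
                       maintype="application",
                       subtype="octet-stream",
                       filename=doc.name)
    return msg
```

The message could then be handed to `smtplib.SMTP.send_message` for delivery.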
Until August 2022, in addition to the document types mentioned above, this service could be used to send unprotected .mobi/.azw files (original versions only) to a user's Kindle library.
Sending the file is free if downloaded using Wi-Fi, but, prior to 2021, cost $0.15 per MB when using Kindle's former 3G service.
Format support by device
The first Kindle could read unprotected Mobipocket files (MOBI, PRC), plain text files (TXT), Topaz format books (TPZ) and Amazon's AZW format.
The Kindle 2 added native PDF capability with the version 2.3 firmware upgrade. The Kindle 1 could not read PDF files, but Amazon provides experimental conversion to the native AZW format, with the caveat that not all PDFs may format correctly. The Kindle 2 added the ability to play the Audible Enhanced (AAX) format. The Kindle 2 can also display HTML files.
The fourth and later generation Kindles, Touch, Paperwhite (all generations), Voyage and Oasis (all generations) can display AZW, AZW3, TXT, PDF, unprotected MOBI, and PRC files natively. HTML, DOC, DOCX, JPEG, GIF, PNG, and BMP are usable through Amazon's conversion service. The Keyboard, Touch, Oasis 2 & 3, Kindle 8 & 9, and Paperwhite 4 can also play Audible Enhanced (AA, AAX). The Kindle (7, 8 & 9), Kindle Paperwhite (2, 3, 4 & 5), Voyage and Oasis (1, 2 & 3) can display KFX files natively. KFX is Amazon's successor to the AZW3 format.
Kindles cannot natively display EPUB files. However, at least two methods allow EPUB content to be viewed on Kindles:
Specialized software like Calibre allows EPUB or some other unsupported files to be converted to one of the supported file formats.
Kindles can be jailbroken to allow third-party software, such as KOReader which does support EPUB, to be installed.
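The Calibre route is usually driven through its bundled ebook-convert command-line tool, which infers the output format from the destination filename's extension. A minimal wrapper (the filenames are placeholders, and Calibre must be installed separately):

```python
import shutil
import subprocess

def epub_convert_cmd(src: str, dst: str) -> list:
    """Build the ebook-convert command line; the target format
    (e.g. .azw3 or .mobi) is taken from dst's extension."""
    return ["ebook-convert", src, dst]

def convert_epub(src: str, dst: str) -> None:
    """Run the conversion, failing clearly if Calibre is absent."""
    if shutil.which("ebook-convert") is None:
        raise RuntimeError("Calibre's ebook-convert not found on PATH")
    subprocess.run(epub_convert_cmd(src, dst), check=True)

# e.g. convert_epub("book.epub", "book.azw3")
```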
In late April 2022, Amazon announced that Send to Kindle would support EPUB beginning in late 2022.
Multiple devices and organization
An e-book may be downloaded from Amazon to several devices at the same time, as long as the devices are registered to the same Amazon account. A sharing limit typically ranges from one to six devices, depending on an undisclosed number of licenses set by the publisher. When a limit is reached, the user must remove the e-book from some device or unregister a device containing the e-book in order to add the e-book to another device.
The original Kindle and Kindle 2 did not allow the user to organize books into folders. The user could only select what type of content to display on the home screen and whether to organize by author, title, or download date. Kindle software version 2.5 allowed for the organization of books into "Collections" which behave like non-structured tags/labels: a collection cannot include other collections, and one book may be added to multiple collections. These collections are normally set and organized on the Kindle itself, one book at a time. The set of all collections of a first Kindle device can be imported to a second Kindle device that is connected to the cloud and is registered to the same user; as the result of this operation, the documents that are on the second device now become organized according to the first device's collections. There is no option to organize by series or series order, as the AZW format does not possess the necessary metadata fields.
X-Ray
X-Ray is a reference tool that is incorporated in Kindle Touch and later devices, the Fire tablets, the Kindle app for mobile platforms and Fire TV. X-Ray lets users explore in more depth the contents of a book, by accessing preloaded files with relevant information, such as the most common characters, locations, themes, or ideas.
Annotations
Users can bookmark, highlight, and search through content. Pages can be bookmarked for reference, and notes can be added to relevant content. While a book is open on the display, menu options allow users to search for synonyms and definitions from the built-in dictionary. The device also remembers the last page read for each book. Pages can be saved as a "clipping", or a text file containing the text of the currently displayed page. All clippings are appended to a single file, which can be downloaded over a USB cable. Due to the TXT format of the clippings file, all formatting (such as bold, italics, bigger fonts for headlines, etc.) is stripped off the original text.
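Because all clippings land in one plain-text file, the file lends itself to simple post-processing. A parser sketch, assuming the commonly reported layout of the clippings file (a title line, a metadata line, then the clipped text, with entries separated by "==========" lines); the exact layout is not guaranteed across firmware versions:

```python
def parse_clippings(text: str) -> list:
    """Split a Kindle clippings file into entries under the layout
    assumed above: title line, metadata line, then clipped text."""
    entries = []
    for chunk in text.split("=========="):
        lines = [ln.strip() for ln in chunk.strip().splitlines() if ln.strip()]
        if len(lines) < 3:
            continue  # separator residue or a malformed entry
        entries.append({"title": lines[0],
                        "meta": lines[1],
                        "text": "\n".join(lines[2:])})
    return entries
```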
Textbook rentals
On July 18, 2011, Amazon began a program that allows college students to rent Kindle textbooks from three different publishers for a fixed period of time.
Collection of users' reading data
Kindle devices may report information about their users' reading data, including the last page read, how long each e-book was opened, annotations, bookmarks, notes, highlights, or similar markings, to Amazon. The Kindle stores this information for all Amazon e-books, but it is unclear whether this data is stored for non-Amazon e-books. E-reader data privacy is limited: Amazon knows the user's identity, what the user is reading, whether the user has finished the book, what page the user is on, how long the user has spent on each page, and which passages the user may have highlighted.
Kindle ecosystem
Kindle Store
Content from Amazon's Kindle Store is encoded in Amazon's proprietary Kindle formats (.azw, .kf8 and .kfx). In addition to published content, Kindle users can access the Internet using the experimental web browser, which uses NetFront. Reading material in the Kindle Store can be browsed from the Kindle itself or through a web browser. The store features Kindle Unlimited, which offers unlimited access to over one million e-books for a monthly fee.
Content for the Kindle can be purchased online and downloaded wirelessly in some countries, using either standard Wi-Fi or Amazon's 3G Whispernet network. Whispernet is accessible without any monthly fees or a subscription, although fees can be incurred for the delivery of periodicals and other content when roaming internationally beyond the customer's home country. Through a service called Whispersync, customers can synchronize reading progress, bookmarks, and other information across Kindle hardware and other mobile devices. Kindles that could access Whispernet only via the 3G network lost that connectivity in December 2021, when carriers retired their 3G networks.
For U.S. customers traveling abroad, Amazon originally charged a $1.99 fee to download e-books over 3G while overseas, but later removed the fee. Fees remain for wireless 3G delivery of periodical subscriptions and personal documents, while Wi-Fi delivery has no extra charge.
In addition to the Kindle Store, content for the Kindle can be purchased from various independent sources such as Fictionwise and Baen Ebooks. Public domain titles are also obtainable for the Kindle via content providers such as Project Gutenberg, The Internet Archive and the World Public Library. In 2011, the Kindle Store had more than twice as much paid content as its nearest competitor, Barnes & Noble.
Public libraries that offer books via OverDrive, Inc. can also choose to lend titles for the Kindle and Kindle reading apps in the US via Libby. Books can be checked out from the library's own site, which forwards to Amazon for the completion of the checkout process. The Libby app stores user account and library details during set up and can send content to the users Amazon account at the time of checkout. Amazon then delivers the title to the Kindle for the duration of the loan, though some titles may require transfer via a USB connection to a computer. If the book is later checked out again or purchased, annotations and bookmarks are preserved.
Kindle applications for reading on other devices
Amazon released the Kindle for PC application in late 2009, available for Microsoft Windows systems. This application allows ebooks from Amazon's store or personal ebooks to be read on a personal computer, with no Kindle device required. Amazon released a Kindle for Mac app for Apple Macintosh & OS X systems in early 2010. In June 2010, Amazon released Amazon Kindle for Android. Soon after the Android release, versions for Apple iOS (iPhone and iPad) and BlackBerry OS phones became available. In January 2011, Amazon released Kindle for Windows Phone. In July 2011, Kindle for HP TouchPad (running webOS) was released in the U.S. as a beta version. In August 2011, Amazon released an HTML5-based web app for supported web browsers called Kindle Cloud Reader. In 2013, Amazon said it had no interest in releasing a separate Kindle application for Linux systems; the Cloud Reader can be used on supported browsers in Linux.
On April 17, 2014, Samsung announced it would discontinue its own e-book store effective July 1, 2014, and partner with Amazon to create the Kindle for Samsung app, optimized for display on Samsung Galaxy devices. The app uses Amazon's e-book store and includes a monthly limited selection of free e-books.
In June 2016, Amazon released the Page Flip feature to its Kindle applications that debuted on its e-readers a few years previously. This feature allows the user to flip through nine thumbnails of page images at a time.
Kindle Direct Publishing
Concurrently with the release of the first Kindle device, Amazon launched Kindle Direct Publishing, used by authors and publishers to independently publish their books directly to Kindle and Kindle Apps worldwide. Authors can upload documents in several formats for delivery via Whispernet and charge between $0.99 and $200.00 per download.
In a December 5, 2009 interview with The New York Times, Amazon CEO Jeff Bezos revealed that Amazon keeps 65% of the revenue from all e-book sales for the Kindle; the remaining 35% is split between the book author and publisher. After numerous commentators observed that Apple's popular App Store offers 70% of royalties to the publisher, Amazon began a program that offers 70% royalties to Kindle publishers who agree to certain conditions. Some of these conditions, such as the inability to opt out of the lendability feature, have caused some controversy.
Kindle Development Kit
On January 21, 2010, Amazon announced the release of its Kindle Development Kit (KDK). The KDK aims to allow developers to build "active content" for the Kindle, and a beta version was announced with a February 2010 release date. A number of companies had already experimented with delivering active content through the Kindle's bundled browser, and the KDK provides sample code, documentation and a Kindle Simulator, together with a new revenue-sharing model for developers. The KDK is based on Java APIs from the Java Personal Basis Profile.
The Kindle store offered over 400 items labeled as active content, including simple applications and games, among them a free set provided by Amazon Digital Services. As of 2014, active content is only available to users with a U.S. billing address.
In October 2014, Amazon announced that the Voyage and future e-readers would not support active content because most users prefer to use apps on their smartphones and tablets, but the Paperwhite first-iteration and earlier Kindles would continue to support active content.
Reception
Sales
Amazon does not release specific Kindle device sales numbers; however, according to anonymous inside sources, over three million Kindles had been sold as of December 2009, while external estimates as of Q4 2009 placed the number at about 1.5 million. According to James McQuivey of Forrester Research, estimates ranged around four million as of mid-2010.
In 2010, Amazon remained the undisputed leader in the e-reader category, accounting for 59% of e-readers shipped, and it gained 14 percentage points in share. According to an International Data Corporation (IDC) study from March 2011, sales for all e-book readers worldwide reached 12.8 million in 2010; 48% of them were Kindles. In the last three months of 2010, Amazon announced that in the United States its e-book sales had surpassed sales of paperback books for the first time.
In January 2011, Amazon announced that digital books were outselling their traditional print counterparts for the first time ever on its site, with an average of 115 Kindle editions being sold for every 100 paperback editions. In December 2011, Amazon announced that customers had purchased "well over" one million Kindles per week since the end of November 2011; this includes all available Kindle models and also the Kindle Fire tablet. IDC estimated that the Kindle Fire sold about 4.7 million units during the fourth quarter of 2011. Pacific Crest estimated that the Kindle Fire models sold six million units during Q4 2012.
Morgan Stanley estimates that Amazon sold $3.57 billion worth of Kindle e-readers and tablets in 2012, $4.5 billion in Kindle device sales in 2013 and $5 billion in Kindle device sales in 2014.
Amazon claimed that Kindle sales had reached a decade-long high before the announcement of the 2024 models.
Aftermarket
Working Kindles in good condition can be sold, traded, donated or recycled in the aftermarket. Because some Kindle devices are limited to use as reading devices, and because reselling them can be a hassle, some people choose to donate their Kindle to schools, developing countries, literacy organizations, or charities. "The Kindle Classroom Project" promotes reading by distributing donated Kindles to schools in need. Worldreader and "Develop Africa" ship donated e-readers to schools in developing countries in Africa for educational use. "Project Hart" accepts donations of e-readers to give to people in need.
Whether in good condition or not, Kindles should not be disposed of in normal waste due to the device's electronic ink components and batteries. Instead, Kindles at the end of their useful life should be recycled. In the United States, Amazon runs its own program, "Take Back", which allows owners to print out a prepaid shipping label and use it to return the device for disposal.
Criticism
Removal of Nineteen Eighty-Four
On July 17, 2009, Amazon withdrew from sale two e-books by George Orwell, Animal Farm and Nineteen Eighty-Four, refunding the purchase price to those who had bought them, and remotely deleted these titles from purchasers' devices without warning using a backdoor after discovering that the publisher lacked rights to publish these books. The two books were protected by copyright in the United States, but they were in the public domain in Canada, Australia and other countries.
Mass psychogenic illness

Mass psychogenic illness (MPI), also called mass sociogenic illness, mass psychogenic disorder, epidemic hysteria or mass hysteria, involves the spread of illness symptoms through a population where there is no infectious agent responsible for contagion. It is the rapid spread of illness signs and symptoms affecting members of a cohesive group, originating from a nervous system disturbance involving excitation, loss, or alteration of function, whereby physical complaints that are exhibited unconsciously have no corresponding organic causes that are known.
Signs and symptoms
Timothy F. Jones of the Tennessee Department of Health compiles the following symptoms based on their commonality in outbreaks occurring in 1980–1990:
Causes and risk factors
MPI is distinct from other types of collective or mass delusions by involving physical symptoms. Qualities of MPI outbreaks often include:
symptoms that have no plausible organic basis;
symptoms that are transient and benign;
symptoms with rapid onset and recovery;
occurrence in a segregated group;
the presence of extraordinary anxiety;
symptoms that are spread via sight, sound or oral communication;
a spread that moves down the age scale, beginning with older or higher-status people.
British psychiatrist Simon Wessely distinguishes between two forms of MPI:
Mass anxiety hysteria "consists of episodes of acute anxiety, occurring mainly in schoolchildren. Prior tension is absent and the rapid spread is by visual contact."
Mass motor hysteria "consists of abnormalities in motor behaviour. It occurs in any age group and prior tension is present. Initial cases can be identified and the spread is gradual. ... [T]he outbreak may be prolonged."
While his definition is sometimes adhered to, others contest Wessely's definition and describe outbreaks with qualities of both mass motor hysteria and mass anxiety hysteria.
The DSM-IV-TR does not define a diagnosis for this condition but the text describing conversion disorder states that "In 'epidemic hysteria', shared symptoms develop in a circumscribed group of people following 'exposure' to a common precipitant."
Prevalence and intensity
Cases of MPI frequently involve adolescents and children as the primary affected groups, with females often being disproportionately impacted. The hypothesis that those prone to extraversion or neuroticism, or those with low IQ scores, are more likely to be affected in an outbreak of hysterical epidemic has not been consistently supported by research. Bartholomew and Wessely state that it "seems clear that there is no particular predisposition to mass sociogenic illness and it is a behavioural reaction that anyone can show in the right circumstances."
Intense media coverage seems to exacerbate outbreaks. The illness may also recur after the initial outbreak. John Waller advises that once it is determined that the illness is psychogenic, it should not be given credence by authorities. For example, in the Singapore factory case study, calling in a medicine man to perform an exorcism seemed to perpetuate the outbreak.
History
Medieval period
The earliest studied cases linked with epidemic hysteria are the dancing manias of the Middle Ages, including St. John's dance and tarantism. These were supposed to be associated with spirit possession or the bite of the tarantula. Those with dancing mania would dance in large groups, sometimes for weeks at a time. The dancing was sometimes accompanied by stripping, howling, the making of obscene gestures, or reportedly laughing or crying to the point of death. Dancing mania was widespread over Europe.
Between the 15th and 19th centuries, instances of motor hysteria were common in nunneries. The young ladies that made up these convents were sometimes forced there by family. Once accepted, they took vows of chastity and poverty. Their lives were highly regimented and often marked by strict disciplinary action. The nuns would exhibit a variety of behaviors, usually attributed to demonic possession. They would often use crude language and exhibit suggestive behaviors. One convent's nuns would regularly meow like cats. Priests were often called in to exorcise demons.
In factories
MPI outbreaks occurred in factories following the industrial revolution (1760–1840) in England, France, Germany, Italy, Russia, the United States and Singapore.
W. H. Phoon, Ministry of Labour in Singapore, gives a case study of six outbreaks of MPI in Singapore factories between 1973 and 1978. They were characterized by (1) hysterical seizures of screaming and general violence, wherein tranquilizers were ineffective (2) trance states, where a worker would claim to be speaking under the influence of a spirit or jinn and (3) frightened spells: some workers complained of unprecedented fear, or of being cold, numb, or dizzy. Outbreaks would subside in about a week. Often a bomoh (medicine man) would be called in to do a ritual exorcism. This technique was not effective and sometimes seemed to exacerbate the MPI outbreak. Females and Malay people were affected disproportionately.
Especially notable is the "June Bug" outbreak: in June 1962, a peak month in factory production, 62 workers at a dressmaking factory in a textile town in the Southern United States experienced symptoms including severe nausea and skin rashes. Most cases occurred during the first shift, where four fifths of the workers were female. Of the 62 affected workers, 59 were women, some of whom believed they had been bitten by bugs from a fabric shipment. Entomologists and others were called in to find the pathogen, but none was found.
Kerchoff coordinated the interview of affected and unaffected workers at the factory, and summarized his findings:
Strain – those affected were more likely to work overtime frequently and provided the majority of the family income. Many were married with children.
Affected persons tended to deny their difficulties. Kerchoff postulates that such were "less likely to cope successfully under conditions of strain."
Results seemed consistent with a model of social contagion. Groups of affected persons tended to have strong social ties.
Kerchoff linked the rapid rate of contagion with the apparent reasonableness of the bug infestation theory and the credence given to it in accompanying news stories.
In 1974, Stahl and Lebedun described an outbreak of mass sociogenic illness in the data center of a university town in the United States Midwest. Ten of 39 workers smelling an unconfirmed "mystery gas" were rushed to a hospital with symptoms of dizziness, fainting, nausea and vomiting. They reported that most workers were young women, either putting their husbands through school or supplementing the family income. Those affected were found to have high levels of job dissatisfaction. Those with strong social ties tended to have similar reactions to the supposed gas, which only one unaffected woman reported smelling. No gas was detected in tests of the data center.
In schools
In 1962, the Tanganyika laughter epidemic was an outbreak of laughing attacks, said to have begun in or near the village of Kashasha on the western coast of Lake Victoria in what is now Tanzania, eventually affecting 14 different schools and over 1,000 people.
On the morning of Thursday 7 October 1965, at a girls' school in Blackburn in England, several girls complained of dizziness and some fainted. Within a couple of hours, 85 girls from the school were rushed by ambulance to a nearby hospital after fainting. Symptoms included swooning, moaning, chattering of teeth, hyperpnea, and tetany. Moss and McEvedy published their analysis of the event about one year later; their conclusions follow. Their finding of above-average extraversion and neuroticism among those affected is not necessarily typical of MPI:
Clinical and laboratory findings were essentially negative.
Investigations by the public health authorities did not uncover any evidence of pollution of food or air.
The epidemiology of the outbreak was investigated by means of questionnaires administered to the whole school population. It was established that the outbreaks began among the 14-year-olds, but that the heaviest incidence moved to the youngest age groups.
By using the Eysenck Personality Inventory, it was established that, in all age groups, the mean E [extraversion] and N [neuroticism] scores of the affected were higher than those of the unaffected.
The younger girls proved more susceptible, but disturbance was more severe and lasted longer in the older girls.
It was considered that the epidemic was hysterical, that a previous polio epidemic had rendered the population emotionally vulnerable, and that a three-hour parade, producing 20 faints on the day before the first outbreak, had been the specific trigger.
The data collected were thought to be incompatible with organic theories and with the compromise theory of an organic nucleus.
In 1974, mass hysteria affected schools in Berry, Alabama, and Miami Beach. In Berry, it took the form of recurring itches; in Miami Beach, the episode initially triggered fears of poison gas but was traced back to a popular student who happened to be sick with a virus.
Between March and June 1990, thousands were affected by the spread of a supposed illness in the province of Kosovo, exclusively affecting ethnic Albanians, most of whom were young adolescents. Symptoms included headaches, dizziness, impeded respiration, muscle weakness, burning sensations, cramps, retrosternal/chest pain, dry mouth and nausea. After the illness had subsided, a bipartisan Federal Commission released a document offering the explanation of psychogenic illness. Radovanovic of the Department of Community Medicine and Behavioural Sciences, Faculty of Medicine in Safat, Kuwait, reported:
This document did not satisfy either of the two ethnic groups. Many Albanian doctors believed that what they had witnessed was an unusual epidemic of poisoning. The majority of their Serbian colleagues also ignored any explanation in terms of psychopathology. They suggested that the incident was faked with the intention of showing Serbs in a bad light but that it failed due to poor organization.
Radovanovic stated that this reported instance of mass sociogenic illness was precipitated by the volatile and culturally tense situation in the province.
Another possible case occurred in Belgium in June 1999, when people, mainly schoolchildren, became ill after drinking Coca-Cola. In the end, scientists were divided over whether sociogenic illness fully explained the scale of the outbreak and its many different symptoms.
Starting around 2009, a spate of apparent poisonings at girls' schools across Afghanistan began to be reported; symptoms included dizziness, fainting and vomiting. The United Nations, World Health Organization and NATO's International Security Assistance Force carried out investigations of the incidents over multiple years, but never found any evidence of toxins or poisoning in the hundreds of blood, urine and water samples they tested. The conclusion of the investigators was that the girls were experiencing a mass psychogenic illness.
In 2011, a possible outbreak of mass psychogenic illness occurred at Le Roy Junior-Senior High School, in upstate New York, US, in which multiple students began having symptoms similar to Tourette syndrome. Various health professionals ruled out such factors as Gardasil, drinking water contamination, illegal drugs, carbon monoxide poisoning and various other potential environmental or infectious causes, before diagnosing the students with a conversion disorder and mass psychogenic illness.
In August 2019 the BBC reported that schoolgirls at the Ketereh national secondary school (SMK Ketereh) in Kelantan, Malaysia, started screaming, with some claiming to have seen 'a face of pure evil'. Simon Wessely of King's College Hospital, London, suggested it was a form of 'collective behaviour'. Robert Bartholomew, an American medical sociologist and author, said, "It is no coincidence that Kelantan, the most religiously conservative of all Malaysian states, is also the one most prone to outbreaks." This view is supported by Afiq Noor, an academic, who argues that the stricter implementation of Islamic law in school in states such as Kelantan is linked to the outbreaks. He suggested that the screaming outbreak was caused by the constricted environment. In Malaysian culture, burial sites and trees are common settings for supernatural tales about the spirits of dead infants (toyol), vampiric ghosts (pontianak) and vengeful female spirits (penanggalan). Authorities responded to the Kelantan outbreak by cutting down trees around the school.
Outbreaks of mass psychogenic illness "have been reported in Catholic convents and monasteries across Mexico, Italy and France, in schools in Kosovo and even among cheerleaders in a rural North Carolina town".
Episodes of mass hysteria have been frequent in Nepalese schools, at times even leading to the temporary closure of the schools involved. In 2018, a unique phenomenon of "recurrent epidemic of mass hysteria" was reported from a school in the Pyuthan district of western Nepal after a nine-year-old schoolgirl developed crying and shouting episodes. Other children of the same school became affected in rapid succession, resulting in 47 affected students (37 female, 10 male) in a single day. Since 2016, similar episodes of mass psychogenic illness have been occurring every year at the same school. This is seen as a rather atypical case of recurrent mass hysteria.
In July 2022, reports of up to 15 girls showing unusual symptoms such as screaming, trembling, and banging their heads came up from a government school in Bageshwar, Uttarakhand, India. Mass psychological illness has been suggested as a possible cause.
In late 2022 and early 2023, thousands of students, mostly girls, in numerous schools in Iran were initially believed to have been poisoned in various and undetermined manners by unidentified perpetrators and numerous arrests were made. On 29 April 2023, the Iranian Intelligence Ministry released the findings of a comprehensive investigation which concluded that the reported illnesses were not caused by any toxic substances. Instead they were suggested to have been due to a variety of reasons, including exposure to a variety of non-toxic substances, mass hysteria, and malingering.
In October 2023, over 100 students from the St. Theresa's Eregi Girls’ High School in Musoli, Kenya were hospitalized due to rapid and involuntary arm and leg movement, sometimes accompanied by headaches and vertigo. Routine medical tests revealed nothing unusual, and there were no signs of infectious disease as a cause. Ultimately it was decided that the events were caused by “stress due to upcoming exams” and the incident was determined to be an incident of “hysteria”.
Because collective stress was determined to be the cause, medical sociologist Robert Bartholomew favors the neutral term mass psychogenic illness over mass hysteria, as people respond more favorably to a diagnosis of stress-induced symptoms than to a diagnosis of mass hysteria. Bartholomew notes that such outbreaks are not unusual in schools in the developing world, particularly in schools in which discipline is tight and accompanied by cultural strain between administrators and students. An outbreak can be preceded by months of such tension, which then results in physical symptoms such as those seen in Musoli. Far from faking their symptoms, the students are physiologically affected: “Under such prolonged stress, the nerves and neurons that send messages to the brain become disrupted, resulting in an array of neurological symptoms such as twitching, shaking, convulsions, and trance-like states.”
Bartholomew observes that school-stress-borne illnesses such as the one that occurred here have not been uncommon in Africa since the 1960s. Some appear to be due to Christian missionary schools largely ignoring local traditions and mythologies and instead imparting their own mythologies and culture. This may create overwhelming anxiety, as students are taught one thing at home, such as ancestor worship, which is then forbidden at a school based on Christian mythology.
Other such outbreaks have similar tradition-based causes, such as a 1995 outbreak of “bouts of screaming, crying, foaming at the mouth, and partial paralysis” in over 600 girls at an African Muslim school in Northern Nigeria. This outbreak was surmised to be due to expectations of traditional arranged marriage, colliding with modernity's emphasis on romantic love that the students had observed in movies. The difference between these two cases of mass psychogenic illness reinforces that each outbreak needs to be evaluated in the specific circumstances in which it occurred, as such instances are “never spontaneous reactions to stress per se; they are always couched in some unique context.”
Terrorism and biological warfare
In 2002, Bartholomew and Wessely expressed the "concern that after a chemical, biological or nuclear attack, public health facilities may be rapidly overwhelmed by the anxious and not just the medical and psychological casualties." Early symptoms in those affected by MPI are difficult to differentiate from those in people actually exposed to the dangerous agent.
The first Iraqi missile hitting Israel during the Persian Gulf War was believed to contain chemical or biological weapons. Though this was not the case, 40% of those in the vicinity of the blast reported breathing problems.
In the wake of the anthrax attacks of October 2001, there were over 2,300 false anthrax alarms in the United States. Some of those reporting alarms experienced physical symptoms of what they believed to be anthrax.
In 2001, a man sprayed what was later found to be a window cleaner into a subway station in Maryland. Thirty-five people were treated for nausea, headaches and sore throats.
Havana syndrome
Beginning in 2016, some staff stationed at the US embassy in Cuba reported medical symptoms that initially were attributed to "sonic attacks", and later to other unknown weaponry. The symptoms were dubbed "Havana syndrome" by the media. The following year, some US government employees in China reported similar symptoms. Eventually, similar reports came from US government employees and their families around the globe, including in Washington DC. Due to lack of evidence of actual attack and other factors, some scientists suggested the alleged symptoms were psychogenic in nature.
Seven U.S. intelligence agencies headed by the CIA spent years reviewing thousands of possible cases of Havana syndrome and preparing a report. On March 1, 2023, the House Intelligence Committee released an unclassified version of the report, titled an "Intelligence Community Assessment". Politico summarized the results by saying, "The finding undercuts a years-long narrative, propped up by more than a thousand reports from government employees, that a foreign adversary used pulsed electro-magnetic energy waves to sicken Americans."
Children in recent refugee families
Refugee children in Sweden have been reported to fall into coma-like states on learning their families will be deported. The condition, known as resignation syndrome (), is believed to only exist among the refugee population in Sweden, where it has been prevalent since the early part of the 21st century. Commentators state "a degree of psychological contagion" is inherent to the condition, by which young friends and relatives of the affected individual can also come to have the condition.
In a 130-page report on the condition, commissioned by the government and published in 2006, a team of psychologists, political scientists and sociologists hypothesized that it was a culture-bound syndrome, a psychological illness endemic to a specific society.
This phenomenon has later been called into question, with children testifying that they were forced by their parents to act in a certain way in order to increase the chances of being granted residence permits. As evidenced by medical records, healthcare professionals were aware of this scam, and witnessed parents who actively refused aid for their children, but remained silent. Later, Sveriges Television, Sweden's national public television broadcaster, was severely criticized by investigative journalist Janne Josefsson for failing to uncover the truth.
Society and culture
Social media
After a YouTube channel whose presenter exhibits extensive Tourette's-like behavior rose to popularity in 2019, there was a sharp rise in young people referred to clinics specializing in tics, thought to be related to social contagion spread via the Internet, and also to stress from eco-anxiety and the COVID-19 pandemic.
A report published in August 2021 found evidence that social media was the primary vector for transmission and that the phenomenon predominantly affects adolescent girls, declaring it the first recorded instance of mass social media–induced illness (MSMI).
Research
Diagnostic challenges
Besides the difficulties common to all research involving the social sciences, including a lack of opportunity for controlled experiments, MSI or MPI presents special difficulties to researchers in this field. Balaratnasingam and Janca report that the methods for "diagnosis of mass hysteria remain contentious." According to Jones, the effects resulting from MPI "can be difficult to differentiate from [those of] bioterrorism, rapidly spreading infection or acute toxic exposure."
These troubles result from MPI being a residual diagnosis. There is a lack of logic in an argument that proceeds: "There isn't anything, so it must be MPI." It is an example of an argument from ignorance, with ignorance here meaning "an absence of contrary evidence". It precludes the notion that an organic factor could have been overlooked (i.e. that there may have been insufficient investigation), or the possibility that the answer may currently be unknown but become known at a future point in time. At the same time, running an extensive number of tests increases the probability of false positives. Singer, of the Uniformed Schools of Medicine, has summarized the problems with such a diagnosis:
[Y]ou find a group of people getting sick, you investigate, you measure everything you can measure ... and when you still can't find any physical reason, you say "well, there's nothing else here, so let's call it a case of MPI."
Relationship to autism and mirror neurons
Due to the role of the visual and auditory systems in MPI, a link between MPI and mirror neurons has been suggested.
In this context, MPI appears as the neurological opposite of autism, caused by an overactive, not underactive, mirror neuron system.
This could explain the gender bias observed in these two conditions, with autism predominantly affecting males (persons with autism show diminished activity in the mirror neuron system), and MPI predominantly affecting young girls, who appear to have a more sensitive mirror system.
Caiman

A caiman (also spelled cayman; from Taíno kaiman) is an alligatorid belonging to the subfamily Caimaninae, one of two primary lineages within the family Alligatoridae, the other being the alligators. Caimans are native to Central and South America and inhabit marshes, swamps, lakes, and mangrove rivers. They have scaly skin and live a fairly nocturnal existence. They are relatively small-sized crocodilians, with an average maximum weight of depending on species, with the exception of the black caiman (Melanosuchus niger), which can grow to more than in length and weigh in excess of 450 kg (1,000 lb). The black caiman is the largest caiman species in the world and is found in the slow-moving rivers and lakes that surround the Amazon basin. The smallest species is Cuvier's dwarf caiman (Paleosuchus palpebrosus), which grows to long. There are six different species of caiman found throughout the watery jungle habitats of Central and South America. The average length for most of the other caiman species is about long.
Caimans are distinguished from alligators, their closest relatives, by a few defining features: a lack of a bony septum between the nostrils; ventral armor composed of overlapping bony scutes, each formed from two parts united by a suture; and longer, sharper teeth. Caimans also tend to be more agile and crocodile-like in their movements. The calcium rivets on caiman scales make their hides stiffer.
Several extinct forms are known, including Purussaurus, a giant Miocene genus that grew to and the equally large Mourasuchus, which had a wide duck-like snout.
Behavior
Caimans are predators and, like alligators and crocodiles, their diet largely consists of fish. Caimans also hunt insects, birds, small mammals and reptiles.
Due to their large size and ferocious nature, caimans have few natural predators within their environments. Humans are their main predators, as the animals have been hunted for their meat and skin. Jaguars, anacondas and crocodiles are the only other predators of caimans, although they usually prey on smaller specimens or on particular species such as the spectacled caiman and the yacare caiman. During summer or droughts, caimans may dig a burrow and go into a form of summer hibernation called aestivation.
Female caimans build a large nest in which to lay their eggs. The nests can be more than wide. Female caimans lay between 10 and 50 eggs, which hatch within about six weeks. Once they have hatched, the mother caiman takes her young to a shallow pool of water, where they can learn how to hunt and swim. The juveniles of spectacled caiman have been shown to stay together in pods for up to 18 months.
Phylogeny
Caimaninae is cladistically defined as Caiman crocodilus (the spectacled caiman) and all species closer to it than to Alligator mississippiensis (the American alligator). This is a stem-based definition for Caimaninae, and means that it includes the more basal extinct caimanines that are more closely related to living caimans than to alligators. The clade Jacarea includes the most derived caimans, being defined as the last common ancestor of Caiman latirostris (broad-snouted caiman), Caiman crocodilus (spectacled caiman), Caiman yacare (yacare caiman), Melanosuchus niger (black caiman), and all its descendants.
Below is a cladogram showing the phylogeny of Caimaninae, modified from Hastings et al. (2013).
Here is an alternative cladogram from Bona et al. 2018.
The Late Cretaceous taxa Stangerochampsa, Brachychampsa and Albertochampsa have been previously referred to as stem-group caimans, but Walter et al. (2022) recovered them as the basalmost alligatorines based on phylogenetic analysis and claimed that the earliest definitive stem-group caimans are known from the earliest Paleocene. A different study by Adam Cossette and David Tarailo in 2024 recovered Brachychampsa and relatives in a clade at the base of Caimaninae. They named this clade Brachychampsini, defining it as "the largest clade of alligatorids more closely related to Brachychampsa montana than to Caiman crocodilus or Alligator mississippiensis".
Corriente

The Corriente is an American breed of small cattle, used principally for rodeo events. It derives from Criollo Mexicano stock, which in turn descends from Iberian cattle brought to the Americas by the Conquistadors, and introduced in the sixteenth and seventeenth centuries to various parts of what is now Mexico.
A breed association, the North American Corriente Association, was formed in 1982.
History
Iberian cattle were brought to the Americas by the Conquistadors, and were introduced in the sixteenth and seventeenth centuries to various parts of what is now Mexico. From these the various types or breeds of Criollo Mexicano have developed.
Small cattle for use in rodeo events were exported to the United States in large numbers from the Mexican states of Chihuahua and Sonora, although in the late twentieth century this became difficult as a result of stringent border regulations. In Chihuahua annual exports were in the region of head, and 'Criollo de Rodeo' became an alternate name for the Criollo de Chihuahua; in Sonora, where the Frijolillo is the predominant Criollo breed, small cattle of any kind were commonly known as 'Corriente', meaning 'running'. When a breed association for rodeo cattle was formed in the United States in 1982, this was the name chosen for the new breed, and the association was called the North American Corriente Association. The foundation stock of the Corriente breed included some Florida Scrub cattle and other similar cattle from Louisiana.
In 2010 the number of breeding cows was . In 2016 there were 114 breeders of the Corriente.
Characteristics
Like other Criollo cattle of the Americas and many breeds of southern Europe, the Corriente is principally of taurine (European) derivation, but has a small admixture of indicine genetic heritage; this may be a consequence of gene flow across the Strait of Gibraltar from cattle of African origin dating to before the time of the Spanish Conquest. A single-nucleotide polymorphism genotyping study in 2013 found the level of zebuine introgression in the Corriente to be approximately , not significantly different from that seen in the Colombian Romosinuano and the Texas Longhorn.
The Corriente is small, with an average weight of for cows and for bulls. It is lean, agile and athletic. The horns come straight out and then curve forward and often slightly upward; they are heavy but not particularly long. The coat may be of any color but pure white. Solid, brindle and paint colors are seen.
Use
The Corriente is primarily used for rodeo sports such as team roping and steer wrestling. It may also be reared for beef; cattle no longer suitable for rodeo work may be fattened for slaughter. The meat is included in the Ark of Taste of the Slow Food Foundation for Biodiversity.
Purgatorius

Purgatorius is a genus of seven extinct eutherian species typically believed to be the earliest example of a primate or protoprimate, a primatomorph precursor to the Plesiadapiformes, dating to as old as 66 million years ago. The first remains (P. unio and P. ceratops) were reported in 1965, from what is now eastern Montana's Tullock Formation (early Paleocene, Puercan), specifically at Purgatory Hill (hence the animal's name) in deposits believed to be about 63 million years old, and at Harbicht Hill in the lower Paleocene section of the Hell Creek Formation. Both locations are in McCone County, Montana.
They have also been found in the Ravenscrag Formation and widely discovered in the early Paleocene Bug Creek Group, along with leptictids. These deposits were once thought to be late Cretaceous, but it is now clear that they are Paleocene channels with time-averaged fossil assemblages. It is thought to have been rat-sized ( long and 1.3 ounces (about 37 grams)) and a diurnal insectivore, which burrowed through small holes in the ground. In life, it would have resembled a squirrel or a tree shrew (most likely the latter, given that tree shrews are one of the closest living relatives of primates, and Purgatorius is considered to be the progenitor to primates). The oldest remains of Purgatorius date back to ~65.921 mya, or between 105 thousand to 139 thousand years after the K-Pg boundary.
Description of remains
Postcanine dentition of P. unio is documented by 13 dentulous, fragmentary mandibles, a fragmentary maxilla and more than 50 isolated teeth from the Garbani Locality, 80 km west of Purgatory Hill. P. ceratops is represented by an isolated lower molar found at Harbicht Hill, McCone County. The report of the occurrence of Purgatorius in the Late Cretaceous was based on an isolated, worn molar found in a channel filling that contains early Puercan fossils. The genus is also abundantly represented in Pu 2-3 local faunas in the northwestern interior, suggesting that it came into the area between 64.75 and 64.11 Mya. Fragmentary dentition of Purgatorius janisae from the Garbani Channel fauna shows that the lower dental formula was 3.1.4.3.
Dentition
The type specimen of P. unio, a damaged upper molar, is essentially identical to teeth found at the Garbani Locality. Data from this sample support Van Valen and Sloan's identification of topotypic lower molars, and also demonstrate that the lower dentition of P. unio includes seven postcanines. The alveolus for the single root of P1, crown unknown, is smaller than those for the canine or P2. The second lower premolar is smaller than P3; both are two-rooted. The fourth lower premolar is submolariform. A metaconid is lacking, although on some teeth slight thickenings of the enamel are present in this region. Talonid cusps are slightly differentiated. The first and second lower molars are approximately the same length (M1, average length x̄ = 1.93 mm, N = 13; M2, x̄ = 2.00 mm, N = 9); M3 is longer (x̄ = 2.32 mm, N = 7). Widths of talonids of M1–2 vary from less than to greater than widths of trigonids. The hypoconulid of M3 is enlarged, salient, and on some teeth incipiently doubled by addition of a lingual cusp.
Ankle bones
Bones from the ankle are similar to those of primates, and were suited for a life up in trees.
Relationship
For many years, there has been debate as to whether Purgatorius is a primitive member of the primates or a basal member of the plesiadapiforms. Several characters of the dentition of Purgatorius, including its incisor morphology, ally it with later plesiadapiforms. The prism cross sections are highly variable, with circular, horseshoe and irregular shapes, while the prisms of the cheek teeth are radially arranged. In the fragmentary dentaries of Purgatorius janisae found in the Garbani Channel fauna, the morphology of the canine and incisor alveoli suggests the derived gradient in crown size I1 ≥ I2 > I3 < C. Isolated upper incisors referable to P. janisae exhibit some typical plesiadapiform specializations. On the general morphology of its postcanine dentition, Purgatorius could be characterized as a primitive member of the primates. However, due to the incisor specializations of P. janisae, it is considered by some investigators a basal member of the Plesiadapiformes sensu lato.
A phylogenetic analysis of 177 mammal taxa (mostly Cretaceous and Palaeocene fossils), published in 2015, suggests that Purgatorius may not be closely related to primates at all, but instead falls outside crown-group placentals – specifically as the sister taxon to Protungulatum. Similar results had been obtained in previous studies with far fewer species. However, this study has been criticized and its conclusions rejected by subsequent authors.
Mathematical structure

In mathematics, a structure on a set (or on some sets) refers to providing it (or them) with certain additional features (e.g. an operation, relation, metric, or topology). The additional features are attached or related to the set (or to the sets), so as to provide it (or them) with some additional meaning or significance.
A partial list of possible structures includes measures, algebraic structures (groups, fields, etc.), topologies, metric structures (geometries), orders, graphs, events, equivalence relations, differential structures, and categories.
Sometimes, a set is endowed with more than one feature simultaneously, which allows mathematicians to study the interaction between the different structures. For example, an ordering imposes a rigid form, shape, or topology on the set, and if a set carries both a topology and a group structure, related to each other in a suitable way, then it becomes a topological group.
Maps between two sets with the same type of structure that preserve this structure (morphisms: the structure in the domain is mapped properly to the same type of structure in the codomain) are of special interest in many fields of mathematics. Examples are homomorphisms, which preserve algebraic structures; continuous functions, which preserve topological structures; and differentiable functions, which preserve differential structures.
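Structure preservation can be made concrete with a small computational check. As an illustrative sketch (the helper function below is invented for this example, not standard terminology), the exponential function is a homomorphism from the additive group of the reals to the multiplicative group of positive reals, since exp(a + b) = exp(a) · exp(b):

```python
# Check numerically that a map preserves group structure: f(a + b) == f(a) * f(b),
# i.e. addition in the domain corresponds to multiplication in the codomain.
import math

def is_additive_to_multiplicative_hom(f, samples):
    """Test the homomorphism property on sample pairs, up to float tolerance."""
    return all(math.isclose(f(a + b), f(a) * f(b), rel_tol=1e-9)
               for a in samples for b in samples)

samples = [-2.0, -0.5, 0.0, 1.0, 3.0]
print(is_additive_to_multiplicative_hom(math.exp, samples))          # True
print(is_additive_to_multiplicative_hom(lambda x: x + 1, samples))   # False
```

The second map fails because (a + 1)(b + 1) is generally not (a + b) + 1; only the first respects the two structures simultaneously, which is exactly what makes it a morphism.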
History
In 1939, the French group with the pseudonym Nicolas Bourbaki saw structures as the root of mathematics. They first mentioned them in their "Fascicule" of Theory of Sets and expanded it into Chapter IV of the 1957 edition. They identified three mother structures: algebraic, topological, and order.
Example: the real numbers
The set of real numbers has several standard structures:
An order: each number is either less than or greater than any other number.
Algebraic structure: there are operations of addition and multiplication, the first of which makes it into a group and the pair of which together make it into a field.
A measure: intervals of the real line have a specific length, which can be extended to the Lebesgue measure on many of its subsets.
A metric: there is a notion of distance between points.
A geometry: it is equipped with a metric and is flat.
A topology: there is a notion of open sets.
There are interfaces among these:
Its order and, independently, its metric structure induce its topology.
Its order and algebraic structure make it into an ordered field.
Its algebraic structure and topology make it into a Lie group, a type of topological group.
Structure formation

In physical cosmology, structure formation describes the creation of galaxies, galaxy clusters, and larger structures, starting from small fluctuations in mass density resulting from processes that created matter. The universe, as is now known from observations of the cosmic microwave background radiation, began in a hot, dense, nearly uniform state approximately 13.8 billion years ago. However, looking at the night sky today, structures on all scales can be seen, from stars and planets to galaxies. On even larger scales, galaxy clusters and sheet-like structures of galaxies are separated by enormous voids containing few galaxies. Structure formation models the gravitational instability of small ripples in mass density in order to predict these shapes, confirming the consistency of the physical model.
The modern Lambda-CDM model is successful at predicting the observed large-scale distribution of galaxies, clusters and voids; but on the scale of individual galaxies there are many complications due to highly nonlinear processes involving baryonic physics, gas heating and cooling, star formation and feedback. Understanding the processes of galaxy formation is a major topic of modern cosmology research, both via observations such as the Hubble Ultra-Deep Field and via large computer simulations.
Before the first structures
Structure formation began some time after recombination, when the early universe cooled enough from expansion to allow the formation of stable hydrogen and helium atoms.
At this point the cosmic microwave background (CMB) is emitted; many careful measurements of the CMB provide key information about the initial state of the universe before structure formation. The measurements support a model of small fluctuations in density, the critical seeds for structures to come.
Very early universe
In this stage, some mechanism, such as cosmic inflation, was responsible for establishing the initial conditions of the universe: homogeneity, isotropy, and flatness. Cosmic inflation also would have amplified minute quantum fluctuations (pre-inflation) into slight density ripples of overdensity and underdensity (post-inflation).
Growth of structure
The early universe was dominated by radiation; in this case density fluctuations larger than the cosmic horizon grow proportionally to the scale factor, as the gravitational potential fluctuations remain constant. Structures smaller than the horizon remained essentially frozen, because radiation domination impeded their growth. As the universe expanded, the density of radiation dropped faster than that of matter (due to the redshifting of photon energy); this led to a crossover called matter-radiation equality at about 50,000 years after the Big Bang. After this, all dark matter ripples could grow freely, forming seeds into which the baryons could later fall. The particle horizon at this epoch induces a turnover in the matter power spectrum which can be measured in large redshift surveys.
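The epoch of the crossover can be estimated with a back-of-envelope calculation. Since radiation density scales as a⁻⁴ and matter density as a⁻³, they are equal at a scale factor a_eq = Ω_r/Ω_m. The present-day density parameters below are assumed Planck-like values, not figures from the text:

```python
# Estimate the scale factor and redshift of matter-radiation equality.
# Radiation density ~ a**-4, matter density ~ a**-3, so equality occurs
# where Omega_r * a**-4 == Omega_m * a**-3, i.e. a_eq = Omega_r / Omega_m.
Omega_m = 0.31      # present-day matter density parameter (assumed value)
Omega_r = 9.1e-5    # present-day radiation density parameter (assumed value)

a_eq = Omega_r / Omega_m     # scale factor at equality
z_eq = 1.0 / a_eq - 1.0      # corresponding redshift, roughly 3400

print(f"a_eq ~ {a_eq:.2e}, z_eq ~ {z_eq:.0f}")
```

A redshift of a few thousand corresponds to the epoch of roughly 50,000 years after the Big Bang mentioned above.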
Recombination
The universe was dominated by radiation for most of this stage, and due to the intense heat and radiation, the primordial hydrogen and helium were fully ionized into nuclei and free electrons. In this hot and dense situation, the radiation (photons) could not travel far before Thomson scattering off an electron. The universe was very hot and dense, but expanding rapidly and therefore cooling. Finally, at a little less than 400,000 years after the 'bang', it became cool enough (around 3000 K) for the protons to capture negatively charged electrons, forming neutral hydrogen atoms. (Helium atoms formed somewhat earlier due to their larger binding energy). Once nearly all the charged particles were bound in neutral atoms, the photons no longer interacted with them and were free to propagate for the next 13.8 billion years; we currently detect those photons redshifted by a factor of 1090 down to 2.725 K as the Cosmic Microwave Background Radiation (CMB) filling today's universe. Several remarkable space-based missions (COBE, WMAP, Planck) have detected very slight variations in the density and temperature of the CMB. These variations were subtle, and the CMB appears very nearly uniformly the same in every direction. However, the slight temperature variations of order a few parts in 100,000 are of enormous importance, for they essentially were early "seeds" from which all subsequent complex structures in the universe ultimately developed.
Dark matter structure
After the first matter condensed, the radiation traveled away, leaving slightly inhomogeneous dark matter subject to gravitational interaction. The interaction eventually collapses the dark matter into "halos" that then attract normal or baryonic matter, primarily hydrogen. As the density of hydrogen increases due to gravitational attraction, stars ignite, emitting ultraviolet light that re-ionizes the surrounding atoms. The gravitational interaction continues in hierarchical structure formation: the smaller gravitationally bound structures such as the first stars and stellar clusters form first, then galaxies, followed by groups, clusters and superclusters of galaxies.
Linear structure
Dark matter plays a crucial role in structure formation because it feels only the force of gravity: the gravitational Jeans instability which allows compact structures to form is not opposed by any force, such as radiation pressure. As a result, dark matter begins to collapse into a complex network of dark matter halos well before ordinary matter, which is impeded by pressure forces. Without dark matter, the epoch of galaxy formation would occur substantially later in the universe than is observed.
The physics of structure formation in this epoch is particularly simple, as dark matter perturbations with different wavelengths evolve independently. As the Hubble radius grows in the expanding universe, it encompasses larger and larger disturbances. During matter domination, all causal dark matter perturbations grow through gravitational clustering. However, the shorter-wavelength perturbations that are included during radiation domination have their growth suppressed until matter domination. At this stage, luminous, baryonic matter is expected to mirror the evolution of the dark matter simply, and their distributions should closely trace one another.
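The mode-by-mode picture above can be sketched numerically. The following toy illustration (not a real transfer function) assumes an illustrative equality wavenumber K_EQ and the crude approximation that modes which entered the horizon during radiation domination are suppressed by roughly (K_EQ/k)², while longer-wavelength modes grow in proportion to the scale factor:

```python
import numpy as np

# Toy sketch of linear growth (illustrative only, not a precise transfer function).
# Assumption: modes with k <= K_EQ entered the horizon during matter domination
# and grow unimpeded (delta ~ a); modes with k > K_EQ entered during radiation
# domination and carry a rough (K_EQ/k)^2 suppression (logarithmic factors ignored).
K_EQ = 0.01  # wavenumber of matter-radiation equality, arbitrary illustrative units

def toy_suppression(k):
    k = np.asarray(k, dtype=float)
    return np.where(k <= K_EQ, 1.0, (K_EQ / k) ** 2)

def grow(delta_k, k, a_initial, a_final):
    # During matter domination each Fourier mode evolves independently,
    # with delta_k growing in proportion to the scale factor a.
    return delta_k * (a_final / a_initial) * toy_suppression(k)

k = np.array([0.001, 0.01, 0.1])
final = grow(np.ones(3), k, a_initial=0.001, a_final=0.01)
# the two long-wavelength modes grow tenfold; the short-wavelength mode is suppressed
```

The key feature the sketch captures is the independence of the modes: each entry of `final` depends only on the corresponding entry of `k`, which is what makes the linear regime analytically tractable.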
It is straightforward to calculate this "linear power spectrum" and, as a tool for cosmology, it is of comparable importance to the cosmic microwave background. The power spectrum has been measured by galaxy surveys, such as the Sloan Digital Sky Survey, and by surveys of the Lyman-α forest. Since these studies observe radiation emitted from galaxies and quasars, they do not directly measure the dark matter, but the large-scale distribution of galaxies (and of absorption lines in the Lyman-α forest) is expected to mirror the distribution of dark matter closely. This depends on the fact that galaxies will be larger and more numerous in denser parts of the universe, whereas they will be comparatively scarce in rarefied regions.
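The basic estimation idea behind such measurements can be sketched with a Fourier periodogram. Here a 1-D white-noise field stands in for real survey data, and all values are illustrative; for white noise the true spectrum is flat, so the per-mode estimates should average to the field's variance:

```python
import numpy as np

# Minimal periodogram sketch of power-spectrum estimation (assumed toy setup:
# a 1-D white-noise stand-in for an overdensity field, unit variance).
rng = np.random.default_rng(0)
n = 4096
field = rng.normal(size=n)          # overdensity samples, mean ~ 0

fk = np.fft.rfft(field)             # Fourier modes delta(k)
power = np.abs(fk) ** 2 / n         # periodogram estimate of P(k), one value per mode

# For white noise the true spectrum is flat: P(k) = variance = 1.
# Individual modes scatter widely, but their average converges to it.
flat_estimate = power[1:-1].mean()
```

Real analyses add survey geometry, shot noise and window-function corrections, but the core operation, squaring Fourier amplitudes of a sampled density field, is the same.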
Nonlinear structure
When the perturbations have grown sufficiently, a small region might become substantially denser than the mean density of the universe. At this point, the physics involved becomes substantially more complicated. When the deviations from homogeneity are small, the dark matter may be treated as a pressureless fluid and evolves by very simple equations. In regions which are significantly denser than the background, the full Newtonian theory of gravity must be included. (The Newtonian theory is appropriate because the masses involved are much less than those required to form a black hole, and the speed of gravity may be ignored as the light-crossing time for the structure is still smaller than the characteristic dynamical time.) One sign that the linear and fluid approximations become invalid is that dark matter starts to form caustics in which the trajectories of adjacent particles cross, or particles start to form orbits. These dynamics are best understood using N-body simulations (although a variety of semi-analytic schemes, such as the Press–Schechter formalism, can be used in some cases). While in principle these simulations are quite simple, in practice they are difficult to implement, as they require simulating millions or even billions of particles. Moreover, despite the large number of particles, each particle typically weighs 10⁹ solar masses and discretization effects may become significant. The largest such simulation as of 2005 is the Millennium simulation.
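The core of such a simulation is simply Newtonian gravity integrated for many particles. A minimal direct-summation sketch of one time step is below; the function name, softening length and units are illustrative, and production codes replace the O(N²) force loop with tree or particle-mesh methods:

```python
import numpy as np

def nbody_step(pos, vel, mass, dt, soft=0.05, G=1.0):
    """One kick-drift-kick leapfrog step for a direct-summation N-body system.

    pos, vel: (n, dim) arrays; mass: (n,) array. `soft` is a gravitational
    softening length that prevents divergent forces at close encounters.
    """
    def accel(p):
        d = p[None, :, :] - p[:, None, :]        # d[i, j] = p[j] - p[i]
        r2 = (d ** 2).sum(-1) + soft ** 2        # softened squared distances
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)            # no self-force
        # acceleration on i: G * sum_j m_j (p_j - p_i) / r_ij^3
        return G * (d * inv_r3[:, :, None] * mass[None, :, None]).sum(1)

    vel = vel + 0.5 * dt * accel(pos)            # half kick
    pos = pos + dt * vel                         # drift
    vel = vel + 0.5 * dt * accel(pos)            # half kick
    return pos, vel
```

For two equal masses started at rest, each step pulls the particles symmetrically toward one another while conserving total momentum, which is the basic sanity check applied to any integrator of this kind.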
The results of N-body simulations suggest that the universe is composed largely of voids, whose densities might be as low as one-tenth the cosmological mean. The matter condenses in large filaments and haloes which have an intricate web-like structure. These form galaxy groups, clusters and superclusters. While the simulations appear to agree broadly with observations, their interpretation is complicated by the understanding of how dense accumulations of dark matter spur galaxy formation. In particular, many more small haloes form than we see in astronomical observations as dwarf galaxies and globular clusters. This is known as the dwarf galaxy problem, and a variety of explanations have been proposed. Most account for it as an effect in the complicated physics of galaxy formation, but some have suggested that it is a problem with our model of dark matter and that some effect, such as warm dark matter, prevents the formation of the smallest haloes.
Gas evolution
The final stage in evolution comes when baryons condense in the centres of galaxy haloes to form galaxies, stars and quasars. Dark matter greatly accelerates the formation of dense haloes. As dark matter does not have radiation pressure, the formation of smaller structures from dark matter is impossible. This is because dark matter cannot dissipate angular momentum, whereas ordinary baryonic matter can collapse to form dense objects by dissipating angular momentum through radiative cooling. Understanding these processes is an enormously difficult computational problem, because they can involve the physics of gravity, magnetohydrodynamics, atomic physics, nuclear reactions, turbulence and even general relativity. In most cases, it is not yet possible to perform simulations that can be compared quantitatively with observations, and the best that can be achieved are approximate simulations that illustrate the main qualitative features of a process such as star formation.
Modelling structure formation
Cosmological perturbations
Much of the difficulty, and many of the disputes, in understanding the large-scale structure of the universe can be resolved by better understanding the choice of gauge in general relativity. By the scalar-vector-tensor decomposition, the metric includes four scalar perturbations, two vector perturbations, and one tensor perturbation. Only the scalar perturbations are significant: the vectors are exponentially suppressed in the early universe, and the tensor mode makes only a small (but important) contribution in the form of primordial gravitational radiation and the B-modes of the cosmic microwave background polarization. Two of the four scalar modes may be removed by a physically meaningless coordinate transformation. Which modes are eliminated determines the infinite number of possible gauge fixings. The most popular gauge is Newtonian gauge (and the closely related conformal Newtonian gauge), in which the retained scalars are the Newtonian potentials Φ and Ψ, which correspond exactly to the Newtonian potential energy from Newtonian gravity. Many other gauges are used, including synchronous gauge, which can be an efficient gauge for numerical computation (it is used by CMBFAST). Each gauge still includes some unphysical degrees of freedom. There is a so-called gauge-invariant formalism, in which only gauge invariant combinations of variables are considered.
Inflation and initial conditions
The initial conditions for the universe are thought to arise from the scale invariant quantum mechanical fluctuations of cosmic inflation. The perturbation of the background energy density at a given point in space is then given by an isotropic, homogeneous Gaussian random field of mean zero. This means that the spatial Fourier transform of the fractional density perturbation δ = (ρ − ρ̄)/ρ̄ has the following correlation function

⟨δ(k) δ(k′)⟩ = f(k) δ³(k − k′),

where δ³ is the three-dimensional Dirac delta function and k is the length of the wavevector k. Moreover, the spectrum predicted by inflation is nearly scale invariant, which means

f(k) ∝ k^(n_s − 1),

where n_s − 1 is a small number. Finally, the initial conditions are adiabatic or isentropic, which means that the fractional perturbation in the entropy of each species of particle is equal.
The resulting predictions fit very well with observations.
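What these statistical initial conditions mean in practice can be sketched directly: draw the Fourier modes δ(k) as independent zero-mean Gaussians whose variance follows a nearly scale-invariant spectrum, and check that distinct modes are uncorrelated while each mode's variance tracks the spectrum. The spectral index value and k grid below are illustrative assumptions:

```python
import numpy as np

# Sketch of Gaussian, nearly scale-invariant initial conditions (toy values).
rng = np.random.default_rng(1)
n_s = 0.96                      # assumed spectral index; n_s - 1 is small
k = np.arange(1.0, 33.0)        # wavenumbers in arbitrary units
P = k ** (n_s - 1.0)            # nearly flat power spectrum

# Many random realizations of the complex Fourier modes delta(k):
# independent zero-mean Gaussians with variance P(k) per mode.
m = 20000
scale = np.sqrt(P / 2)
delta = (rng.normal(scale=scale, size=(m, k.size))
         + 1j * rng.normal(scale=scale, size=(m, k.size)))

# Sample covariance between modes: the diagonal should approximate P(k),
# and off-diagonal entries should be near zero (modes uncorrelated),
# mirroring the Dirac-delta structure of the correlation function.
cov = (delta.conj().T @ delta).real / m
diag = np.diag(cov)
off = cov - np.diag(diag)
```

Generating mock universes for N-body initial conditions follows exactly this recipe (in three dimensions, with a physical power spectrum), followed by an inverse Fourier transform back to real space.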
Regulation of chemicals (https://en.wikipedia.org/wiki/Regulation%20of%20chemicals)
The regulation of chemicals is the legislative intent of a variety of national laws or international initiatives such as agreements, strategies or conventions. These international initiatives define the policy of further regulations to be implemented locally as well as exposure or emission limits. Often, regulatory agencies oversee the enforcement of these laws.
Chemicals are regulated for:
environmental protection (chemical waste, and chemical pollution of water, air, and subterranean and terrestrial environments, such as by pesticides)
human health (such as in cosmetics and foods) and drugs (recreational and pharmaceuticals)
chemical weapons prohibition (such as for the Chemical Weapons Convention)
International initiatives
Strategic Approach to International Chemicals Management
The Strategic Approach to International Chemicals Management (SAICM) was adopted at the International Conference on Chemicals Management (ICCM), which took place from 4 to 6 February 2006 in Dubai, gathering governments and intergovernmental and non-governmental organizations. It defines a policy framework to foster the sound worldwide management of chemicals.
This initiative covers risk assessments of chemicals and harmonized labeling, as well as the handling of obsolete and stockpiled products. It includes provisions for national centers to help the developing world, training staff in chemical safety and in dealing with spills and accidents. SAICM is a voluntary agreement.
A second International Conference on Chemicals Management (ICCM2), held in May 2009 in Geneva, took place to enhance synergies and cost-effectiveness and to promote SAICM's multi-sectoral nature.
Globally Harmonized System of Classification and Labeling of Chemicals (GHS)
The “Globally Harmonized System of Classification and Labelling of Chemicals” (GHS) proposes harmonized hazard communication elements, including labels and safety data sheets. It was adopted by the United Nations Economic Commission for Europe (UNECE) in 2002. The system aims to ensure better protection of human health and the environment during the handling of chemicals, including their transport and use. Chemicals are classified based on their hazards. Once fully implemented, this harmonization will facilitate trade.
Stockholm Convention -
The Stockholm Convention is a global treaty to protect human health and the environment from persistent organic pollutants (POPs). It entered into force on 17 May 2004, and over 150 countries have signed it. In May 2009, nine new chemicals were proposed for listing under the Convention, which until then covered 12 substances.
Rotterdam Convention –
The objectives of the Rotterdam Convention are:
to promote shared responsibility and cooperative efforts among Parties in the international trade of certain hazardous chemicals in order to protect human health and the environment from potential harm;
to contribute to the environmentally sound use of those hazardous chemicals, by facilitating information exchange about their characteristics, by providing for a national decision-making process on their import and export and by disseminating these decisions to Parties.
The text of the Convention was adopted on 10 September 1998 by a Conference in Rotterdam, the Netherlands. The Convention entered into force on 24 February 2004. The Convention creates legally binding obligations for the implementation of the Prior Informed Consent (PIC) procedure.
Basel Convention –
The Basel Convention on the Control of Trans-boundary Movements of Hazardous Wastes and their Disposal is a global environmental agreement on hazardous and other wastes. It came into force in 1992. The Convention has 172 Parties and aims to protect human health and the environment against the adverse effects resulting from the generation, management, transboundary movements and disposal of hazardous and other wastes.
Montreal Protocol – The Montreal Protocol was a globally coordinated regulatory action that sought to regulate ozone-depleting chemicals. 191 countries have ratified the treaty.
Global Framework on Chemicals -
The plan was adopted on 30 September 2023 in Bonn at the fifth session of the International Conference on Chemicals Management organized by the UN Environment Programme (UNEP).
Regional regulations
USA: The Environmental Protection Agency (EPA) of the US announced in 2009 that the chemicals management laws would be strengthened, and that it would initiate a comprehensive approach to enhance the chemicals management program, including:
New Regulatory Risk Management Actions
Development of Chemical Action Plans, which will target the risk management efforts on chemicals of concern
Requiring Information Needed to Understand Chemical Risks
Increasing Public Access to Information About Chemicals
Engaging Stakeholders in Prioritizing Chemicals for Future Risk Management Action.
Chemicals are regulated under various laws including the Toxic Substances Control Act (TSCA). In 2010, Congress was considering a new law entitled the Safe Chemicals Act. Over the following several years, the Senate considered a number of legislative texts to amend the TSCA. These included the Safer Chemicals Act, several versions of which were introduced by Senator Frank Lautenberg (D-NJ), with the latest in 2013, and the Chemical Safety Improvement Act (S. 1009, CSIA) introduced by Senators Lautenberg and David Vitter (R-LA) in 2013. Senator Lautenberg died shortly after CSIA's introduction, and over time his mantle was picked up by Senator Tom Udall (D-NM), who continued to work with Senator Vitter on revisions to the CSIA. The result of that effort was the Frank R. Lautenberg Chemical Safety for the 21st Century Act, passed by the Senate on December 17, 2015. The Toxic Substances Control Act (TSCA) Modernization Act of 2015 (H.R. 2576), passed the House of Representatives on June 23, 2015.
Revised legislation, which resolved differences between the House and Senate versions, was forwarded to the President on June 14, 2016. President Obama signed the bill into law on June 22, 2016. The Senator's widow, Bonnie Lautenberg, was present at the White House signing ceremony.
EU: Chemicals in Europe are managed by the REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) and the CLP (Classification, Labelling and Packaging) regulations. Specific regulations exist for particular families of products, such as fertilizers, detergents, explosives, pyrotechnic articles and drug precursors.
Canada: In Canada, the Chemicals Management Plan is responsible for designating priority chemicals, gathering public information about those chemicals, and generating risk assessment and management strategies.
Issues
One study defined a 'planetary boundary' for novel entities such as plastic and chemical pollution and concluded that it has been crossed, suggesting – alongside many other studies and indicators – that more and improved regulations or related changes (e.g. enforcement- or trade-related changes) are necessary.
Using drug-discovery artificial intelligence algorithms, researchers generated 40,000 potential chemical weapon candidates, a finding relevant to the timely regulation of chemicals and related products that could be used to manufacture the fraction of viable candidates. According to a senior scientist author of the study, actually synthesizing these chemicals to cause harm would be the more difficult part, and certain molecules needed to do so are known and regulated – however, some viable candidates may require only compounds that are currently unregulated.
Other issues include:
the public and academic debate about drug prohibition or about health policy in respect to recreational drugs, nootropics and bodybuilding supplements
the lack of various requirements, quality standards and lab testing for dietary supplements (additional product information may also be necessary in some cases – for example, for the supplement C60, which in one study caused significant morbidity and mortality in mice within two weeks when exposed to room-level light)
Rectum (https://en.wikipedia.org/wiki/Rectum)
The rectum (plural: rectums or recta) is the final straight portion of the large intestine in humans and some other mammals, and the gut in others. Before expulsion through the anus or cloaca, the rectum stores the feces temporarily. The adult human rectum is about long, and begins at the rectosigmoid junction (the end of the sigmoid colon) at the level of the third sacral vertebra or the sacral promontory depending upon what definition is used. Its diameter is similar to that of the sigmoid colon at its commencement, but it is dilated near its termination, forming the rectal ampulla. It terminates at the level of the anorectal ring (the level of the puborectalis sling) or the dentate line, again depending upon which definition is used. In humans, the rectum is followed by the anal canal, which is about long, before the gastrointestinal tract terminates at the anal verge. The word rectum comes from the Latin rēctum intestīnum, meaning straight intestine.
Structure
The human rectum is a part of the lower gastrointestinal tract. The rectum is a continuation of the sigmoid colon, and connects to the anus. The rectum follows the shape of the sacrum and ends in an expanded section called an ampulla where feces is stored before its release via the anal canal. An ampulla () is a cavity, or the dilated end of a duct, shaped like a Roman ampulla. The rectum joins with the sigmoid colon at the level of S3, and joins with the anal canal as it passes through the pelvic floor muscles.
Unlike other portions of the colon, the rectum does not have distinct taeniae coli. The taeniae blend with one another in the sigmoid colon five centimeters above the rectum, becoming a singular longitudinal muscle that surrounds the rectum on all sides for its entire length.
Blood supply and drainage
The blood supply of the rectum changes between the top and bottom portions. The top two thirds is supplied by the superior rectal artery. The lower third is supplied by the middle and inferior rectal arteries.
The superior rectal artery is a single artery that is a continuation of the inferior mesenteric artery, when it crosses the pelvic brim. It enters the mesorectum at the level of S3, and then splits into two branches, which run at the lateral back part of the rectum, and then the sides of the rectum. These then end in branches in the submucosa, which join (anastomose) with branches of the middle and inferior rectal arteries.
Microanatomy
The microanatomy of the wall of the rectum is similar to the rest of the gastrointestinal tract; namely, that it possesses a mucosa with a lining of a single layer of column-shaped cells with mucus-secreting goblet cells interspersed, resting on a lamina propria, with a layer of smooth muscle called muscularis mucosa. This sits on an underlying submucosa of connective tissue, surrounded by a muscularis propria of two bands of muscle, an inner circular band and an outer longitudinal one. There is a higher concentration of goblet cells in the rectal mucosa than in other parts of the gastrointestinal tract.
The lining of the rectum changes sharply at the line where the rectum meets the anus. Here, the lining changes from the column-shaped cells of the rectum to multiple layers of flat cells.
Function
The rectum acts as a temporary storage site for feces. The rectum receives fecal material from the descending colon, transmitted through regular muscle contractions called peristalsis. As the rectal walls expand due to the materials filling it from within, stretch receptors from the nervous system located in the rectal walls stimulate the desire to pass feces, a process called defecation.
An internal and external anal sphincter, and resting contraction of the puborectalis, prevent leakage of feces (fecal incontinence). As the rectum becomes more distended, the sphincters relax and a reflex expulsion of the contents of the rectum occurs. Expulsion occurs through contractions of the muscles of the rectum.
The urge to voluntarily defecate occurs after the rectal pressure increases to beyond 18 mmHg; and reflex expulsion at 55 mmHg. In voluntary defecation, in addition to contraction of the rectal muscles and relaxation of the external anal sphincter, abdominal muscle contraction, and relaxation of the puborectalis muscle occurs. This acts to make the angle between the rectum and anus straighter, and facilitate defecation.
Clinical significance
Examination
For the diagnosis of certain ailments, a rectal exam may be done. These include faecal impaction, prostatic cancer and benign prostatic hypertrophy in men, faecal incontinence, and internal haemorrhoids. Forms of medical imaging used to examine the rectum include CT scans and MRI scans. An ultrasound probe may be inserted into the rectum to view nearby structures such as the prostate.
Colonoscopy and sigmoidoscopy are forms of endoscopy that use a guided camera to directly view the rectum. The instruments may have the ability to take biopsies if needed, for diagnosis of diseases such as cancer. A proctoscope is another instrument that is used to visualise the rectum.
Body temperature can also be taken in the rectum. Rectal temperature can be taken by inserting a medical thermometer not more than into the rectum via the anus. A mercury thermometer should be inserted for 3 to 5 minutes; a digital thermometer should remain inserted until it beeps. Normal rectal temperature generally ranges from and is about above oral (mouth) temperature and about above axilla (armpit) temperature. Availability of less invasive temperature-taking methods including tympanic (ear) and forehead thermometers has facilitated reduced use of this method.
Route of administration
Some medications are also administered via the rectum. By their definitions, suppositories are inserted, and enemas are injected, into the rectum. Medications might be given via the rectum to relieve constipation, to treat conditions near the rectum, such as fissures or haemorrhoids, or to give medications that are systemically active when taking them by mouth is not possible. People tend to dislike medications administered by this route because of cultural issues and discomfort, as well as issues that may affect how well the medication works, such as leakage.
Constipation
One cause of constipation is faecal impaction in the rectum, in which a dry, hard stool forms. Constipation is most commonly due to dietary and lifestyle factors such as inadequate hydration, immobility, and lack of dietary fibre, although there are many potential causes. Such causes may include obstruction because of narrowing, local disease (such as Crohn's disease, fissures or haemorrhoids), or diseases affecting the neurological control of the bowel, or slow bowel transit time, including spinal cord injury and multiple sclerosis; use of medications such as opioids, and conditions such as diabetes mellitus, as well as severe illness. High calcium levels and low thyroid activity may also cause constipation.
Testing may be carried out to investigate the cause. This may include blood tests such as biochemistry, calcium levels, thyroid function tests. A digital rectal examination may be performed to see if there is stool in the rectum, and whether there is an obstruction. When symptoms such as weight loss, bleeding through the rectum, or pain are present, additional investigations such as a CT scan may be ordered. If constipation persists despite simple treatments, testing may also include anal manometry to measure pressures in the anus and rectum, electrophysiological studies, and magnetic resonance proctography.
In general however, constipation is treated by improving factors such as hydration, exercise, and dietary fibre. Laxatives may be used. Constipation that persists may require enemas or suppositories. Sometimes, use of the fingers or hand (manual evacuation) is required. Although peristalsis in the colon delivers material to the rectum, laxatives such as bisacodyl or senna that induce peristalsis in the large bowel do not appear to initiate peristalsis in the rectum. They induce a sensation of rectal fullness and contraction that frequently leads to defecation, but without the distinct waves of activity characteristic of peristalsis.
Inflammation
Proctitis is inflammation of the anus and the rectum.
Ulcerative colitis is a form of inflammatory bowel disease that causes ulcers affecting the rectum. It may be episodic over a person's lifetime, and may cause blood to be visible in the stool. The cause is unknown.
Cancer
Rectal cancer is a subgroup of colorectal cancer specific to the rectum.
Other diseases
Other diseases of the rectum include:
Rectal prolapse, referring to the prolapse of the rectum into the anus or external area. This is commonly caused by a weakened pelvic floor after childbirth
In the context of mesenteric ischemia, the upper rectum is sometimes referred to as Sudeck's point and is of clinical importance as a watershed region between the inferior mesenteric artery circulation and the internal iliac artery circulation via the middle rectal artery and thus prone to ischemia. Sudeck's point is often referred to along with Griffith's point at the splenic flexure as a watershed region.
Society and culture
Sexual stimulation
Due to the proximity of the anterior wall of the rectum to the vagina in females or to the prostate in males, and the shared nerves thereof, the rectum is an erogenous zone and its stimulation or penetration can result in sexual arousal.
History
Etymology
English rectum is derived from the Latin intestinum rectum 'straight gut', a calque of Ancient Greek ἀπευθυσμένον ἔντερον, derived from ἀπευθύνειν, to make straight, and ἔντερον, gut, attested in the writings of Greek physician Galen. During his anatomic investigations on animal corpses, Galen observed the rectum to be straight instead of curved as in humans. The expressions ἀπευθυσμένον ἔντερον and intestinum rectum are therefore not appropriate descriptions of the rectum in humans. Apeuthysmenon is the Latinization of ἀπευθυσμένον, and euthyenteron has a similar meaning (εὐθύς 'straight'). Much of the knowledge of the anatomy of the rectum comes from detailed descriptions provided by Andreas Vesalius in 1543.
Anteater (https://en.wikipedia.org/wiki/Anteater)
Anteaters are the four extant mammal species in the suborder Vermilingua (meaning "worm tongue"), commonly known for eating ants and termites. The individual species have other names in English and other languages. Together with sloths, they are within the order Pilosa. The name "anteater" is also commonly applied to the aardvark, numbat, echidnas, and pangolins, although they are not closely related to them.
Extant species are the giant anteater Myrmecophaga tridactyla, about long including the tail; the silky anteater Cyclopes didactylus, about long; the southern tamandua or collared anteater Tamandua tetradactyla, about long; and the northern tamandua Tamandua mexicana of similar dimensions.
Etymology
The name anteater refers to the species' diet, which consists mainly of ants and termites. Anteater has also been used as a common name for a number of animals that are not in Vermilingua, including the echidnas, numbat, pangolins, and aardvark. Anteaters are also known as antbears, although this is more commonly used as a name for the aardvark. The word tamandua comes from Portuguese, which itself borrowed it from the Tupí tamanduá, meaning "ant hunter". In Portuguese, tamanduá is used to refer to all anteaters; in Spanish, only the two species in the genus Tamandua are known by this name, with the giant anteater and silky anteater being called and , respectively. All four species are also known by a number of indigenous names.
Taxonomy
Evolutionary history
Anteaters are part of the Xenarthra superorder, a once diverse group of mammals that occupied South America while it was geographically isolated from the invasion of animals from North America; the other two surviving groups of xenarthrans are the sloths and the armadillos.
At one time, anteaters were assumed to be related to aardvarks and pangolins because of their physical similarities to those animals, but these similarities have since been determined to be not a sign of a common ancestor, but of convergent evolution. All have evolved powerful digging forearms, long tongues, and toothless, tube-like snouts to subsist by raiding termite mounds.
Taxonomy
The anteaters are more closely related to the sloths than they are to any other group of mammals. Their next closest relations are armadillos. There are four extant species in three genera. There are several extinct genera as well.
Suborder Vermilingua (anteaters)
Family Cyclopedidae
Genus Cyclopes
Silky anteater (C. didactylus)
Genus †Palaeomyrmidon (Rovereto 1914)
Family Myrmecophagidae
Genus Myrmecophaga
Giant anteater (M. tridactyla)
Genus †Neotamandua (Rovereto 1914)
Genus Tamandua
Northern tamandua (T. mexicana)
Southern tamandua (T. tetradactyla)
Genus †Protamandua (Ameghino 1904)
Morphology
All anteaters have extremely elongated snouts equipped with a thin and long tongue that is coated with sticky saliva produced by enlarged submaxillary glands. The mouth is small and has no teeth. The frontal feet have large claws on the third digit, used to break into the mounds of termites and ants, and the remaining digits are usually slightly smaller or lacking entirely. The entire body is covered with dense fur. The tail is long, in some cases as long as the rest of the body, covered with varying amounts of fur, and prehensile in all species except for the giant anteater. Anteaters are known to experience color abnormalities, including albinism in giant anteaters and albinism, leucism, and melanism in the southern tamandua.
The giant anteater can be distinguished from the other species on the basis of its large size, with an average total body length of around and an average mass of . The body is mainly covered with long, dark brown or black fur, with a prominent triangular white-edged black band from the shoulders down to chest and continuing to the mid-body. The forelegs are mostly white, marked with black at the wrists and just above the claws. The tail is almost as long as the body and covered with long, coarse hairs. Giant anteaters have the largest degree of rostral elongation relative to their size of any other ant-eating mammal.
The tamanduas are medium-sized species smaller than the giant anteater, with a total body length of around and a mass of . They can further be distinguished by their shorter snout, their relatively shorter claws, proportionately longer ears, and mostly fur-less, prehensile tail. They also differ in their coloration; most individuals are golden brown to gray, with a black "vest" on the back and belly joined by two black bands running across the shoulders. Some tamanduas may lack the vest partially or entirely, instead having a uniformly yellow, brown, or black coat.
The silky anteater is the smallest species in the order, with an average total body length of and an average mass of . It has extremely dense, silky, gray to golden-brown fur across its body, sometimes tinged silver on the back. Some South American populations have a chocolate brown stripe down the middle of the back, most prominent in the Amazon basin. The tail is extremely prehensile, and the limbs display adaptations to help it grab items while climbing. Unlike the other anteaters and many other unrelated obligate ant-eating mammals, the silky anteater's face is only slightly longer than expected for an animal of its size and shows a strong downward tilt.
Distribution and habitat
Anteaters are endemic to the New World, where they are found on the mainland from southern Mexico to northern Argentina, as well as some of the Caribbean islands. Like other xenarthans, anteaters originally evolved in South America, and began spreading to Central and North America as part of the Great American Interchange after the formation of the Isthmus of Panama around 3 million years ago. Some species of anteaters may have had greater ranges during the early Pleistocene than they have currently; for example, fossils of the giant anteater have been found as far north as Sonora, Mexico, and the reduction in its range is probably due to changes in habitat due to deglaciation in North America in the later Pleistocene.
Currently, the giant anteater is known from Central America south east of the Andes to northern Argentina, Bolivia, and Paraguay. West of the Andes, it is only known from Colombia and possibly Ecuador. It has been extirpated from much of its Central American range, and has also suffered local extinctions in the southern end of its distribution. The northern tamandua is found from southern Mexico south to the western Andes of Colombia, Venezuela, Peru, and Ecuador, while the southern tamandua inhabits South America east of the Andes, from as far north as Colombia, Trinidad, and the Guianas south to northern Uruguay and northern Argentina. Both species of tamandua co-occur in some parts of their range. The silky anteater occurs from Veracruz and Oaxaca in Mexico south to Colombia and Ecuador west of the Andes and to Brazil and Bolivia east of the Andes. An additional disjunct population also exists in northwestern Brazil.
Anteater habitats include dry tropical forests, rainforests, grasslands, and savannas. The silky anteater is specialized to an arboreal environment, but the more opportunistic tamanduas find their food both on the ground and in trees, typically in dry forests near streams and lakes. The almost entirely terrestrial giant anteater lives in savannas. The two anteaters of the genus Tamandua, the southern and the northern tamanduas, are much smaller than the giant anteater and differ markedly from it in their habits, being mainly arboreal. They inhabit the dense primeval forests of South and Central America. The silky anteater (Cyclopes didactylus) is a native of the hottest parts of South and Central America and is exclusively arboreal in its habits.
Behavior and ecology
Anteaters are mostly solitary mammals prepared to defend their territories. They do not normally enter a territory of another anteater of the same sex, but males often enter the territory of associated females. When a territorial dispute occurs, they vocalize, swat, and can sometimes sit on or even ride the back of their opponents.
Anteaters have poor sight but an excellent sense of smell, and most species depend on the latter for foraging, feeding, and defence. Their hearing is thought to be good.
With a body temperature fluctuating between , anteaters, like other xenarthrans, have among the lowest body temperatures of any mammal, and can tolerate greater fluctuations in body temperature than most mammals. Their daily energy intake from food is only slightly greater than their energy need for daily activities, and anteaters probably regulate their body temperatures so they stay cool during periods of rest and heat up during foraging.
Reproduction
Adult males are slightly larger and more muscular than females, and have wider heads and necks. Visual sex determination can, however, be difficult, since the penis and testes are located internally between the rectum and urinary bladder in males, and females have a single pair of mammae near the armpits. Fertilization occurs by contact transfer without intromission, similar to some lizards. Polygynous mating usually results in a single offspring; twins are possible but rare. The large foreclaws prevent mothers from grasping their newborns, so they have to carry the offspring until they are self-sufficient.
Foraging and diet
Anteaters are specialized to feed on small insects, with each anteater species having its own insect preferences: small species are specialized on arboreal insects living on small branches, while large species can penetrate the hard covering of the nests of terrestrial insects. To avoid the jaws, sting, and other defences of the invertebrates, anteaters have adopted the feeding strategy of licking up large numbers of ants and termites as quickly as possible – an anteater normally spends about a minute at a nest before moving on to another – and a giant anteater has to visit up to 200 nests per day to consume the thousands of insects it needs to satisfy its caloric requirements.
The anteater's tongue is covered with thousands of tiny hooks called filiform papillae which are used to hold the insects together with large amounts of saliva. Swallowing and the movement of the tongue are aided by side-to-side movements of the jaws. The tongue is attached to the sternum and moves very quickly, flicking 150 times per minute. The anteater's stomach, similar to a bird's gizzard, has hardened folds and uses strong contractions to grind the insects, a digestive process assisted by small amounts of ingested sand and dirt.
Predators
A number of mammals and birds are known to prey on anteaters. Jaguars feed upon both giant anteaters and the southern tamandua, with the latter species representing a significant portion of the jaguar's diet in some areas. Tamanduas are also preyed upon by ocelots, other felids, foxes, and caimans, and may be vulnerable to predation by harpy eagles near their nests. Silky anteaters have been observed being attacked by hawks.
Diseases and parasites
Anteaters are known to host a wide variety of parasites, including ticks, fleas, parasitic worms, and acanthocephalans. The most common ticks found on anteaters are from the family Ixodidae, and especially the genus Amblyomma: 29 species of ixodids are known from anteaters, 25 of which belong to Amblyomma. Anteaters are the primary host for at least four species of ticks: A. nodosum, A. calcaratum, A. goeldi, and A. pictum. Parasitic worms collected from anteaters include those in the class Cestoda and nematodes in the families Spiruridae, Physalopteridae, Trichostrongylidae, and Ascarididae. Parasitization by the nematode Physaloptera magnipapilla results in anemia and gastritis in the giant anteater. The giant anteater is the type host of a species of nematode, Aspidodera serrata, while the silky anteater is the type host of the coccidian Eimeria cyclopei. Other parasites that affect anteaters are protozoans, bacteria, parabasalids, and viruses.
Diseases that anteaters suffer from include physiological diseases like Sertoli cell tumors, physical injuries such as burns and fractures, metabolic and nutritional disorders like soft tissue mineralization and hypervitaminosis D, and infectious diseases like gastritis, osteomyelitis, and dermatitis. Anteaters may serve as vectors for the transmission of several diseases between species. Ticks from anteaters are known to carry Rickettsia bacteria, which cause spotted fever in humans. Anteaters have also been infected with SARS-CoV-2, the virus that causes COVID-19, Leishmania, the protozoan that causes leishmaniasis, and canine distemper-causing Morbillivirus, contracting the last disease from a maned wolf in captivity. Anteaters, like other xenarthrans, display several adaptations that lead to very low rates of cancer, such as programmed cell death at very low levels of DNA damage.
Conservation
The silky anteater and both of the tamanduas are classified as being of least concern by the IUCN due to their large ranges, presumed large populations, and the absence of significant population declines. The giant anteater is classified as vulnerable due to high levels of habitat loss and degradation, an ongoing population decline of greater than 30% over the last 21 years, and threats such as hunting and wildfires. Additionally, the population of the silky anteater from northeastern Brazil has been assessed separately by the IUCN and classified as data deficient, although it is currently thought to be decreasing due to habitat loss and illegal capture for the wildlife trade.
Gondwana () was a large landmass, sometimes referred to as a supercontinent. The remnants of Gondwana make up around two-thirds of today's continental area, including South America, Africa, Antarctica, Australia, Zealandia, Arabia, and the Indian subcontinent.
Gondwana was formed by the accretion of several cratons (large stable blocks of the Earth's crust), beginning with the East African Orogeny, the collision of India and Madagascar with East Africa, and culminating with the overlapping Brasiliano and Kuunga orogenies, the collision of South America with Africa, and the addition of Australia and Antarctica, respectively. Eventually, Gondwana became the largest piece of continental crust of the Paleozoic Era, covering an area of some , about one-fifth of the Earth's surface. It fused with Laurasia during the Carboniferous to form Pangaea. It began to separate from northern Pangaea (Laurasia) during the Triassic, and started to fragment during the Early Jurassic (around 180 million years ago). The final stages of break-up, involving the separation of Antarctica from South America (forming the Drake Passage) and Australia, occurred during the Paleogene. Gondwana was not considered a supercontinent by the earliest definition, since the landmasses of Baltica, Laurentia, and Siberia were separated from it. To differentiate it from the Indian region of the same name, it is also commonly called Gondwanaland.
Regions that were part of Gondwana shared floral and faunal elements that persist to the present day.
Name
The continent of Gondwana was named by the Austrian scientist Eduard Suess, after the region in central India of the same name, which is derived from Sanskrit for "forest of the Gonds". The name had been previously used in a geological context, first by H. B. Medlicott in 1872, from which the Gondwana sedimentary sequences (Permian-Triassic) are also described.
Some scientists prefer the term "Gondwanaland" for the supercontinent to make a clear distinction between the region and the supercontinent.
Formation
The assembly of Gondwana was a protracted process during the Neoproterozoic and Paleozoic, which remains incompletely understood because of the lack of paleo-magnetic data. Several orogenies, collectively known as the Pan-African orogeny, caused the continental fragments of a much older supercontinent, Rodinia, to amalgamate. One of those orogenic belts, the Mozambique Belt, formed and was originally interpreted as the suture between East (India, Madagascar, Antarctica, Australia) and West Gondwana (Africa and South America). Three orogenies were recognised during the 1990s as a result of data sets compiled on behalf of oil and mining companies: the East African Orogeny () and Kuunga orogeny (including the Malagasy orogeny in southern Madagascar) (), the collision between East Gondwana and East Africa in two steps, and the Brasiliano orogeny (), the successive collision between South American and African cratons.
The last stages of Gondwanan assembly overlapped with the opening of the Iapetus Ocean between Laurentia and western Gondwana. During this interval, the Cambrian explosion occurred. Laurentia was docked against the western shores of a united Gondwana for a brief period near the Precambrian and Cambrian boundary, forming the short-lived and still disputed supercontinent Pannotia.
The Mozambique Ocean separated the Congo–Tanzania–Bangweulu Block of central Africa from Neoproterozoic India (India, the Antongil Block in far eastern Madagascar, the Seychelles, and the Napier and Rayner Complexes in East Antarctica). The Azania continent (much of central Madagascar, the Horn of Africa and parts of Yemen and Arabia) was an island in the Mozambique Ocean.
The continents of Australia and East Antarctica were still separated from India, eastern Africa, and Kalahari by , when most of western Gondwana had already been amalgamated. By 550 Ma, India had reached its Gondwanan position, which initiated the Kuunga orogeny (also known as the Pinjarra orogeny). Meanwhile, on the other side of the newly forming Africa, Kalahari collided with Congo and Rio de la Plata, closing the Adamastor Ocean. Around 540–530 Ma, the closure of the Mozambique Ocean brought India next to Australia–East Antarctica, and both North China and South China were in proximity to Australia.
As the rest of Gondwana formed, a complex series of orogenic events assembled the eastern parts of Gondwana (eastern Africa, Arabian-Nubian Shield, Seychelles, Madagascar, India, Sri Lanka, East Antarctica, Australia) . First, the Arabian-Nubian Shield collided with eastern Africa (in the Kenya-Tanzania region) in the East African Orogeny . Then Australia and East Antarctica were merged with the remaining Gondwana in the Kuunga Orogeny.
The later Malagasy orogeny at about 550–515 Mya affected Madagascar, eastern East Africa and southern India. In it, Neoproterozoic India collided with the already combined Azania and Congo–Tanzania–Bangweulu Block, suturing along the Mozambique Belt.
The Terra Australis Orogen developed along Gondwana's western, southern, and eastern margins. Proto-Gondwanan Cambrian arc belts from this margin have been found in eastern Australia, Tasmania, New Zealand, and Antarctica. Though these belts formed a continuous arc chain, the direction of subduction was different between the Australian-Tasmanian and New Zealand-Antarctica arc segments.
Peri-Gondwana development: Paleozoic rifts and accretions
Many terranes were accreted to Eurasia during Gondwana's existence, but the Cambrian or Precambrian origin of many of these terranes remains uncertain. For example, some Paleozoic terranes and microcontinents that now make up Central Asia, often called the "Kazakh" and "Mongolian terranes", were progressively amalgamated into the continent Kazakhstania in the late Silurian. Whether these blocks originated on the shores of Gondwana is not known.
In the Early Paleozoic, the Armorican terrane, which today forms large parts of France, was part of Peri-Gondwana; the Rheic Ocean closed in front of it and the Paleo-Tethys Ocean opened behind it. Precambrian rocks from the Iberian Peninsula suggest that it, too, formed part of core Gondwana before its detachment as an orocline in the Variscan orogeny close to the Carboniferous–Permian boundary.
South-east Asia was assembled from Gondwanan and Cathaysian continental fragments during the Mid-Paleozoic and Cenozoic. This process can be divided into three phases of rifting along Gondwana's northern margin. First, in the Devonian, North and South China, together with Tarim and Qaidam (north-western China), rifted away, opening the Paleo-Tethys behind them; these terranes accreted to Asia during the Late Devonian and Permian. Second, in the Late Carboniferous to Early Permian, the Cimmerian terranes opened the Meso-Tethys Ocean; Sibumasu and Qiangtang were added to south-east Asia during the Late Permian and Early Jurassic. Third, in the Late Triassic to Late Jurassic, the Lhasa, Burma, and Woyla terranes opened the Neo-Tethys Ocean; Lhasa collided with Asia during the Early Cretaceous, and Burma and Woyla during the Late Cretaceous.
Gondwana's long, northern margin remained a mostly passive margin throughout the Paleozoic. The Early Permian opening of the Neo-Tethys Ocean along this margin produced a long series of terranes, many of which were and still are being deformed in the Himalayan orogeny. These terranes are, from Turkey to north-eastern India: the Taurides in southern Turkey; the Lesser Caucasus Terrane in Georgia; the Sanand, Alborz, and Lut terranes in Iran; the Mangysglak Terrane in the Caspian Sea; the Afghan Terrane; the Karakorum Terrane in northern Pakistan; and the Lhasa and Qiangtang terranes in Tibet. The Permian–Triassic widening of the Neo-Tethys pushed all these terranes across the Equator and over to Eurasia.
Southwestern accretions
During the Neoproterozoic to Paleozoic phase of the Terra Australis Orogen, a series of terranes were rafted from the proto-Andean margin when the Iapetus Ocean opened, only to be added back to Gondwana during the closure of that ocean. During the Paleozoic, the blocks that helped to form parts of the Southern Cone of South America include a piece transferred from Laurentia when the west edge of Gondwana scraped against southeast Laurentia in the Ordovician. This is the Cuyania or Precordillera terrane of the Famatinian orogeny in northwest Argentina, which may have continued the line of the Appalachians southwards. The Chilenia terrane accreted later against Cuyania. The collision of the Patagonian terrane with the southwestern margin of Gondwana occurred in the late Paleozoic. Subduction-related igneous rocks from beneath the North Patagonian Massif have been dated at 320–330 million years old, indicating that the subduction process initiated in the early Carboniferous. Subduction was relatively short-lived (lasting about 20 million years): initial contact of the two landmasses occurred in the mid-Carboniferous, with broader collision during the early Permian. In the Devonian, an island arc named Chaitenia accreted to Patagonia in what is now south-central Chile.
Gondwana as part of Pangaea: Late Paleozoic to Early Mesozoic
Gondwana and Laurasia formed the Pangaea supercontinent during the Carboniferous. Pangaea began to break up in the Mid-Jurassic when the Central Atlantic opened.
In the western end of Pangaea, the collision between Gondwana and Laurasia closed the Rheic and Paleo-Tethys oceans. The obliquity of this closure resulted in the docking of some northern terranes in the Marathon, Ouachita, Alleghanian, and Variscan orogenies. Southern terranes, such as Chortis and Oaxaca, on the other hand, remained largely unaffected by the collision along the southern shores of Laurentia. Some Peri-Gondwanan terranes, such as Yucatán and Florida, were buffered from collisions by major promontories. Other terranes, such as Carolina and Meguma, were directly involved in the collision. The final collision resulted in the Variscan-Appalachian Mountains, stretching from present-day Mexico to southern Europe. Meanwhile, Baltica collided with Siberia and Kazakhstania, which resulted in the Uralian orogeny and Laurasia. Pangaea was finally amalgamated in the Late Carboniferous-Early Permian, but the oblique forces continued until Pangaea began to rift in the Triassic.
In the eastern end, collisions occurred slightly later. The North China, South China, and Indochina blocks rifted from Gondwana during the middle Paleozoic and opened the Proto-Tethys Ocean. North China docked with Mongolia and Siberia during the Carboniferous–Permian, followed by South China. The Cimmerian blocks then rifted from Gondwana to form the Paleo-Tethys and Neo-Tethys oceans in the Late Carboniferous, and docked with Asia during the Triassic and Jurassic. Western Pangaea began to rift while the eastern end was still being assembled.
The formation of Pangaea and its mountains had a tremendous impact on global climate and sea levels, which resulted in glaciations and continent-wide sedimentation. In North America, the base of the Absaroka sequence coincides with the Alleghanian and Ouachita orogenies and are indicative of a large-scale change in the mode of deposition far away from the Pangaean orogenies. Ultimately, these changes contributed to the Permian–Triassic extinction event and left large deposits of hydrocarbons, coal, evaporite, and metals.
The breakup of Pangaea began with the Central Atlantic magmatic province (CAMP) between South America, Africa, North America, and Europe. CAMP covered more than seven million square kilometres over a few million years, reached its peak at , and coincided with the Triassic–Jurassic extinction event. The reformed Gondwanan continent was not precisely the same as that which had existed before Pangaea formed; for example, most of Florida and southern Georgia and Alabama is underlain by rocks that were originally part of Gondwana, but this region stayed attached to North America when the Central Atlantic opened.
Break-up
Mesozoic
Antarctica, the centre of the supercontinent, shared boundaries with all other Gondwana continents and the fragmentation of Gondwana propagated clockwise around it. The break-up was the result of the eruption of the Karoo-Ferrar igneous province, one of the Earth's most extensive large igneous provinces (LIP) , but the oldest magnetic anomalies between South America, Africa, and Antarctica are found in what is now the southern Weddell Sea where initial break-up occurred during the Jurassic .
Opening of western Indian Ocean
Gondwana began to break up in the early Jurassic following the extensive and fast emplacement of the Karoo-Ferrar flood basalts . Before the Karoo plume initiated rifting between Africa and Antarctica, it separated a series of smaller continental blocks from Gondwana's southern, Proto-Pacific margin (along what is now the Transantarctic Mountains): the Antarctic Peninsula, Marie Byrd Land, Zealandia, and Thurston Island; the Falkland Islands and Ellsworth–Whitmore Mountains (in Antarctica) were rotated 90° in opposite directions; and South America south of the Gastre Fault (often referred to as Patagonia) was pushed westward. The history of the Africa-Antarctica break-up can be studied in great detail in the fracture zones and magnetic anomalies flanking the Southwest Indian Ridge.
The Madagascar block and the Mascarene Plateau, stretching from the Seychelles to Réunion, were broken off India, causing Madagascar and Insular India to be separate landmasses: elements of this break-up nearly coincide with the Cretaceous–Paleogene extinction event. The India–Madagascar–Seychelles separations appear to coincide with the eruption of the Deccan basalts, whose eruption site may survive as the Réunion hotspot. The Seychelles and the Maldives are now separated by the Central Indian Ridge.
During the initial break-up in the Early Jurassic, a marine transgression swept over the Horn of Africa covering Triassic planation surfaces with sandstone, limestone, shale, marls and evaporites.
Opening of eastern Indian Ocean
East Gondwana, comprising Antarctica, Madagascar, India, and Australia, began to separate from Africa. East Gondwana then began to break up when India moved northwest from Australia-Antarctica. The Indian plate and the Australian plate are now separated by the Capricorn plate and its diffuse boundaries. During the opening of the Indian Ocean, the Kerguelen hotspot first formed the Kerguelen Plateau on the Antarctic plate and then the Ninety East Ridge on the Indian plate at . The Kerguelen Plateau and the Broken Ridge, the southern end of the Ninety East Ridge, are now separated by the Southeast Indian Ridge.
Separation between Australia and East Antarctica began with seafloor spreading occurring . A shallow seaway developed over the South Tasman Rise during the Early Cenozoic and as oceanic crust started to separate the continents during the Eocene global ocean temperature dropped significantly. A dramatic shift from arc- to rift magmatism separated Zealandia, including New Zealand, the Campbell Plateau, Chatham Rise, Lord Howe Rise, Norfolk Ridge, and New Caledonia, from West Antarctica .
Opening of South Atlantic Ocean
The opening of the South Atlantic Ocean divided West Gondwana (South America and Africa), but there is considerable debate over the exact timing of this break-up. Rifting propagated from south to north along Triassic–Early Jurassic lineaments, but intra-continental rifts also began to develop within both continents in Jurassic–Cretaceous sedimentary basins, subdividing each continent into three sub-plates. Rifting began at Falkland latitudes, forcing Patagonia to move relative to the still static remainder of South America and Africa, and this westward movement lasted until the Early Cretaceous . From there rifting propagated northward during the Late Jurassic or Early Cretaceous, most likely forcing dextral movements between sub-plates on either side. South of the Walvis Ridge and Rio Grande Rise, the Paraná and Etendeka magmatics resulted in further ocean-floor spreading and the development of rift systems on both continents, including the Central African Rift System and the Central African Shear Zone, which lasted until . At Brazilian latitudes spreading is more difficult to assess because of the lack of palaeo-magnetic data, but rifting occurred in Nigeria at the Benue Trough . North of the Equator the rifting began after and continued until . Dinosaur footprints representing identical species assemblages are known from opposite sides of the South Atlantic (Brazil and Cameroon) dating to around , suggesting that some form of land connection still existed between Africa and South America as recently as the early Aptian.
Early Andean orogeny
The first phases of Andean orogeny in the Jurassic and Early Cretaceous were characterised by extensional tectonics, rifting, the development of back-arc basins, and the emplacement of large batholiths. This development is presumed to have been linked to the subduction of cold oceanic lithosphere. During the mid to Late Cretaceous (), the Andean orogeny changed significantly in character. Warmer and younger oceanic lithosphere is believed to have started to be subducted beneath South America around this time. This kind of subduction is held responsible not only for the intense contractional deformation that different lithologies were subject to, but also for the uplift and erosion known to have occurred from the Late Cretaceous onward. Plate tectonic reorganisation since the mid-Cretaceous might also have been linked to the opening of the South Atlantic Ocean. Another change related to the mid-Cretaceous plate tectonic rearrangement was the change in subduction direction of the oceanic lithosphere, which shifted from south-east to north-east motion about 90 million years ago. While subduction direction changed, it remained oblique (and not perpendicular) to the coast of South America, and the direction change affected several subduction zone-parallel faults, including Atacama, Domeyko, and Liquiñe-Ofqui.
Cenozoic
Insular India began to collide with Asia circa , forming the Indian subcontinent, since which more than of crust has been absorbed by the Himalayan-Tibetan orogen. During the Cenozoic, the orogen resulted in the construction of the Tibetan Plateau between the Tethyan Himalayas in the south and the Kunlun and Qilian mountains in the north.
Later, South America was connected to North America via the Isthmus of Panama, cutting off a circulation of warm water and thereby making the Arctic colder, as well as allowing the Great American Interchange.
The break-up of Gondwana can be said to continue in eastern Africa at the Afar triple junction, which separates the Arabian, African, and Somali plates, resulting in rifting in the Red Sea and East African Rift.
Australia–Antarctica separation
In the Early Cenozoic, Australia was still connected to Antarctica 35–40° south of its current location and both continents were largely unglaciated. A rift between the two developed but remained an embayment until the Eocene-Oligocene boundary when the Circumpolar Current developed and the glaciation of Antarctica began.
Australia was warm and wet during the Paleocene and dominated by rainforests. The opening of the Tasman Gateway at the Eocene-Oligocene boundary () resulted in abrupt cooling but the Oligocene became a period of high rainfall with swamps in southeastern Australia. During the Miocene, a warm and humid climate developed with pockets of rainforests in central Australia, but before the end of the period, colder and drier climate severely reduced this rainforest. A brief period of increased rainfall in the Pliocene was followed by drier climate which favoured grassland. Since then, the fluctuation between wet interglacial periods and dry glacial periods has developed into the present arid regime. Australia has thus experienced various climate changes over a 15-million-year period with a gradual decrease in precipitation.
The Tasman Gateway between Australia and Antarctica began to open . Palaeontological evidence indicates the Antarctic Circumpolar Current (ACC) was established in the Late Oligocene with the full opening of the Drake Passage and the deepening of the Tasman Gateway. The oldest oceanic crust in the Drake Passage, however, is -old, which indicates that spreading between the Antarctic and South American plates began near the Eocene-Oligocene boundary. Deep sea environments in Tierra del Fuego and the North Scotia Ridge during the Eocene and Oligocene indicate a "Proto-ACC" opened during this period. Later, , a series of events severely restricted the Proto-ACC: a change to shallow marine conditions along the North Scotia Ridge; closure of the Fuegan Seaway, the deep sea that existed in Tierra del Fuego; and uplift of the Patagonian Cordillera. This, together with the reactivated Iceland plume, contributed to global warming. During the Miocene, the Drake Passage began to widen, and as water flow between South America and the Antarctic Peninsula increased, the renewed ACC resulted in a cooler global climate.
Since the Eocene, the northward movement of the Australian Plate has resulted in an arc-continent collision with the Philippine and Caroline plates and the uplift of the New Guinea Highlands. From the Oligocene to the late Miocene, the climate in Australia, dominated by warm and humid rainforests before this collision, began to alternate between open forest and rainforest before the continent became the arid or semiarid landscape it is today.
Biogeography
The adjective "Gondwanan" is in common use in biogeography when referring to patterns of distribution of living organisms, typically when the organisms are restricted to two or more of the now-discontinuous regions that were once part of Gondwana, including the Antarctic flora. For example, the plant family Proteaceae, known from all continents in the Southern Hemisphere, has a "Gondwanan distribution" and is often described as an archaic, or relict, lineage. The distribution of the Proteaceae is, nevertheless, the result of both Gondwanan rafting and later oceanic dispersal.
Post-Cambrian diversification
During the Silurian, Gondwana extended from the Equator (Australia) to the South Pole (North Africa and South America) whilst Laurasia was located on the Equator opposite to Australia. A short-lived Late Ordovician glaciation was followed by a Silurian Hot House period. The End-Ordovician extinction, which resulted in 27% of marine invertebrate families and 57% of genera going extinct, occurred during this shift from Ice House to Hot House.
By the end of the Ordovician, Cooksonia, a slender, ground-covering plant, became the first known vascular plant to establish itself on land. This first colonisation occurred exclusively around the Equator on landmasses then limited to Laurasia and, in Gondwana, to Australia. In the late Silurian, two distinctive lineages, zosterophylls and rhyniophytes, had colonised the tropics. The former evolved into the lycopods that were to dominate the Gondwanan vegetation over a long period, whilst the latter evolved into horsetails and gymnosperms. Most of Gondwana was located far from the Equator during this period and remained a lifeless and barren landscape.
West Gondwana drifted north during the Devonian, bringing Gondwana and Laurasia close together. Global cooling contributed to the Late Devonian extinction (19% of marine families and 50% of genera went extinct), and glaciation occurred in South America. Before Pangaea had formed, terrestrial plants, such as pteridophytes, began to diversify rapidly, resulting in the colonisation of Gondwana. The Baragwanathia Flora, found only in the Yea Beds of Victoria, Australia, occurs in two strata separated by or 30 Ma; the upper assemblage is more diverse and includes Baragwanathia, the first primitive herbaceous lycopod to evolve from the zosterophylls. During the Devonian, giant club mosses replaced the Baragwanathia Flora, introducing the first trees, and by the Late Devonian this first forest was accompanied by the progymnosperms, including the first large trees, Archaeopteris. The Late Devonian extinction probably also resulted in osteolepiform fishes evolving into the amphibian tetrapods, the earliest land vertebrates, in Greenland and Russia. The only traces of this evolution in Gondwana are amphibian footprints and a single jaw from Australia.
The closure of the Rheic Ocean and the formation of Pangaea in the Carboniferous resulted in the rerouting of ocean currents that initiated an Ice House period. As Gondwana began to rotate clockwise, Australia shifted south to more temperate latitudes. An ice cap initially covered most of southern Africa and South America but spread to eventually cover most of the supercontinent, except northernmost Africa-South America. Giant lycopod and horsetail forests continued to evolve in tropical Laurasia together with a diversified assemblage of true insects. In Gondwana, in contrast, ice and, in Australia, volcanism decimated the Devonian flora to a low-diversity seed fern flora – the pteridophytes were increasingly replaced by the gymnosperms which were to dominate until the Mid-Cretaceous. Australia, however, was still located near the Equator during the Early Carboniferous, and during this period, temnospondyl and lepospondyl amphibians and the first amniote reptilians evolved, all closely related to the Laurasian fauna, but spreading ice eventually drove these animals away from Gondwana entirely.
The Gondwana ice sheet melted, and sea levels dropped during the Permian and Triassic global warming. During this period, the now extinct glossopterids colonised Gondwana and reached peak diversity in the Late Permian, when coal-forming forests covered all of Gondwana. The period also saw the evolution of the Voltziales, one of the few plant orders to survive the Permian–Triassic extinction (57% of marine families and 83% of genera went extinct), which came to dominate in the Late Permian and from which true conifers evolved. Tall lycopods and horsetails dominated the wetlands of Gondwana in the Early Permian. Insects co-evolved with glossopterids across Gondwana and diversified, with more than 200 species in 21 orders by the Late Permian, many known from South Africa and Australia. Beetles and cockroaches remained minor elements in this fauna. Tetrapod fossils from the Early Permian have only been found in Laurasia, but they became common in Gondwana later during the Permian. The arrival of the therapsids resulted in the first plant–vertebrate–insect ecosystem.
Modern diversification
During the Mid- to Late Triassic, hot-house conditions coincided with a peak in biodiversity – the end-Permian extinction was enormous and so was the radiation that followed. Two families of conifers, Podocarpaceae and Araucariaceae, dominated Gondwana in the Early Triassic, but Dicroidium, an extinct genus of fork-leaved seed ferns, dominated woodlands and forests of Gondwana during most of the Triassic. Conifers evolved and radiated during the period, with six of eight extant families already present before the end of it. Bennettitales and Pentoxylales, two now extinct orders of gymnospermous plants, evolved in the Late Triassic and became important in the Jurassic and Cretaceous. It is possible that gymnosperm biodiversity surpassed later angiosperm biodiversity and that the evolution of angiosperms began during the Triassic but, if so, in Laurasia rather than in Gondwana. Two Gondwanan classes, lycophytes and sphenophytes, saw a gradual decline during the Triassic while ferns, though never dominant, managed to diversify.
The brief period of icehouse conditions during the Triassic–Jurassic extinction event had a dramatic impact on dinosaurs but left plants largely unaffected. The Jurassic was mostly a period of hot-house conditions and, while vertebrates managed to diversify in this environment, plants have left little evidence of such development, apart from Cheirolepidiacean conifers and Caytoniales and other groups of seed ferns. In terms of biomass, the Jurassic flora was dominated by conifer families and other gymnosperms that had evolved during the Triassic. The pteridophytes that had dominated during the Paleozoic were now marginalised, except for ferns. In contrast to Laurasia, very few insect fossils have been found in Gondwana, to a considerable extent because of widespread deserts and volcanism. While plants had a cosmopolitan distribution, dinosaurs evolved and diversified in a pattern that reflects the Jurassic break-up of Pangaea.
The Cretaceous saw the arrival of the angiosperms, or flowering plants, a group that probably evolved in western Gondwana (South America–Africa). From there the angiosperms diversified in two stages: the monocots and magnoliids evolved in the Early Cretaceous, followed by the hamamelid dicots. By the Mid-Cretaceous, angiosperms constituted half of the flora in northeastern Australia. There is, however, no obvious connection between this spectacular angiosperm radiation and any known extinction event, nor with vertebrate/insect evolution. Insect orders associated with pollination, such as beetles, flies, butterflies and moths, wasps, bees, and ants, radiated continuously from the Permian–Triassic, long before the arrival of the angiosperms. Well-preserved insect fossils have been found in the lake deposits of the Santana Formation in Brazil, the Koonwarra Lake fauna in Australia, and the Orapa diamond mine in Botswana.
Dinosaurs continued to prosper but, as the angiosperms diversified, conifers, bennettitaleans and pentoxylaleans disappeared from Gondwana around 115 Ma together with the specialised herbivorous ornithischians, whilst generalist browsers, such as several families of sauropodomorph Saurischia, prevailed. The Cretaceous–Paleogene extinction event killed off all dinosaurs except birds, but plant evolution in Gondwana was hardly affected. Gondwanatheria is an extinct group of non-therian mammals with a Gondwanan distribution (South America, Africa, Madagascar, India, Zealandia and Antarctica) during the Late Cretaceous and Palaeogene. Xenarthra and Afrotheria, two placental clades, are of Gondwanan origin and probably began to evolve separately when Africa and South America separated.
The laurel forests of Australia, New Caledonia, and New Zealand have a number of species related to those of the laurissilva of Valdivia, through the connection of the Antarctic flora. These include gymnosperms and the deciduous species of Nothofagus, as well as the New Zealand laurel, Corynocarpus laevigatus, and Laurelia novae-zelandiae. New Caledonia and New Zealand became separated from Australia by continental drift 85 million years ago. The islands still retain plants that originated in Gondwana and spread to the Southern Hemisphere continents later.
Relativistic quantum mechanics

In physics, relativistic quantum mechanics (RQM) is any Poincaré-covariant formulation of quantum mechanics (QM). This theory is applicable to massive particles propagating at all velocities up to those comparable to the speed of light c, and can accommodate massless particles. The theory has applications in high energy physics, particle physics and accelerator physics, as well as atomic physics, chemistry and condensed matter physics. Non-relativistic quantum mechanics refers to the mathematical formulation of quantum mechanics applied in the context of Galilean relativity, more specifically quantizing the equations of classical mechanics by replacing dynamical variables by operators. Relativistic quantum mechanics is quantum mechanics applied with special relativity. Although the earlier formulations, like the Schrödinger picture and the Heisenberg picture, were originally formulated in a non-relativistic background, a few of them (e.g. the Dirac or path-integral formalism) also work with special relativity.
Key features common to all RQMs include: the prediction of antimatter, spin magnetic moments of elementary spin fermions, fine structure, and quantum dynamics of charged particles in electromagnetic fields. The key result is the Dirac equation, from which these predictions emerge automatically. By contrast, in non-relativistic quantum mechanics, terms have to be introduced artificially into the Hamiltonian operator to achieve agreement with experimental observations.
The most successful (and most widely used) RQM is relativistic quantum field theory (QFT), in which elementary particles are interpreted as field quanta. A unique consequence of QFT that has been tested against other RQMs is the failure of conservation of particle number, for example in matter creation and annihilation.
Paul Dirac's work between 1927 and 1933 shaped the synthesis of special relativity and quantum mechanics. His work was instrumental, as he formulated the Dirac equation and also originated quantum electrodynamics, both of which were successful in combining the two theories.
In this article, the equations are written in familiar 3D vector calculus notation, with hats for operators (not necessarily standard in the literature); where space and time components can be collected, tensor index notation is also shown (as is frequent in the literature), and the Einstein summation convention is used. SI units are used here; Gaussian units and natural units are common alternatives. All equations are in the position representation; for the momentum representation the equations have to be Fourier transformed – see position and momentum space.
Combining special relativity and quantum mechanics
One approach is to modify the Schrödinger picture to be consistent with special relativity.
A postulate of quantum mechanics is that the time evolution of any quantum system is given by the Schrödinger equation:

iħ ∂ψ/∂t = Ĥψ
using a suitable Hamiltonian operator Ĥ corresponding to the system. The solution is a complex-valued wavefunction ψ(r, t), a function of the 3D position vector r of the particle at time t, describing the behavior of the system.
Every particle has a non-negative spin quantum number s. The number 2s is an integer, odd for fermions and even for bosons. Each s has 2s + 1 z-projection quantum numbers: σ = s, s − 1, ..., −s + 1, −s. This is an additional discrete variable the wavefunction requires: ψ(r, t, σ).
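The counting of z-projection quantum numbers can be sketched in a few lines of Python (a hypothetical helper for illustration, not from the article):

```python
from fractions import Fraction

def z_projections(s):
    """Return the 2s + 1 z-projection quantum numbers s, s-1, ..., -s."""
    s = Fraction(s)
    return [s - k for k in range(int(2 * s) + 1)]

# A fermion (s = 1/2, so 2s is odd) has two projections;
# a boson (s = 2, so 2s is even) has five.
assert z_projections(Fraction(1, 2)) == [Fraction(1, 2), Fraction(-1, 2)]
assert len(z_projections(2)) == 5
```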
Historically, in the early 1920s Pauli, Kronig, Uhlenbeck and Goudsmit were the first to propose the concept of spin. The inclusion of spin in the wavefunction incorporates the Pauli exclusion principle (1925) and the more general spin–statistics theorem (1939) due to Fierz, rederived by Pauli a year later. This is the explanation for a diverse range of subatomic particle behavior and phenomena: from the electronic configurations of atoms, nuclei (and therefore all elements on the periodic table and their chemistry), to the quark configurations and colour charge (hence the properties of baryons and mesons).
A fundamental prediction of special relativity is the relativistic energy–momentum relation; for a particle of rest mass m0, in a particular frame of reference with energy E and 3-momentum p of magnitude p = |p| (in terms of the dot product p · p = p²), it is:

E² = c²p² + (m0c²)²
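The relation E² = (pc)² + (m0c²)² can be illustrated numerically (a minimal sketch with hypothetical values, in units where c = 1):

```python
import math

def total_energy(p, m0, c=1.0):
    """E from the relativistic dispersion relation E^2 = (pc)^2 + (m0 c^2)^2."""
    return math.sqrt((p * c) ** 2 + (m0 * c ** 2) ** 2)

# A particle at rest has E = m0 c^2; a massless particle has E = pc.
assert total_energy(0.0, 3.0) == 3.0
assert total_energy(2.0, 0.0) == 2.0

# For p << m0 c the non-relativistic expansion E ~ m0 c^2 + p^2/(2 m0) holds.
approx = 3.0 + 0.01 ** 2 / (2 * 3.0)
assert abs(total_energy(0.01, 3.0) - approx) < 1e-9
```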
These equations are used together with the energy and momentum operators, which are respectively:
to construct a relativistic wave equation (RWE): a partial differential equation consistent with the energy–momentum relation, which is solved for ψ to predict the quantum dynamics of the particle. For space and time to be placed on equal footing, as in relativity, the orders of space and time partial derivatives should be equal, and ideally as low as possible, so that no initial values of the derivatives need to be specified. This is important for probability interpretations, exemplified below. The lowest possible order of any differential equation is the first (zeroth order derivatives would not form a differential equation).
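A symbolic check (using sympy, with a one-dimensional plane wave as a hypothetical test function) that the energy operator iħ ∂/∂t and momentum operator −iħ ∂/∂x return E and p as eigenvalues:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
E, p, hbar = sp.symbols('E p hbar', positive=True)

# Plane wave psi = exp(i(px - Et)/hbar)
psi = sp.exp(sp.I * (p * x - E * t) / hbar)

# Apply the energy operator i*hbar*d/dt and momentum operator -i*hbar*d/dx
E_applied = sp.I * hbar * sp.diff(psi, t)
p_applied = -sp.I * hbar * sp.diff(psi, x)

# Each operator returns the corresponding eigenvalue times psi
assert sp.simplify(E_applied - E * psi) == 0
assert sp.simplify(p_applied - p * psi) == 0
```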
The Heisenberg picture is another formulation of QM, in which case the wavefunction is time-independent, and the operators contain the time dependence, governed by the equation of motion:
This equation is also true in RQM, provided the Heisenberg operators are modified to be consistent with SR.
Historically, around 1926, Schrödinger and Heisenberg showed that wave mechanics and matrix mechanics are equivalent, a result later furthered by Dirac using transformation theory.
A more modern approach to RWEs, first introduced during the time RWEs were developing for particles of any spin, is to apply representations of the Lorentz group.
Space and time
In classical mechanics and non-relativistic QM, time is an absolute quantity all observers and particles can always agree on, "ticking away" in the background independent of space. Thus in non-relativistic QM one has, for a many-particle system, a wavefunction ψ(r1, r2, ..., t).
In relativistic mechanics, the spatial coordinates and coordinate time are not absolute; any two observers moving relative to each other can measure different locations and times of events. The position and time coordinates combine naturally into a four-dimensional spacetime position corresponding to events, while the energy and 3-momentum combine naturally into the four-momentum of a dynamic particle, as measured in some reference frame; both change according to a Lorentz transformation as one measures in a different frame boosted and/or rotated relative to the original frame. The derivative operators, and hence the energy and 3-momentum operators, are also non-invariant and change under Lorentz transformations.
Under a proper orthochronous Lorentz transformation in Minkowski space, all one-particle quantum states locally transform under some representation of the Lorentz group:
where D is a finite-dimensional representation, in other words a square matrix. Again, ψ is thought of as a column vector containing components with the allowed values of σ. The quantum numbers s and σ, as well as other labels, continuous or discrete, representing other quantum numbers, are suppressed. One value of σ may occur more than once depending on the representation.
Non-relativistic and relativistic Hamiltonians
The classical Hamiltonian for a particle in a potential is the kinetic energy p · p/2m plus the potential energy V(r, t), with the corresponding quantum operator in the Schrödinger picture:

Ĥ = (p̂ · p̂)/2m + V(r, t)
and substituting this into the above Schrödinger equation gives a non-relativistic QM equation for the wavefunction: the procedure is a straightforward substitution of a simple expression. By contrast this is not as easy in RQM; the energy–momentum equation is quadratic in energy and momentum, leading to difficulties. Naively setting:

Ĥ = Ê = √(c²p̂ · p̂ + (m0c²)²)
is not helpful for several reasons. The square root of the operators cannot be used as it stands; it would have to be expanded in a power series before the momentum operator, raised to a power in each term, could act on ψ. As a result of the power series, the space and time derivatives are completely asymmetric: infinite-order in space derivatives but only first order in the time derivative, which is inelegant and unwieldy. Again, there is the problem of the non-invariance of the energy operator, equated to the square root, which is also not invariant. Another problem, less obvious and more severe, is that it can be shown to be nonlocal and can even violate causality: if the particle is initially localized at a point so that ψ is finite there and zero elsewhere, then at any later time the equation predicts delocalization everywhere, even for |x| > ct, which means the particle could arrive at a point before a pulse of light could. This would have to be remedied by the additional constraint ψ(|x| > ct) = 0.
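The infinite-order space derivatives can be made explicit by expanding the square root √(c²p² + m²c⁴) in powers of p, after which p would be replaced by the operator −iħ∇ in every term; a sympy sketch of the expansion:

```python
import sympy as sp

p, m, c = sp.symbols('p m c', positive=True)

# Expand E = sqrt(c^2 p^2 + m^2 c^4) around p = 0
E = sp.sqrt(c**2 * p**2 + m**2 * c**4)
series = sp.series(E, p, 0, 6).removeO()

# Leading terms: rest energy, Newtonian kinetic energy, first relativistic
# correction -- and the series continues with ever-higher even powers of p.
expected = m * c**2 + p**2 / (2 * m) - p**4 / (8 * m**3 * c**2)
assert sp.simplify(series - expected) == 0
```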
There is also the problem of incorporating spin in the Hamiltonian, which isn't a prediction of the non-relativistic Schrödinger theory. Particles with spin have a corresponding spin magnetic moment quantized in units of μB, the Bohr magneton:
where g is the (spin) g-factor for the particle, and Ŝ the spin operator, so they interact with electromagnetic fields. For a particle in an externally applied magnetic field B, the interaction term
has to be added to the above non-relativistic Hamiltonian. By contrast, a relativistic Hamiltonian introduces spin automatically as a consequence of enforcing the relativistic energy–momentum relation.
Relativistic Hamiltonians are analogous to those of non-relativistic QM in the following respect; there are terms including rest mass and interaction terms with externally applied fields, similar to the classical potential energy term, as well as momentum terms like the classical kinetic energy term. A key difference is that relativistic Hamiltonians contain spin operators in the form of matrices, in which the matrix multiplication runs over the spin index , so in general a relativistic Hamiltonian:
is a function of space, time, and the momentum and spin operators.
The Klein–Gordon and Dirac equations for free particles
Substituting the energy and momentum operators directly into the energy–momentum relation may at first sight seem appealing, to obtain the Klein–Gordon equation:
and was discovered by many people because of the straightforward way of obtaining it, notably by Schrödinger in 1925 before he found the non-relativistic equation named after him, and by Klein and Gordon in 1926, who included electromagnetic interactions in the equation. This is relativistically invariant, yet this equation alone isn't a sufficient foundation for RQM for at least two reasons: one is that negative-energy states are solutions, another is the density (given below); and the equation as it stands is applicable only to spinless particles. This equation can be factored into the form:
where α = (α1, α2, α3) and β are not simply numbers or vectors, but 4 × 4 Hermitian matrices that are required to anticommute for i ≠ j:
and square to the identity matrix:
so that terms with mixed second-order derivatives cancel while the second-order derivatives purely in space and time remain. The first factor:
is the Dirac equation. The other factor is also the Dirac equation, but for a particle of negative mass. Each factor is relativistically invariant. The reasoning can be done the other way round: propose the Hamiltonian in the above form, as Dirac did in 1928, then pre-multiply the equation by the other factor of operators, and comparison with the KG equation determines the constraints on α and β. The positive mass equation can continue to be used without loss of continuity. The matrices multiplying ψ suggest it isn't a scalar wavefunction as permitted in the KG equation, but must instead be a four-component entity. The Dirac equation still predicts negative energy solutions, so Dirac postulated that negative energy states are always occupied, because according to the Pauli principle, electronic transitions from positive to negative energy levels in atoms would be forbidden. See Dirac sea for details.
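The constraints on α and β can be checked numerically in the standard Dirac representation (a numpy sketch; this particular representation is a conventional choice, not unique):

```python
import numpy as np

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2), np.zeros((2, 2))

# Dirac representation: alpha_i has sigma_i off-diagonal, beta = diag(I, -I)
alpha = [np.block([[Z2, s], [s, Z2]]) for s in sigma]
beta = np.block([[I2, Z2], [Z2, -I2]])
I4 = np.eye(4)

# Each matrix squares to the identity ...
for m in alpha + [beta]:
    assert np.allclose(m @ m, I4)

# ... and distinct matrices anticommute, which is exactly what makes the
# mixed second-order derivative terms cancel when recovering the KG equation.
mats = alpha + [beta]
for i in range(4):
    for j in range(i + 1, 4):
        assert np.allclose(mats[i] @ mats[j] + mats[j] @ mats[i], 0)
```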
Densities and currents
In non-relativistic quantum mechanics, the square modulus of the wavefunction gives the probability density function . This is the Copenhagen interpretation, circa 1927. In RQM, while is a wavefunction, the probability interpretation is not the same as in non-relativistic QM. Some RWEs do not predict a probability density or probability current (really meaning probability current density) because they are not positive-definite functions of space and time. The Dirac equation does:
where the dagger denotes the Hermitian adjoint (authors usually write for the Dirac adjoint) and is the probability four-current, while the Klein–Gordon equation does not:
where ∂μ is the four-gradient. Since the initial values of both ψ and ∂ψ/∂t may be freely chosen, the density can be negative.
Instead, what at first sight looks like a "probability density" and "probability current" has to be reinterpreted as charge density and current density when multiplied by the electric charge. Then, the wavefunction is not a wavefunction at all, but reinterpreted as a field. The density and current of electric charge always satisfy a continuity equation:
as charge is a conserved quantity. Probability density and current also satisfy a continuity equation because probability is conserved, however this is only possible in the absence of interactions.
Spin and electromagnetically interacting particles
Including interactions in RWEs is generally difficult. Minimal coupling is a simple way to include the electromagnetic interaction. For one charged particle of electric charge in an electromagnetic field, given by the magnetic vector potential defined by the magnetic field , and electric scalar potential , this is:
where is the four-momentum that has a corresponding 4-momentum operator, and the four-potential. In the following, the non-relativistic limit refers to the limiting cases:
that is, the total energy of the particle is approximately the rest energy for small electric potentials, and the momentum is approximately the classical momentum.
Spin 0
In RQM, the KG equation admits the minimal coupling prescription;
In the case where the charge is zero, the equation reduces trivially to the free KG equation, so nonzero charge is assumed below. This is a scalar equation that is invariant under the irreducible one-dimensional scalar representation of the Lorentz group. This means that all of its solutions will belong to a direct sum of such representations. Solutions that do not belong to the irreducible representation will have two or more independent components. Such solutions cannot in general describe particles with nonzero spin, since spin components are not independent. Other constraints will have to be imposed for that, e.g. the Dirac equation for spin 1/2; see below. Thus if a system satisfies the KG equation only, it can only be interpreted as a system with zero spin.
The electromagnetic field is treated classically according to Maxwell's equations and the particle is described by a wavefunction, the solution to the KG equation. The equation is, as it stands, not always very useful, because massive spinless particles, such as the π-mesons, experience the much stronger strong interaction in addition to the electromagnetic interaction. It does, however, correctly describe charged spinless bosons in the absence of other interactions.
The KG equation is applicable to spinless charged bosons in an external electromagnetic potential. As such, the equation cannot be applied to the description of atoms, since the electron is a spin-1/2 particle. In the non-relativistic limit the equation reduces to the Schrödinger equation for a spinless charged particle in an electromagnetic field:
Spin 1/2
Non-relativistically, spin was introduced phenomenologically in the Pauli equation by Pauli in 1927 for particles in an electromagnetic field:
by means of the 2 × 2 Pauli matrices, and is not just a scalar wavefunction as in the non-relativistic Schrödinger equation, but a two-component spinor field:
where the subscripts ↑ and ↓ refer to the "spin up" (σ = +1/2) and "spin down" (σ = −1/2) states.
In RQM, the Dirac equation can also incorporate minimal coupling, rewritten from above;
and was the first equation to accurately predict spin, a consequence of the 4 × 4 gamma matrices . There is a 4 × 4 identity matrix pre-multiplying the energy operator (including the potential energy term), conventionally not written for simplicity and clarity (i.e. treated like the number 1). Here is a four-component spinor field, which is conventionally split into two two-component spinors in the form:
The 2-spinor corresponds to a particle with 4-momentum and charge and two spin states (σ = ±1/2, as before). The other 2-spinor corresponds to a similar particle with the same mass and spin states, but negative 4-momentum and negative charge, that is, negative energy states, time-reversed momentum, and negated charge. This was the first interpretation and prediction of a particle and corresponding antiparticle. See Dirac spinor and bispinor for further description of these spinors. In the non-relativistic limit the Dirac equation reduces to the Pauli equation (see Dirac equation for how). When applied to a one-electron atom or ion, setting A and φ to the appropriate electrostatic potential, additional relativistic terms include the spin–orbit interaction, electron gyromagnetic ratio, and Darwin term. In ordinary QM these terms have to be put in by hand and treated using perturbation theory. The positive energies do account accurately for the fine structure.
Within RQM, for massless particles the Dirac equation reduces to:
the first of which is the Weyl equation, a considerable simplification applicable for massless neutrinos. This time there is a 2 × 2 identity matrix pre-multiplying the energy operator conventionally not written. In RQM it is useful to take this as the zeroth Pauli matrix which couples to the energy operator (time derivative), just as the other three matrices couple to the momentum operator (spatial derivatives).
The Pauli and gamma matrices were introduced in theoretical physics, rather than in pure mathematics itself. They have applications to quaternions and to the SU(2) and SO(3) Lie groups, because they satisfy the important commutator [ , ] and anticommutator [ , ]+ relations respectively:
where is the three-dimensional Levi-Civita symbol. The gamma matrices form bases in Clifford algebra, and have a connection to the components of the flat spacetime Minkowski metric in the anticommutation relation:
(This can be extended to curved spacetime by introducing vierbeins, but is not the subject of special relativity).
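Both sets of relations can be verified directly (a numpy sketch, using the Dirac representation for the gamma matrices):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2), np.zeros((2, 2))

# Pauli: [s_a, s_b] = 2i eps_abc s_c and {s_a, s_b} = 2 delta_ab I
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
for a in range(3):
    for b in range(3):
        comm = sigma[a] @ sigma[b] - sigma[b] @ sigma[a]
        assert np.allclose(comm, 2j * sum(eps[a, b, c] * sigma[c] for c in range(3)))
        anti = sigma[a] @ sigma[b] + sigma[b] @ sigma[a]
        assert np.allclose(anti, 2 * (a == b) * I2)

# Gamma (Dirac representation): {gamma_mu, gamma_nu} = 2 eta_mu_nu I
beta = np.block([[I2, Z2], [Z2, -I2]])
gamma = [beta] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
```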
In 1929, the Breit equation was found to describe two or more electromagnetically interacting massive spin fermions to first-order relativistic corrections; one of the first attempts to describe such a relativistic quantum many-particle system. This is, however, still only an approximation, and the Hamiltonian includes numerous long and complicated sums.
Helicity and chirality
The helicity operator is defined by;
where p is the momentum operator, S the spin operator for a particle of spin s, E is the total energy of the particle, and m0 its rest mass. Helicity indicates the orientations of the spin and translational momentum vectors. Helicity is frame-dependent because of the 3-momentum in the definition, and is quantized due to spin quantization, which has discrete positive values for parallel alignment, and negative values for antiparallel alignment.
An automatic occurrence in the Dirac equation (and the Weyl equation) is the projection of the spin operator on the 3-momentum (times c), , which is the helicity (for the spin case) times .
For massless particles the helicity simplifies to:
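For a spin-1/2 particle the helicity operator reduces to (1/2)σ · p̂ in units of ħ (with p̂ the unit momentum direction); its eigenvalues ±1/2 can be checked for an arbitrary direction (a numpy sketch with a hypothetical momentum vector):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Arbitrary (hypothetical) 3-momentum, normalised to a unit vector
p = np.array([0.3, -0.4, 1.2])
n = p / np.linalg.norm(p)

# Helicity operator (1/2) sigma . n, in units of hbar
h = 0.5 * sum(n[i] * sigma[i] for i in range(3))

# Parallel and antiparallel spin alignments give +1/2 and -1/2
eigenvalues = np.sort(np.linalg.eigvalsh(h))
assert np.allclose(eigenvalues, [-0.5, 0.5])
```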
Higher spins
The Dirac equation can only describe particles of spin 1/2. Beyond the Dirac equation, RWEs have been applied to free particles of various spins. In 1936, Dirac extended his equation to all fermions; three years later Fierz and Pauli rederived the same equation. The Bargmann–Wigner equations were found in 1948 using Lorentz group theory, applicable for all free particles with any spin. Considering the factorization of the KG equation above, and more rigorously by Lorentz group theory, it becomes natural to introduce spin in the form of matrices.
The wavefunctions are multicomponent spinor fields, which can be represented as column vectors of functions of space and time:
where the expression on the right is the Hermitian conjugate. For a massive particle of spin s, there are 2s + 1 components for the particle, and another 2s + 1 for the corresponding antiparticle (there are 2s + 1 possible values of σ in each case), altogether forming a 2(2s + 1)-component spinor field:
with the + subscript indicating the particle and − subscript for the antiparticle. However, for massless particles of spin s, there are only ever two-component spinor fields; one is for the particle in one helicity state corresponding to +s and the other for the antiparticle in the opposite helicity state corresponding to −s:
According to the relativistic energy–momentum relation, all massless particles travel at the speed of light, so particles traveling at the speed of light are also described by two-component spinors. Historically, Élie Cartan found the most general form of spinors in 1913, prior to the spinors revealed in the RWEs after 1927.
For equations describing higher-spin particles, the inclusion of interactions is nowhere near as simple as minimal coupling; they lead to incorrect predictions and self-inconsistencies. For spin greater than 1/2, the RWE is not fixed by the particle's mass, spin, and electric charge; the electromagnetic moments (electric dipole moments and magnetic dipole moments) allowed by the spin quantum number are arbitrary. (Theoretically, magnetic charge would contribute also). For example, the spin-1/2 case only allows a magnetic dipole, but for spin-1 particles magnetic quadrupoles and electric dipoles are also possible. For more on this topic, see multipole expansion and (for example) Cédric Lorcé (2009).
Velocity operator
The Schrödinger/Pauli velocity operator can be defined for a massive particle using the classical definition p = m v, and substituting quantum operators in the usual way:
which has eigenvalues that take any value. In RQM, the Dirac theory, it is:
which must have eigenvalues between ±c. See Foldy–Wouthuysen transformation for more theoretical background.
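That the components of the Dirac velocity operator cα have eigenvalues ±c can be seen directly (a numpy sketch in the standard representation):

```python
import numpy as np

c = 299_792_458.0  # speed of light, m/s

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
Z2 = np.zeros((2, 2))

# Dirac alpha_z in the standard representation; velocity component c*alpha_z
alpha_z = np.block([[Z2, sigma_z], [sigma_z, Z2]])
v_z = c * alpha_z

# Since alpha_z squares to the identity, its eigenvalues are +/-1 (each
# doubly degenerate), so the velocity component has eigenvalues +/-c.
eigenvalues = np.sort(np.linalg.eigvalsh(v_z))
assert np.allclose(eigenvalues, [-c, -c, c, c])
```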
Relativistic quantum Lagrangians
The Hamiltonian operators in the Schrödinger picture are one approach to forming the differential equations for . An equivalent alternative is to determine a Lagrangian (really meaning Lagrangian density), then generate the differential equation by the field-theoretic Euler–Lagrange equation:
For some RWEs, a Lagrangian can be found by inspection. For example, the Dirac Lagrangian is:
and Klein–Gordon Lagrangian is:
This is not possible for all RWEs, and it is one reason the Lorentz group theoretic approach is important and appealing: fundamental invariance and symmetries in space and time can be used to derive RWEs using appropriate group representations. The Lagrangian approach with field interpretation of ψ is the subject of QFT rather than RQM: Feynman's path integral formulation uses invariant Lagrangians rather than Hamiltonian operators, since the latter can become extremely complicated, see (for example) Weinberg (1995).
Relativistic quantum angular momentum
In non-relativistic QM, the angular momentum operator is formed from the classical pseudovector definition L = r × p. In RQM, the position and momentum operators are inserted directly where they appear in the orbital relativistic angular momentum tensor defined from the four-dimensional position and momentum of the particle, equivalently a bivector in the exterior algebra formalism:
which has six independent components altogether: three are the non-relativistic 3-orbital angular momenta (M23 = L1, M31 = L2, M12 = L3), and the other three (M01, M02, M03) are boosts of the centre of mass of the rotating object. An additional relativistic-quantum term has to be added for particles with spin. For a particle of rest mass m, the total angular momentum tensor is:
where the star denotes the Hodge dual, and
is the Pauli–Lubanski pseudovector. For more on relativistic spin, see (for example) Troshin & Tyurin (1994).
Thomas precession and spin–orbit interactions
In 1926, the Thomas precession was discovered: relativistic corrections to the spin of elementary particles, with application in the spin–orbit interaction of atoms and the rotation of macroscopic objects. In 1939 Wigner derived the Thomas precession.
In classical electromagnetism and special relativity, an electron moving with a velocity through an electric field but not a magnetic field , will in its own frame of reference experience a Lorentz-transformed magnetic field :
In the non-relativistic limit :
so the non-relativistic spin interaction Hamiltonian becomes:
where the first term is already the non-relativistic magnetic moment interaction, and the second term the relativistic correction of order (v/c)², but this disagrees with experimental atomic spectra by a factor of 2. It was pointed out by L. Thomas that there is a second relativistic effect: an electric field component perpendicular to the electron velocity causes an additional acceleration of the electron perpendicular to its instantaneous velocity, so the electron moves in a curved path. The electron moves in a rotating frame of reference, and this additional precession of the electron is called the Thomas precession. It can be shown that the net result of this effect is that the spin–orbit interaction is reduced by half, as if the magnetic field experienced by the electron has only one-half the value, and the relativistic correction in the Hamiltonian is:
In the case of RQM, the factor of 1/2 is predicted by the Dirac equation.
History
The events which led to and established RQM, and the continuation beyond into quantum electrodynamics (QED), are summarized below [see, for example, R. Resnick and R. Eisberg (1985), and P. W. Atkins (1974)]. More than half a century of experimental and theoretical research, from the 1890s through to the 1950s, in the new and mysterious quantum theory revealed that a number of phenomena cannot be explained by QM alone. SR, found at the turn of the 20th century, was found to be a necessary component, leading to their unification: RQM. Theoretical predictions and experiments mainly focused on the newly found atomic physics, nuclear physics, and particle physics, by considering spectroscopy, diffraction and scattering of particles, and the electrons and nuclei within atoms and molecules. Numerous results are attributed to the effects of spin.
Relativistic description of particles in quantum phenomena
Albert Einstein in 1905 explained the photoelectric effect with a particle description of light as photons. In 1916, Sommerfeld explained fine structure: the splitting of the spectral lines of atoms due to first-order relativistic corrections. The Compton effect of 1923 provided more evidence that special relativity does apply; in this case to a particle description of photon–electron scattering. de Broglie extended wave–particle duality to matter: the de Broglie relations, which are consistent with special relativity and quantum mechanics. By 1927, Davisson and Germer and separately G. Thomson successfully diffracted electrons, providing experimental evidence of wave–particle duality.
Experiments
1897 J. J. Thomson discovers the electron and measures its mass-to-charge ratio. Discovery of the Zeeman effect: the splitting of a spectral line into several components in the presence of a static magnetic field.
1908 Millikan measures the charge on the electron and finds experimental evidence of its quantization, in the oil drop experiment.
1911 Alpha particle scattering in the Geiger–Marsden experiment, led by Rutherford, showed that atoms possess an internal structure: the atomic nucleus.
1913 The Stark effect is discovered: splitting of spectral lines due to a static electric field (compare with the Zeeman effect).
1922 Stern–Gerlach experiment: experimental evidence of spin and its quantization.
1924 Stoner studies splitting of energy levels in magnetic fields.
1932 Experimental discovery of the neutron by Chadwick, and positrons by Anderson, confirming the theoretical prediction of positrons.
1958 Discovery of the Mössbauer effect: resonant and recoil-free emission and absorption of gamma radiation by atomic nuclei bound in a solid, useful for accurate measurements of gravitational redshift and time dilation, and in the analysis of nuclear electromagnetic moments in hyperfine interactions.
Quantum non-locality and relativistic locality
In 1935, Einstein, Podolsky, and Rosen published a paper concerning quantum entanglement of particles, questioning quantum nonlocality and the apparent violation of causality upheld in SR: particles can appear to interact instantaneously at arbitrary distances. This was a misconception since information is not and cannot be transferred in the entangled states; rather the information transmission is in the process of measurement by two observers (one observer has to send a signal to the other, which cannot exceed c). QM does not violate SR. In 1959, Bohm and Aharonov published a paper on the Aharonov–Bohm effect, questioning the status of electromagnetic potentials in QM. The EM field tensor and EM 4-potential formulations are both applicable in SR, but in QM the potentials enter the Hamiltonian (see above) and influence the motion of charged particles even in regions where the fields are zero. In 1964, Bell's theorem was published in a paper on the EPR paradox, showing that QM cannot be derived from local hidden-variable theories if locality is to be maintained.
The Lamb shift
In 1947, the Lamb shift was discovered: a small difference in the 2S and 2P levels of hydrogen, due to the interaction between the electron and the vacuum. Lamb and Retherford experimentally measured stimulated radio-frequency transitions between the 2S and 2P hydrogen levels using microwave radiation. An explanation of the Lamb shift was presented by Bethe. Papers on the effect were published in the early 1950s.
Development of quantum electrodynamics
1927 Dirac establishes the field of QED, also coining the term "quantum electrodynamics".
1943 Tomonaga begins work on renormalization, influential in QED.
1947 Schwinger calculates the anomalous magnetic moment of the electron. Kusch measures the anomalous electron magnetic moment, confirming one of QED's great predictions.
Nanjing Man

Nanjing Man is a specimen of Homo erectus (possibly Homo pekinensis) found in China. Large fragments of one male and one female skull and a molar tooth were discovered in 1993 in Hulu Cave on the Tangshan (汤山) hills in Jiangning District, Nanjing. The specimens were found in the Hulu limestone cave at a depth of 60–97 cm by Liu Luhong, a local worker. Dating the fossils yielded an estimated age of 580,000 to 620,000 years old.
Discovery
In 1992, Mu Xi-nan (穆西南), Xu Hankui (许汉奎), Mu Daocheng (穆道成), and Zhong Shilan (钟石兰) with the Nanjing Institute of Geology and Paleontology (NIGP) identified Hulu Cave near the Tangshan Subdistrict in Jiangning District, Nanjing (roughly east of the city center of Nanjing) as a mammalian fossil bearing site, and organised further excavations with the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP) headquartered in Beijing. In March 1993, local labourer Liu Luhong discovered two partial skull fragments (Nanjing 1 and 2), the first retaining most of the face, and an upper molar (Nanjing 3).
The mammal assemblage indicated Huludong was roughly contemporaneous with the Zhoukoudian cave site near Beijing, home of the Peking Man (the reason why the IVPP had joined the excavations).
Age determination
Researchers used mass spectrometric U-series dating to identify the age of the skulls. Best estimates date the skull to be at least 580,000 years old. This research, done in 2001, estimated the age of the skulls to be 270,000 years older than previous estimates, which had been made with different dating methods such as electron spin resonance dating and alpha-counting U-series. However, by using mass spectrometric U-series dating, the age of the tooth found at the Nanjing site was estimated to be only 400,000 years old. Researchers proposed that the enamel used to date the tooth may not have the same uranium uptake as the skulls, leading to the discrepancy in estimated age. Another study, from 1999, estimated one skull to be at least 500,000 years old, while dating the other skull between 250,000 and 500,000 years old using TIMS dating.
Impact of the Nanjing fossils
Homo erectus occupation of Eastern Asia was an established idea well before the discovery of Homo erectus from Nanjing. Nanjing Man is one of several middle Pleistocene dated Homo erectus fossil finds in eastern China, the most well-known of which is Peking Man. However, dating the Nanjing Man fossils between 580,000 and 620,000 years ago pushed the estimate for Homo erectus colonisation of eastern Asia almost 270,000 years earlier.
The Nanjing Man fossil discovery coincided with the paleoanthropological debate on the population dynamics of modern humans and their relation to other species of the genus Homo. The extended occupation of East Asia by Homo erectus suggested by the dating of the Nanjing fossils supports the hypothesis that Homo erectus lived in Asia before pre-modern Homo sapiens existed. A scientific consensus on the dispersal of Homo sapiens throughout the globe was reached in the early 21st century. However, the influence of East Asian Homo erectus on modern human ancestry remains unclear.
Morphological features of the Nanjing Man fossils such as cranial capacity and the size of various cranial metrics differ significantly from other Chinese hominins. Despite this, morphometric and morphological features fall well within the range expected for Homo erectus. A high diversity in cranial morphological features in Chinese Homo erectus has been identified in a number of studies.
Present location
The skull fragments collected at Hulu Cave are currently displayed at the Nanjing Homo erectus fossil museum, along with other educational information about Nanjing Man and the colonisation of China by Homo erectus.
Nagoya Municipal Subway

The Nagoya Municipal Subway, also referred to as simply the Nagoya Subway, is a rapid transit system serving Nagoya, the capital of Aichi Prefecture in Japan. It consists of six lines that cover of route and serve 87 stations. Approximately 90% of the subway's total track length is underground.
The subway system is owned and operated by the Nagoya City Transportation Bureau and, like other large Japanese cities including Tokyo and Osaka, is heavily complemented by suburban rail, together forming an extensive network of 47 lines in and around Greater Nagoya. Of them, the subway lines represent 38% of Greater Nagoya's total rail ridership of 3 million passengers a day.
In 2002, the system introduced Hatchii as its official mascot.
Lines and infrastructure
The six lines that comprise the Nagoya subway network are, for the most part, independent. However, Meikō Line services partially interline with the Meijō Line, and the operations of both lines are combined. Therefore, there are in fact five distinct services on the subway. They are mostly self-contained, but two of its lines have through services onto lines owned and operated by Meitetsu, the largest private railway operator in the region. One of these, the Kamiida Line, is essentially an extension of the Meitetsu Komaki Line to which it connects.
The first two subway lines, the Higashiyama and Meijō/Meikō Lines, run on standard gauge track and use 600 volt DC electrification from a third rail. They are three of the eleven subway lines in Japan which use both third-rail electrification and standard gauge track (the Ginza and Marunouchi lines in Tokyo are the only other two lines to use third rail at that voltage; five of the eight lines of the Osaka Metro and the Blue Line in Yokohama all use 750 V DC third rail). Subsequent lines were built to narrow gauge and employ 1,500 volt DC electrification from overhead lines, in common with most other rapid transit lines in the country.
As with other railway lines in Japan, tickets can be purchased from ticket vending machines in stations. Since February 2011, this has largely been supplemented by Manaca, a rechargeable smart card. In 2012, Manaca replaced Tranpass, the predecessor integrated ticketing system, which was also able to be used at subway stations and for other connected transportation systems in the region.
On January 4, 2023, four stations were renamed:
Nakamura Kuyakusho → Taiko-dori
Shiyakusho (City Hall) → Nagoyajo (Nagoya Castle)
Temma-cho → Atsuta Jingu Temma-cho
Jingu Nishi → Atsuta Jingu Nishi
List of Nagoya Municipal Subway lines
Sediment transport

Sediment transport is the movement of solid particles (sediment), typically due to a combination of gravity acting on the sediment, and the movement of the fluid in which the sediment is entrained. Sediment transport occurs in natural systems where the particles are clastic rocks (sand, gravel, boulders, etc.), mud, or clay; the fluid is air, water, or ice; and the force of gravity acts to move the particles along the sloping surface on which they are resting. Sediment transport due to fluid motion occurs in rivers, oceans, lakes, seas, and other bodies of water due to currents and tides. Transport is also caused by glaciers as they flow, and on terrestrial surfaces under the influence of wind. Sediment transport due only to gravity can occur on sloping surfaces in general, including hillslopes, scarps, cliffs, and the continental shelf—continental slope boundary.
Sediment transport is important in the fields of sedimentary geology, geomorphology, civil engineering, hydraulic engineering and environmental engineering (see applications, below). Knowledge of sediment transport is most often used to determine whether erosion or deposition will occur, the magnitude of this erosion or deposition, and the time and distance over which it will occur.
Environments
Aeolian
Aeolian or eolian (depending on the parsing of æ) is the term for sediment transport by wind. This process results in the formation of ripples and sand dunes. Typically, the size of the transported sediment is fine sand (<1 mm) and smaller, because air is a fluid with low density and viscosity, and can therefore not exert very much shear on its bed.
Bedforms are generated by aeolian sediment transport in the terrestrial near-surface environment. Ripples and dunes form as a natural self-organizing response to sediment transport.
Aeolian sediment transport is common on beaches and in the arid regions of the world, because it is in these environments that vegetation does not prevent the presence and motion of fields of sand.
Wind-blown very fine-grained dust is capable of entering the upper atmosphere and moving across the globe. Dust from the Sahara is deposited on the Canary Islands and islands in the Caribbean, and dust from the Gobi Desert has been deposited on the western United States. This sediment is important to the soil budget and ecology of several islands.
Deposits of fine-grained wind-blown glacial sediment are called loess.
Fluvial
Coastal
Coastal sediment transport takes place in near-shore environments due to the motions of waves and currents. At the mouths of rivers, coastal sediment and fluvial sediment transport processes mesh to create river deltas.
Coastal sediment transport results in the formation of characteristic coastal landforms such as beaches, barrier islands, and capes.
Glacial
As glaciers move over their beds, they entrain and move material of all sizes. Glaciers can carry the largest sediment, and areas of glacial deposition often contain a large number of glacial erratics, many of which are several metres in diameter. Glaciers also pulverize rock into "glacial flour", which is so fine that it is often carried away by winds to create loess deposits thousands of kilometres afield. Sediment entrained in glaciers often moves approximately along the glacial flowlines, causing it to appear at the surface in the ablation zone.
Hillslope
In hillslope sediment transport, a variety of processes move regolith downslope. These include:
Soil creep
Tree throw
Movement of soil by burrowing animals
Slumping and landsliding of the hillslope
These processes generally combine to give the hillslope a profile that looks like a solution to the diffusion equation, where the diffusivity is a parameter that relates to the ease of sediment transport on the particular hillslope. For this reason, the tops of hills generally have a parabolic concave-up profile, which grades into a convex-up profile around valleys.
As hillslopes steepen, however, they become more prone to episodic landslides and other mass wasting events. Therefore, hillslope processes are better described by a nonlinear diffusion equation in which classic diffusion dominates for shallow slopes and erosion rates go to infinity as the hillslope reaches a critical angle of repose.
Debris flow
Large masses of material are moved in debris flows, hyperconcentrated mixtures of mud, clasts that range up to boulder-size, and water. Debris flows move as granular flows down steep mountain valleys and washes. Because they transport sediment as a granular mixture, their transport mechanisms and capacities scale differently from those of fluvial systems.
Applications
Sediment transport is applied to solve many environmental, geotechnical, and geological problems. Measuring or quantifying sediment transport or erosion is therefore important for coastal engineering. Several sediment erosion devices have been designed in order to quantify sediment erosion (e.g., Particle Erosion Simulator (PES)). One such device, also referred to as the BEAST (Benthic Environmental Assessment Sediment Tool) has been calibrated in order to quantify rates of sediment erosion.
Movement of sediment is important in providing habitat for fish and other organisms in rivers. Therefore, managers of highly regulated rivers, which are often sediment-starved due to dams, are often advised to stage short floods to refresh the bed material and rebuild bars. This is also important, for example, in the Grand Canyon of the Colorado River, to rebuild shoreline habitats also used as campsites.
Sediment discharge into a reservoir formed by a dam forms a reservoir delta. This delta will fill the basin, and eventually, either the reservoir will need to be dredged or the dam will need to be removed. Knowledge of sediment transport can be used to properly plan to extend the life of a dam.
Geologists can use inverse solutions of transport relationships to understand flow depth, velocity, and direction, from sedimentary rocks and young deposits of alluvial materials.
Flow in culverts, over dams, and around bridge piers can cause erosion of the bed. This erosion can damage the environment and expose or unsettle the foundations of the structure. Therefore, good knowledge of the mechanics of sediment transport in a built environment are important for civil and hydraulic engineers.
When suspended sediment transport is increased due to human activities, causing environmental problems including the filling of channels, it is called siltation after the grain-size fraction dominating the process.
Initiation of motion
Stress balance
For a fluid to begin transporting sediment that is currently at rest on a surface, the boundary (or bed) shear stress exerted by the fluid must exceed the critical shear stress for the initiation of motion of grains at the bed. This basic criterion for the initiation of motion can be written as:
τ_b > τ_c.

This is typically represented by a comparison between a dimensionless shear stress τ* and a dimensionless critical shear stress τ*_c. The nondimensionalization is in order to compare the driving forces of particle motion (shear stress) to the resisting forces that would make it stationary (particle density and size). This dimensionless shear stress, τ*, is called the Shields parameter and is defined as:

τ* = τ_b / ((ρ_s − ρ) g D).

And the new equation to solve becomes:

τ* > τ*_c.
The equations included here describe sediment transport for clastic, or granular, sediment. They do not work for clays and muds because these types of floccular sediments do not fit the geometric simplifications in these equations, and also interact through electrostatic forces. The equations were also designed for fluvial sediment transport of particles carried along in a liquid flow, such as that in a river, canal, or other open channel.

Only one size of particle is considered in this equation. However, river beds are often formed by a mixture of sediment of various sizes. In the case of partial motion, where only a part of the sediment mixture moves, the river bed becomes enriched in large gravel as the smaller sediments are washed away. The smaller sediments present under this layer of large gravel have a lower probability of movement, and total sediment transport decreases. This is called the armouring effect. Other forms of armouring of sediment or decreasing rates of sediment erosion can be caused by carpets of microbial mats, under conditions of high organic loading.
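The initiation-of-motion criterion above can be sketched numerically. In this minimal sketch the function names are mine, and the sample values (quartz in water, a critical Shields stress of 0.06 for a uniform bed) are illustrative assumptions, not prescriptions from the text:

```python
# Minimal sketch of the initiation-of-motion criterion:
# tau* = tau_b / ((rho_s - rho) * g * D) compared with an assumed
# critical value tau*_c = 0.06 (uniform bed). Values are illustrative.

RHO_S = 2650.0   # quartz sediment density, kg/m^3
RHO   = 1000.0   # water density, kg/m^3
G     = 9.81     # gravitational acceleration, m/s^2

def shields_stress(tau_b, d):
    """Dimensionless (Shields) stress for bed shear stress tau_b (Pa), grain size d (m)."""
    return tau_b / ((RHO_S - RHO) * G * d)

def moves(tau_b, d, tau_star_c=0.06):
    """True if the dimensionless stress exceeds the critical value."""
    return shields_stress(tau_b, d) > tau_star_c

# 10 Pa of bed shear stress on 2 mm gravel:
print(shields_stress(10.0, 0.002), moves(10.0, 0.002))
```

For the sample inputs the Shields stress is about 0.31, well above the assumed threshold, so the grains move.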
Critical shear stress
The Shields diagram empirically shows how the dimensionless critical shear stress (i.e. the dimensionless shear stress required for the initiation of motion) is a function of a particular form of the particle Reynolds number, or Reynolds number related to the particle. This allows the criterion for the initiation of motion to be rewritten in terms of a solution for a specific version of the particle Reynolds number, called Re_*.

This can then be solved by using the empirically derived Shields curve to find τ*_c as a function of a specific form of the particle Reynolds number called the boundary Reynolds number. The mathematical solution of the equation was given by Dey.
Particle Reynolds number
In general, a particle Reynolds number has the form:

Re_p = U D / ν

where U is a characteristic particle velocity, D is the grain diameter (a characteristic particle size), and ν is the kinematic viscosity, which is given by the dynamic viscosity, μ, divided by the fluid density, ρ.

The specific particle Reynolds number of interest is called the boundary Reynolds number, and it is formed by replacing the velocity term in the particle Reynolds number by the shear velocity, u_*, which is a way of rewriting shear stress in terms of velocity:

u_* = √(τ_b / ρ)

where τ_b is the bed shear stress (described below), and κ is the von Kármán constant, where

κ ≈ 0.40.

The particle Reynolds number is therefore given by:

Re_* = u_* D / ν.
Bed shear stress
The boundary Reynolds number can be used with the Shields diagram to empirically solve the equation

τ*_c = f(Re_*),

which solves the right-hand side of the equation

τ* > τ*_c.

In order to solve the left-hand side, expanded as

τ* = τ_b / ((ρ_s − ρ) g D),

the bed shear stress, τ_b, needs to be found. There are several ways to solve for the bed shear stress. The simplest approach is to assume the flow is steady and uniform, using the reach-averaged depth and slope. Because it is difficult to measure shear stress in situ, this method is also one of the most commonly used. The method is known as the depth-slope product.
Depth-slope product
For a river undergoing approximately steady, uniform equilibrium flow, of approximately constant depth h and slope angle θ over the reach of interest, and whose width is much greater than its depth, the bed shear stress is given by some momentum considerations stating that the gravity force component in the flow direction equals exactly the friction force. For a wide channel, it yields:

τ_b = ρ g h sin(θ)

For shallow slope angles, which are found in almost all natural lowland streams, the small-angle formula shows that sin(θ) is approximately equal to tan(θ), which is given by S, the slope. Rewritten with this:

τ_b = ρ g h S
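The depth-slope product is a one-line computation; the sketch below illustrates it with assumed values for a small lowland stream:

```python
# Sketch of the depth-slope product tau_b = rho * g * h * S for a wide
# channel with a shallow slope (S = tan(theta)). Sample values are assumed.

RHO = 1000.0   # water density, kg/m^3
G   = 9.81     # gravitational acceleration, m/s^2

def bed_shear_stress(h, s):
    """Reach-averaged bed shear stress (Pa) for flow depth h (m) and slope s (-)."""
    return RHO * G * h * s

print(bed_shear_stress(1.0, 0.001))  # 1 m deep stream at 0.1% slope -> 9.81 Pa
```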
Shear velocity, velocity, and friction factor
For the steady case, by extrapolating the depth-slope product and the equation for shear velocity:

τ_b = ρ g h S,  u_* = √(τ_b / ρ),

the depth-slope product can be rewritten as:

u_* = √(g h S).

u_* is related to the mean flow velocity, ū, through the generalized Darcy–Weisbach friction factor, C_f, which is equal to the Darcy–Weisbach friction factor divided by 8 (for mathematical convenience). Inserting this friction factor,

τ_b = ρ C_f ū².
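The relations above can be combined in a short sketch; the friction factor value C_f = 0.01 used below is an assumed illustrative number, not a recommendation:

```python
import math

# Sketch relating the depth-slope product, shear velocity, and mean velocity
# via the generalized friction factor C_f (tau_b = rho * C_f * ubar^2).
# The value C_f = 0.01 is an assumed illustrative number.

G = 9.81  # gravitational acceleration, m/s^2

def shear_velocity(h, s):
    """u* = sqrt(g h S) for depth h (m) and slope s (-)."""
    return math.sqrt(G * h * s)

def mean_velocity(u_star, c_f=0.01):
    """ubar from u*^2 = C_f * ubar^2."""
    return u_star / math.sqrt(c_f)

u = shear_velocity(1.0, 0.001)
print(u, mean_velocity(u))
```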
Unsteady flow
For all flows that cannot be simplified as a single-slope infinite channel (as in the depth-slope product, above), the bed shear stress can be locally found by applying the Saint-Venant equations for continuity, which consider accelerations within the flow.
Example
Set-up
The criterion for the initiation of motion, established earlier, states that

τ* > τ*_c.

In this equation,

τ* = τ_b / ((ρ_s − ρ) g D), and therefore

τ_b / ((ρ_s − ρ) g D) > τ*_c.

τ*_c is a function of the boundary Reynolds number, a specific type of particle Reynolds number:

τ*_c = f(Re_*).

For a particular particle Reynolds number, τ*_c will be an empirical constant given by the Shields curve or by another set of empirical data (depending on whether or not the grain size is uniform).

Therefore, the final equation to solve is:

τ_b / ((ρ_s − ρ) g D) > τ*_c.
Solution
Some assumptions allow the solution of the above equation.
The first assumption is that a good approximation of reach-averaged shear stress is given by the depth-slope product. The equation then can be rewritten as:

(ρ g h S) / ((ρ_s − ρ) g D) > τ*_c.

Moving and re-combining the terms produces:

h S > ((ρ_s − ρ)/ρ) D τ*_c = R D τ*_c,

where R is the submerged specific gravity of the sediment.
The second assumption is that the particle Reynolds number is high. This typically applies to particles of gravel size or larger in a stream, and means that the critical shear stress is a constant. The Shields curve shows that for a bed with a uniform grain size,

τ*_c = 0.06.

Later researchers have shown that this value is closer to

τ*_c = 0.03

for more uniformly sorted beds. Therefore the replacement

τ*_c = 0.06 or 0.03

is used to insert both values at the end.
The equation now reads:

h S = τ*_c R D.
This final expression shows the product of the channel depth and slope is equal to the Shield's criterion times the submerged specific gravity of the particles times the particle diameter.
For a typical situation, such as quartz-rich sediment (ρ_s = 2650 kg/m³) in water (ρ = 1000 kg/m³), the submerged specific gravity is equal to 1.65.

Plugging this into the equation above,

h S = 1.65 τ*_c D.

For the Shields criterion of τ*_c = 0.06, 0.06 × 1.65 = 0.099, which is well within standard margins of error of 0.1. Therefore, for a uniform bed,

h S ≈ 0.1 D.

For these situations, the product of the depth and slope of the flow should be 10% of the median grain diameter.
The mixed-grain-size bed value is τ*_c = 0.03, which is supported by more recent research as being more broadly applicable because most natural streams have mixed grain sizes. If this value is used, and D is changed to D_50 ("50" for the 50th percentile, or the median grain size, as an appropriate value for a mixed-grain-size bed), the equation becomes:

h S ≈ 0.05 D_50

which means that the depth times the slope should be about 5% of the median grain diameter in the case of a mixed-grain-size bed.
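The worked example can be checked numerically. The helper name and the sample 5 cm grain size below are illustrative assumptions:

```python
# Numeric check of the worked example: with R = 1.65, motion begins when the
# depth-slope product h*S exceeds tau*_c * R * D, i.e. about 10% of the grain
# diameter for tau*_c = 0.06 (uniform bed) or 5% of D50 for tau*_c = 0.03.

def critical_depth_slope(d, tau_star_c=0.06, r=1.65):
    """Depth-slope product (m) needed to move grains of diameter d (m)."""
    return tau_star_c * r * d

print(critical_depth_slope(0.05))        # 5 cm gravel, uniform bed -> ~0.005 m
print(critical_depth_slope(0.05, 0.03))  # mixed-bed value -> ~0.0025 m
```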
Modes of entrainment
The sediments entrained in a flow can be transported along the bed as bed load in the form of sliding and rolling grains, or in suspension as suspended load advected by the main flow. Some sediment materials may also come from the upstream reaches and be carried downstream in the form of wash load.
Rouse number
The location in the flow in which a particle is entrained is determined by the Rouse number: the density ρ_s and diameter d of the sediment particle, and the density ρ and kinematic viscosity ν of the fluid, determine in which part of the flow the sediment particle will be carried.

P = w_s / (κ u_*)

Here, the Rouse number is given by P. The term in the numerator is the (downwards) sediment settling velocity w_s, which is discussed below. The upwards velocity on the grain is given as a product of the von Kármán constant, κ = 0.4, and the shear velocity, u_*.
The following table gives the approximate required Rouse numbers for transport as bed load, suspended load, and wash load.
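The Rouse-number classification can be sketched as follows. The threshold values used here are the approximate conventional ones (bed load for P > 2.5, suspended load for 0.8 < P < 2.5, wash load for P < 0.8), and the function names are mine:

```python
# Sketch of the Rouse number P = w_s / (kappa * u_star) and approximate
# conventional transport-mode thresholds (assumed illustrative cutoffs).

KAPPA = 0.4  # von Karman constant

def rouse_number(w_s, u_star):
    """Rouse number for settling velocity w_s and shear velocity u_star (m/s)."""
    return w_s / (KAPPA * u_star)

def transport_mode(p):
    if p > 2.5:
        return "bed load"
    if p > 0.8:
        return "suspended load"
    return "wash load"

p = rouse_number(0.05, 0.1)   # settling 5 cm/s, shear velocity 10 cm/s
print(p, transport_mode(p))
```

For the sample inputs P = 1.25, so the grain would travel in suspension.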
Settling velocity
The settling velocity (also called the "fall velocity" or "terminal velocity") is a function of the particle Reynolds number. Generally, for small particles (laminar approximation), it can be calculated with Stokes' Law. For larger particles (turbulent particle Reynolds numbers), fall velocity is calculated with the turbulent drag law. Dietrich (1982) compiled a large amount of published data to which he empirically fit settling velocity curves. Ferguson and Church (2006) analytically combined the expressions for Stokes flow and a turbulent drag law into a single equation that works for all sizes of sediment, and successfully tested it against the data of Dietrich. Their equation is
w_s = (R g D²) / (C₁ ν + (0.75 C₂ R g D³)^(1/2)).

In this equation w_s is the sediment settling velocity, g is acceleration due to gravity, and D is mean sediment diameter. ν is the kinematic viscosity of water, which is approximately 1.0 × 10⁻⁶ m²/s for water at 20 °C.

C₁ and C₂ are constants related to the shape and smoothness of the grains.

The expression for fall velocity can be simplified so that it can be solved only in terms of D. We use the sieve diameters for natural grains, with the values C₁ = 18 and C₂ = 1.0. From these parameters, the fall velocity is given by the expression:

w_s = (R g D²) / (18 ν + (0.75 R g D³)^(1/2)).
Alternatively, the settling velocity for a particle of sediment can be derived using Stokes' law, assuming a quiescent (still) fluid in steady state. The resulting formulation for the settling velocity is

w_s = ((ρ_s − ρ) g D²) / (18 μ),

where g is the acceleration due to gravity; ρ_s is the density of the sediment; ρ is the density of water; D is the sediment particle diameter (commonly assumed to be the median particle diameter, often referred to as D_50 in field studies); and μ is the molecular (dynamic) viscosity of water. The Stokes settling velocity can be thought of as the terminal velocity resulting from balancing a particle's submerged weight against the viscous drag force. Small particles have a slower settling velocity than larger, heavier particles, as seen in the figure. This has implications for many aspects of sediment transport, for example, how far downstream a particle might be advected in a river.
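The two settling-velocity formulations can be compared numerically. This sketch assumes quartz in water at 20 °C and the Ferguson and Church constants for natural grains (C1 = 18, C2 = 1.0):

```python
import math

# Sketch comparing Stokes settling with the Ferguson-Church (2006) form
# w_s = R*g*D^2 / (C1*nu + sqrt(0.75*C2*R*g*D^3)), assuming quartz in
# 20 C water and their natural-grain constants C1 = 18, C2 = 1.0.

G  = 9.81     # gravitational acceleration, m/s^2
NU = 1.0e-6   # kinematic viscosity of water at 20 C, m^2/s
R  = 1.65     # submerged specific gravity of quartz

def w_stokes(d):
    """Stokes settling velocity (m/s) for grain diameter d (m)."""
    return R * G * d**2 / (18.0 * NU)

def w_ferguson_church(d, c1=18.0, c2=1.0):
    """Ferguson-Church settling velocity (m/s), valid across grain sizes."""
    return R * G * d**2 / (c1 * NU + math.sqrt(0.75 * c2 * R * G * d**3))

for d in (1e-4, 1e-3):  # 0.1 mm and 1 mm sand
    print(d, w_stokes(d), w_ferguson_church(d))
```

The two agree for very fine grains, while Ferguson-Church falls below the Stokes value as turbulent drag becomes important for coarser grains.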
Hjulström–Sundborg diagram
In 1935, Filip Hjulström created the Hjulström curve, a graph which shows the relationship between the size of sediment and the velocity required to erode (lift it), transport it, or deposit it. The graph is logarithmic.
Åke Sundborg later modified the Hjulström curve to show separate curves for the movement threshold corresponding to several water depths, as is necessary if the flow velocity rather than the boundary shear stress (as in the Shields diagram) is used for the flow strength.
Today this curve has mainly historical value, although its simplicity is still attractive. Among the drawbacks of this curve are that it does not take the water depth into account and, more importantly, that it does not show that sedimentation is caused by flow velocity deceleration and erosion is caused by flow acceleration. The dimensionless Shields diagram is now unanimously accepted for initiation of sediment motion in rivers.
Transport rate
Formulas to calculate sediment transport rate exist for sediment moving in several different parts of the flow. These formulas are often segregated into bed load, suspended load, and wash load. They may sometimes also be segregated into bed material load and wash load.
Bed load
Bed load moves by rolling, sliding, and hopping (or saltating) over the bed, and moves at a small fraction of the fluid flow velocity. Bed load is generally thought to constitute 5–10% of the total sediment load in a stream, making it less important in terms of mass balance. However, the bed material load (the bed load plus the portion of the suspended load which comprises material derived from the bed) is often dominated by bed load, especially in gravel-bed rivers. This bed material load is the only part of the sediment load that actively interacts with the bed. As the bed load is an important component of that, it plays a major role in controlling the morphology of the channel.
Bed load transport rates are usually expressed as being related to excess dimensionless shear stress raised to some power. Excess dimensionless shear stress is a nondimensional measure of bed shear stress about the threshold for motion:

(τ* − τ*_c),
Bed load transport rates may also be given by a ratio of bed shear stress to critical shear stress, which is equivalent in both the dimensional and nondimensional cases. This ratio is called the "transport stage",

φ = τ_b / τ_c,

and is important in that it shows bed shear stress as a multiple of the value of the criterion for the initiation of motion.

When used for sediment transport formulae, this ratio is typically raised to a power.
The majority of the published relations for bedload transport are given in dry sediment weight per unit channel width, b ("breadth"):

q_s = Q_s / b.
Due to the difficulty of estimating bed load transport rates, these equations are typically only suitable for the situations for which they were designed.
Notable bed load transport formulae
Meyer-Peter Müller and derivatives
The transport formula of Meyer-Peter and Müller, originally developed in 1948, was designed for well-sorted fine gravel at a transport stage of about 8. The formula uses the above nondimensionalization for shear stress,

τ* = τ_b / ((ρ_s − ρ) g D),

and Hans Einstein's nondimensionalization for sediment volumetric discharge per unit width,

q* = q_s / (D √(R g D)).

Their formula reads:

q* = 8 (τ* − τ*_c)^(3/2).

Their experimentally determined value for τ*_c is 0.047, and is the third commonly used value for this (in addition to Parker's 0.03 and Shields' 0.06).
Because of its broad use, some revisions to the formula have taken place over the years that show that the coefficient on the left ("8" above) is a function of the transport stage:
The variations in the coefficient were later generalized as a function of dimensionless shear stress:
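A minimal sketch of the Meyer-Peter and Müller formula, with Einstein's nondimensionalization used to recover a dimensional transport rate, can be written as follows. Quartz in water (R = 1.65) and the sample inputs are illustrative assumptions:

```python
import math

# Sketch of the Meyer-Peter & Mueller (1948) bed load formula,
# q* = 8 * (tau* - 0.047)^(3/2), dimensionalized with Einstein's scaling
# q_b = q* * D * sqrt(R * g * D). Assumes quartz in water (R = 1.65).

G = 9.81   # gravitational acceleration, m/s^2
R = 1.65   # submerged specific gravity of quartz

def mpm_qstar(tau_star, tau_star_c=0.047):
    """Dimensionless bed load transport rate; zero below threshold."""
    excess = max(tau_star - tau_star_c, 0.0)
    return 8.0 * excess ** 1.5

def bedload_rate(tau_star, d):
    """Volumetric bed load transport rate per unit width (m^2/s), grain size d (m)."""
    return mpm_qstar(tau_star) * d * math.sqrt(R * G * d)

print(bedload_rate(0.1, 0.002))  # 2 mm gravel at tau* = 0.1
```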
Wilcock and Crowe
In 2003, Peter Wilcock and Joanna Crowe (now Joanna Curran) published a sediment transport formula that works with multiple grain sizes across the sand and gravel range. Their formula works with surface grain size distributions, as opposed to older models which use subsurface grain size distributions (and thereby implicitly infer a surface grain sorting).
Their expression is more complicated than the basic sediment transport rules (such as that of Meyer-Peter and Müller) because it takes into account multiple grain sizes: this requires consideration of reference shear stresses for each grain size, the fraction of the total sediment supply that falls into each grain size class, and a "hiding function".
The "hiding function" takes into account the fact that, while small grains are inherently more mobile than large grains, on a mixed-grain-size bed, they may be trapped in deep pockets between large grains. Likewise, a large grain on a bed of small particles will be stuck in a much smaller pocket than if it were on a bed of grains of the same size. In gravel-bed rivers, this can cause "equal mobility", in which small grains can move just as easily as large ones. As sand is added to the system, it moves away from the "equal mobility" portion of the hiding function to one in which grain size again matters.
Their model is based on the transport stage, or ratio of bed shear stress to critical shear stress for the initiation of grain motion. Because their formula works with several grain sizes simultaneously, they define the critical shear stress for each grain size class i, τ_ci, to be equal to a "reference shear stress", τ_ri.
They express their equations in terms of a dimensionless transport parameter, W_i* (with the "*" indicating nondimensionality and the "i" indicating that it is a function of grain size):
W_i* = ((s − 1) g q_bi) / (F_i u*³)
q_bi is the volumetric bed load transport rate of size class i per unit channel width b. F_i is the proportion of size class i that is present on the bed.
They came up with two equations, depending on the transport stage, φ = τ_b/τ_ri. For φ < 1.35:
W_i* = 0.002 φ^7.5
and for φ ≥ 1.35:
W_i* = 14 (1 − 0.894/φ^0.5)^4.5.
This equation asymptotically reaches a constant value of W_i* = 14 as φ becomes large.
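A minimal sketch of the two-branch relation, assuming the commonly cited Wilcock and Crowe coefficients (0.002, 7.5, 14, 0.894, 4.5); the function name is ours:

```python
def wilcock_crowe_W(phi):
    """Dimensionless transport parameter W* as a function of the
    transport stage phi = tau_b / tau_ri (after Wilcock and Crowe, 2003)."""
    if phi < 1.35:
        return 0.002 * phi ** 7.5                    # low-stage branch
    return 14.0 * (1.0 - 0.894 / phi ** 0.5) ** 4.5  # high-stage branch

# The branches join near phi = 1.35, and W* flattens toward 14 at high stage
for phi in (1.0, 1.35, 5.0, 50.0):
    print(phi, wilcock_crowe_W(phi))
```

The two expressions meet, to within a fraction of a percent, at the crossover stage, and the high-stage branch approaches the constant 14 asymptotically.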
Wilcock and Kenworthy
In 2002, Peter Wilcock and T. A. Kenworthy, following Peter Wilcock (1998), published a sediment bed-load transport formula that works with only two sediment fractions, i.e. sand and gravel. A mixed-size sediment bed-load transport model using only two fractions offers practical advantages for both computational and conceptual modeling, because it takes into account the nonlinear effects of sand presence in gravel beds on the bed-load transport rate of both fractions. Compared with that of Meyer-Peter and Müller, the two-fraction bed-load formula contains a new ingredient: the proportion F_i of each fraction on the bed surface, where the subscript i represents either the sand (s) or gravel (g) fraction. The proportion F_i, as a function of sand content f_s, physically represents the relative influence of the mechanisms controlling sand and gravel transport, associated with the change from a clast-supported to a matrix-supported gravel bed. Moreover, since f_s spans between 0 and 1, phenomena that vary with f_s include the relative size effects producing "hiding" of fine grains and "exposure" of coarse grains.
The "hiding" effect takes into account the fact that, while small grains are inherently more mobile than large grains, on a mixed-grain-size bed they may be trapped in deep pockets between large grains. Likewise, a large grain on a bed of small particles will be stuck in a much smaller pocket than if it were on a bed of grains of the same size (the situation to which the Meyer-Peter and Müller formula refers). In gravel-bed rivers, this can cause "equal mobility", in which small grains can move just as easily as large ones. As sand is added to the system, it moves away from the "equal mobility" portion of the hiding function to one in which grain size again matters.
Their model is based on the transport stage, i.e. the ratio of bed shear stress to critical shear stress for the initiation of grain motion. Because their formula works with only two fractions simultaneously, they define the critical shear stress for each of the two grain size classes as τ_ri, where i represents either the sand (s) or gravel (g) fraction. The critical shear stress that represents the incipient motion for each of the two fractions is consistent with established values in the limit of pure sand and gravel beds and shows a sharp change with increasing sand content over the transition from a clast- to matrix-supported bed.
They express their equations in terms of a dimensionless transport parameter, W_i* (with the "*" indicating nondimensionality and the "i" indicating that it is a function of grain size):
W_i* = ((s − 1) g q_bi) / (F_i u*³)
q_bi is the volumetric bed load transport rate of size class i per unit channel width b. F_i is the proportion of size class i that is present on the bed.
They came up with two equations, depending on the transport stage: one applies below a reference value of the transport stage and the other above it. The latter asymptotically approaches a constant value of the transport parameter as the transport stage becomes large.
In order to apply the above formulation, it is necessary to specify the characteristic grain sizes D_s for the sand portion and D_g for the gravel portion of the surface layer, the fractions F_s and F_g of sand and gravel, respectively, in the surface layer, the submerged specific gravity of the sediment R, and the shear velocity associated with skin friction u*.
Kuhnle et al.
For the case in which the sand fraction is transported by the current over and through an immobile gravel bed, Kuhnle et al. (2013), following the theoretical analysis of Pellachini (2011), provide a new relationship for the bed load transport of the sand fraction when gravel particles remain at rest. Notably, Kuhnle et al. (2013) applied the Wilcock and Kenworthy (2002) formula to their experimental data and found that the predicted bed load rates of the sand fraction were about 10 times greater than those measured, with the ratio approaching 1 as the sand elevation neared the top of the gravel layer. They also hypothesized that this mismatch between predicted and measured sand bed load rates arises because the bed shear stress used in the Wilcock and Kenworthy (2002) formula was larger than that actually available for transport within the gravel bed, owing to the sheltering effect of the gravel particles.
To overcome this mismatch, following Pellachini (2011), they assumed that the variability of the bed shear stress available for the sand to be transported by the current would be some function of the so-called "Roughness Geometry Function" (RGF), which represents the gravel bed elevations distribution. Therefore, the sand bed load formula follows as:
where
the subscript s refers to the sand fraction, s represents the density ratio ρ_s/ρ (ρ_s being the sand fraction density), and the remaining symbols denote the RGF as a function of the sand level within the gravel bed, the bed shear stress available for sand transport, and the critical shear stress for incipient motion of the sand fraction, the last of which was calculated graphically using the updated Shields-type relation of Miller et al. (1977).
Suspended load
Suspended load is carried in the lower to middle parts of the flow, and moves at a large fraction of the mean flow velocity in the stream.
A common characterization of suspended sediment concentration in a flow is given by the Rouse profile. This characterization works for the situation in which the sediment concentration c_a at one particular elevation a above the bed can be quantified. It is given by the expression:
c_s/c_a = [ ((h − z)/z) · (a/(h − a)) ]^P
Here, z is the elevation above the bed, c_s is the concentration of suspended sediment at that elevation, h is the flow depth, P = w_s/(β κ u*) is the Rouse number (in which w_s is the particle settling velocity, κ is the von Kármán constant, and u* is the shear velocity), and β relates the eddy viscosity for momentum to the eddy diffusivity for sediment, which is approximately equal to one.
Experimental work has shown that β ranges from 0.93 to 1.10 for sands and silts.
The Rouse profile characterizes sediment concentrations because the Rouse number includes both turbulent mixing and settling under the weight of the particles. Turbulent mixing results in the net motion of particles from regions of high concentrations to low concentrations. Because particles settle downward, for all cases where the particles are not neutrally buoyant or sufficiently light that this settling velocity is negligible, there is a net negative concentration gradient as one goes upward in the flow. The Rouse Profile therefore gives the concentration profile that provides a balance between turbulent mixing (net upwards) of sediment and the downwards settling velocity of each particle.
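This balance can be made concrete with a short sketch of the Rouse profile (variable names are ours; kappa is the von Kármán constant, and a reference concentration c_a at height a is assumed known):

```python
def rouse_concentration(z, h, a, c_a, w_s, u_star, beta=1.0, kappa=0.41):
    """Suspended sediment concentration at height z above the bed,
    scaled by a reference concentration c_a measured at height a."""
    P = w_s / (beta * kappa * u_star)                 # Rouse number
    return c_a * (((h - z) / z) * (a / (h - a))) ** P

# Concentration decays upward; faster-settling grains (larger P) hug the bed
h, a, c_a = 2.0, 0.1, 1.0
for z in (0.2, 0.5, 1.0, 1.8):
    print(z, rouse_concentration(z, h, a, c_a, w_s=0.02, u_star=0.1))
```

At z = a the profile returns c_a exactly, and a larger settling velocity (higher Rouse number) concentrates the sediment closer to the bed.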
Bed material load
Bed material load comprises the bed load and the portion of the suspended load that is sourced from the bed.
Three common bed material transport relations are the "Ackers-White", "Engelund-Hansen", and "Yang" formulae. The first is for sand to granule-size gravel, and the second and third are for sand, though Yang later expanded his formula to include fine gravel. That all of these formulae cover the sand-size range, and that two of them are exclusively for sand, reflects the fact that the sediment in sand-bed rivers is commonly moved simultaneously as bed load and suspended load.
Engelund–Hansen
The bed material load formula of Engelund and Hansen is one of the few sediment transport formulae in which a threshold "critical shear stress" for the initiation of sediment transport is absent. It reads:
q_s* = (0.05/c_f) τ*^(5/2)
where q_s* is the Einstein nondimensionalization for sediment volumetric discharge per unit width, c_f is a friction factor, and τ* is the Shields stress.
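A sketch of the relation in code, assuming the standard form q_s* = (0.05/c_f) τ*^(5/2); the function name and default values are illustrative:

```python
import math

def engelund_hansen(tau_b, D, c_f, rho_s=2650.0, rho=1000.0, g=9.81):
    """Engelund-Hansen bed material load: there is no transport threshold,
    so any nonzero shear stress yields nonzero transport.
    Returns the volumetric transport rate per unit width [m^2/s]."""
    R = (rho_s - rho) / rho
    tau_star = tau_b / ((rho_s - rho) * g * D)  # Shields stress
    q_star = (0.05 / c_f) * tau_star ** 2.5     # Engelund-Hansen relation
    return q_star * D * math.sqrt(R * g * D)    # undo the Einstein scaling

# Transport scales as shear stress to the 5/2 power: doubling tau_b
# multiplies the rate by 2**2.5 (about 5.7)
print(engelund_hansen(0.2, 0.0005, c_f=0.005) / engelund_hansen(0.1, 0.0005, c_f=0.005))
```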
Wash load
Wash load is carried within the water column as part of the flow, and therefore moves with the mean velocity of the main stream. Wash load concentrations are approximately uniform in the water column. This is described by the endmember case in which the Rouse number is equal to 0 (i.e. the settling velocity is far less than the turbulent mixing velocity), which leads to a prediction of a perfectly uniform vertical concentration profile of material.
Total load
Some authors have attempted formulations for the total sediment load carried in water. These formulas are designed largely for sand, as (depending on flow conditions) sand often can be carried as both bed load and suspended load in the same stream or shoreface.
Bed load sediment mitigation at intake structures
Riverside intake structures used in water supply, canal diversions, and water cooling can experience entrainment of bed load (sand-size) sediments. These entrained sediments produce multiple deleterious effects, such as reduction or blockage of intake capacity, damage to or vibration of feedwater pump impellers, and sediment deposition in downstream pipelines and canals. Structures that modify local near-field secondary currents are useful to mitigate these effects and limit or prevent bed load sediment entry.
| Physical sciences | Sedimentology | Earth science |
15461460 | https://en.wikipedia.org/wiki/Passenger%20train | Passenger train | A passenger train is a train used to transport people along a railroad line, as opposed to a freight train, which carries goods. These trains may consist of unpowered passenger railroad cars (also known as coaches or carriages) hauled by one or more locomotives, or may be self-propelled; self-propelled passenger trains are known as multiple units or railcars. Passenger trains stop at stations or depots, where passengers may board and disembark. In most cases, passenger trains operate on a fixed schedule and have priority over freight trains.
Car design and the general safety of passenger trains have dramatically evolved over time, making travel by rail remarkably safe. Some passenger trains, both long-distance and short-distance, use bi-level (double-decker) cars to carry more passengers per train. Sleeper trains include sleeping cars with beds. Passenger trains hauled by locomotives are more expensive to operate than multiple units, but have a higher passenger capacity.
Many prestigious passenger train services have been bestowed a special name, some of which have become famous in literature and fiction.
History
The first occasion on which a railway locomotive pulled a train carrying passengers was in the United Kingdom in 1804, at Penydarren Ironworks in Wales, when 70 employees of the ironworks were transported 9 miles by an engine designed by Richard Trevithick. The first passenger train in regular service was a horse-drawn train on the Swansea and Mumbles Railway, which opened in 1807. In 1808, Trevithick ran a passenger-carrying exhibition train called Catch Me Who Can on a small loop of track in London. The exhibition, which ran for two weeks, charged passengers for rides.
The first steam train carrying passengers on a public railway was hauled by Locomotion No. 1 on the Stockton and Darlington Railway in 1825, traveling at speeds up to 15 miles per hour.
Travel by passenger trains in the United States began in the 1830s and became popular in the 1850s and '60s. The first electric passenger train was exhibited at the Berlin Industrial Exposition in 1879. The first successful commercial electric passenger train, the Gross-Lichterfelde Tramway, ran a year later in Lichterfelde.
Long-distance trains
Long-distance trains travel between many cities or regions of a country, and sometimes cross several countries. They often have a dining car or restaurant car to allow passengers to have a meal during the course of their journey. Trains travelling overnight may also have sleeping cars. Currently, much of the travel over these distances is done by air in many countries, but in others long-distance travel by rail remains a popular, or the only inexpensive, way to cover long distances.
High-speed rail
One notable and growing long-distance train category is high-speed rail, which generally runs at speeds well above those of conventional trains and often operates on dedicated track that is surveyed and prepared to accommodate high speeds. The first successful example of a high-speed passenger rail system was Japan's Shinkansen, colloquially known as the "bullet train", which commenced operation in October 1964. Other examples include Italy's Le Frecce, France's TGV (Train à Grande Vitesse), Germany's ICE (Inter-City Express), and Spain's AVE (Alta Velocidad Española) in Europe.
In most cases, high-speed rail travel is time- and cost-competitive with air travel over shorter distances, as airport check-in and boarding procedures can add at least two hours to the overall transit time. Also, rail operating costs over these distances may be lower when the amount of jet fuel consumed by an airliner during takeoff and climbout is taken into consideration. Air travel becomes more cost-competitive as the travel distance increases, because fuel accounts for less of the airliner's overall operating cost.
Some high-speed rail systems employ tilting technology to improve stability in curves. Examples of tilting trains are the Advanced Passenger Train (APT), the Pendolino, the N700 Series Shinkansen, Amtrak's Acela and the Spanish Talgo. Tilting is a dynamic form of superelevation, allowing both low- and high-speed traffic to use the same trackage (though not simultaneously), as well as producing a more comfortable ride for passengers.
Inter-city trains
"Inter-city" is a general term for any rail service that uses trains with limited stops to provide fast long-distance travel. Inter-city services can be divided into three major groups:
InterCity: using high-speed trains to connect cities in Europe, bypassing all intermediate stations, thus linking major population hubs in the fastest time possible
Express: calling at some intermediate stations between cities, serving larger urban communities
Regional: calling at all intermediate stations between cities, serving smaller communities along the route
The distinction between the three types of inter-city rail service may be unclear; trains can run as InterCity services between major cities, then revert to an express (or even regional) train service to reach communities at the furthest points of the journey. This practice allows less populous communities to be served in the most cost-effective way, at the expense of a longer journey time for those wishing to travel to the terminus station.
Higher-speed rail
Higher-speed rail services operate at top speeds that are higher than conventional inter-city trains but below high-speed rail services. These services are provided after improvements to the conventional rail infrastructure to support trains that can operate safely at higher speeds.
Short-distance trains
Commuter trains
Many cities and their surrounding areas are served by commuter trains (also known as suburban trains, or S-Bahn in the German-speaking world), which serve commuters who live outside of the city they work in, or vice versa. More specifically, in the United States commuter rail service is defined as, "short-haul rail passenger transportation in metropolitan and suburban areas usually having reduced fare, multiple ride, and commuter tickets and morning and evening peak period operations". Trains are very efficient for transporting large numbers of people at once, compared to road transport. While automobiles may be delayed by traffic congestion, trains operate on dedicated rights-of-way which allow them to bypass such congestion.
With the use of bilevel cars, which are tall enough to have two levels of seating, commuter rail services can haul as many as 150 commuters per train car, and over 1,000 per train: much more than the capacity of automobiles and buses.
Railcar
In British and Australian usage, a "railcar" is a self-propelled railway vehicle designed to transport passengers. The term is usually used in reference to a train consisting of a single passenger car (carriage, coach) with a driver's cab at one or both ends. Some railways, e.g. the Great Western Railway, used the term "railmotor". If the railcar is able to pull a full train, it is more likely to be called a "motor coach" or a "motor car". The term "railcar" is sometimes also used as an alternative name for the small types of multiple unit that consist of more than one coach.
Rapid transit
Rapid transit trains are trains that operate in urban areas on exclusive rights-of-way that pedestrians and road vehicles cannot access. In Europe, rapid transit is widely known as a metro.
Light rail
Light rail vehicles are electrically powered urban passenger trains that run along exclusive rights-of-way at ground level, on raised structures, in tunnels, or in streets. Light rail systems generally use lighter equipment that operates at slower speeds, allowing more flexibility in integrating the system into urban environments.
Tram
Trams (also known as streetcars in North America) are a type of passenger train that runs on a tramway track on or alongside public urban streets, often including segments of right-of-way shared with pedestrians and vehicles.
Heritage trains
Heritage trains are often operated by volunteers, often railfans, as a tourist attraction or as a museum railway. Usually, the trains are formed from historic vehicles retired from national commercial operation that have retained or assumed the character, appearance, and operating practices of railways in their time. Sometimes lines that operate in isolation also provide transport facilities for local people. Much of the equipment used on these trains is original, or at least aims to replicate both the look and the operating practices of historic railway companies.
Environmental impact
Passenger rail is one of the modes of travel with the lowest carbon dioxide emissions. Rail travel emits much less carbon dioxide per mile than air travel (2–27%) or car travel (2–24%).
| Technology | Rail and cable transport | null |
15464966 | https://en.wikipedia.org/wiki/Pathogenic%20bacteria | Pathogenic bacteria | Pathogenic bacteria are bacteria that can cause disease. This article focuses on the bacteria that are pathogenic to humans. Most species of bacteria are harmless and many are beneficial but others can cause infectious diseases. The number of these pathogenic species in humans is estimated to be fewer than a hundred. By contrast, several thousand species are part of the gut flora present in the digestive tract.
The body is continually exposed to many species of bacteria, including beneficial commensals, which grow on the skin and mucous membranes, and saprophytes, which grow mainly in the soil and in decaying matter. The blood and tissue fluids contain nutrients sufficient to sustain the growth of many bacteria. The body has defence mechanisms that enable it to resist microbial invasion of its tissues and give it a natural immunity or innate resistance against many microorganisms.
Pathogenic bacteria are specially adapted and endowed with mechanisms for overcoming the normal body defences, and can invade parts of the body, such as the blood, where bacteria are not normally found. Some pathogens invade only the surface epithelium, skin or mucous membrane, but many travel more deeply, spreading through the tissues and disseminating by the lymphatic and blood streams. In some rare cases a pathogenic microbe can infect an entirely healthy person, but infection usually occurs only if the body's defence mechanisms are damaged or weakened by local trauma or an underlying debilitating condition, such as wounding, intoxication, chilling, fatigue, or malnutrition. In many cases, it is important to differentiate infection from colonization, which is when the bacteria are causing little or no harm.
One of the diseases with the highest disease burden is tuberculosis, caused by the bacterium Mycobacterium tuberculosis, which killed 1.4 million people in 2019, mostly in sub-Saharan Africa. Pathogenic bacteria contribute to other globally important diseases, such as pneumonia, which can be caused by bacteria such as Staphylococcus, Streptococcus and Pseudomonas, and foodborne illnesses, which can be caused by bacteria such as Shigella, Campylobacter, and Salmonella. Pathogenic bacteria also cause infections such as tetanus, typhoid fever, diphtheria, syphilis, and leprosy.
Pathogenic bacteria are also a major cause of high infant mortality rates in developing countries. A Global Burden of Disease (GBD) study estimated the global death rates from 33 bacterial pathogens, finding that such infections contributed to one in eight deaths (about 7.7 million), making them the second-largest cause of death globally in 2019.
Most pathogenic bacteria can be grown in cultures and identified by Gram stain and other methods. Bacteria grown in this way are often tested to find which antibiotics will be an effective treatment for the infection. For hitherto unknown pathogens, Koch's postulates are the standard to establish a causative relationship between a microbe and a disease.
Diseases
Each species has specific effects and causes symptoms in people who are infected. Some people who are infected with pathogenic bacteria do not have symptoms. Immunocompromised individuals are more susceptible to pathogenic bacteria.
Pathogenic susceptibility
Some pathogenic bacteria cause disease under certain conditions, such as entry through the skin via a cut, through sexual activity or through compromised immune function.
Some species of Streptococcus and Staphylococcus are part of the normal skin microbiota and typically reside on healthy skin or in the nasopharyngeal region. Yet these species can potentially initiate skin infections. Streptococcal infections include sepsis, pneumonia, and meningitis. These infections can become serious, creating a systemic inflammatory response resulting in massive vasodilation, shock, and death.
Other bacteria are opportunistic pathogens and cause disease mainly in people with immunosuppression or cystic fibrosis. Examples of these opportunistic pathogens include Pseudomonas aeruginosa, Burkholderia cenocepacia, and Mycobacterium avium.
Intracellular
Obligate intracellular parasites (e.g. Chlamydophila, Ehrlichia, Rickettsia) are able to grow and replicate only inside other cells. Infections due to obligate intracellular bacteria may be asymptomatic and can require an incubation period. Examples of obligate intracellular bacteria include Rickettsia prowazekii (typhus) and Rickettsia rickettsii (Rocky Mountain spotted fever).
Chlamydia are intracellular parasites. These pathogens can cause pneumonia or urinary tract infection and may be involved in coronary heart disease.
Other groups of intracellular bacterial pathogens include Salmonella, Neisseria, Brucella, Mycobacterium, Nocardia, Listeria, Francisella, Legionella, and Yersinia pestis. These can exist intracellularly, but can also survive outside host cells.
Infections in specific tissue
Bacterial pathogens often cause infection in specific areas of the body; others are generalists.
Bacterial vaginosis is a condition of the vaginal microbiota in which an excessive growth of Gardnerella vaginalis and other mostly anaerobic bacteria displace the beneficial Lactobacilli species that maintain healthy vaginal microbial populations.
Bacterial meningitis is a bacterial inflammation of the meninges, which are the protective membranes covering the brain and spinal cord.
Bacterial pneumonia is a bacterial infection of the lungs.
Urinary tract infection is predominantly caused by bacteria. Symptoms include a strong and frequent sensation or urge to urinate, pain during urination, and urine that is cloudy. The most frequent cause is Escherichia coli. Urine is typically sterile but contains a variety of salts and waste products. Bacteria can ascend into the bladder or kidney, causing cystitis and nephritis.
Bacterial gastroenteritis is caused by enteric, pathogenic bacteria. These pathogenic species are usually distinct from the usually harmless bacteria of the normal gut flora, but a different strain of the same species may be pathogenic. The distinction is sometimes difficult, as in the case of Escherichia.
Bacterial skin infections include:
Impetigo is a highly contagious bacterial skin infection commonly seen in children. It is caused by Staphylococcus aureus, and Streptococcus pyogenes.
Erysipelas is an acute streptococcal bacterial infection of the deeper skin layers that spreads via the lymphatic system.
Cellulitis is a diffuse inflammation of connective tissue with severe inflammation of dermal and subcutaneous layers of the skin. Cellulitis can be caused by normal skin flora or by contagious contact, and usually occurs through open skin, cuts, blisters, cracks in the skin, insect bites, animal bites, burns, surgical wounds, intravenous drug injection, or sites of intravenous catheter insertion. In most cases it is the skin on the face or lower legs that is affected, though cellulitis can occur in other tissues.
Mechanisms of damage
The symptoms of disease appear as pathogenic bacteria damage host tissues or interfere with their function. The bacteria can damage host cells directly or indirectly by provoking an immune response that inadvertently damages host cells, or by releasing toxins.
Direct
Once pathogens attach to host cells, they can cause direct damage as the pathogens use the host cell for nutrients and produce waste products. For example, Streptococcus mutans, a component of dental plaque, metabolizes dietary sugar and produces acid as a waste product. The acid decalcifies the tooth surface to cause dental caries.
Toxin production
Endotoxins are the lipid portions of lipopolysaccharides that are part of the outer membrane of the cell wall of gram-negative bacteria. Endotoxins are released when the bacteria lyse, which is why symptoms can worsen at first after antibiotic treatment, as the bacteria are killed and release their endotoxins. Exotoxins are secreted into the surrounding medium or released when the bacteria die and the cell wall breaks apart.
Indirect
An excessive or inappropriate immune response triggered by an infection may damage host cells.
Survival in host
Nutrients
Iron is required by humans, as well as for the growth of most bacteria. To obtain free iron, some pathogens secrete proteins called siderophores, which take the iron away from iron-transport proteins by binding to the iron even more tightly. Once the iron-siderophore complex is formed, it is taken up by siderophore receptors on the bacterial surface, and the iron is then brought into the bacterium.
Bacterial pathogens also require access to carbon and energy sources for growth. To avoid competition with host cells for glucose, which is the main energy source used by human cells, many pathogens, including the respiratory pathogen Haemophilus influenzae, specialise in using other carbon sources, such as lactate, that are abundant in the human body.
Identification
Typically, identification is done by growing the organism in a wide range of cultures, which can take up to 48 hours. The growth is then identified visually or genomically. The cultured organism is then subjected to various assays to observe reactions that help further identify species and strain.
Treatment
Bacterial infections may be treated with antibiotics, which are classified as bactericidal if they kill bacteria or bacteriostatic if they just prevent bacterial growth. There are many types of antibiotics, and each class inhibits a process that differs in the pathogen from that found in the host. For example, the antibiotics chloramphenicol and tetracycline inhibit the bacterial ribosome but not the structurally different eukaryotic ribosome, so they exhibit selective toxicity. Antibiotics are used both in treating human disease and in intensive farming to promote animal growth. Both uses may be contributing to the rapid development of antibiotic resistance in bacterial populations. Phage therapy, using bacteriophages, can also be used to treat certain bacterial infections.
Prevention
Infections can be prevented by antiseptic measures such as sterilizing the skin prior to piercing it with the needle of a syringe and by proper care of indwelling catheters. Surgical and dental instruments are also sterilized to prevent infection by bacteria. Disinfectants such as bleach are used to kill bacteria or other pathogens on surfaces to prevent contamination and further reduce the risk of infection. Bacteria in food are killed by cooking to temperatures above 73 °C (163 °F).
List of genera and microscopy features
Many genera contain pathogenic bacterial species. They often possess characteristics that help to classify and organize them into groups. The following is a partial listing.
List of species and clinical characteristics
This is a description of the more common genera and species, presented with their clinical characteristics and treatments.
Genetic transformation
Of the 59 species listed in the table with their clinical characteristics, 11 species (or 19%) are known to be capable of natural genetic transformation. Natural transformation is a bacterial adaptation for transferring DNA from one cell to another. This process includes the uptake of exogenous DNA from a donor cell by a recipient cell and its incorporation into the recipient cell's genome by recombination. Transformation appears to be an adaptation for repairing damage in the recipient cell's DNA. Among pathogenic bacteria, transformation capability likely serves as an adaptation that facilitates survival and infectivity. The pathogenic bacteria able to carry out natural genetic transformation (of those listed in the table) are Campylobacter jejuni, Enterococcus faecalis, Haemophilus influenzae, Helicobacter pylori, Klebsiella pneumoniae, Legionella pneumophila, Neisseria gonorrhoeae, Neisseria meningitidis, Staphylococcus aureus, Streptococcus pneumoniae and Vibrio cholerae.
| Biology and health sciences | Concepts | Health |